Sample records for size distribution calculated

  1. Characterizing property distributions of polymeric nanogels by size-exclusion chromatography.

    PubMed

    Mourey, Thomas H; Leon, Jeffrey W; Bennett, James R; Bryan, Trevor G; Slater, Lisa A; Balke, Stephen T

    2007-03-30

    Nanogels are highly branched, swellable polymer structures with average diameters between 1 and 100 nm. Size-exclusion chromatography (SEC) fractionates materials in this size range, and it is commonly used to measure nanogel molar mass distributions. For many nanogel applications, it may be more important to calculate the particle size distribution from the SEC data than it is to calculate the molar mass distribution. Other useful nanogel property distributions include particle shape, area, and volume, as well as polymer volume fraction per particle. All can be obtained from multi-detector SEC data with proper calibration and data analysis methods. This work develops the basic equations for calculating several of these differential and cumulative property distributions and applies them to SEC data from the analysis of polymeric nanogels. The methods are analogous to those used to calculate the more familiar SEC molar mass distributions. Calibration methods and characteristics of the distributions are discussed, and the effects of detector noise and mismatched concentration and molar mass sensitive detector signals are examined.

  2. SU-E-T-374: Evaluation and Verification of Dose Calculation Accuracy with Different Dose Grid Sizes for Intracranial Stereotactic Radiosurgery

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Han, C; Schultheiss, T

    Purpose: In this study, we aim to evaluate the effect of dose grid size on the accuracy of calculated dose for small lesions in intracranial stereotactic radiosurgery (SRS), and to verify dose calculation accuracy with radiochromic film dosimetry. Methods: 15 intracranial lesions from previous SRS patients were retrospectively selected for this study. The planning target volume (PTV) ranged from 0.17 to 2.3 cm³. A commercial treatment planning system was used to generate SRS plans using the volumetric modulated arc therapy (VMAT) technique with two arc fields. Two convolution-superposition-based dose calculation algorithms (Anisotropic Analytical Algorithm and Acuros XB algorithm) were used to calculate volume dose distribution with dose grid size ranging from 1 mm to 3 mm in 0.5 mm steps. First, while the plan monitor units (MU) were kept constant, PTV dose variations were analyzed. Second, with 95% of the PTV covered by the prescription dose, variations of the plan MUs as a function of dose grid size were analyzed. Radiochromic films were used to compare the delivered dose and profile with the calculated dose distribution for different dose grid sizes. Results: The dose to the PTV, in terms of the mean, maximum, and minimum dose, showed a steady decrease with increasing dose grid size using both algorithms. With 95% of the PTV covered by the prescription dose, the total MU increased with increasing dose grid size in most of the plans. Radiochromic film measurements showed better agreement with dose distributions calculated with a 1-mm dose grid size. Conclusion: Dose grid size has a significant impact on the calculated dose distribution in intracranial SRS treatment planning with small target volumes. Using the default dose grid size could lead to underestimation of the delivered dose. A small dose grid size should be used to ensure calculation accuracy and agreement with QA measurements.

  3. Retrieval of spheroid particle size distribution from spectral extinction data in the independent mode using PCA approach

    NASA Astrophysics Data System (ADS)

    Tang, Hong; Lin, Jian-Zhong

    2013-01-01

    An improved anomalous diffraction approximation (ADA) method is first presented for calculating the extinction efficiency of spheroids. In this approach, the extinction efficiency of spheroid particles can be calculated with good accuracy and high efficiency over a wider size range by combining the Latimer method and ADA theory, and the method yields a more general expression for the extinction efficiency of spheroid particles with various complex refractive indices and aspect ratios. Meanwhile, the visible spectral extinction for varied spheroid particle size distributions and complex refractive indices is surveyed. Furthermore, a selection principle for the spectral extinction data is developed based on PCA (principal component analysis) of the first-derivative spectral extinction. By calculating the contribution rate of the first-derivative spectral extinction, the spectral extinction with more significant features can be selected as the input data, and that with fewer features is removed from the inversion data. In addition, we propose an improved Tikhonov iteration method to retrieve the spheroid particle size distributions in the independent mode. Simulation experiments indicate that the spheroid particle size distributions obtained with the proposed method coincide fairly well with the given distributions, and this inversion method provides a simple, reliable, and efficient way to retrieve spheroid particle size distributions from spectral extinction data.
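
    As a rough illustration of the Tikhonov idea invoked above (not the authors' specific iteration scheme, and not the ADA kernel), a regularized least-squares inversion of a toy two-bin extinction problem can be sketched as follows; the kernel matrix, data, step size, and regularization weight are invented for the example:

```python
# Minimal sketch of Tikhonov-regularized inversion of K f = g by gradient
# descent on ||K f - g||^2 + lam * ||f||^2. Kernel and data are illustrative.

def tikhonov_invert(K, g, lam=1e-2, step=0.05, iters=5000):
    n = len(K[0])
    f = [0.0] * n
    for _ in range(iters):
        # residual r = K f - g
        r = [sum(K[i][j] * f[j] for j in range(n)) - g[i] for i in range(len(g))]
        # gradient is 2 K^T r + 2 lam f
        grad = [2 * sum(K[i][j] * r[i] for i in range(len(g))) + 2 * lam * f[j]
                for j in range(n)]
        f = [fj - step * gj for fj, gj in zip(f, grad)]
    return f

# Two-bin toy problem: recover f_true = [1.0, 2.0] from noiseless "extinction" data.
K = [[1.0, 0.5],
     [0.2, 1.0]]
f_true = [1.0, 2.0]
g = [sum(K[i][j] * f_true[j] for j in range(2)) for i in range(2)]
f_est = tikhonov_invert(K, g)
print(f_est)  # ≈ [1.0, 2.0], up to a small regularization bias
```

With noisy data, increasing `lam` trades fidelity for stability, which is the point of the regularization.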

  4. Effect of particle size distribution on the separation efficiency in liquid chromatography.

    PubMed

    Horváth, Krisztián; Lukács, Diána; Sepsey, Annamária; Felinger, Attila

    2014-09-26

    In this work, the influence of the width of the particle size distribution (PSD) on chromatographic efficiency is studied. The PSD is described by a lognormal distribution. A theoretical framework is developed in order to calculate the height equivalent to a theoretical plate for different PSDs. Our calculations demonstrate and verify that wide particle size distributions have a significant effect on the separation efficiency of molecules. The differences between fully porous and core-shell phases regarding the influence of PSD width are presented and discussed. The efficiencies of bimodal phases were also calculated; the results showed that these packings do not have any advantage over unimodal phases. Copyright © 2014 Elsevier B.V. All rights reserved.

  5. The temperature dependence of inelastic light scattering from small particles for use in combustion diagnostic instrumentation

    NASA Technical Reports Server (NTRS)

    Cloud, Stanley D.

    1987-01-01

    A computer calculation, based on a Mie-type model, of the expected angular distribution of coherent anti-Stokes Raman scattering (CARS) from micrometer-size polystyrene spheres is discussed, together with a pilot experiment to test the feasibility of measuring CARS angular distributions from such spheres by simply suspending them in water. The computer calculations predict a very interesting structure in the angular distributions that depends strongly on the size and relative refractive index of the spheres.

  6. Methods for estimating 2D cloud size distributions from 1D observations

    DOE PAGES

    Romps, David M.; Vogelmann, Andrew M.

    2017-08-04

    The two-dimensional (2D) size distribution of clouds in the horizontal plane plays a central role in the calculation of cloud cover, cloud radiative forcing, convective entrainment rates, and the likelihood of precipitation. Here, a simple method is proposed for calculating the area-weighted mean cloud size and for approximating the 2D size distribution from the 1D cloud chord lengths measured by aircraft and vertically pointing lidar and radar. This simple method (which is exact for square clouds) compares favorably against the inverse Abel transform (which is exact for circular clouds) in the context of theoretical size distributions. Both methods also perform well when used to predict the size distribution of real clouds from a Landsat scene. When applied to a large number of Landsat scenes, the simple method is able to accurately estimate the mean cloud size. Finally, as a demonstration, the methods are applied to aircraft measurements of shallow cumuli during the RACORO campaign, which then allow for an estimate of the true area-weighted mean cloud size.
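
    The chord-to-size geometry behind both estimators can be illustrated with a toy calculation. This is illustrative geometry only, not the paper's estimator: for an axis-aligned square cloud a transect chord equals the side length, while for a circular cloud of diameter d the mean chord over uniformly offset transects is (π/4)d, so a circular assumption implies size ≈ (4/π) × mean chord:

```python
import math
import random

# Toy conversion from measured chord lengths to a mean cloud size under two
# idealized shapes; the "circle" correction 4/pi is standard chord geometry.

def mean_size_from_chords(chords, shape="circle"):
    m = sum(chords) / len(chords)
    return m if shape == "square" else (4.0 / math.pi) * m

# Simulate chords through a circle of diameter 2.0 at uniform random offsets.
random.seed(0)
d = 2.0
chords = []
for _ in range(100000):
    y = random.uniform(-d / 2, d / 2)
    chords.append(2.0 * math.sqrt((d / 2) ** 2 - y ** 2))
print(mean_size_from_chords(chords))  # ≈ 2.0, recovering the true diameter
```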

  8. Quality of the log-geometric distribution extrapolation for smaller undiscovered oil and gas pool size

    USGS Publications Warehouse

    Chenglin, L.; Charpentier, R.R.

    2010-01-01

    The U.S. Geological Survey procedure for the estimation of the general form of the parent distribution requires that the parameters of the log-geometric distribution be calculated and analyzed for their sensitivity to different conditions. In this study, we derive the shape factor of a log-geometric distribution from the ratio of frequencies between adjacent bins. The shape factor has a log straight-line relationship with the ratio of frequencies. Additionally, equations for calculating the ratio of the mean size to the lower size-class boundary are deduced. For a specific log-geometric distribution, we find that the ratio of the mean size to the lower size-class boundary is constant. We apply our analysis to simulations based on oil and gas pool distributions from four petroleum systems of Alberta, Canada, and four generated distributions. Each petroleum system in Alberta has a different shape factor. Generally, the shape factors in the four petroleum systems stabilize as the number of discovered pools increases. For a log-geometric distribution, the shape factor becomes stable when the number of discovered pools exceeds 50, and the shape factor is influenced by the exploration efficiency when that efficiency is less than 1. The simulation results show that calculated shape factors increase with those of the parent distributions, and undiscovered oil and gas resources estimated through the log-geometric distribution extrapolation are smaller than the actual values. © 2010 International Association for Mathematical Geology.
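
    The adjacent-bin ratio idea can be sketched in a few lines. The bin counts below are synthetic, not USGS pool data, and the simple ratio average stands in for whatever fitting procedure the authors actually use:

```python
# Hedged sketch: if expected frequency in geometric size class k is
# proportional to q**k, the shape factor q can be estimated from ratios of
# adjacent bin frequencies. Synthetic counts, exact geometric decline.

def shape_factor(freqs):
    ratios = [freqs[k + 1] / freqs[k] for k in range(len(freqs) - 1)]
    return sum(ratios) / len(ratios)

freqs = [1000, 500, 250, 125]  # geometric decline with q = 0.5
print(shape_factor(freqs))  # 0.5
```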

  9. Three-phase boundary length in solid-oxide fuel cells: A mathematical model

    NASA Astrophysics Data System (ADS)

    Janardhanan, Vinod M.; Heuveline, Vincent; Deutschmann, Olaf

    A mathematical model to calculate the volume-specific three-phase boundary length in the porous composite electrodes of a solid-oxide fuel cell is presented. The model is based exclusively on geometrical considerations, accounting for porosity, particle diameter, particle size distribution, and solid-phase distribution. Results are presented for a uniform particle size distribution as well as for a non-uniform particle size distribution.

  10. COAGULATION CALCULATIONS OF ICY PLANET FORMATION AT 15-150 AU: A CORRELATION BETWEEN THE MAXIMUM RADIUS AND THE SLOPE OF THE SIZE DISTRIBUTION FOR TRANS-NEPTUNIAN OBJECTS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kenyon, Scott J.; Bromley, Benjamin C., E-mail: skenyon@cfa.harvard.edu, E-mail: bromley@physics.utah.edu

    2012-03-15

    We investigate whether coagulation models of planet formation can explain the observed size distributions of trans-Neptunian objects (TNOs). Analyzing published and new calculations, we demonstrate robust relations between the size of the largest object and the slope of the size distribution for sizes 0.1 km and larger. These relations yield clear, testable predictions for TNOs and other icy objects throughout the solar system. Applying our results to existing observations, we show that a broad range of initial disk masses, planetesimal sizes, and fragmentation parameters can explain the data. Adding dynamical constraints on the initial semimajor axis of 'hot' Kuiper Belt objects along with probable TNO formation times of 10-700 Myr restricts the viable models to those with a massive disk composed of relatively small (1-10 km) planetesimals.

  11. Comparison of Sample Size by Bootstrap and by Formulas Based on Normal Distribution Assumption.

    PubMed

    Wang, Zuozhen

    2018-01-01

    The bootstrapping technique is distribution-independent, which provides an indirect way to estimate the sample size for a clinical trial based on a relatively small sample. In this paper, sample size estimation by the bootstrap procedure for comparing two parallel-design arms with continuous data is presented for various test types (inequality, non-inferiority, superiority, and equivalence). Meanwhile, sample size calculation by mathematical formulas (under the normal distribution assumption) for the identical data is also carried out. The power difference between the two calculation methods is acceptably small for all the test types, which shows that the bootstrap procedure is a credible technique for sample size estimation. After that, we compared the powers determined using the two methods based on data that violate the normal distribution assumption. To accommodate the features of the data, the nonparametric Wilcoxon test was applied to compare the two groups during bootstrap power estimation. As a result, the power estimated by the normal distribution-based formula is far larger than that estimated by bootstrap for each specific sample size per group. Hence, for this type of data, it is preferable that the bootstrap method be applied for sample size calculation at the outset, and that the same statistical method as used in the subsequent statistical analysis be employed for each bootstrap sample during bootstrap sample size estimation, provided historical data are available that are well representative of the population to which the proposed trial plans to extrapolate.
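
    A minimal sketch of the bootstrap power idea, under assumptions not taken from the paper: the pilot data are synthetic, and a large-sample |t| > 1.96 cutoff replaces an exact critical value. Power at a candidate per-arm sample size is the fraction of bootstrap resamples in which the test reaches significance:

```python
import random
import statistics

# Hedged sketch of bootstrap power estimation for a two-arm comparison with
# continuous data; synthetic pilot data, approximate significance cutoff.

def bootstrap_power(pilot_a, pilot_b, n_per_arm, n_boot=2000, seed=1):
    rng = random.Random(seed)
    hits = 0
    for _ in range(n_boot):
        # resample each arm with replacement at the candidate sample size
        a = [rng.choice(pilot_a) for _ in range(n_per_arm)]
        b = [rng.choice(pilot_b) for _ in range(n_per_arm)]
        se = (statistics.pvariance(a) / n_per_arm
              + statistics.pvariance(b) / n_per_arm) ** 0.5
        t = (statistics.mean(a) - statistics.mean(b)) / se
        hits += abs(t) > 1.96
    return hits / n_boot  # fraction of bootstrap trials reaching significance

rng = random.Random(0)
pilot_a = [rng.gauss(0.0, 1.0) for _ in range(50)]
pilot_b = [rng.gauss(1.0, 1.0) for _ in range(50)]  # true shift of one SD
print(bootstrap_power(pilot_a, pilot_b, n_per_arm=30))
```

In practice one would sweep `n_per_arm` upward until the estimated power reaches the design target (e.g. 0.8).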

  12. A New Method for Calculating Fractal Dimensions of Porous Media Based on Pore Size Distribution

    NASA Astrophysics Data System (ADS)

    Xia, Yuxuan; Cai, Jianchao; Wei, Wei; Hu, Xiangyun; Wang, Xin; Ge, Xinmin

    Fractal theory has been widely used for the petrophysical properties of porous rocks over several decades, and the determination of fractal dimensions is always the focus of research and applications of fractal-based methods. In this work, a new method for calculating the pore space fractal dimension and the tortuosity fractal dimension of porous media is derived based on a fractal capillary model assumption. The presented work establishes a relationship between the fractal dimensions and the pore size distribution, which can be used directly to calculate the fractal dimensions. Published pore size distribution data for eight sandstone samples are used to calculate the fractal dimensions and are simultaneously compared with prediction results from the analytical expression. In addition, the proposed fractal dimension method is also tested on Micro-CT images of three sandstone cores and compared with fractal dimensions obtained by a box-counting algorithm. The test results also prove a self-similar fractal range in sandstone when excluding smaller pores.
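
    The box-counting reference method mentioned above can be sketched on a known fractal; here the 1D Cantor set stands in for a pore-space image, and the known dimension ln 2 / ln 3 ≈ 0.631 checks the estimator. This is a generic box-counting sketch, not the paper's pore-size-distribution formula:

```python
import math

# Box-counting dimension: count occupied boxes N(eps) at several scales eps
# and fit the slope of log N versus log(1/eps). Cantor-set points are a
# stand-in for segmented pore-space data.

def cantor_points(depth, lo=0.0, hi=1.0):
    if depth == 0:
        return [lo]
    third = (hi - lo) / 3.0
    return (cantor_points(depth - 1, lo, lo + third)
            + cantor_points(depth - 1, hi - third, hi))

def box_count_dimension(points, scales):
    logs = []
    for eps in scales:
        # 1e-9 shim guards against floating-point values just under a boundary
        boxes = {int(p / eps + 1e-9) for p in points}
        logs.append((math.log(1.0 / eps), math.log(len(boxes))))
    # least-squares slope of log N against log(1/eps)
    n = len(logs)
    mx = sum(x for x, _ in logs) / n
    my = sum(y for _, y in logs) / n
    num = sum((x - mx) * (y - my) for x, y in logs)
    den = sum((x - mx) ** 2 for x, _ in logs)
    return num / den

pts = cantor_points(10)  # 2**10 left endpoints of Cantor intervals
dim = box_count_dimension(pts, [3.0 ** -k for k in range(2, 8)])
print(dim)  # ≈ ln 2 / ln 3 ≈ 0.631
```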

  13. Appendix B: Summary of TEM Particle Size Distribution Datasets

    EPA Pesticide Factsheets

    As discussed in the main text (see Section 5.3.2), calculation of the concentration of asbestos fibers in each of the bins of potential interest requires particle size distribution data derived using transmission electron microscopy (TEM).

  14. The effect of voxel size on dose distribution in Varian Clinac iX 6 MV photon beam using Monte Carlo simulation

    NASA Astrophysics Data System (ADS)

    Yani, Sitti; Dirgayussa, I. Gde E.; Rhani, Moh. Fadhillah; Haryanto, Freddy; Arif, Idam

    2015-09-01

    Recently, the Monte Carlo (MC) calculation method has been reported as the most accurate method of predicting dose distributions in radiotherapy. The MC code system (especially DOSXYZnrc) has been used to investigate the effect of different voxel (volume element) sizes on the accuracy of dose distributions. To investigate this effect on dosimetry parameters, dose distribution calculations were made for three voxel sizes: 1 × 1 × 0.1 cm3, 1 × 1 × 0.5 cm3, and 1 × 1 × 0.8 cm3. A total of 1 × 10^9 histories were simulated in order to get statistical uncertainties of 2%; this simulation takes about 9-10 hours to complete. Measurements were made with a field size of 10 × 10 cm2 for the 6 MV photon beams with a Gaussian intensity distribution of FWHM 0.1 cm and SSD 100.1 cm. Dose distributions were both MC-simulated and measured in a water phantom. The outputs of the simulation, i.e. the percent depth dose and the dose profile at dmax from the three sets of calculations, are presented, and comparisons are made with experimental data from TTSH (Tan Tock Seng Hospital, Singapore) at 0-5 cm depth. The dose scored in a voxel is a volume-averaged estimate of the dose at the center of that voxel. The results of this study show that the difference between the Monte Carlo simulation and the experimental data depends on the voxel size for both the percent depth dose (PDD) and the dose profile. For the PDD scan along the Z axis (depth) of the water phantom, the largest difference, about 17%, was obtained with the 1 × 1 × 0.8 cm3 voxel size. The dose profile analysis focused on the high-gradient dose area; for the profile scan along the Y axis, the largest difference, about 12%, was obtained with the 1 × 1 × 0.1 cm3 voxel size. This study demonstrates that the choice of voxel size in Monte Carlo simulation is important.

  15. Monte Carlo calculated microdosimetric spread for cell nucleus-sized targets exposed to brachytherapy 125I and 192Ir sources and 60Co cell irradiation.

    PubMed

    Villegas, Fernanda; Tilly, Nina; Ahnesjö, Anders

    2013-09-07

    The stochastic nature of ionizing radiation interactions causes a microdosimetric spread in energy depositions for cell or cell nucleus-sized volumes. The magnitude of the spread may be a confounding factor in dose response analysis. The aim of this work is to give values for the microdosimetric spread for a range of doses imparted by (125)I and (192)Ir brachytherapy radionuclides, and for a (60)Co source. An upgraded version of the Monte Carlo code PENELOPE was used to obtain frequency distributions of specific energy for each of these radiation qualities and for four different cell nucleus-sized volumes. The results demonstrate that the magnitude of the microdosimetric spread increases when the target size decreases or when the energy of the radiation quality is reduced. Frequency distributions calculated according to the formalism of Kellerer and Chmelevsky using full convolution of the Monte Carlo calculated single track frequency distributions confirm that at doses exceeding 0.08 Gy for (125)I, 0.1 Gy for (192)Ir, and 0.2 Gy for (60)Co, the resulting distribution can be accurately approximated with a normal distribution. A parameterization of the width of the distribution as a function of dose and target volume of interest is presented as a convenient form for the use in response modelling or similar contexts.

  16. Modeling of mineral dust in the atmosphere: Sources, transport, and optical thickness

    NASA Technical Reports Server (NTRS)

    Tegen, Ina; Fung, Inez

    1994-01-01

    A global three-dimensional model of the atmospheric mineral dust cycle is developed for the study of its impact on the radiative balance of the atmosphere. The model includes four size classes of mineral dust, whose source distributions are based on the distributions of vegetation, soil texture, and soil moisture. Uplift and deposition are parameterized using analyzed winds and rainfall statistics that resolve high-frequency events. Dust transport in the atmosphere is simulated with the tracer transport model of the Goddard Institute for Space Studies. The simulated seasonal variations of dust concentrations show generally reasonable agreement with the observed distributions, as do the size distributions at several observing sites. The discrepancies between the simulated and the observed dust concentrations point to regions of significant land surface modification. Monthly distributions of aerosol optical depth are calculated from the distribution of dust particle sizes. The maximum optical depth due to dust is 0.4-0.5 in the seasonal mean. The main uncertainties, about a factor of 3-5, in calculating optical thicknesses arise from the crude resolution of soil particle sizes, from insufficient constraint by the total dust loading in the atmosphere, and from our ignorance about adhesion, agglomeration, uplift, and size distributions of fine dust particles (less than 1 micrometer).

  17. Bayesian sample size calculations in phase II clinical trials using a mixture of informative priors.

    PubMed

    Gajewski, Byron J; Mayo, Matthew S

    2006-08-15

    A number of researchers have discussed phase II clinical trials from a Bayesian perspective. A recent article by Mayo and Gajewski focuses on sample size calculations, which they determine by specifying an informative prior distribution and then calculating a posterior probability that the true response will exceed a prespecified target. In this article, we extend these sample size calculations to include a mixture of informative prior distributions. The mixture comes from several sources of information. For example, consider information from two (or more) clinicians: the first clinician is pessimistic about the drug and the second clinician is optimistic. We tabulate the results for sample size design using the fact that a simple mixture of Betas is a conjugate family for the Beta-Binomial model. We discuss the theoretical framework for these types of Bayesian designs and show that the Bayesian designs in this paper approximate this theoretical framework. Copyright 2006 John Wiley & Sons, Ltd.
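
    The conjugate Beta-mixture update that this design relies on can be sketched directly. The prior parameters, data, and target below are illustrative, not the paper's tabulated designs; the posterior probability is obtained by numerical integration rather than a library incomplete-beta routine:

```python
import math

# Conjugate update for a two-component Beta mixture prior on a response rate,
# and the posterior probability that the rate exceeds a target. Illustrative
# prior ("pessimistic" and "optimistic" clinicians) and synthetic data.

def log_beta(a, b):
    return math.lgamma(a) + math.lgamma(b) - math.lgamma(a + b)

def posterior_mixture(prior, x, n):
    # prior: list of (weight, a, b); returns updated (weight, a, b) components
    logs = []
    for w, a, b in prior:
        # marginal likelihood of x successes in n trials under Beta(a, b)
        logs.append(math.log(w) + log_beta(a + x, b + n - x) - log_beta(a, b))
    m = max(logs)
    ws = [math.exp(l - m) for l in logs]
    z = sum(ws)
    return [(ws[i] / z, prior[i][1] + x, prior[i][2] + n - x)
            for i in range(len(prior))]

def prob_exceeds(post, target, steps=20000):
    # midpoint-rule integration of each Beta pdf over (target, 1)
    total = 0.0
    for w, a, b in post:
        h = (1.0 - target) / steps
        s = 0.0
        for i in range(steps):
            p = target + (i + 0.5) * h
            s += math.exp((a - 1) * math.log(p) + (b - 1) * math.log(1 - p)
                          - log_beta(a, b)) * h
        total += w * s
    return total

# Pessimistic and optimistic clinicians, equally weighted a priori.
prior = [(0.5, 2.0, 8.0), (0.5, 8.0, 2.0)]
post = posterior_mixture(prior, x=14, n=20)  # 14 responses in 20 patients
print(prob_exceeds(post, target=0.5))  # probability the rate exceeds the target
```

With 14/20 responses, nearly all posterior weight shifts to the optimistic component, so the exceedance probability is high.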

  18. Sediment Particle Characterization for Acoustic Applications: Coarse Content, Size and Shape Distributions in a Shelly Sand/Mud Environment

    DTIC Science & Technology

    2009-03-31

    Ivakin, Anatoliy N. Particle size and shape distributions … Goff et al. [3] came to a … site, P = 0.37 and ρs = 2.65 g/cm³ [19] were used for calculations. The sediment volume for calculations was taken to be 1885 cm³ for each of the … typical values used for the densities of quartz (sand) particles and calcium carbonate (shell) particles were taken to be 2.65 g/cm³ and 2.75 g/cm³

  19. Estimation of surface area concentration of workplace incidental nanoparticles based on number and mass concentrations

    NASA Astrophysics Data System (ADS)

    Park, J. Y.; Ramachandran, G.; Raynor, P. C.; Kim, S. W.

    2011-10-01

    Surface area was estimated by three different methods using number and/or mass concentrations obtained from either two or three instruments that are commonly used in the field. The estimated surface area concentrations were compared with reference surface area concentrations (SAREF) calculated from the particle size distributions obtained from a scanning mobility particle sizer and an optical particle counter (OPC). The first estimation method (SAPSD) used the particle size distribution measured by a condensation particle counter (CPC) and an OPC. The second method (SAINV1) used an inversion routine based on PM1.0, PM2.5, and number concentrations to reconstruct assumed lognormal size distributions by minimizing the difference between measurements and calculated values. The third method (SAINV2) utilized a simpler inversion that used PM1.0 and number concentrations to construct a lognormal size distribution with an assumed value of the geometric standard deviation. All estimated surface area concentrations were calculated from the reconstructed size distributions. These methods were evaluated using particle measurements obtained in a restaurant, an aluminum die-casting facility, and a diesel engine laboratory. SAPSD was 0.7-1.8 times higher, and SAINV1 and SAINV2 were 2.2-8 times higher, than SAREF in the restaurant and the diesel engine laboratory. In the die-casting facility, all estimated surface area concentrations were lower than SAREF. However, the surface area concentrations estimated using all three methods had exposure trends and rankings qualitatively similar to those using SAREF within a workplace. This study suggests that surface area concentration estimation based on the particle size distribution (SAPSD) is a more accurate and convenient method for estimating surface area concentrations than the inversion-based methods, and may be feasible for classifying exposure groups and identifying exposure trends.
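
    The core of the SAPSD idea, integrating surface area over a measured number size distribution under a spherical-particle assumption, reduces to a one-line sum over size bins; the bin diameters and counts below are invented, not instrument data:

```python
import math

# Surface area concentration from a binned number size distribution,
# assuming spherical particles: SA = sum over bins of n_i * pi * d_i^2.
# Illustrative bins only.

def surface_area_conc(diameters_um, number_conc_cm3):
    # returns surface area concentration in um^2 per cm^3
    return sum(n * math.pi * d ** 2
               for d, n in zip(diameters_um, number_conc_cm3))

d = [0.05, 0.1, 0.5, 1.0]      # bin midpoint diameters, micrometers
n = [5000, 2000, 100, 10]      # particles per cm^3 in each bin
print(surface_area_conc(d, n))  # ≈ 212.06 um^2/cm^3
```

Note how the sparse coarse bins dominate the total despite their low counts, which is why surface area metrics behave differently from number concentration.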

  20. Earth Observing System Covariance Realism

    NASA Technical Reports Server (NTRS)

    Zaidi, Waqar H.; Hejduk, Matthew D.

    2016-01-01

    The purpose of covariance realism is to properly size a primary object's covariance in order to add validity to the calculation of the probability of collision. The covariance realism technique in this paper consists of three parts: collection/calculation of definitive state estimates through orbit determination, calculation of covariance realism test statistics at each covariance propagation point, and proper assessment of those test statistics. An empirical cumulative distribution function (ECDF) Goodness-of-Fit (GOF) method is employed to determine whether a covariance is properly sized by comparing the empirical distribution of Mahalanobis distance calculations to the hypothesized parent 3-DoF chi-squared distribution. To realistically size a covariance for collision probability calculations, this study uses a state noise compensation algorithm that adds process noise to the definitive epoch covariance to account for uncertainty in the force model. Process noise is added until the GOF tests pass a group significance level threshold. The results of this study indicate that when outliers attributed to persistently high or extreme levels of solar activity are removed, the covariance realism compensation method produces a tuned covariance with up to 80 to 90% of the covariance propagation timespan passing the GOF tests (against a 60% minimum passing threshold), a quite satisfactory and useful result.
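
    The ECDF goodness-of-fit idea can be sketched with synthetic errors standing in for orbit-determination residuals. An identity covariance and a Kolmogorov-Smirnov statistic are used here for clarity (the paper's exact GOF test is not specified in this abstract); the closed-form chi-squared(3) CDF is standard:

```python
import math
import random

# If a covariance is realistic, squared Mahalanobis distances of 3-DoF
# position errors follow a chi-squared distribution with 3 degrees of
# freedom. With identity covariance, the squared Mahalanobis distance is
# just a sum of three squared standard normals.

def chi2_3_cdf(x):
    # closed form for the chi-squared CDF with 3 degrees of freedom
    return math.erf(math.sqrt(x / 2.0)) - math.sqrt(2.0 * x / math.pi) * math.exp(-x / 2.0)

def ks_statistic(samples, cdf):
    xs = sorted(samples)
    n = len(xs)
    return max(max(abs((i + 1) / n - cdf(x)), abs(i / n - cdf(x)))
               for i, x in enumerate(xs))

rng = random.Random(42)
d2 = [sum(rng.gauss(0.0, 1.0) ** 2 for _ in range(3)) for _ in range(5000)]
print(ks_statistic(d2, chi2_3_cdf))  # small D: errors consistent with chi2(3)
```

A mis-sized covariance (e.g. scaling the errors) would inflate the KS statistic, which is the signal the tuning loop responds to.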

  1. Verification of the grid size and angular increment effects in lung stereotactic body radiation therapy using the dynamic conformal arc technique

    NASA Astrophysics Data System (ADS)

    Park, Hae-Jin; Suh, Tae-Suk; Park, Ji-Yeon; Lee, Jeong-Woo; Kim, Mi-Hwa; Oh, Young-Taek; Chun, Mison; Noh, O. Kyu; Suh, Susie

    2013-06-01

    The dosimetric effects of variable grid size and angular increment were systematically evaluated in the measured dose distributions of dynamic conformal arc therapy (DCAT) for lung stereotactic body radiation therapy (SBRT). Dose variations with different grid sizes (2, 3, and 4 mm) and angular increments (2, 4, 6, and 10°) for spherical planning target volumes (PTVs) were verified in a thorax phantom by using EBT2 films. Although the doses for identical PTVs were predicted for the different grid sizes, the dose discrepancy was evaluated using one measured dose distribution with the gamma tool because the beam was delivered in the same set-up for DCAT. The dosimetric effect of the angular increment was verified by comparing the measured dose area histograms of organs at risk (OARs) at each angular increment. When the difference in the OAR doses is higher than the uncertainty of the film dosimetry, the error is regarded as the angular increment effect in discretely calculated doses. In the results, even though a 2-mm grid size was used for an elaborate dose calculation, the 4-mm grid size led to a higher gamma pass ratio due to underdosage, a steep dose-descent gradient, and lower estimated PTV doses caused by the smoothing effect in the calculated dose distribution. An undulating dose distribution and a difference in the maximum contralateral lung dose of up to 14% were observed in the dose calculation using a 10° angular increment. DCAT can be effectively applied for an approximately spherical PTV in a relatively uniform geometry, which is less affected by inhomogeneous materials and differences in the beam path length.

  2. Remote sensing of floe size distribution and surface topography

    NASA Technical Reports Server (NTRS)

    Rothrock, D. A.; Thorndike, A. S.

    1984-01-01

    Floe size can be measured by any of several properties p, for instance area or mean caliper diameter. Two definitions of the floe size distribution seem particularly useful: F(p), the fraction of area covered by floes no smaller than p, and N(p), the number of floes per unit area no smaller than p. Several summertime distributions were measured and graphed; their slopes range from -1.7 to -2.5. The variance of an estimate is also calculated.

  3. Characteristic fragment size distributions in dynamic fragmentation

    NASA Astrophysics Data System (ADS)

    Zhou, Fenghua; Molinari, Jean-François; Ramesh, K. T.

    2006-06-01

    The one-dimensional fragmentation of a dynamically expanding ring (Mott's problem) is studied numerically to obtain the fragment signatures under different strain rates. An empirical formula is proposed to calculate the average fragment size, and a Rayleigh distribution is found to describe the statistical properties of the fragment populations.

  4. The Seasonal Evolution of Sea Ice Floe Size Distribution

    DTIC Science & Technology

    2015-09-30

    DISTRIBUTION STATEMENT A. Approved for public release; distribution is unlimited. … seasonally in the southern Beaufort and Chukchi Seas region. OBJECTIVES: The objective of this work was to determine the seasonal evolution of the … summer melt season using (4). The technique allows for the direct observation of lateral melt and the calculation of changes in floe perimeter, and …

  5. A Quantitative Test of the Applicability of Independent Scattering to High Albedo Planetary Regoliths

    NASA Technical Reports Server (NTRS)

    Goguen, Jay D.

    1993-01-01

    To test the hypothesis that the independent scattering calculation widely used to model radiative transfer in atmospheres and clouds will give a useful approximation to the intensity and linear polarization of visible light scattered from an optically thick surface of transparent particles, laboratory measurements are compared to the independent scattering calculation for a surface of spherical particles with known optical constants and size distribution. Because the shape, size distribution, and optical constants of the particles are known, the independent scattering calculation is completely determined and the only remaining unknown is the net effect of the close packing of the particles in the laboratory sample surface...

  6. Droplet size and velocity distributions for spray modelling

    NASA Astrophysics Data System (ADS)

    Jones, D. P.; Watkins, A. P.

    2012-01-01

    Methods for constructing droplet size distributions and droplet velocity profiles are examined as a basis for the Eulerian spray model proposed in Beck and Watkins (2002, 2003) [5,6]. Within the spray model, both distributions must be calculated at every control volume at every time-step where the spray is present, and valid distributions must be guaranteed. Results show that the Maximum Entropy formalism combined with the Gamma distribution satisfies these conditions for the droplet size distributions. Approximating the droplet velocity profile is shown to be considerably more difficult because it does not have compact support. An exponential model with a constrained exponent offers plausible profiles.
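    The Gamma number distribution named above can be written down directly. The following sketch checks that it is correctly normalized and has the expected mean; the shape and scale values are hypothetical, not values from the spray model:

    ```python
    import math

    # Gamma number distribution of droplet diameters D (micrometres):
    # f(D) = D^(k-1) exp(-D/theta) / (Gamma(k) * theta^k).
    # The shape k and scale theta below are illustrative, chosen so the
    # mean diameter k * theta equals 50 um; they are not from the paper.
    k, theta = 4.0, 12.5

    def f(D):
        return D ** (k - 1.0) * math.exp(-D / theta) / (math.gamma(k) * theta ** k)

    # Riemann-sum check of normalization and mean over 0-500 um.
    dD = 0.1
    grid = [i * dD for i in range(1, 5001)]
    total = sum(f(D) for D in grid) * dD        # ≈ 1.0
    mean = sum(D * f(D) for D in grid) * dD     # ≈ 50.0 um
    ```

    A distribution that integrates to one on the resolved size range is exactly the "valid distribution" requirement the abstract mentions.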

  7. Multiscale Pore Throat Network Reconstruction of Tight Porous Media Constrained by Mercury Intrusion Capillary Pressure and Nuclear Magnetic Resonance Measurements

    NASA Astrophysics Data System (ADS)

    Xu, R.; Prodanovic, M.

    2017-12-01

    Due to the low porosity and permeability of tight porous media, hydrocarbon productivity strongly depends on the pore structure. Effective characterization of pore/throat sizes and reconstruction of their connectivity in tight porous media remains challenging. Having a representative pore throat network, however, is valuable for calculation of other petrophysical properties such as permeability, which is time-consuming and costly to obtain by experimental measurements. Due to a wide range of length scales encountered, a combination of experimental methods is usually required to obtain a comprehensive picture of the pore-body and pore-throat size distributions. In this work, we combine mercury intrusion capillary pressure (MICP) and nuclear magnetic resonance (NMR) measurements by percolation theory to derive pore-body size distribution, following the work by Daigle et al. (2015). However, in their work, the actual pore-throat sizes and the distribution of coordination numbers are not well-defined. To compensate for that, we build a 3D unstructured two-scale pore throat network model initialized by the measured porosity and the calculated pore-body size distributions, with a tunable pore-throat size and coordination number distribution, which we further determine by matching the capillary pressure vs. saturation curve from MICP measurement, based on the fact that the mercury intrusion process is controlled by both the pore/throat size distributions and the connectivity of the pore system. We validate our model by characterizing several core samples from tight Middle East carbonate, and use the network model to predict the apparent permeability of the samples under single phase fluid flow condition. Results show that the permeability we get is in reasonable agreement with the Coreval experimental measurements. 
    The pore throat network we obtain can be used to further calculate relative permeability curves and simulate multiphase flow behavior, which will provide valuable insights into production optimization and enhanced-oil-recovery design.

  8. Impact of grid size on uniform scanning and IMPT plans in XiO treatment planning system for brain cancer

    PubMed Central

    Zheng, Yuanshui

    2015-01-01

    The main purposes of this study are to: 1) evaluate the accuracy of the XiO treatment planning system (TPS) for different dose calculation grid sizes based on head phantom measurements in uniform scanning proton therapy (USPT); and 2) compare the dosimetric results for various dose calculation grid sizes based on a real computed tomography (CT) dataset of pediatric brain cancer treatment plans generated by USPT and intensity‐modulated proton therapy (IMPT) techniques. For the phantom study, we utilized the anthropomorphic head proton phantom provided by the Imaging and Radiation Oncology Core (IROC). The imaging, treatment planning, and beam delivery were carried out following the guidelines provided by the IROC. The USPT proton plan was generated in the XiO TPS, and dose calculations were performed for grid sizes ranging from 1 to 3 mm. The phantom, containing thermoluminescent dosimeters (TLDs) and films, was irradiated using a uniform scanning proton beam. The irradiated TLDs were read by the IROC. The calculated doses from XiO for different grid sizes were compared to the measured TLD doses provided by the IROC. Gamma evaluation was done by comparing the calculated planar dose distribution of 3 mm grid size with the measured planar dose distribution. Additionally, an IMPT plan was generated based on the same CT dataset of the IROC phantom, and IMPT dose calculations were performed for grid sizes ranging from 1 to 3 mm. For comparative purposes, additional gamma analysis was done by comparing the planar dose distributions of the standard grid size (3 mm) with those of the other grid sizes (1, 1.5, 2, and 2.5 mm) for both the USPT and IMPT plans. For the patient study, USPT plans of three pediatric brain cancer cases were selected. IMPT plans were generated for each of the three pediatric cases. All patient treatment plans (USPT and IMPT) were generated in the XiO TPS for a total dose of 54 Gy (relative biological effectiveness [RBE]). 
    Treatment plans (USPT and IMPT) for each case were recalculated for grid sizes of 1, 1.5, 2, and 2.5 mm; these dosimetric results were then compared with those of the 3 mm grid size. Phantom study results: There was no distinct trend exhibiting the dependence of dose calculation accuracy on grid size when calculated point doses for different grid sizes were compared to the measured point (TLD) doses. On average, the calculated point dose was higher than the measured dose by 1.49% and 2.63% for the right and left TLDs, respectively. The gamma analysis showed very minimal differences among planar dose distributions of various grid sizes, with the percentage of points meeting the 1%/1 mm gamma index criteria ranging from 97.92% to 99.97%. The gamma evaluation using 2%/2 mm criteria showed that both the IMPT and USPT plans had 100% of points meeting the criteria. Patient study results: In USPT, there was no distinct relationship between the absolute difference in mean planning target volume (PTV) dose and grid size, whereas in IMPT, it was found that a decrease in grid size slightly increased the PTV maximum dose and decreased the PTV mean dose and PTV D50%. For the PTV doses, the average differences were up to 0.35 Gy (RBE) and 1.47 Gy (RBE) in the USPT and IMPT plans, respectively. Dependency on grid size was not very clear for the organs at risk (OARs), with average differences ranging from −0.61 Gy (RBE) to 0.53 Gy (RBE) in the USPT plans and from −0.83 Gy (RBE) to 1.39 Gy (RBE) in the IMPT plans. In conclusion, the difference in the calculated point dose between the smallest grid size (1 mm) and the largest grid size (3 mm) in the phantom for USPT was typically less than 0.1%. Patient study results showed that a decrease in grid size slightly increased the PTV maximum dose in both the USPT and IMPT plans. However, no distinct trend was obtained between the absolute difference in dosimetric parameters and dose calculation grid size for the OARs. 
    Grid size has a large effect on dose calculation efficiency, and use of a grid size of 2 mm or less can increase the dose calculation time significantly. It is recommended to use a grid size of either 2.5 or 3 mm for dose calculations of pediatric brain cancer plans generated by USPT and IMPT techniques in the XiO TPS. PACS numbers: 87.55.D‐, 87.55.ne, 87.55.dk PMID:26699310
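    The gamma analysis used in this record compares two dose distributions with joint dose-difference and distance-to-agreement criteria. A minimal one-dimensional sketch of the idea (with invented Gaussian profiles, not the XiO/IROC data) could look like:

    ```python
    import math

    # For each reference point, gamma is the minimum over evaluated points of
    # sqrt((dose diff / dose_tol)^2 + (distance / dist_tol)^2); gamma <= 1 passes.
    # Global normalization to the reference maximum dose is assumed here.
    def gamma_1d(ref, ev, spacing_mm, dose_tol_pct, dist_tol_mm):
        dmax = max(ref)
        passes = 0
        for i, dr in enumerate(ref):
            best = min(
                math.sqrt(((de - dr) / (dose_tol_pct / 100.0 * dmax)) ** 2
                          + ((j - i) * spacing_mm / dist_tol_mm) ** 2)
                for j, de in enumerate(ev))
            passes += best <= 1.0
        return 100.0 * passes / len(ref)

    # Two nearly identical Gaussian profiles: a uniform 1% dose difference
    # passes everywhere at the 2%/2 mm criteria.
    ref = [math.exp(-((x - 25) / 8.0) ** 2) for x in range(51)]
    ev = [1.01 * v for v in ref]
    print(gamma_1d(ref, ev, spacing_mm=1.0, dose_tol_pct=2.0, dist_tol_mm=2.0))  # → 100.0
    ```

    Clinical software performs the same search in 2D or 3D with interpolation, but the pass/fail logic is this one.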

  9. Power and Sample Size Calculations for Logistic Regression Tests for Differential Item Functioning

    ERIC Educational Resources Information Center

    Li, Zhushan

    2014-01-01

    Logistic regression is a popular method for detecting uniform and nonuniform differential item functioning (DIF) effects. Theoretical formulas for the power and sample size calculations are derived for likelihood ratio tests and Wald tests based on the asymptotic distribution of the maximum likelihood estimators for the logistic regression model.…
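    The paper derives power and sample-size formulas specific to logistic regression DIF tests. As a generic illustration only, the familiar normal-approximation sample-size formula for a two-sided Wald-type test has the same structure (effect size against asymptotic variance); the numbers below are an invented example, not from the paper:

    ```python
    import math
    from statistics import NormalDist

    # n = ((z_{1-alpha/2} + z_{1-beta}) * sigma / delta)^2, where delta is the
    # effect size to detect and sigma^2 the per-observation variance of the
    # estimator. This is the generic normal-approximation sketch, not the
    # paper's DIF-specific derivation.
    def wald_sample_size(delta, sigma, alpha=0.05, power=0.80):
        z = NormalDist().inv_cdf
        return math.ceil(((z(1.0 - alpha / 2.0) + z(power)) * sigma / delta) ** 2)

    print(wald_sample_size(delta=0.5, sigma=2.0))  # → 126
    ```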

  10. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cebe, M; Pacaci, P; Mabhouti, H

    Purpose: In this study, the two available calculation algorithms of the Varian Eclipse treatment planning system (TPS), the electron Monte Carlo (eMC) and General Gaussian Pencil Beam (GGPB) algorithms, were used to compare measured and calculated peripheral dose distributions of electron beams. Methods: Peripheral dose measurements were carried out for 6, 9, 12, 15, 18 and 22 MeV electron beams of a Varian Trilogy machine using a parallel plate ionization chamber and EBT3 films in a slab phantom. Measurements were performed for 6×6, 10×10 and 25×25 cm{sup 2} cone sizes at dmax of each energy, up to 20 cm beyond the field edges. Using the same film batch, the net OD to dose calibration curve was obtained for each energy. Films were scanned 48 hours after irradiation using an Epson 1000XL flatbed scanner. Dose distributions measured using the parallel plate ionization chamber and EBT3 film and calculated by the eMC and GGPB algorithms were compared. The measured and calculated data were then compared to find which algorithm calculates the peripheral dose distribution more accurately. Results: The agreement between measurement and eMC was better than with GGPB. The TPS underestimated the out-of-field doses. The difference between measured and calculated doses increases with the cone size. The largest deviation between calculated and parallel plate ionization chamber measured doses is less than 4.93% for eMC, but it can increase up to 7.51% for GGPB. For film measurements, the minimum gamma analysis passing rates between measured and calculated dose distributions were 98.2% and 92.7% for eMC and GGPB, respectively, for all field sizes and energies. Conclusion: Our results show that the Monte Carlo algorithm for electron planning in Eclipse is more accurate than previous algorithms for peripheral dose distributions. It must be emphasized that the use of GGPB for planning large field treatments with 6 MeV could lead to inaccuracies of clinical significance.

  11. Numerical simulation for the magnetic force distribution in electromagnetic forming of small size flat sheet

    NASA Astrophysics Data System (ADS)

    Chen, Xiaowei; Wang, Wenping; Wan, Min

    2013-12-01

    It is essential to calculate the magnetic force when studying electromagnetic flat sheet forming, as the magnetic force is the basis for analyzing sheet deformation and optimizing technical parameters. The magnetic force distribution on the sheet can be obtained by numerical simulation of the electromagnetic field. In contrast to other computing methods, numerical simulation has significant advantages, such as higher calculation accuracy and ease of use. In this paper, in order to study the magnetic force distribution on a small size flat sheet in electromagnetic forming when a flat round spiral coil, a flat rectangular spiral coil, or a uniform pressure coil is adopted, 3D finite element models are established in ANSYS/EMAG. The magnetic force distribution on the sheet is analyzed when the plane geometry of the sheet is equal to or less than the coil geometry under a fixed discharge impulse. The results show that when the physical dimensions of the sheet are less than the corresponding dimensions of the coil, the variation of the induced current channel width on the sheet causes an induced current crowding effect that seriously influences the magnetic force distribution, and the degree of inhomogeneity of the magnetic force distribution increases nearly linearly with the variation of the induced current channel width. The small size uniform pressure coil produces an approximately uniform magnetic force distribution on the sheet, but the coil is prone to early failure. A desirable magnetic force distribution can be achieved when a unilaterally placed flat rectangular spiral coil is adopted; this configuration is preferred because the flat rectangular spiral coil has a longer working life than the small size uniform pressure coil.

  12. Theoretical cratering rates on Ida, Mathilde, Eros and Gaspra

    NASA Astrophysics Data System (ADS)

    Jeffers, S. V.; Asher, D. J.; Bailey, M. E.

    2002-11-01

    We investigate the main influences on crater size distributions by deriving results for four example target objects: (951) Gaspra, (243) Ida, (253) Mathilde and (433) Eros. The dynamical history of each of these asteroids is modelled using the MERCURY (Chambers 1999) numerical integrator. The use of an efficient, Öpik-type collision code enables the calculation of a velocity histogram and the probability of impact. This, when combined with a crater scaling law and an impactor size distribution through a Monte Carlo method, results in a crater size distribution. The resulting crater probability distributions are in good agreement with observed crater distributions on these asteroids.
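    The Monte Carlo step described above can be sketched as follows: draw impactor diameters from a truncated power-law size distribution by inverse-CDF sampling, then map each to a crater diameter with a power-law scaling relation. The exponents, scaling constant, and size limits are illustrative placeholders, not values from the paper:

    ```python
    import random

    # Impactor cumulative size distribution N(>d) ~ d^-q, truncated to
    # [d_min, d_max]; crater scaling D = C * d^mu. All numbers are invented.
    random.seed(1)
    q, mu, C = 2.5, 0.9, 10.0
    d_min, d_max = 0.01, 1.0   # impactor diameters, km

    def sample_powerlaw():
        # inverse-CDF sampling of the truncated power law
        u = random.random()
        a = d_min ** -q
        b = d_max ** -q
        return (a - u * (a - b)) ** (-1.0 / q)

    craters = [C * sample_powerlaw() ** mu for _ in range(10_000)]
    ```

    Binning `craters` by diameter then gives the model crater size distribution to compare against counts on the asteroid surfaces.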

  13. Distribution of joint local and total size and of extension for avalanches in the Brownian force model

    NASA Astrophysics Data System (ADS)

    Delorme, Mathieu; Le Doussal, Pierre; Wiese, Kay Jörg

    2016-05-01

    The Brownian force model is a mean-field model for local velocities during avalanches in elastic interfaces of internal space dimension d, driven in a random medium. It is exactly solvable via a nonlinear differential equation. We study avalanches following a kick, i.e., a step in the driving force. We first recall the calculation of the distributions of the global size (total swept area) and of the local jump size for an arbitrary kick amplitude. We extend this calculation to the joint density of local and global sizes within a single avalanche in the limit of an infinitesimal kick. When the interface is driven by a single point, we find new exponents τ₀ = 5/3 and τ = 7/4, depending on whether the force or the displacement is imposed. We show that the extension of a "single avalanche" along one internal direction (i.e., the total length in d = 1) is finite, and we calculate its distribution following either a local or a global kick. In all cases, it exhibits a divergence P(ℓ) ~ ℓ⁻³ at small ℓ. Most of our results are tested in a numerical simulation in dimension d = 1.

  14. First Principles Study of Nanodiamond Optical and Electronic Properties

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Raty, J; Galli, G

    2004-10-21

    Nanometer sized diamond has been found in meteorites, proto-planetary nebulae and interstellar dusts, as well as in residues of detonation and in diamond films. Remarkably, the size distribution of diamond nanoparticles appears to be peaked around 2-5 nm, and to be largely independent of preparation conditions. Using ab-initio calculations, we have shown that in this size range nanodiamond has a fullerene-like surface and, unlike silicon and germanium, exhibits very weak quantum confinement effects. We called these carbon nanoparticles bucky-diamonds: their atomic structure, predicted by simulations, is consistent with many experimental findings. In addition, we carried out calculations of the stability of nanodiamond which provided a unifying explanation of its size distribution in extra-terrestrial samples, and in ultra-crystalline diamond films.

  15. The Time-Dependent Wavelet Spectrum of HH 1 and 2

    NASA Astrophysics Data System (ADS)

    Raga, A. C.; Reipurth, B.; Esquivel, A.; González-Gómez, D.; Riera, A.

    2018-04-01

    We have calculated the wavelet spectra of four epochs (spanning ≈20 yr) of Hα and [S II] HST images of HH 1 and 2. From these spectra we calculated the distribution functions of the (angular) radii of the emission structures. We found that the size distributions have maxima (corresponding to the characteristic sizes of the observed structures) with radii that are logarithmically spaced, with factors of ≈2-3 between successive peaks. The positions of these peaks generally showed small shifts towards larger sizes as a function of time. This result indicates that the structures of HH 1 and 2 have a general expansion (seen at all scales), and/or are the result of a sequence of merging events resulting in the formation of knots with larger characteristic sizes.

  16. Effects of Sediment Patches on Sediment Transport Predictions in Steep Mountain Channels

    NASA Astrophysics Data System (ADS)

    Monsalve Sepulveda, A.; Yager, E.

    2013-12-01

    Bed surface patches occur in most gravel-bedded rivers, and in steep streams they can be divided between relatively immobile boulders and more mobile patches of cobbles and gravel. This spatial variability in grain size, roughness and sorting impacts bed load transport by altering the relative local mobility of different grain sizes and creating complex local flow fields. Large boulders also bear a significant part of the total shear stress, and we hypothesize that the remaining shear stress on a given mobile patch is a distribution of values that depends on the local topography, patch type and location relative to the large roughness elements and thalweg. Current sediment transport equations do not account for the variation in roughness, local flow and grain size distributions on and between patches, and often use an area-weighted approach to obtain a representative grain size distribution and reach-averaged shear stress. Such equations also do not distinguish between active patches (patches where at least one grain size is in motion) and inactive patches, or include the difference in mobility between patch classes as a result of spatial shear stress distributions. To understand the effects of sediment patches on sediment transport in steep channels, we calculated the shear stress distributions over a range of patch classes in a 10% gradient step-pool stream. We surveyed the bed at high resolution (every 5 cm in the horizontal and vertical directions over a 40 m long reach) using a total station and terrestrial LiDAR, mapped and classified patches by their grain size distributions, and measured water surface elevations and mean velocities for low to moderate flow events. Using these data we calibrated a quasi-three-dimensional model (FaSTMECH) to obtain shear stress distributions over each patch for a range of flow discharges. 
We modified Parker's (1990) equations to use the calculated shear stress distribution, measured grain sizes, and a specific hiding function for each patch class, and then added the bedload fluxes for each patch to calculate the reach-averaged sediment transport rate. Sediment mobility in patches was highly dependent on the patch's class and location relative to the thalweg and large roughness elements. Compared to deterministic formulations, the use of distributions of shear stress improved predictions of bedload transport in steep mountain channels.
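    The area-weighted, per-patch bookkeeping described above can be sketched as below. The excess-shear rate law and all numbers are invented placeholders; Parker's (1990) relation and the measured hiding functions are not reproduced here:

    ```python
    # Reach-averaged transport as the area-weighted sum of per-patch fluxes,
    # each averaged over that patch's own shear-stress distribution.
    def patch_flux(stresses, tau_c):
        # placeholder excess-shear rate law, q_s ~ (tau - tau_c)^1.5
        return sum(max(t - tau_c, 0.0) ** 1.5 for t in stresses) / len(stresses)

    patches = [
        {"area": 0.6, "tau_c": 20.0, "stresses": [15.0, 22.0, 30.0]},  # gravel patch
        {"area": 0.4, "tau_c": 60.0, "stresses": [40.0, 50.0, 55.0]},  # cobble patch
    ]
    reach_flux = sum(p["area"] * patch_flux(p["stresses"], p["tau_c"]) for p in patches)
    ```

    Note that the cobble patch contributes nothing (all its stresses sit below its critical value), yet part of the gravel patch transports even though a single reach-averaged stress might fall below threshold; this is exactly the effect of using stress distributions rather than a deterministic mean.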

  17. Aerosol properties computed from aircraft-based observations during the ACE- Asia campaign. 2; A case study of lidar ratio closure and aerosol radiative effects

    NASA Technical Reports Server (NTRS)

    Kuzmanoski, Maja; Box, M. A.; Schmid, B.; Box, G. P.; Wang, J.; Russell, P. B.; Bates, D.; Jonsson, H. H.; Welton, Ellsworth J.; Flagan, R. C.

    2005-01-01

    For a vertical profile with three distinct layers (marine boundary, pollution and dust), observed during the ACE-Asia campaign, we carried out a comparison between the modeled lidar ratio vertical profile and that obtained from collocated airborne NASA AATS-14 sunphotometer and shipborne Micro-Pulse Lidar (MPL) measurements. The vertically resolved lidar ratio was calculated from two size distribution vertical profiles - one obtained by inversion of sunphotometer-derived extinction spectra, and one measured in situ - combined with the same refractive index model based on aerosol chemical composition. The aerosol model implies single scattering albedos of 0.78-0.81 and 0.93-0.96 at 0.523 microns (the wavelength of the lidar measurements) in the pollution and dust layers, respectively. The lidar ratios calculated from the two size distribution profiles have close values in the dust layer; they are, however, significantly lower than the lidar ratios derived from combined lidar and sunphotometer measurements, most probably due to the use of a simple nonspherical model with a single particle shape in our calculations. In the pollution layer, the two size distribution profiles yield generally different lidar ratios. The retrieved size distributions yield a lidar ratio which is in better agreement with that derived from lidar/sunphotometer measurements in this layer, though with still large differences at certain altitudes (the largest relative difference was 46%). We explain these differences by the non-uniqueness of the size distribution retrieval and the lack of information on the vertical variability of the particle refractive index. Radiative transfer calculations for this profile showed significant atmospheric radiative forcing, which occurred mainly in the pollution layer. 
We demonstrate that if the extinction profile is known then information on the vertical structure of absorption and asymmetry parameter is not significant for estimating forcing at TOA and the surface, while it is of importance for estimating vertical profiles of radiative forcing and heating rates.

  18. Micrometer-scale particle sizing by laser diffraction: critical impact of the imaginary component of refractive index.

    PubMed

    Beekman, Alice; Shan, Daxian; Ali, Alana; Dai, Weiguo; Ward-Smith, Stephen; Goldenberg, Merrill

    2005-04-01

    This study evaluated the effect of the imaginary component of the refractive index on laser diffraction particle size data for pharmaceutical samples. Excipient particles 1-5 μm in diameter (irregular morphology) were measured by laser diffraction. Optical parameters were obtained and verified based on comparison of calculated vs. actual particle volume fraction. Inappropriate imaginary components of the refractive index can lead to inaccurate results, including false peaks in the size distribution. For laser diffraction measurements, obtaining appropriate or "effective" imaginary components of the refractive index was not always straightforward. When the recommended criteria, such as the concentration match and the fit of the scattering data, gave similar results for very different calculated size distributions, a supplemental technique, microscopy with image analysis, was used to decide between the alternatives. Use of effective optical parameters produced a good match between laser diffraction data and microscopy/image analysis data. The imaginary component of the refractive index can have a major impact on particle size results calculated from laser diffraction data. When performed properly, laser diffraction and microscopy with image analysis can yield comparable results.

  19. Sample Size Calculation for Estimating or Testing a Nonzero Squared Multiple Correlation Coefficient

    ERIC Educational Resources Information Center

    Krishnamoorthy, K.; Xia, Yanping

    2008-01-01

    The problems of hypothesis testing and interval estimation of the squared multiple correlation coefficient of a multivariate normal distribution are considered. It is shown that available one-sided tests are uniformly most powerful, and the one-sided confidence intervals are uniformly most accurate. An exact method of calculating sample size to…

  20. Framework for cascade size calculations on random networks

    NASA Astrophysics Data System (ADS)

    Burkholz, Rebekka; Schweitzer, Frank

    2018-04-01

    We present a framework to calculate the cascade size evolution for a large class of cascade models on random network ensembles in the limit of infinite network size. Our method is exact and applies to network ensembles with almost arbitrary degree distribution, degree-degree correlations, and, in the case of threshold models, arbitrary threshold distribution. With our approach, we shift the perspective from the known branching process approximations to the iterative update of suitable probability distributions. Such distributions are key to capture cascade dynamics that involve possibly continuous quantities and that depend on the cascade history, e.g., if load is accumulated over time. As a proof of concept, we provide two examples: (a) constant load models that cover many of the analytically tractable cascade models, and, as a highlight, (b) a fiber bundle model that was not tractable by branching process approximations before. Our derivations cover the whole cascade dynamics, not only their steady state. This allows us to include interventions in time or further model complexity in the analysis.
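    In the same spirit of iterating a probability update to a fixed point, though far simpler than the general ensembles treated in the paper, the classic self-consistency equation for the giant-component fraction of a Poisson random graph illustrates the mechanics; this is a textbook example, not the paper's framework:

    ```python
    import math

    # For a Poisson (Erdős–Rényi) random graph with mean degree c, the fraction
    # S of nodes in the giant component solves S = 1 - exp(-c * S); iterating
    # the update from any interior starting point converges to the fixed point.
    def giant_component_fraction(c, iters=200):
        s = 0.5
        for _ in range(iters):
            s = 1.0 - math.exp(-c * s)
        return s

    print(round(giant_component_fraction(2.0), 4))   # ≈ 0.7968 for c = 2
    ```

    The cascade framework above generalizes this idea from a single scalar fixed point to iterated updates of whole probability distributions.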

  1. Multistep Lattice-Voxel method utilizing lattice function for Monte-Carlo treatment planning with pixel based voxel model.

    PubMed

    Kumada, H; Saito, K; Nakamura, T; Sakae, T; Sakurai, H; Matsumura, A; Ono, K

    2011-12-01

    Treatment planning for boron neutron capture therapy generally utilizes Monte-Carlo methods for calculation of the dose distribution. The new treatment planning system JCDS-FX employs the multi-purpose Monte-Carlo code PHITS to calculate the dose distribution. JCDS-FX allows building a precise voxel model consisting of pixel-based voxel cells on the scale of 0.4×0.4×2.0 mm(3) in order to perform high-accuracy dose estimation, e.g. for the purpose of calculating the dose distribution in a human body. However, the miniaturization of the voxel size increases calculation time considerably. The aim of this study is to investigate sophisticated modeling methods which can perform Monte-Carlo calculations for human geometry efficiently. Thus, we devised a new voxel modeling method, the "Multistep Lattice-Voxel method," which can configure a voxel model that combines different voxel sizes by utilizing the lattice function repeatedly. To verify the performance of calculations with this modeling method, several calculations for human geometry were carried out. The results demonstrated that the Multistep Lattice-Voxel method enabled the precise voxel model to reduce calculation time substantially while keeping the high accuracy of dose estimation. Copyright © 2011 Elsevier Ltd. All rights reserved.

  2. Theoretical calculation of the cratering on Ida, Mathilde, Eros and Gaspra

    NASA Astrophysics Data System (ADS)

    Jeffers, S. V.; Asher, D. J.

    2003-07-01

    The main influences on crater size distributions are investigated by deriving results for the four example target objects, (951) Gaspra, (243) Ida, (253) Mathilde and (433) Eros. The dynamical history of each of these asteroids is modelled using the MERCURY numerical integrator. An efficient, Öpik-type, collision code enables the distribution of impact velocities and the overall impact probability to be found. When combined with a crater scaling law and an impactor size distribution, using a Monte Carlo method, this yields a crater size distribution. The cratering time-scale is longer for Ida than either Gaspra or Mathilde, though it is harder to constrain for Eros due to the chaotic variation of its orbital elements. The slopes of the crater size distribution are in accord with observations.

  3. Recovering 3D particle size distributions from 2D sections

    NASA Astrophysics Data System (ADS)

    Cuzzi, Jeffrey N.; Olson, Daniel M.

    2017-03-01

    We discuss different ways to convert observed, apparent particle size distributions from 2D sections (thin sections, SEM maps on planar surfaces, etc.) into true 3D particle size distributions. We give a simple, flexible, and practical method to do this; show which of these techniques gives the most faithful conversions; and provide (online) short computer codes to calculate both 2D-3D recoveries and simulations of 2D observations by random sectioning. The most important systematic bias of 2D sectioning, from the standpoint of most chondrite studies, is an overestimate of the abundance of the larger particles. We show that fairly good recoveries can be achieved from observed size distributions containing 100-300 individual measurements of apparent particle diameter.
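    The systematic bias of 2D sectioning can be seen already for monodisperse spheres (Wicksell's classical problem). The following sketch, unrelated to the authors' codes, simulates random plane sections and recovers the known mean apparent radius of (π/4)R:

    ```python
    import math
    import random

    # A plane cutting a sphere of radius R at height z (uniform in [0, R] for
    # planes that intersect it) exposes a circle of apparent radius
    # sqrt(R^2 - z^2); the mean apparent radius is (pi/4) * R ≈ 0.785 R.
    random.seed(2)
    R = 1.0
    apparent = [math.sqrt(R * R - (random.random() * R) ** 2)
                for _ in range(200_000)]
    mean_apparent = sum(apparent) / len(apparent)   # close to pi/4 ≈ 0.785
    ```

    For polydisperse samples the additional bias is that larger particles are hit by a random plane more often, which is the overestimate of large-particle abundance the abstract describes.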

  4. Constraining ejecta particle size distributions with light scattering

    NASA Astrophysics Data System (ADS)

    Schauer, Martin; Buttler, William; Frayer, Daniel; Grover, Michael; Lalone, Brandon; Monfared, Shabnam; Sorenson, Daniel; Stevens, Gerald; Turley, William

    2017-06-01

    The angular distribution of the intensity of light scattered from a particle is strongly dependent on the particle size and can be calculated using the Mie solution to Maxwell's equations. For a collection of particles with a range of sizes, the angular intensity distribution will be the sum of the contributions from each particle size weighted by the number of particles in that size bin. The set of equations describing this pattern is not uniquely invertible, i.e. a number of different distributions can lead to the same scattering pattern, but with reasonable assumptions about the distribution it is possible to constrain the problem and extract estimates of the particle sizes from a measured scattering pattern. We report here on experiments using particles ejected by shockwaves incident on strips of triangular perturbations machined into the surface of tin targets. These measurements indicate a bimodal distribution of ejected particle sizes with relatively large particles (median radius 2-4 μm) evolved from the edges of the perturbation strip and smaller particles (median radius 200-600 nm) from the perturbations. We will briefly discuss the implications of these results and outline future plans.
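    Structurally, the forward model described above is a number-weighted sum of single-size angular scattering patterns. In the sketch below a Henyey-Greenstein phase function stands in for the full Mie solution, and the asymmetry parameters and weights are invented, so it illustrates only the weighted-sum bookkeeping, not a real Mie calculation:

    ```python
    import math

    # Henyey-Greenstein phase function, normalized over solid angle:
    # p(theta; g) = (1 - g^2) / (4 pi (1 + g^2 - 2 g cos(theta))^1.5).
    def hg(theta, g):
        return (1.0 - g * g) / (
            4.0 * math.pi * (1.0 + g * g - 2.0 * g * math.cos(theta)) ** 1.5)

    # Hypothetical bimodal population: a few large, strongly forward-scattering
    # particles (g = 0.9) and many small ones (g = 0.3), by number fraction.
    weights = {0.9: 0.2, 0.3: 0.8}

    def pattern(theta):
        return sum(w * hg(theta, g) for g, w in weights.items())
    ```

    Inverting a measured `pattern` for the weights is the (non-unique) inference problem the abstract describes; in the real analysis each term would be a Mie pattern for one size bin.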

  5. Experimental comparison of various techniques for spot size measurement of high-energy X-ray

    NASA Astrophysics Data System (ADS)

    Wang, Yi; Li, Qin; Chen, Nan; Cheng, Jin-Ming; Li, Cheng-Gang; Li, Hong; Long, Quan-Hong; Shi, Jin-Shui; Deng, Jian-Jun

    2016-08-01

    In flash-radiography experiments, the quality of the acquired image strongly depends on the focal size of the X-ray source spot. A variety of techniques based on imaging of the pinhole, the slit and the rollbar are adopted to measure the focal spot size of the Dragon-I linear induction accelerator. The image of the pinhole provides a two-dimensional distribution of the X-ray spot, while those of the slit and the rollbar give a line-spread distribution and an edge-spread distribution, respectively. The spot size characterized by the full-width at half-maximum and that characterized by the LANL definition are calculated for comparison.

  6. Structural and Electronic Properties of Isolated Nanodiamonds: A Theoretical Perspective

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Raty, J; Galli, G

    2004-09-09

    Nanometer sized diamond has been found in meteorites, proto-planetary nebulae and interstellar dusts, as well as in residues of detonation and in diamond films. Remarkably, the size distribution of diamond nanoparticles appears to be peaked around 2-5 nm, and to be largely independent of preparation conditions. Using ab-initio calculations, we have shown that in this size range nanodiamond has a fullerene-like surface and, unlike silicon and germanium, exhibits very weak quantum confinement effects. We called these carbon nanoparticles bucky-diamonds: their atomic structure, predicted by simulations, is consistent with many experimental findings. In addition, we carried out calculations of the stability of nanodiamond which provided a unifying explanation of its size distribution in extra-terrestrial samples, and in ultra-crystalline diamond films. Here we present a summary of our theoretical results and we briefly outline work in progress on doping of nanodiamond with nitrogen.

  7. Calculation of the Thermal Resistance of a Heat Distributer in the Cooling System of a Heat-Loaded Element

    NASA Astrophysics Data System (ADS)

    Vasil'ev, E. N.

    2018-04-01

    Numerical simulation is performed for heat transfer in a heat distributer of a thermoelectric cooling system, which is located between the heat-loaded element and the thermoelectric module to match their sizes and to equalize the heat flux. The dependences of the characteristic temperature and of the thermal resistance of copper and aluminum heat distributers on the distributer thickness and on the size of the heat-loaded element are obtained. A comparative analysis is carried out to determine the effect of the thermal conductivity of the material and of the geometrical parameters on the thermal resistance. The optimal thickness of the heat distributer as a function of the size of the heat-loaded element is determined.

  8. Element enrichment factor calculation using grain-size distribution and functional data regression.

    PubMed

    Sierra, C; Ordóñez, C; Saavedra, A; Gallego, J R

    2015-01-01

    In environmental geochemistry studies it is common practice to normalize element concentrations in order to remove the effect of grain size. Linear regression with respect to a particular grain size or conservative element is a widely used method of normalization. In this paper, the utility of functional linear regression, in which the grain-size curve is the independent variable and the concentration of pollutant the dependent variable, is analyzed and applied to detrital sediment. After implementing functional linear regression and classical linear regression models to normalize and calculate enrichment factors, we concluded that the former regression technique has some advantages over the latter. First, functional linear regression directly considers the grain-size distribution of the samples as the explanatory variable. Second, as the regression coefficients are not constant values but functions depending on the grain size, it is easier to comprehend the relationship between grain size and pollutant concentration. Third, regularization can be introduced into the model in order to establish equilibrium between reliability of the data and smoothness of the solutions. Copyright © 2014 Elsevier Ltd. All rights reserved.

  9. Influence of multidroplet size distribution on icing collection efficiency

    NASA Technical Reports Server (NTRS)

    Chang, H.-P.; Kimble, K. R.; Frost, W.; Shaw, R. J.

    1983-01-01

    Collection efficiencies of two-dimensional airfoils are calculated for a monodispersed-droplet icing cloud and for a multidispersed-droplet cloud. Comparison is made with the experimental results reported in the NACA Technical Note series. The results of the study show considerably improved agreement with experiment when multidroplet size distributions are employed. The study then investigates the effect of collection efficiency on airborne droplet-size sampling instruments. The bias introduced by sampling from different collection volumes is predicted.

  10. Preliminary evaluation of cryogenic two-phase flow imaging using electrical capacitance tomography

    NASA Astrophysics Data System (ADS)

    Xie, Huangjun; Yu, Liu; Zhou, Rui; Qiu, Limin; Zhang, Xiaobin

    2017-09-01

    The potential application of 2-D eight-electrode electrical capacitance tomography (ECT) to inversion imaging of liquid nitrogen-vaporous nitrogen (LN2-VN2) flow in a tube is theoretically evaluated. The phase distribution of the computational domain is obtained using the simultaneous iterative reconstruction technique with a variable iterative step size. The detailed mathematical derivations for the calculations are presented. The calculated phase distribution for the case of two detached LN2 columns shows results comparable with the water-air case, despite the much lower dielectric permittivity of LN2 compared with water. Inversion images of eight different LN2-VN2 flow patterns are presented and quantitatively evaluated by calculating the relative void fraction error and the correlation coefficient. The results demonstrate that the developed reconstruction technique for ECT has the capacity to reconstruct the phase distribution of the complex LN2-VN2 flow, while the accuracy of the inversion images is significantly influenced by the size of the discrete phase. The influence of measurement noise on image quality is also considered in the calculations.

  11. Sizing a PACS

    NASA Astrophysics Data System (ADS)

    Wilson, Dennis L.; Glicksman, Robert A.

    1994-05-01

    A Picture Archiving and Communications System (PACS) must be able to support the image rate of the medical treatment facility. In addition, the PACS must have adequate working storage and archive storage capacity. The calculation of the number of images per minute and of the capacity of working and archive storage is discussed. The calculation takes into account the distribution of images over the different sizes of radiological images, the distribution between inpatients and outpatients, and the distribution over plain-film CR images and other modality images. The indirect clinical image load is difficult to estimate and is considered in some detail. The result of the exercise for a particular hospital is an estimate of the average size of the images and exams on the system, the number of gigabytes of working storage, the number of images moved per minute, the size of the archive in gigabytes, and the number of images that are to be moved by the archive per minute. The types of storage required to support these image rates and capacities are discussed.
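
    The sizing arithmetic described above reduces to a weighted average of image size over the modality mix, multiplied by throughput and retention. A back-of-the-envelope sketch with assumed workload figures (the mix, throughput and retention numbers are illustrative, not taken from the paper):

    ```python
    # Back-of-the-envelope PACS sizing; all numbers are illustrative.
    modality_mix = {
        # modality: (fraction of images, average image size in MB)
        "CR": (0.60, 8.0),
        "CT": (0.25, 0.5),
        "MR": (0.10, 0.25),
        "US": (0.05, 0.3),
    }

    images_per_day = 4000   # assumed facility throughput
    working_days = 5        # days of images kept on fast working storage
    archive_years = 7       # retention period for the long-term archive

    # Weighted-average image size over the modality mix (fractions sum to 1)
    avg_mb = sum(frac * size for frac, size in modality_mix.values())
    working_gb = images_per_day * working_days * avg_mb / 1024
    archive_gb = images_per_day * 365 * archive_years * avg_mb / 1024

    print(f"average image size: {avg_mb:.3f} MB")
    print(f"working storage:    {working_gb:.1f} GB")
    print(f"archive storage:    {archive_gb:.0f} GB")
    ```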

  12. Sulfate passivation in the lead-acid system as a capacity limiting process

    NASA Astrophysics Data System (ADS)

    Kappus, W.; Winsel, A.

    1982-10-01

    Calculations of the discharge capacity of Pb and PbO2 electrodes as a function of various parameters are presented. They are based on the solution-precipitation mechanism for the discharge reaction and its formulation by Winsel et al. A logarithmic pore size distribution is used to fit experimental porosigrams of Pb and PbO2 electrodes. Based on this pore size distribution, the capacity is calculated as a function of current, BET surface, and porosity of the PbSO4 diaphragm. The PbSO4 supersaturation, as the driving force of the diffusive transport, is chosen as a free parameter.

  13. Dermally adhered soil: 2. Reconstruction of dry-sieve particle-size distributions from wet-sieve data.

    PubMed

    Choate, LaDonna M; Ranville, James F; Bunge, Annette L; Macalady, Donald L

    2006-10-01

    In the evaluation of soil particle-size effects on environmental processes, particle-size distributions are measured by either wet or dry sieving. Commonly, size distributions determined by wet and dry sieving differ because some particles disaggregate in water. Whereas the dry-sieve distributions are most relevant to the study of soil adherence to skin, soil can be recovered from skin only by washing with the potential for disaggregation whether or not it is subsequently wet or dry sieved. Thus, the possibility exists that wet-sieving measurements of the particle sizes that adhered to the skin could be skewed toward the smaller fractions. This paper provides a method by which dry-sieve particle-size distributions can be reconstructed from wet-sieve particle-size distributions for the same soil. The approach combines mass balances with a series of experiments in which wet sieving was applied to dry-sieve fractions from the original soil. Unless the soil moisture content is high (i.e., greater than or equal to the water content after equilibration with water-saturated air), only the soil particles of diameters less than about 63 microm adhere to the skin. Because of this, the adhering particle-size distribution calculated using the reconstruction method was not significantly different from the wet-sieving determinations.

  14. Size-exclusion chromatography of perfluorosulfonated ionomers.

    PubMed

    Mourey, T H; Slater, L A; Galipo, R C; Koestner, R J

    2011-08-26

    A size-exclusion chromatography (SEC) method in N,N-dimethylformamide containing 0.1 M LiNO(3) is shown to be suitable for the determination of molar mass distributions of three classes of perfluorosulfonated ionomers, including Nafion(®). Autoclaving sample preparation is optimized to prepare molecular solutions free of aggregates, and a solvent exchange method concentrates the autoclaved samples to enable the use of molar-mass-sensitive detection. Calibration curves obtained from light scattering and viscometry detection suggest minor variation in the specific refractive index increment across the molecular size distributions, which introduces inaccuracies in the calculation of local absolute molar masses and intrinsic viscosities. Conformation plots that combine apparent molar masses from light scattering detection with apparent intrinsic viscosities from viscometry detection partially compensate for the variations in refractive index increment. The conformation plots are consistent with compact polymer conformations, and they provide Mark-Houwink-Sakurada constants that can be used to calculate molar mass distributions without molar-mass-sensitive detection. Unperturbed dimensions and characteristic ratios calculated from viscosity-molar mass relationships indicate unusually free rotation of the perfluoroalkane backbones and may suggest limitations to applying two-parameter excluded volume theories for these ionomers. Copyright © 2011 Elsevier B.V. All rights reserved.

  15. Planetarium instructional efficacy: A research synthesis

    NASA Astrophysics Data System (ADS)

    Brazell, Bruce D.

    The purpose of the current study was to explore the instructional effectiveness of the planetarium in astronomy education using meta-analysis. A review of the literature revealed 46 studies related to planetarium efficacy. However, only 19 of the studies satisfied selection criteria for inclusion in the meta-analysis. Selected studies were then subjected to coding procedures, which extracted information such as subject characteristics, experimental design, and outcome measures. From these data, 24 effect sizes were calculated in the area of student achievement and five effect sizes were determined in the area of student attitudes using reported statistical information. Mean effect sizes were calculated for both the achievement and the attitude distributions. Additionally, each effect size distribution was subjected to homogeneity analysis. The attitude distribution was found to be homogeneous with a mean effect size of -0.09, which was not significant, p = .2535. The achievement distribution was found to be heterogeneous with a statistically significant mean effect size of +0.28, p < .05. Since the achievement distribution was heterogeneous, the analog to the ANOVA procedure was employed to explore variability in this distribution in terms of the coded variables. The analog to the ANOVA procedure revealed that the variability introduced by the coded variables did not fully explain the variability in the achievement distribution beyond subject-level sampling error under a fixed effects model. Therefore, a random effects model analysis was performed which resulted in a mean effect size of +0.18, which was not significant, p = .2363. However, a large random effect variance component was determined indicating that the differences between studies were systematic and yet to be revealed. The findings of this meta-analysis showed that the planetarium has been an effective instructional tool in astronomy education in terms of student achievement. 
However, the meta-analysis revealed that the planetarium has not been a very effective tool for improving student attitudes towards astronomy.
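
    The fixed-effects mean, homogeneity statistic Q, between-study variance, and random-effects mean referred to above are commonly computed with the DerSimonian-Laird estimator. A sketch of that generic method (not necessarily the study's exact procedure), taking per-study effect sizes and their sampling variances:

    ```python
    import numpy as np

    def dersimonian_laird(effects, variances):
        """Fixed- and random-effects mean effect sizes with homogeneity Q.

        Standard DerSimonian-Laird estimator: inverse-variance weights for
        the fixed-effects mean, then a moment estimate of the between-study
        variance tau^2 feeding the random-effects weights.
        """
        d = np.asarray(effects, dtype=float)
        v = np.asarray(variances, dtype=float)
        w = 1.0 / v
        fixed = np.sum(w * d) / np.sum(w)
        q = np.sum(w * (d - fixed) ** 2)            # homogeneity statistic
        c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
        tau2 = max(0.0, (q - (len(d) - 1)) / c)     # between-study variance
        w_star = 1.0 / (v + tau2)
        random = np.sum(w_star * d) / np.sum(w_star)
        return fixed, random, q, tau2

    # Hypothetical set of four study effect sizes and variances
    print(dersimonian_laird([0.41, 0.12, 0.30, 0.22],
                            [0.04, 0.02, 0.05, 0.03]))
    ```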

  16. Aerosol sampling for the August 7th, and 9th, 1985 SAGE II validation experiment

    NASA Technical Reports Server (NTRS)

    Oberbeck, V. R.; Pueschel, R.; Ferry, G.; Livingston, J.; Fong, W.

    1986-01-01

    Comparisons are made between aerosol size distributions measured by instrumented aircraft and by the SAGE II sensor on the ERBS satellite performing limb scans of the same atmospheric region. Particle radii ranging from 0.0001 to 200 microns were detected, with good agreement obtained between the size distributions detected by impactors and probes at radii over 0.15 micron. The distributions were used to calculate aerosol extinction values, which were compared with values from SAGE II scans.

  17. How realistic is the pore size distribution calculated from adsorption isotherms if activated carbon is composed of fullerene-like fragments?

    PubMed

    Terzyk, Artur P; Furmaniak, Sylwester; Harris, Peter J F; Gauden, Piotr A; Włoch, Jerzy; Kowalczyk, Piotr; Rychlicki, Gerhard

    2007-11-28

    A plausible model for the structure of non-graphitizing carbon is one which consists of curved, fullerene-like fragments grouped together in a random arrangement. Although this model was proposed several years ago, there have been no attempts to calculate the properties of such a structure. Here, we determine the density, pore size distribution and adsorption properties of a model porous carbon constructed from fullerene-like elements. Using the method proposed recently by Bhattacharya and Gubbins (BG), which was tested in this study for ideal and defective carbon slits, the pore size distributions (PSDs) of the initial model and two related carbon models are calculated. The obtained PSD curves show that two structures are micro-mesoporous (with different ratios of micro- to mesopores) and the third is strictly microporous. Using the grand canonical Monte Carlo (GCMC) method, adsorption isotherms of Ar (87 K) are simulated for all the structures. Finally, PSD curves are calculated using the Horvath-Kawazoe, non-local density functional theory (NLDFT), Nguyen and Do, and Barrett-Joyner-Halenda (BJH) approaches, and compared with those predicted by the BG method. This is the first study in which different methods of calculating PSDs for carbons from adsorption data can be truly verified, since absolute (i.e. true) PSDs are obtained using the BG method. It is also the first study reporting the results of computer simulations of adsorption on fullerene-like carbon models.

  18. Orphan therapies: making best use of postmarket data.

    PubMed

    Maro, Judith C; Brown, Jeffrey S; Dal Pan, Gerald J; Li, Lingling

    2014-08-01

    Postmarket surveillance of the comparative safety and efficacy of orphan therapeutics is challenging, particularly when multiple therapeutics are licensed for the same orphan indication. To make best use of product-specific registry data collected to fulfill regulatory requirements, we propose the creation of a distributed electronic health data network among registries. Such a network could support sequential statistical analyses designed to detect early warnings of excess risks. We use a simulated example to explore the circumstances under which a distributed network may prove advantageous. We perform sample size calculations for sequential and non-sequential statistical studies aimed at comparing the incidence of hepatotoxicity following initiation of two newly licensed therapies for homozygous familial hypercholesterolemia. We calculate the sample size savings ratio, or the proportion of sample size saved if one conducted a sequential study as compared to a non-sequential study. Then, using models to describe the adoption and utilization of these therapies, we simulate when these sample sizes are attainable in calendar years. We then calculate the analytic calendar time savings ratio, analogous to the sample size savings ratio. We repeat these analyses for numerous scenarios. Sequential analyses detect effect sizes earlier or at the same time as non-sequential analyses. The most substantial potential savings occur when the market share is more imbalanced (i.e., 90% for therapy A) and the effect size is closest to the null hypothesis. However, due to low exposure prevalence, these savings are difficult to realize within the 30-year time frame of this simulation for scenarios in which the outcome of interest occurs at or more frequently than one event/100 person-years. 
We illustrate a process to assess whether sequential statistical analyses of registry data performed via distributed networks may prove a worthwhile infrastructure investment for pharmacovigilance.
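
    The analytic calendar time savings ratio can be illustrated by mapping the required sample sizes onto an assumed accrual curve; every number below is hypothetical, including the sample sizes and the adoption curve:

    ```python
    def calendar_time_to_n(target_n, yearly_accrual):
        """Years until cumulative enrolment reaches target_n, given an
        assumed per-year accrual curve; None if never attained."""
        total = 0.0
        for year, n in enumerate(yearly_accrual, start=1):
            total += n
            if total >= target_n:
                return year
        return None  # not attainable within the simulated horizon

    # Assumed uptake: slow adoption ramping to a plateau over 30 years
    accrual = [50, 120, 200, 260, 300] + [300] * 25
    n_seq, n_nonseq = 600, 900   # hypothetical required sample sizes

    t_seq = calendar_time_to_n(n_seq, accrual)
    t_nonseq = calendar_time_to_n(n_nonseq, accrual)
    savings = (t_nonseq - t_seq) / t_nonseq  # calendar time savings ratio
    print(t_seq, t_nonseq, savings)
    ```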

  19. Effects of system size and cooling rate on the structure and properties of sodium borosilicate glasses from molecular dynamics simulations.

    PubMed

    Deng, Lu; Du, Jincheng

    2018-01-14

    Borosilicate glasses form an important glass-forming system in both glass science and technology. The structure and property changes of borosilicate glasses as a function of thermal history, in terms of the cooling rate during glass formation and the simulation system size used in classical molecular dynamics (MD) simulation, were investigated with recently developed composition-dependent partial charge potentials. Short- and medium-range structural features such as boron coordination, Si and B Qn distributions, and ring size distributions were analyzed to elucidate the effects of cooling rate and simulation system size on these structural features and on selected glass properties such as the glass transition temperature, vibrational density of states, and mechanical properties. Neutron structure factors, neutron-broadened pair distribution functions, and vibrational densities of states were calculated and compared with results from experiments as well as ab initio calculations to validate the structure models. The results clearly indicate that both cooling rate and system size play an important role in the structures of these glasses, mainly by affecting the 3B and 4B distributions and consequently the properties of the glasses. It was also found that different structural features and properties converge at different sizes or cooling rates; thus, convergence tests are needed in simulations of borosilicate glasses depending on the targeted properties. The results also shed light on the complex dependence of structure and properties on thermal history in borosilicate glasses and on the protocols for MD simulations of these and other glass materials.

  20. Effects of system size and cooling rate on the structure and properties of sodium borosilicate glasses from molecular dynamics simulations

    NASA Astrophysics Data System (ADS)

    Deng, Lu; Du, Jincheng

    2018-01-01

    Borosilicate glasses form an important glass forming system in both glass science and technologies. The structure and property changes of borosilicate glasses as a function of thermal history in terms of cooling rate during glass formation and simulation system sizes used in classical molecular dynamics (MD) simulation were investigated with recently developed composition dependent partial charge potentials. Short and medium range structural features such as boron coordination, Si and B Qn distributions, and ring size distributions were analyzed to elucidate the effects of cooling rate and simulation system size on these structure features and selected glass properties such as glass transition temperature, vibration density of states, and mechanical properties. Neutron structure factors, neutron broadened pair distribution functions, and vibrational density of states were calculated and compared with results from experiments as well as ab initio calculations to validate the structure models. The results clearly indicate that both cooling rate and system size play an important role on the structures of these glasses, mainly by affecting the 3B and 4B distributions and consequently properties of the glasses. It was also found that different structure features and properties converge at different sizes or cooling rates; thus convergence tests are needed in simulations of the borosilicate glasses depending on the targeted properties. The results also shed light on the complex thermal history dependence on structure and properties in borosilicate glasses and the protocols in MD simulations of these and other glass materials.

  1. Application of SAXS and SANS in evaluation of porosity, pore size distribution and surface area of coal

    USGS Publications Warehouse

    Radlinski, A.P.; Mastalerz, Maria; Hinde, A.L.; Hainbuchner, M.; Rauch, H.; Baron, M.; Lin, J.S.; Fan, L.; Thiyagarajan, P.

    2004-01-01

    This paper discusses the applicability of small angle X-ray scattering (SAXS) and small angle neutron scattering (SANS) techniques for determining the porosity, pore size distribution and internal specific surface area in coals. The method is noninvasive, fast, inexpensive and does not require complex sample preparation. It uses coal grains of about 0.8 mm size mounted in standard pellets as used for petrographic studies. Assuming spherical pore geometry, the scattering data are converted into the pore size distribution in the size range 1 nm (10 Å) to 20 μm (200,000 Å) in diameter, accounting for both open and closed pores. FTIR as well as SAXS and SANS data for seven samples of oriented whole coals and corresponding pellets with vitrinite reflectance (Ro) values in the range 0.55% to 5.15% are presented and analyzed. Our results demonstrate that pellets adequately represent the average microstructure of coal samples. The scattering data have been used to calculate the maximum surface area available for methane adsorption. Total porosity as a percentage of sample volume is calculated and compared with worldwide trends. By demonstrating the applicability of SAXS and SANS techniques to determine the porosity, pore size distribution and surface area in coals, we provide a new and efficient tool, which can be used for any type of coal sample, from a thin slice to a representative sample of a thick seam. © 2004 Elsevier B.V. All rights reserved.

  2. A log-normal distribution model for the molecular weight of aquatic fulvic acids

    USGS Publications Warehouse

    Cabaniss, S.E.; Zhou, Q.; Maurice, P.A.; Chin, Y.-P.; Aiken, G.R.

    2000-01-01

    The molecular weight of humic substances influences their proton and metal binding, organic pollutant partitioning, adsorption onto minerals and activated carbon, and behavior during water treatment. We propose a log-normal model for the molecular weight distribution in aquatic fulvic acids to provide a conceptual framework for studying these size effects. The normal curve mean and standard deviation are readily calculated from measured Mn and Mw, and vary from 2.7 to 3 for the means and from 0.28 to 0.37 for the standard deviations for typical aquatic fulvic acids. The model is consistent with several types of molecular weight data, including the shapes of high-pressure size-exclusion chromatography (HP-SEC) peaks. Applications of the model to electrostatic interactions, pollutant solubilization, and adsorption are explored in illustrative calculations.
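
    If the number distribution of molecular weight is log-normal, its mean and standard deviation (in log10 units) follow directly from measured Mn and Mw, since Mw/Mn = exp(sigma^2) for the natural-log sigma. A sketch with illustrative inputs (Mn = 600, Mw = 900 are assumed values, not from the paper) that lands near the quoted ranges:

    ```python
    import math

    def lognormal_params(mn, mw):
        """Mean and standard deviation (log10 units) of a log-normal
        molecular-weight number distribution matching measured Mn and Mw.

        For log-normal M: Mn = exp(mu + sigma^2/2), Mw = exp(mu + 3*sigma^2/2),
        so Mw/Mn = exp(sigma^2).
        """
        sigma = math.sqrt(math.log(mw / mn))   # natural-log standard deviation
        mu = math.log(mn) - sigma**2 / 2.0     # natural-log mean
        ln10 = math.log(10.0)
        return mu / ln10, sigma / ln10

    # Illustrative fulvic acid: Mn = 600, Mw = 900
    mean10, sd10 = lognormal_params(600.0, 900.0)
    print(round(mean10, 2), round(sd10, 2))
    ```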

  3. The influence of the dose calculation resolution of VMAT plans on the calculated dose for eye lens and optic pathway.

    PubMed

    Park, Jong Min; Park, So-Yeon; Kim, Jung-In; Carlson, Joel; Kim, Jin Ho

    2017-03-01

    To investigate the effect of the dose calculation grid on calculated dose-volumetric parameters for eye lenses and optic pathways, a total of 30 patients treated using the volumetric modulated arc therapy (VMAT) technique were retrospectively selected. For each patient, dose distributions were calculated with calculation grids ranging from 1 to 5 mm at 1 mm intervals. Identical structures were used for VMAT planning. The changes in dose-volumetric parameters with the size of the calculation grid were investigated. Compared to dose calculation with a 1 mm grid, the maximum doses to the eye lens with calculation grids of 2, 3, 4 and 5 mm increased by 0.2 ± 0.2 Gy, 0.5 ± 0.5 Gy, 0.9 ± 0.8 Gy and 1.7 ± 1.5 Gy on average, respectively. The Spearman's correlation coefficient between the dose gradients near structures and the differences between doses calculated with the 1 mm grid and those calculated with the 5 mm grid was 0.380 (p < 0.001). For accurate as well as efficient calculation of dose distributions, a grid size of 2 mm appears to be the most appropriate choice.

  4. Impact and explosion crater ejecta, fragment size, and velocity

    NASA Technical Reports Server (NTRS)

    Okeefe, J. D.; Ahrens, T. J.

    1983-01-01

    A model was developed for the mass distribution of fragments that are ejected at a given velocity for impact and explosion craters. The model is semi-empirical in nature and is derived from (1) numerical calculations of cratering and the resultant mass versus ejection velocity, (2) observed ejecta blanket particle size distributions, (3) an empirical relationship between maximum ejecta fragment size and crater diameter, and (4) an assumption on the functional form for the distribution of fragments ejected at a given velocity. This model implies that for planetary impacts into competent rock, the distribution of fragments ejected at a given velocity is nearly monodisperse; e.g., 20% of the mass of the ejecta at a given velocity contains fragments having a mass less than 0.1 times the mass of the largest fragment moving at that velocity. Using this model, the largest fragment that can be ejected from asteroids, the Moon, Mars, and Earth is calculated as a function of crater diameter. In addition, the internal energy of ejecta versus ejecta velocity is found. The internal energy of fragments having velocities exceeding the escape velocity of the Moon will exceed the energy required for incipient melting of solid silicates and thus constrains the maximum ejected solid fragment size.

  5. SU-F-T-74: Experimental Validation of Monaco Electron Monte Carlo Dose Calculation for Small Fields

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Varadhan; Way, S; Arentsen, L

    2016-06-15

    Purpose: To verify experimentally the accuracy of the Monaco (Elekta) electron Monte Carlo (eMC) algorithm in calculating small-field-size depth doses, monitor units and isodose distributions. Methods: Beam modeling of the eMC algorithm was performed for electron energies of 6, 9, 12, 15 and 18 MeV for an Elekta Infinity linac and all available (6, 10, 14, 20 and 25 cone) applicator sizes. Electron cutouts of incrementally smaller field sizes (20, 40, 60 and 80% blocked from the open cone) were fabricated. Dose calculation was performed using a grid size smaller than one-tenth of the R80-20 electron distal falloff distance, and the number of particle histories was set at 500,000 per cm2. Percent depth dose scans and beam profiles at dmax, d90 and d80 depths were measured for each cutout and energy with the Wellhoffer (IBA) Blue Phantom2 scanning system and compared against eMC-calculated doses. Results: The measured doses and output factors of incrementally reduced cutout sizes (down to 3 cm diameter) agreed with eMC-calculated doses within ±2.5%. The profile comparisons at dmax, d90 and d80 depths and the percent depth doses at reduced field sizes agreed within 2.5% or 2 mm. Conclusion: Our results indicate that the Monaco eMC algorithm can accurately predict depth doses, isodose distributions, and monitor units in a homogeneous water phantom for field sizes as small as 3.0 cm diameter for energies in the 6 to 18 MeV range at 100 cm SSD. Consequently, the old rule of thumb approximating the limiting cutout size for an electron field by the lateral scatter equilibrium (E (MeV)/2.5 in centimeters of water) does not apply to the Monaco eMC algorithm.

  6. Neutron and weak-charge distributions of the 48Ca nucleus

    DOE PAGES

    Hagen, Gaute; Forssen, Christian; Nazarewicz, Witold; ...

    2015-11-02

    What is the size of the atomic nucleus? This deceptively simple question is difficult to answer. Although the electric charge distributions in atomic nuclei were measured accurately already half a century ago, our knowledge of the distribution of neutrons is still deficient. In addition to constraining the size of atomic nuclei, the neutron distribution also impacts the number of nuclei that can exist and the size of neutron stars. We present an ab initio calculation of the neutron distribution of the neutron-rich nucleus 48Ca. We show that the neutron skin (the difference between the radii of the neutron and proton distributions) is significantly smaller than previously thought. We also make predictions for the electric dipole polarizability and the weak form factor, both quantities that are at present targeted by precision measurements. Finally, based on the ab initio results for 48Ca, we provide a constraint on the size of a neutron star.

  7. Size Effect on Specific Energy Distribution in Particle Comminution

    NASA Astrophysics Data System (ADS)

    Xu, Yongfu; Wang, Yidong

    A theoretical study is made to derive an energy distribution equation for the size reduction process from the fractal model of particle comminution. The fractal model is employed as a valid measure of the self-similar size distribution of the comminution daughter products. The tensile strength of particles varies with particle size according to a power-function law. The energy consumption for comminuting a single particle is found to be proportional to the 5(D-3)/3rd power of the particle size, D being the fractal dimension of the comminution daughter products. Weibull statistics are applied to describe the relationship between the breakage probability and the specific energy of particle comminution. A simple equation is derived for the breakage probability of particles in view of the dependence of fracture energy on particle size. The calculated exponents and Weibull coefficients are generally in conformity with published data for the fracture of particles.
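
    The size scaling and the Weibull breakage form described above can be sketched as follows; the reference energy, threshold energy and Weibull modulus are assumed parameters for illustration, not values from the paper:

    ```python
    import math

    def fracture_energy(d, d_ref, e_ref, fractal_dim):
        """Comminution energy for a single particle of size d, scaling as
        d ** (5*(D-3)/3) relative to a reference size d_ref with energy
        e_ref (illustrative units)."""
        return e_ref * (d / d_ref) ** (5.0 * (fractal_dim - 3.0) / 3.0)

    def breakage_probability(specific_energy, e_min, m):
        """Generic Weibull form for the probability that a particle breaks
        under a given specific energy; e_min is an assumed threshold and
        m an assumed Weibull modulus."""
        if specific_energy <= e_min:
            return 0.0
        return 1.0 - math.exp(-(((specific_energy - e_min) / e_min) ** m))

    # With D = 2.5 the exponent is negative: halving the size raises the
    # specific comminution energy.
    print(fracture_energy(0.5, 1.0, 5.0, 2.5))
    print(breakage_probability(2.0, 1.0, 1.0))
    ```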

  8. CASCADE IMPACTOR DATA REDUCTION WITH SR-52 AND TI-59 PROGRAMMABLE CALCULATORS

    EPA Science Inventory

    The report provides useful tools for obtaining particle size distributions and graded penetration data from cascade impactor measurements. The programs calculate impactor aerodynamic cut points, total mass collected by the impactor, cumulative mass fraction less than for each sta...

  9. Inverse modelling of fluvial sediment connectivity identifies characteristics and spatial distribution of sediment sources in a large river network.

    NASA Astrophysics Data System (ADS)

    Schmitt, R. J. P.; Bizzi, S.; Kondolf, G. M.; Rubin, Z.; Castelletti, A.

    2016-12-01

    Field and laboratory evidence indicates that the spatial distribution of transport in both alluvial and bedrock rivers is an adaptation to sediment supply. Sediment supply, in turn, depends on spatial distribution and properties (e.g., grain sizes and supply rates) of individual sediment sources. Analyzing the distribution of transport capacity in a river network could hence clarify the spatial distribution and properties of sediment sources. Yet, challenges include a) identifying magnitude and spatial distribution of transport capacity for each of multiple grain sizes being simultaneously transported, and b) estimating source grain sizes and supply rates, both at network scales. Herein, we approach the problem of identifying the spatial distribution of sediment sources and the resulting network sediment fluxes in a major, poorly monitored tributary (80,000 km2) of the Mekong. Therefore, we apply the CASCADE modeling framework (Schmitt et al. (2016)). CASCADE calculates transport capacities and sediment fluxes for multiple grainsizes on the network scale based on remotely-sensed morphology and modelled hydrology. CASCADE is run in an inverse Monte Carlo approach for 7500 random initializations of source grain sizes. In all runs, supply of each source is inferred from the minimum downstream transport capacity for the source grain size. Results for each realization are compared to sparse available sedimentary records. Only 1 % of initializations reproduced the sedimentary record. Results for these realizations revealed a spatial pattern in source supply rates, grain sizes, and network sediment fluxes that correlated well with map-derived patterns in lithology and river-morphology. Hence, we propose that observable river hydro-morphology contains information on upstream source properties that can be back-calculated using an inverse modeling approach. 
Such an approach could be coupled to more detailed models of hillslope processes in the future to derive integrated models of hillslope production and fluvial transport, which would be particularly useful for identifying sediment provenance in poorly monitored river basins.

  10. Calculation of ionized fields in DC electrostatic precipitators in the presence of dust and electric wind

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cristina, S.; Feliziani, M.

    1995-11-01

    This paper describes a new procedure for the numerical computation of the electric field and current density distributions in a dc electrostatic precipitator in the presence of dust, taking into account the particle-size distribution. Poisson's and continuity equations are numerically solved by supposing that the coronating conductors satisfy Kaptzov's assumption on the emitter surfaces. Two iterative numerical procedures, both based on the finite element method (FEM), are implemented for evaluating, respectively, the unknown ionic charge density and the particle charge density distributions. The V-I characteristic and the precipitation efficiencies for the individual particle-size classes, calculated with reference to the pilot precipitator installed by ENEL (Italian Electricity Board) at its Marghera (Venice) coal-fired power station, are found to be very close to those measured experimentally.

  11. Determination of optical parameters of atmospheric particulates from ground-based polarimeter measurements

    NASA Technical Reports Server (NTRS)

    Kuriyan, J. G.; Phillips, D. H.; Willson, R. C.

    1974-01-01

    This paper describes the theoretical analysis that is required to infer, from polarimeter measurements of skylight, the size distribution, refractive index, and abundance of particulates in the atmosphere. To illustrate the viability of the method, some data obtained at UCLA are analyzed and the atmospheric parameters are derived. The explicit demonstration of the redundancy in the description of aerosol distributions suggests that radiation field measurements will not uniquely determine the modal radius of the size distribution. In spite of this nonuniqueness, information useful to heat budget calculations can be derived.

  12. Airborne Aerosol Closure Studies During PRIDE

    NASA Technical Reports Server (NTRS)

    Redemann, Jens; Livingston, John M.; Russell, Philip B.; Schmid, Beat; Reid, Jeff

    2000-01-01

    The Puerto Rico Dust Experiment (PRIDE) was conducted during June/July of 2000 to study the properties of Saharan dust aerosols transported across the Atlantic Ocean to the Caribbean Islands. During PRIDE, the NASA Ames Research Center six-channel (380 - 1020 nm) airborne autotracking sunphotometer (AATS-6) was operated aboard a Piper Navajo airplane alongside a suite of in situ aerosol instruments. The in situ aerosol instrumentation relevant to this paper included a Forward Scattering Spectrometer Probe (FSSP-100) and a Passive Cavity Aerosol Spectrometer Probe (PCASP), covering the radius range of approx. 0.05 to 10 microns. The simultaneous and collocated measurement of multi-spectral aerosol optical depth and in situ particle size distribution data permits a variety of closure studies. For example, vertical profiles of aerosol optical depth obtained during local aircraft ascents and descents can be differentiated with respect to altitude and compared to extinction profiles calculated using the in situ particle size distribution data (and reasonable estimates of the aerosol index of refraction). Additionally, aerosol extinction (optical depth) spectra can be inverted to retrieve estimates of the particle size distributions, which can be compared directly to the in situ size distributions. In this paper we will report on such closure studies using data from a select number of vertical profiles at Cabras Island, Puerto Rico, including measurements in distinct Saharan Dust Layers. Preliminary results show good agreement to within 30% between mid-visible aerosol extinction derived from the AATS-6 optical depth profiles and extinction profiles forward calculated using 60s-average in situ particle size distributions and standard Saharan dust aerosol refractive indices published in the literature. 
In agreement with tendencies observed in previous studies, our initial results show an underestimate of aerosol extinction calculated based on the in situ size distributions relative to the extinction obtained from the sunphotometer measurements. However, a more extensive analysis of all available AATS-6 and in situ size distribution data is necessary to ascertain whether the preliminary results regarding the degree of extinction closure are representative of the entire range of dust conditions encountered in PRIDE. Finally, we will compare the spectral extinction measurements obtained in PRIDE to similar data obtained in Saharan dust layers encountered above the Canary Islands during ACE-2 (Aerosol Characterization Experiment) in July 1997. Thus, the evolution of Saharan dust spectral properties during its transport across the Atlantic can be investigated, provided the dust origin and microphysical properties are found to be comparable.

  13. TH-A-19A-04: Latent Uncertainties and Performance of a GPU-Implemented Pre-Calculated Track Monte Carlo Method

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Renaud, M; Seuntjens, J; Roberge, D

    Purpose: Assessing the performance and uncertainty of a pre-calculated Monte Carlo (PMC) algorithm for proton and electron transport running on graphics processing units (GPU). While PMC methods have been described in the past, an explicit quantification of the latent uncertainty arising from recycling a limited number of tracks in the pre-generated track bank is missing from the literature. With a proper uncertainty analysis, an optimal pre-generated track bank size can be selected for a desired dose calculation uncertainty. Methods: Particle tracks were pre-generated for electrons and protons using EGSnrc and GEANT4, respectively. The PMC algorithm for track transport was implemented on the CUDA programming framework. GPU-PMC dose distributions were compared to benchmark dose distributions simulated using general-purpose MC codes in the same conditions. A latent uncertainty analysis was performed by comparing GPU-PMC dose values to a “ground truth” benchmark while varying the track bank size and primary particle histories. Results: GPU-PMC dose distributions and benchmark doses were within 1% of each other in voxels with dose greater than 50% of Dmax. In proton calculations, a submillimeter distance-to-agreement error was observed at the Bragg Peak. Latent uncertainty followed a Poisson distribution with the number of tracks per energy (TPE) and a track bank of 20,000 TPE produced a latent uncertainty of approximately 1%. Efficiency analysis showed a 937× and 508× gain over a single processor core running DOSXYZnrc for 16 MeV electrons in water and bone, respectively. Conclusion: The GPU-PMC method can calculate dose distributions for electrons and protons to a statistical uncertainty below 1%. The track bank size necessary to achieve an optimal efficiency can be tuned based on the desired uncertainty. 
Coupled with a model to calculate dose contributions from uncharged particles, GPU-PMC is a candidate for inverse planning of modulated electron radiotherapy and scanned proton beams. This work was supported in part by FRSQ-MSSS (Grant No. 22090), NSERC RG (Grant No. 432290) and CIHR MOP (Grant No. MOP-211360)

  14. The Impact of the Grid Size on TomoTherapy for Prostate Cancer

    PubMed Central

    Kawashima, Motohiro; Kawamura, Hidemasa; Onishi, Masahiro; Takakusagi, Yosuke; Okonogi, Noriyuki; Okazaki, Atsushi; Sekihara, Tetsuo; Ando, Yoshitaka; Nakano, Takashi

    2017-01-01

    Discretization errors due to the digitization of computed tomography images and the calculation grid are a significant issue in radiation therapy. Such errors have been quantitatively reported for fixed multifield intensity-modulated radiation therapy using traditional linear accelerators. The aim of this study is to quantify the influence of the calculation grid size on the dose distribution in TomoTherapy. This study used ten treatment plans for prostate cancer. The final dose calculation was performed with “fine” (2.73 mm) and “normal” (5.46 mm) grid sizes. The dose distributions were compared from different points of view: the dose-volume histogram (DVH) parameters for planning target volume (PTV) and organ at risk (OAR), various indices, and dose differences. The DVH parameters used were Dmax, D2%, D2cc, Dmean, D95%, D98%, and Dmin for PTV and Dmax, D2%, and D2cc for OARs. The indices used for plan evaluation were the homogeneity index and equivalent uniform dose. Almost all of the DVH parameters for the “fine” calculations tended to be higher than those for the “normal” calculations. The largest difference in DVH parameters for PTV was Dmax and that for OARs was rectal D2cc. The mean difference of Dmax was 3.5%, and the rectal D2cc was increased up to 6% at the maximum and 2.9% on average. The mean difference of D95% for PTV was the smallest among the differences of the other DVH parameters. For each index, whether there was a significant difference between the two grid sizes was determined through a paired t-test. There were significant differences for most of the indices. The dose difference between the “fine” and “normal” calculations was evaluated. Some points around high-dose regions had differences exceeding 5% of the prescription dose. The influence of the calculation grid size in TomoTherapy is smaller than that of traditional linear accelerators. However, there was still a significant difference. 
We recommend calculating the final dose using the “fine” grid size. PMID:28974860

  15. Effects of sample size on estimates of population growth rates calculated with matrix models.

    PubMed

    Fiske, Ian J; Bruna, Emilio M; Bolker, Benjamin M

    2008-08-28

    Matrix models are widely used to study the dynamics and demography of populations. An important but overlooked issue is how the number of individuals sampled influences estimates of the population growth rate (lambda) calculated with matrix models. Even unbiased estimates of vital rates do not ensure unbiased estimates of lambda: Jensen's Inequality implies that even when the estimates of the vital rates are accurate, small sample sizes lead to biased estimates of lambda due to increased sampling variance. We investigated whether sampling variability and the distribution of sampling effort among size classes lead to biases in estimates of lambda. Using data from a long-term field study of plant demography, we simulated the effects of sampling variance by drawing vital rates and calculating lambda for increasingly larger populations drawn from a total population of 3842 plants. We then compared these estimates of lambda with those based on the entire population and calculated the resulting bias. Finally, we conducted a review of the literature to determine the sample sizes typically used when parameterizing matrix models used to study plant demography. We found significant bias at small sample sizes when survival was low (survival = 0.5), and that sampling with a more-realistic inverse J-shaped population structure exacerbated this bias. However, our simulations also demonstrate that these biases rapidly become negligible with increasing sample sizes or as survival increases. For many of the sample sizes used in demographic studies, matrix models are probably robust to the biases resulting from sampling variance of vital rates. However, this conclusion may depend on the structure of populations or the distribution of sampling effort in ways that are unexplored. We suggest more intensive sampling of populations when individual survival is low and greater sampling of stages with high elasticities.
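The sampling-variance effect the authors describe can be illustrated with a toy simulation: resample each vital rate as a binomial proportion from n individuals and recompute lambda as the dominant eigenvalue. The 3-stage matrix and its entries below are hypothetical, not taken from the study.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative 3-stage projection matrix (hypothetical values, not the paper's data)
A_TRUE = np.array([
    [0.0, 0.5, 2.0],   # stage-specific fecundities
    [0.5, 0.3, 0.0],   # survival/transition probabilities
    [0.0, 0.4, 0.8],
])

def growth_rate(A):
    """Dominant eigenvalue = asymptotic population growth rate lambda."""
    return np.max(np.real(np.linalg.eigvals(A)))

def sampled_lambdas(n_per_stage, n_rep=1000):
    """Re-estimate lambda after estimating each vital rate from n individuals."""
    out = []
    for _ in range(n_rep):
        A = A_TRUE.copy()
        for i in range(1, 3):
            for j in range(3):
                p = A_TRUE[i, j]
                if p > 0:
                    A[i, j] = rng.binomial(n_per_stage, p) / n_per_stage
        out.append(growth_rate(A))
    return np.array(out)

small, large = sampled_lambdas(10), sampled_lambdas(500)
print("lambda (true):", round(growth_rate(A_TRUE), 3))
print("sd of lambda, n=10 per stage :", round(small.std(), 3))
print("sd of lambda, n=500 per stage:", round(large.std(), 3))
```

The spread of the estimated lambdas (and with it, the nonlinearity-induced bias) shrinks rapidly as the per-stage sample size grows, mirroring the paper's conclusion that biases become negligible at larger sample sizes.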

  16. An estimate of field size distributions for selected sites in the major grain producing countries

    NASA Technical Reports Server (NTRS)

    Podwysocki, M. H.

    1977-01-01

    The field size distributions for the major grain producing countries of the world were estimated. LANDSAT-1 and 2 images were evaluated for two areas each in the United States, the People's Republic of China, and the USSR. One scene each was evaluated for France, Canada, and India. Grid sampling was done for representative sub-samples of each image, measuring the long and short axes of each field; area was then calculated. Each of the resulting data sets was computer analyzed for its frequency distribution. Nearly all frequency distributions were highly peaked and skewed (shifted) towards small values, approaching either a Poisson or log-normal distribution. The data were normalized by a log transformation, creating a Gaussian distribution whose moments are readily interpretable and useful for estimating the total population of fields. Resultant predictors of the field size estimates are discussed.
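The log-transformation step described above can be sketched as follows; the simulated field areas and lognormal parameters are illustrative stand-ins, not the paper's measurements.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical field areas (hectares), skewed toward small values as the paper describes
areas = rng.lognormal(mean=2.0, sigma=0.8, size=1000)

# Log transform -> approximately Gaussian, with interpretable moments
log_a = np.log(areas)
mu, sigma = log_a.mean(), log_a.std(ddof=1)

mean_area = np.exp(mu + sigma**2 / 2)   # arithmetic mean field size (lognormal moment)
median_area = np.exp(mu)                # median field size
print("mean field size  :", round(mean_area, 2))
print("median field size:", round(median_area, 2))
```

For a right-skewed lognormal population the mean always exceeds the median, which is why the log-domain moments are the more useful summary.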

  17. Lunar soils grain size catalog

    NASA Technical Reports Server (NTRS)

    Graf, John C.

    1993-01-01

    This catalog compiles every available grain size distribution for Apollo surface soils, trench samples, cores, and Luna 24 soils. Original laboratory data are tabled, and cumulative weight distribution curves and histograms are plotted. Standard statistical parameters are calculated using the method of moments. Photos and location comments describe the sample environment and geological setting. This catalog can help researchers describe the geotechnical conditions and site variability of the lunar surface essential to the design of a lunar base.
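The method-of-moments statistics the catalog uses can be computed directly from binned weight data. The sieve bins (in phi units) and weight percentages below are invented for illustration, not taken from the catalog.

```python
import numpy as np

# Hypothetical sieve data: bin midpoints in phi units and weight percent retained
phi_mid = np.array([0.5, 1.5, 2.5, 3.5, 4.5])
wt_pct  = np.array([5.0, 20.0, 40.0, 25.0, 10.0])

w = wt_pct / wt_pct.sum()                              # normalized weights
mean = (w * phi_mid).sum()                             # 1st moment: graphic mean
sd   = np.sqrt((w * (phi_mid - mean)**2).sum())        # 2nd moment: sorting
skew = (w * (phi_mid - mean)**3).sum() / sd**3         # 3rd moment: skewness
kurt = (w * (phi_mid - mean)**4).sum() / sd**4         # 4th moment: kurtosis
print(f"mean={mean:.2f} phi, sorting={sd:.2f}, skew={skew:.2f}, kurt={kurt:.2f}")
```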

  18. Statistical distribution of time to crack initiation and initial crack size using service data

    NASA Technical Reports Server (NTRS)

    Heller, R. A.; Yang, J. N.

    1977-01-01

    Crack growth inspection data gathered during the service life of the C-130 Hercules airplane were used in conjunction with a crack propagation rule to estimate the distribution of crack initiation times and of initial crack sizes. A Bayesian statistical approach was used to calculate the fraction of undetected initiation times as a function of the inspection time and the reliability of the inspection procedure used.

  19. Calculations of the variability of ice cloud radiative properties at selected solar wavelengths

    NASA Technical Reports Server (NTRS)

    Welch, R. M.; Zdunkowski, W. G.; Cox, S. K.

    1980-01-01

    This study shows that there is surprisingly little difference in values of reflectance, absorptance, and transmittance for many of the intermediate-size particle spectra. Particle size distributions with mode radii ranging from approximately 50 to 300 microns, irrespective of particle shape and nearly independent of the choice of size distribution representation, give relatively similar flux values. The very small particle sizes, however, have significantly larger values of reflectance and transmittance with correspondingly smaller values of absorptance than do the larger particle sizes. The very large particle modes produce very small values of reflectance and transmittance along with very large values of absorptance. Such variations are particularly noticeable when plotted as a function of wavelength.

  20. Size distributions and aerodynamic equivalence of metal chondrules and silicate chondrules in Acfer 059

    NASA Technical Reports Server (NTRS)

    Skinner, William R.; Leenhouts, James M.

    1993-01-01

    The CR2 chondrite Acfer 059 is unusual in that the original droplet shapes of metal chondrules are well preserved. We determined separate size distributions for metal chondrules and silicate chondrules; the two types are well sorted and have similar size distributions about their respective mean diameters of 0.74 mm and 1.44 mm. These mean values are aerodynamically equivalent for the contrasting densities, as shown by calculated terminal settling velocities in a model solar nebula. Aerodynamic equivalence and similarity of size distributions suggest that metal and silicate fractions experienced the same sorting process before they were accreted onto the parent body. These characteristics, together with depletion of iron in Acfer 059 and essentially all other chondrites relative to primitive CI compositions, strongly suggest that sorting in the solar nebula involved a radial aerodynamic component and that sorting and siderophile depletion in chondrites are closely related.
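As a rough check on the aerodynamic-equivalence claim, one can compare the product ρ·d for the two chondrule populations, assuming an Epstein (free-molecular) drag regime in which nebular terminal settling velocity scales with ρ·d. The densities below are generic values for Fe-Ni metal and silicate, and the drag regime is an assumption; the paper's own settling-velocity model may differ.

```python
# Epstein-regime terminal settling velocity scales as v_t ∝ rho * d, so
# aerodynamic equivalence requires rho_metal * d_metal ≈ rho_silicate * d_silicate.
rho_metal, d_metal = 7800.0, 0.74e-3   # kg/m^3 (generic Fe-Ni), m (mean diameter from the paper)
rho_sil, d_sil = 3300.0, 1.44e-3       # kg/m^3 (generic silicate), m

ratio = (rho_metal * d_metal) / (rho_sil * d_sil)
# The ratio comes out near unity despite the ~2.4x density contrast,
# consistent with the aerodynamic-sorting interpretation.
print(f"rho*d ratio (metal/silicate): {ratio:.2f}")
```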

  1. User's Guide to Galoper: A Program for Simulating the Shapes of Crystal Size Distributions from Growth Mechanisms - and Associated Programs

    USGS Publications Warehouse

    Eberl, Dennis D.; Drits, V.A.; Srodon, J.

    2000-01-01

    GALOPER is a computer program that simulates the shapes of crystal size distributions (CSDs) from crystal growth mechanisms. This manual describes how to use the program. The theory for the program's operation has been described previously (Eberl, Drits, and Srodon, 1998). CSDs that can be simulated using GALOPER include those that result from growth mechanisms operating in the open system, such as constant-rate nucleation and growth, nucleation with a decaying nucleation rate and growth, surface-controlled growth, supply-controlled growth, and constant-rate and random growth; and those that result from mechanisms operating in the closed system, such as Ostwald ripening, random ripening, and crystal coalescence. In addition, CSDs for two types of weathering reactions can be simulated. The operation of associated programs also is described, including two statistical programs used for comparing calculated with measured CSDs, a program used for calculating lognormal CSDs, and a program for arranging measured crystal sizes into size groupings (bins).
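Surface-controlled (proportionate) growth of the kind GALOPER simulates tends to drive a CSD toward lognormal shape, which a minimal sketch can reproduce; the growth-rate constant, cycle count, and starting sizes below are arbitrary, not GALOPER's parameters.

```python
import numpy as np

rng = np.random.default_rng(2)

# Law of Proportionate Effect: each growth cycle, a crystal grows by a random
# fraction of its current size, so log(size) is a sum of many small random terms.
sizes = np.full(5000, 1.0)               # start from identical nuclei (arbitrary units)
for _ in range(50):
    sizes *= 1.0 + 0.05 * rng.random(sizes.size)

# After many cycles the log sizes are approximately normal (lognormal CSD)
log_sizes = np.log(sizes)
skew = ((log_sizes - log_sizes.mean())**3).mean() / log_sizes.std()**3
print("skewness of log sizes:", round(skew, 3))   # near zero for a lognormal CSD
```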

  2. Determination of lateral size distribution of type-II ZnTe/ZnSe stacked submonolayer quantum dots via spectral analysis of optical signature of the Aharanov-Bohm excitons

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ji, Haojie; Dhomkar, Siddharth; Roy, Bidisha

    2014-10-28

    For submonolayer quantum dot (QD) based photonic devices, size and density of QDs are critical parameters, the probing of which requires indirect methods. We report the determination of lateral size distribution of type-II ZnTe/ZnSe stacked submonolayer QDs, based on spectral analysis of the optical signature of Aharanov-Bohm (AB) excitons, complemented by photoluminescence studies, secondary-ion mass spectroscopy, and numerical calculations. Numerical calculations are employed to determine the AB transition magnetic field as a function of the type-II QD radius. The study of four samples grown with different tellurium fluxes shows that the lateral size of QDs increases by just 50%, evenmore » though tellurium concentration increases 25-fold. Detailed spectral analysis of the emission of the AB exciton shows that the QD radii take on only certain values due to vertical correlation and the stacked nature of the QDs.« less

  3. On sample size of the Kruskal-Wallis test with application to a mouse peritoneal cavity study.

    PubMed

    Fan, Chunpeng; Zhang, Donghui; Zhang, Cun-Hui

    2011-03-01

    As the nonparametric generalization of the one-way analysis of variance model, the Kruskal-Wallis test applies when the goal is to test the difference between multiple samples and the underlying population distributions are nonnormal or unknown. Although the Kruskal-Wallis test has been widely used for data analysis, power and sample size methods for this test have been investigated to a much lesser extent. This article proposes new power and sample size calculation methods for the Kruskal-Wallis test based on the pilot study in either a completely nonparametric model or a semiparametric location model. No assumption is made on the shape of the underlying population distributions. Simulation results show that, in terms of sample size calculation for the Kruskal-Wallis test, the proposed methods are more reliable and preferable to some more traditional methods. A mouse peritoneal cavity study is used to demonstrate the application of the methods. © 2010, The International Biometric Society.
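A Monte Carlo power estimate in the spirit of the paper (though not the authors' pilot-study-based formulas) might look like this, assuming normal location-shifted groups; the group count, shifts, and per-group n are illustrative.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)

def kw_power(n_per_group, shifts, n_sim=2000, alpha=0.05):
    """Monte Carlo power of the Kruskal-Wallis test for groups that differ
    only by a location shift (normal errors assumed here for illustration)."""
    rejections = 0
    for _ in range(n_sim):
        groups = [rng.normal(loc=s, scale=1.0, size=n_per_group) for s in shifts]
        if stats.kruskal(*groups).pvalue < alpha:
            rejections += 1
    return rejections / n_sim

power = kw_power(20, [0.0, 0.5, 1.0])
print("estimated power:", power)
```

To size a study, one would increase `n_per_group` until the estimated power clears the target (e.g., 0.8), which is the simulation analogue of the paper's sample size calculation.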

  4. A Broadband Microwave Radiometer Technique at X-band for Rain and Drop Size Distribution Estimation

    NASA Technical Reports Server (NTRS)

    Meneghini, R.

    2005-01-01

    Radiometric brightness temperatures below about 12 GHz provide accurate estimates of path attenuation through precipitation and cloud water. Multiple brightness temperature measurements at X-band frequencies can be used to estimate rainfall rate and parameters of the drop size distribution once correction for cloud water attenuation is made. Employing a stratiform storm model, calculations of the brightness temperatures at 9.5, 10 and 12 GHz are used to simulate estimates of path-averaged median mass diameter, number concentration and rainfall rate. The results indicate that reasonably accurate estimates of rainfall rate and information on the drop size distribution can be derived over ocean under low to moderate wind speed conditions.

  5. The variance of dispersion measure of high-redshift transient objects as a probe of ionized bubble size during reionization

    NASA Astrophysics Data System (ADS)

    Yoshiura, Shintaro; Takahashi, Keitaro

    2018-01-01

    The dispersion measure (DM) of high-redshift (z ≳ 6) transient objects such as fast radio bursts can be a powerful tool to probe the intergalactic medium during the Epoch of Reionization. In this paper, we study the variance of the DMs of objects with the same redshift as a potential probe of the size distribution of ionized bubbles. We calculate the DM variance with a simple model with randomly distributed spherical bubbles. It is found that the DM variance reflects the characteristics of the probability distribution of the bubble size. We find that the variance can be measured precisely enough to obtain the information on the typical size with a few hundred sources at a single redshift.
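The qualitative effect, that at a fixed ionized fraction larger bubbles produce a larger DM variance, can be reproduced with a toy sightline model. The geometry, units, and filling fraction below are arbitrary, and bubble overlap and edge effects are ignored; this is a sketch of the idea, not the authors' model.

```python
import numpy as np

rng = np.random.default_rng(3)

def dm_variance(r_bubble, box=100.0, filling=0.3, n_los=2000):
    """Toy model: a sightline's DM contribution is taken proportional to its
    ionized path length through randomly placed spherical bubbles."""
    n_bub = int(filling * box**3 / (4.0 / 3.0 * np.pi * r_bubble**3))
    centers = rng.uniform(0.0, box, size=(n_bub, 2))   # only x, y matter for z-axis sightlines
    dms = np.empty(n_los)
    for k in range(n_los):
        x0, y0 = rng.uniform(0.0, box, 2)
        d2 = (centers[:, 0] - x0) ** 2 + (centers[:, 1] - y0) ** 2
        hit = d2 < r_bubble ** 2
        dms[k] = 2.0 * np.sqrt(r_bubble ** 2 - d2[hit]).sum()  # summed chord lengths
    return dms.var()

v_small, v_large = dm_variance(2.0), dm_variance(8.0)
print("DM variance, r=2:", round(v_small, 1))
print("DM variance, r=8:", round(v_large, 1))  # fewer, larger bubbles -> larger scatter
```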

  6. Shipboard Sunphotometer Measurements of Aerosol Optical Depth Spectra and Columnar Water Vapor During ACE-2 and Comparison with Selected Land, Ship, Aircraft, and Satellite Measurements

    NASA Technical Reports Server (NTRS)

    Livingston, John M.; Kapustin, Vladimir N.; Schmid, Beat; Russell, Philip B.; Quinn, Patricia K.; Bates, Timothy S.; Durkee, Philip A.; Smith, Peter J.; Freudenthaler, Volker; Wiegner, Matthias

    2000-01-01

    Analyses of aerosol optical depth (AOD) and column water vapor (CWV) measurements acquired with NASA Ames Research Center's 6-channel Airborne Tracking Sunphotometer (AATS-6) operated aboard the R/V Professor Vodyanitskiy during the 2nd Aerosol Characterization Experiment (ACE-2) are discussed. Data are compared with various in situ and remote measurements for selected cases. The focus is on 10 July, when the Pelican airplane flew within 70 km of the ship near the time of a NOAA-14/AVHRR satellite overpass and AOD measurements with the 14-channel Ames Airborne Tracking Sunphotometer (AATS-14) above the marine boundary layer (MBL) permitted calculation of AOD within the MBL from the AATS-6 measurements. A detailed column closure test is performed for MBL AOD on 10 July by comparing the AATS-6 MBL AODs with corresponding values calculated by combining shipboard particle size distribution measurements with models of hygroscopic growth and radiosonde humidity profiles (plus assumptions on the vertical profile of the dry particle size distribution and composition). Large differences (30-80% in the mid-visible) between measured and reconstructed AODs are obtained, in large part because of the high sensitivity of the closure methodology to hygroscopic growth models, which vary considerably and have not been validated over the necessary range of particle size/composition distributions. The wavelength dependence of AATS-6 AODs is compared with the corresponding dependence of aerosol extinction calculated from shipboard measurements of aerosol size distribution and of total scattering measured by a shipboard integrating nephelometer for several days. Results are highly variable, illustrating further the great difficulty of deriving column values from point measurements. 
AATS-6 CWV values are shown to agree well with corresponding values derived from radiosonde measurements during 8 soundings on 7 days and also with values calculated from measurements taken on 10 July with the AATS-14 and the University of Washington Passive Humidigraph aboard the Pelican.

  7. Theoretical analysis of the influence of aerosol size distribution and physical activity on particle deposition pattern in human lungs.

    PubMed

    Voutilainen, Arto; Kaipio, Jari P; Pekkanen, Juha; Timonen, Kirsi L; Ruuskanen, Juhani

    2004-01-01

    A theoretical comparison of modeled particle depositions in the human respiratory tract was performed by taking into account different particle number and mass size distributions and physical activity in an urban environment. Urban-air data on particulate concentrations in the size range 10 nm-10 microm were used to estimate the hourly average particle number and mass size distribution functions. The functions were then combined with the deposition probability functions obtained from a computerized ICRP 66 deposition model of the International Commission on Radiological Protection to calculate the numbers and masses of particles deposited in five regions of the respiratory tract of a male adult. The man's physical activity and minute ventilation during the day were taken into account in the calculations. Two different mass and number size distributions of aerosol particles with equal (computed) <10 microm particle mass concentrations gave clearly different deposition patterns in the central and peripheral regions of the human respiratory tract. The deposited particle numbers and masses were much higher during the day (0700-1900) than during the night (1900-0700) because an increase in physical activity and ventilation were temporally associated with highly increased traffic-derived particles in urban outdoor air. In future analyses of the short-term associations between particulate air pollution and health, it would not only be important to take into account the outdoor-to-indoor penetration of different particle sizes and human time-activity patterns, but also actual lung deposition patterns and physical activity in significant microenvironments.

  8. An Atomic Lens Using a Focusing Hollow Beam

    NASA Astrophysics Data System (ADS)

    Xia, Yong; Yin, Jian-Ping; Wang, Yu-Zhu

    2003-05-01

    We propose a new method to generate a focused hollow laser beam by using an azimuthally distributed 2π-phase plate and a convergent thin lens, and calculate the intensity distribution of the focused hollow beam in free propagation space. The relationship between the waist w0 of the incident collimated Gaussian beam and the dark spot size of the focused hollow beam at the focal point, and the relationship between the focal length f of the thin lens and the dark spot size, are studied respectively. The optical potential of the blue-detuned focused hollow beam for 85Rb atoms is calculated. Our study shows that when a larger waist w0 of the incident Gaussian beam and a shorter focal length f of the lens are chosen, we can obtain an extremely small dark spot size of the focused hollow beam, which can be used to form an atomic lens with a resolution of several angstroms.

  9. A note on sample size calculation for mean comparisons based on noncentral t-statistics.

    PubMed

    Chow, Shein-Chung; Shao, Jun; Wang, Hansheng

    2002-11-01

    One-sample and two-sample t-tests are commonly used in analyzing data from clinical trials in comparing mean responses from two drug products. During the planning stage of a clinical study, a crucial step is the sample size calculation, i.e., the determination of the number of subjects (patients) needed to achieve a desired power (e.g., 80%) for detecting a clinically meaningful difference in the mean drug responses. Based on noncentral t-distributions, we derive some sample size calculation formulas for testing equality, testing therapeutic noninferiority/superiority, and testing therapeutic equivalence, under the popular one-sample design, two-sample parallel design, and two-sample crossover design. Useful tables are constructed and some examples are given for illustration.
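A sketch of the two-sample equality case based on the noncentral t distribution follows; this is the standard construction for this problem, not necessarily the paper's exact formula, and the effect size in the usage line is arbitrary.

```python
import numpy as np
from scipy import stats

def power_two_sample(n, delta, sigma=1.0, alpha=0.05):
    """Power of the two-sided two-sample t-test (n subjects per arm),
    computed exactly from the noncentral t distribution."""
    df = 2 * n - 2
    nc = delta / (sigma * np.sqrt(2.0 / n))            # noncentrality parameter
    tcrit = stats.t.ppf(1.0 - alpha / 2.0, df)
    return (1.0 - stats.nct.cdf(tcrit, df, nc)) + stats.nct.cdf(-tcrit, df, nc)

def sample_size(delta, sigma=1.0, alpha=0.05, target=0.8):
    """Smallest n per arm achieving the target power."""
    n = 2
    while power_two_sample(n, delta, sigma, alpha) < target:
        n += 1
    return n

n = sample_size(0.5)   # n per arm to detect a half-sigma mean difference at 80% power
print("n per arm:", n)
```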

  10. Microstructure characterization of 316L deformed at high strain rates using EBSD

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yvell, K., E-mail: kyv@du.se

    2016-12-15

    Specimens from split Hopkinson pressure bar experiments, at strain rates between ~1000–9000 s⁻¹ at room temperature and 500 °C, have been studied using electron backscatter diffraction. No significant differences in the microstructures were observed at different strain rates, but differences were observed for different strains and temperatures. The size distribution for subgrains with boundary misorientations > 2° can be described as a bimodal lognormal area distribution. The distributions were found to change due to deformation: the part of the distribution describing the large subgrains decreased while the distribution for the small subgrains increased. This is in accordance with deformation being heterogeneous and successively spreading into the undeformed part of individual grains. The average size for the small subgrain distribution varies with strain but not with strain rate in the tested interval. The mean free distance for dislocation slip, interpreted here as the average size of the distribution of small subgrains, displays a variation with plastic strain which is in accordance with the different stages in the stress-strain curves. The rate of deformation hardening in the linear hardening range is accurately calculated using the variation of the small subgrain size with strain. - Highlights: •Only changes in strain, not strain rate, gave differences in the microstructure. •A bimodal lognormal size distribution was found to describe the size distribution. •Variation of the subgrain fraction sizes agrees with models for heterogeneous slip. •Variation of subgrain size with strain describes part of the stress strain curve.

  11. Effects of Hyperfine Particles on Reflectance Spectra from 0.3 to 25 μm

    NASA Astrophysics Data System (ADS)

    Mustard, John F.; Hays, John E.

    1997-01-01

    Fine-grained particles <50 μm in size dominate particle size distributions of many planetary surfaces. Despite the predominance of fine particles in planetary regoliths, there have been few investigations of the systematic effects of the finest particles on reflectance spectra, and on the ability of quantitative models to extract compositional and/or textural information from remote observations. The effects of fine particles that are approximately the same size as the wavelength of light on reflectance spectra were investigated using narrow particle size separates of the minerals olivine and quartz across the wavelength range 0.3 to 25 μm. The minerals were ground with a mortar and pestle and sieved into five particle size separates of 5-μm intervals from <5 μm to 20-25 μm. The exact particle size distributions were determined with a particle size analyzer and are shown to be Gaussian about a mean within the range of each sieve separate. The reflectance spectra, obtained using a combination of a bidirectional reflectance spectrometer and an FTIR, exhibited a number of systematic changes as the particle size decreased to become approximately the same size and smaller than the wavelength. In the region of volume scattering, the spectra exhibited a sharp drop in reflectance with the finest particle size separates. Christiansen features became saturated when the imaginary part of the index of refraction was non-negligible, while the reststrahlen bands showed continuous decrease in spectral contrast and some change in the shape of the bands with decreasing particle size, though the principal features diagnostic of composition were relatively unaffected. The transparency features showed several important changes with decreasing particle size: the spectral contrast increased then decreased, the position of the maximum reflectance of the transparency features shifted systematically to shorter wavelengths, and the symmetry of the features changed. 
Mie theory predicts that the extinction and scattering efficiencies should decline rapidly when particle size and wavelength are approximately equal. Using these relationships, a critical diameter where this change is predicted to occur was calculated as a function of wavelength and shown to be effective for explaining qualitatively the observed changes. Each of the mineral particle size series were then modeled quantitatively using Mie calculations to determine single-scattering albedo and a Hapke model to calculate reflectance. The models include the complex indices of refraction for olivine and quartz and the exact particle size distributions. The olivine particle size series was well modeled by these calculations, and correctly reproduced the systematic changes in the volume scattering region, the Christiansen feature, restrahlen bands, and transparency features. The quartz particle size series were less well modeled, with the greatest discrepancies in the restrahlen bands and the overall spectral contrast.
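The size-parameter argument above can be made concrete with a minimal sketch. The Mie size parameter is x = πD/λ; scattering and extinction efficiencies fall off rapidly once x drops toward unity. The threshold x_crit = 1 used below is an illustrative choice, not the calibration from the paper:

```python
import math

def size_parameter(diameter_um: float, wavelength_um: float) -> float:
    """Mie size parameter x = pi * D / lambda (dimensionless)."""
    return math.pi * diameter_um / wavelength_um

def critical_diameter_um(wavelength_um: float, x_crit: float = 1.0) -> float:
    """Diameter at which the size parameter falls to x_crit.

    Below this diameter, Mie theory predicts a rapid decline in the
    extinction and scattering efficiencies. x_crit = 1 is an
    illustrative threshold, not the value used in the study.
    """
    return x_crit * wavelength_um / math.pi

# For 10-um light, particles below ~3.2 um fall into the regime where
# the scattering efficiency drops rapidly.
d_c = critical_diameter_um(10.0)
```

For mid-infrared wavelengths this places the <5 μm sieve fraction squarely in the regime where the efficiencies collapse, consistent with the observed spectral changes.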

  12. Size distribution of radon daughter particles in uranium mine atmospheres.

    PubMed

    George, A C; Hinchliffe, L; Sladowski, R

    1975-06-01

    The size distribution of radon daughters was measured in several uranium mines using four compact diffusion batteries and a round jet cascade impactor. Simultaneously, measurements were made of uncombined fractions of radon daughters, radon concentration, working level and particle concentration. The size distributions found for radon daughters were log-normal. The activity median diameters ranged from 0.09 μm to 0.3 μm with a mean value of 0.17 μm. Geometric standard deviations were in the range from 1.3 to 4 with a mean value of 2.7. Uncombined fractions expressed in accordance with the ICRP definition ranged from 0.004 to 0.16 with a mean value of 0.04. The radon daughter sizes in these mines are greater than the sizes assumed by various authors in calculating respiratory tract dose. The disparity may reflect the widening use of diesel-powered equipment in large uranium mines.
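The two log-normal summary statistics reported above (activity median diameter and geometric standard deviation) are the geometric mean and geometric standard deviation of the measured diameters. A stdlib sketch, with made-up diameters rather than the mine data:

```python
import math
import statistics

def lognormal_params(diameters_um):
    """Geometric mean (the median of a log-normal) and geometric
    standard deviation of a set of measured diameters."""
    logs = [math.log(d) for d in diameters_um]
    geo_mean = math.exp(statistics.fmean(logs))
    geo_sd = math.exp(statistics.stdev(logs))
    return geo_mean, geo_sd

# Illustrative diameters (um), not the published mine measurements.
sample = [0.09, 0.12, 0.15, 0.17, 0.20, 0.25, 0.30]
amd, gsd = lognormal_params(sample)
```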

  13. Sample size determination for logistic regression on a logit-normal distribution.

    PubMed

    Kim, Seongho; Heath, Elisabeth; Heilbrun, Lance

    2017-06-01

    Although the sample size for simple logistic regression can be readily determined using currently available methods, the sample size calculation for multiple logistic regression requires some additional information, such as the coefficient of determination ([Formula: see text]) of a covariate of interest with other covariates, which is often unavailable in practice. The response variable of logistic regression follows a logit-normal distribution which can be generated from a logistic transformation of a normal distribution. Using this property of logistic regression, we propose new methods of determining the sample size for simple and multiple logistic regressions using a normal transformation of outcome measures. Simulation studies and a motivating example show several advantages of the proposed methods over the existing methods: (i) no need for [Formula: see text] for multiple logistic regression, (ii) available interim or group-sequential designs, and (iii) much smaller required sample size.
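The normal-transformation property the method rests on can be shown with a stdlib sketch: a logit-normal response is the logistic transform of a normal variable, so applying the logit to the outcomes recovers the original normal scale (the mean and standard deviation below are hypothetical):

```python
import math
import random

def logistic(z):
    """Inverse-logit: maps the real line onto (0, 1)."""
    return 1.0 / (1.0 + math.exp(-z))

def logit(p):
    return math.log(p / (1.0 - p))

def sample_logit_normal(mu, sigma, n, rng):
    """Draw from a logit-normal distribution: the logistic transform
    of N(mu, sigma) samples."""
    return [logistic(rng.gauss(mu, sigma)) for _ in range(n)]

rng = random.Random(0)
draws = sample_logit_normal(mu=0.5, sigma=1.0, n=10_000, rng=rng)
# The logit of the draws recovers (approximately) the normal mean.
back = [logit(p) for p in draws]
mean_back = sum(back) / len(back)
```

Working on this recovered normal scale is what lets standard normal-theory sample size and group-sequential machinery apply.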

  14. Influence of particle size distribution on reflected and transmitted light from clouds.

    PubMed

    Kattawar, G W; Plass, G N

    1968-05-01

    The light reflected and transmitted from clouds with various drop size distributions is calculated by a Monte Carlo technique. Six different models are used for the drop size distribution: isotropic, Rayleigh, haze continental, haze maritime, cumulus, and nimbostratus. The scattering function for each model is calculated from the Mie theory. In general, the reflected and transmitted radiances for the isotropic and Rayleigh models tend to be similar, as are those for the various haze and cloud models. The reflected radiance is less for the haze and cloud models than for the isotropic and Rayleigh models, except for an angle of incidence near the horizon when it is larger around the incident beam direction. The transmitted radiance is always much larger for the haze and cloud models near the incident direction; at distant angles it is less for small and moderate optical thicknesses and greater for large optical thicknesses (all comparisons to isotropic and Rayleigh models). The downward flux, cloud albedo, and mean optical path are discussed. The angular spread of the beam as a function of optical thickness is shown for the nimbostratus model.
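The Monte Carlo approach can be sketched in miniature. The toy below traces photons through a non-absorbing plane-parallel slab with isotropic scattering only (a stand-in for the Mie phase functions used in the paper) and tallies reflectance and transmittance:

```python
import math
import random

def slab_monte_carlo(tau_total, n_photons=20_000, seed=1):
    """Reflectance and transmittance of a non-absorbing, isotropically
    scattering plane-parallel slab, by Monte Carlo photon tracing.
    tau_total is the slab's vertical optical thickness."""
    rng = random.Random(seed)
    reflected = transmitted = 0
    for _ in range(n_photons):
        tau = 0.0   # photon's current vertical optical depth
        mu = 1.0    # direction cosine; photon enters heading downward
        while True:
            # Free path to the next scattering event, projected vertically.
            tau += mu * -math.log(1.0 - rng.random())
            if tau <= 0.0:
                reflected += 1
                break
            if tau >= tau_total:
                transmitted += 1
                break
            mu = rng.uniform(-1.0, 1.0)  # isotropic re-direction
    return reflected / n_photons, transmitted / n_photons

r, t = slab_monte_carlo(tau_total=1.0)
```

With no absorption every photon eventually leaves through the top or the bottom, so r + t = 1 exactly; a real cloud code replaces the isotropic redirection with sampling from the Mie scattering function.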

  15. Enhancement of a 2D front-tracking algorithm with a non-uniform distribution of Lagrangian markers

    NASA Astrophysics Data System (ADS)

    Febres, Mijail; Legendre, Dominique

    2018-04-01

    The 2D front tracking method is enhanced to control the development of spurious velocities for non-uniform distributions of markers. The hybrid formulation of Shin et al. (2005) [7] is considered. A new tangent calculation is proposed for the calculation of the tension force at markers. A new reconstruction method is also proposed to manage non-uniform distributions of markers. We show that for both the static and the translating spherical drop test cases the spurious currents are reduced to machine precision. We also show that the ratio of the Lagrangian grid size Δs over the Eulerian grid size Δx has to satisfy Δs / Δx > 0.2 to ensure such a low level of spurious velocity. The method is found to provide very good agreement with benchmark test cases from the literature.

  16. Physical properties of macromolecule-metal oxide nanoparticle complexes: Magnetophoretic mobility, sizes, and interparticle potentials

    NASA Astrophysics Data System (ADS)

    Mefford, Olin Thompson, IV

    Magnetic nanoparticles coated with polymers hold great promise as materials for applications in biotechnology. In this body of work, magnetic fluids for the treatment of retinal detachment are examined closely in three regimes: motion of ferrofluid droplets in aqueous media, size analysis of the polymer-iron oxide nanoparticles, and calculation of interparticle potentials as a means of predicting fluid stability. The macromolecular ferrofluids investigated herein are composed of magnetite nanoparticles coated with tricarboxylate-functional polydimethylsiloxane (PDMS) oligomers. The nanoparticles were formed by reacting stoichiometric concentrations of iron chloride salts with base. After the magnetite particles were prepared, the functional PDMS oligomers were adsorbed onto the nanoparticle surfaces. The motion of ferrofluid droplets in aqueous media was studied using both theoretical modeling and experimental verification. Droplets (˜1-2 mm in diameter) of ferrofluid were moved through a viscous aqueous medium by an external magnet of measured field and field gradient. Theoretical calculations were made to approximate the forces on the droplet. Using the force calculations, the times required for the droplet to travel across particular distances were estimated. These estimated times were in close agreement with experimental values. Characterization of the sizes of the nanoparticles was particularly important, since the size of the magnetite core affects the magnetic properties of the system, as well as the long-term stability of the nanoparticles against flocculation. Transmission electron microscopy (TEM) was used to measure the sizes and size distributions of the magnetite cores. Image analyses were conducted on the TEM micrographs to measure the sizes of approximately 6000 particles per sample. Distributions of the diameters of the magnetite cores were determined from this data.
A method for calculating the total particle size, including the magnetite core and the adsorbed polymer, in organic dispersions was established. These estimated values were compared to measurements of the entire complex utilizing dynamic light scattering (DLS). Better agreement was found for narrow particle size distributions than for broader ones. The stability against flocculation of the complexes over time in organic media was examined via modified Derjaguin-Landau-Verwey-Overbeek (DLVO) calculations. DLVO theory allows for predicting the total particle-particle interaction potentials, which include steric and electrostatic repulsions as well as van der Waals and magnetic attractions. The interparticle potentials can be determined as a function of separation of the particle surfaces. At a constant molecular weight of the polymer dispersion stabilizer, these calculations indicated that dispersions of smaller PDMS-magnetite particles should be more stable than those containing larger particles. The rheological characteristics of neat magnetite-PDMS complexes (i.e., no solvent or carrier fluid was present) were measured over time in the absence of an applied magnetic field to probe the expected properties upon storage. The viscosity of a neat ferrofluid increased over the course of a month, indicating that some aggregation occurred. However, this effect could be removed by shearing the fluids at a high rate. This suggests that the particles do not irreversibly flocculate under these conditions.
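Two of the four DLVO-type terms mentioned above, the van der Waals and magnetic dipole attractions, are easy to sketch. The close-approach sphere-sphere van der Waals form and head-to-tail point-dipole magnetic form below are standard approximations, and the radius, Hamaker constant, and saturation magnetization are illustrative magnetite-like numbers, not values fitted in the thesis:

```python
import math

MU0 = 4e-7 * math.pi           # vacuum permeability, T*m/A
KT = 1.380649e-23 * 298.0      # thermal energy at 298 K, J

def vdw_sphere_sphere(h, radius, hamaker):
    """Close-approach van der Waals attraction between equal spheres,
    V = -A*R/(12*h). Valid for surface separation h << R."""
    return -hamaker * radius / (12.0 * h)

def magnetic_dipole(h, radius, m_s):
    """Head-to-tail point-dipole attraction between two uniformly
    magnetized spheres of saturation magnetization m_s (A/m)."""
    volume = 4.0 / 3.0 * math.pi * radius ** 3
    moment = m_s * volume
    r = 2.0 * radius + h       # center-to-center separation
    return -MU0 * moment ** 2 / (2.0 * math.pi * r ** 3)

# Illustrative numbers: 5 nm radius, A = 1e-19 J, M_s = 4.8e5 A/m,
# surfaces 2 nm apart.
R, A, MS = 5e-9, 1e-19, 4.8e5
h = 2e-9
total_kT = (vdw_sphere_sphere(h, R, A) + magnetic_dipole(h, R, MS)) / KT
```

Because the magnetic term scales with the square of the particle volume, it grows much faster with core radius than the van der Waals term, which is the origin of the thesis's conclusion that dispersions of smaller particles should be more stable.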

  17. Investigation of the Specht density estimator

    NASA Technical Reports Server (NTRS)

    Speed, F. M.; Rydl, L. M.

    1971-01-01

    The feasibility of using the Specht density estimator function on the IBM 360/44 computer is investigated. Factors such as storage, speed, amount of calculations, size of the smoothing parameter and sample size have an effect on the results. The reliability of the Specht estimator for normal and uniform distributions and the effects of the smoothing parameter and sample size are investigated.
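Specht's estimator is a polynomial-expansion approximation of the Parzen window density estimate; the Gaussian-kernel form underlying it is easy to sketch, which also shows exactly where the smoothing parameter enters (sample values and sigma below are illustrative):

```python
import math

def parzen_gaussian(x, samples, sigma):
    """Parzen-window density estimate with a Gaussian kernel and
    smoothing parameter sigma. Specht's estimator approximates this
    sum with a polynomial expansion to reduce storage and computation."""
    norm = 1.0 / (len(samples) * sigma * math.sqrt(2.0 * math.pi))
    return norm * sum(math.exp(-((x - s) ** 2) / (2.0 * sigma ** 2))
                      for s in samples)

samples = [-0.4, -0.1, 0.0, 0.2, 0.5]
density_at_0 = parzen_gaussian(0.0, samples, sigma=0.3)
```

The trade-offs the report investigates fall straight out of this form: cost grows with sample size (one kernel term per sample) and the estimate's smoothness is controlled entirely by sigma.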

  18. Spatial Distribution of Bed Particles in Natural Boulder-Bed Streams

    NASA Astrophysics Data System (ADS)

    Clancy, K. F.; Prestegaard, K. L.

    2001-12-01

    The Wolman pebble count is used to obtain the size distribution of bed particles in natural streams. Statistics such as median particle size (D50) are used in resistance calculations. Additional information such as bed particle heterogeneity may also be obtained from the particle distribution, which is used to predict sediment transport rates (Hey, 1979), (Ferguson, Prestegaard, Ashworth, 1989). Boulder-bed streams have an extremely broad particle size distribution, ranging from sand-sized particles to particles larger than 0.5 m. A study of a natural boulder-bed reach demonstrated that the spatial distribution of the particles is a significant factor in predicting sediment transport and stream bed and bank stability. Further experiments were performed to test the limits of the spatial distribution's effect on sediment transport. Three stream reaches 40 m in length were selected with similar hydrologic characteristics and spatial distributions but varying average particle sizes. We used a 0.5 by 0.5-m grid and measured four particles within each grid cell. Digital photographs of the streambed were taken in each grid cell. The photographs were examined using image analysis software to obtain particle size and position of the largest particles (D84) within the reach's particle distribution. Cross section, topography and stream depth were surveyed. Velocity and velocity profiles were measured and recorded. With these data and additional surveys of bankfull floods, we tested the significance of the spatial distributions as average particle size decreases. The spatial distribution of streambed particles may provide information about stream valley formation, bank stability, sediment transport, and the growth rate of riparian vegetation.
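The D50 and D84 statistics mentioned above are simply percentiles of the pebble-count sample. A stdlib sketch using linear interpolation on the sorted diameters (the count below is invented, not data from the study reaches):

```python
def percentile_diameter(diameters_mm, fraction):
    """Grain-size percentile (e.g. D50, D84) from a Wolman pebble
    count, by linear interpolation on the sorted sample."""
    d = sorted(diameters_mm)
    pos = fraction * (len(d) - 1)
    lo = int(pos)
    hi = min(lo + 1, len(d) - 1)
    return d[lo] + (d[hi] - d[lo]) * (pos - lo)

# Hypothetical pebble count spanning sand to boulders (mm).
count = [2, 8, 16, 22, 45, 64, 90, 128, 180, 256, 512]
d50 = percentile_diameter(count, 0.50)
d84 = percentile_diameter(count, 0.84)
```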

  19. Energetics and Self-Assembly of Amphipathic Peptide Pores in Lipid Membranes

    PubMed Central

    Zemel, Assaf; Fattal, Deborah R.; Ben-Shaul, Avinoam

    2003-01-01

    We present a theoretical study of the energetics, equilibrium size, and size distribution of membrane pores composed of electrically charged amphipathic peptides. The peptides are modeled as cylinders (mimicking α-helices) carrying different amounts of charge, with the charge being uniformly distributed over a hydrophilic face, defined by the angle subtended by polar amino acid residues. The free energy of a pore of a given radius, R, and a given number of peptides, s, is expressed as a sum of the peptides' electrostatic charging energy (calculated using Poisson-Boltzmann theory), and the lipid-perturbation energy associated with the formation of a membrane rim (which we model as being semitoroidal) in the gap between neighboring peptides. A simple phenomenological model is used to calculate the membrane perturbation energy. The balance between the opposing forces (namely, the radial free energy derivatives) associated with the electrostatic free energy that favors large R, and the membrane perturbation term that favors small R, dictates the equilibrium properties of the pore. Systematic calculations are reported for circular pores composed of various numbers of peptides, carrying different amounts of charge (1–6 elementary, positive charges) and characterized by different polar angles. We find that the optimal R's, for all (except, possibly, very weakly) charged peptides conform to the “toroidal” pore model, whereby a membrane rim larger than ∼1 nm intervenes between neighboring peptides. Only weakly charged peptides are likely to form “barrel-stave” pores where the peptides essentially touch one another. Treating pore formation as a two-dimensional self-assembly phenomenon, a simple statistical thermodynamic model is formulated and used to calculate pore size distributions. We find that the average pore size and size polydispersity increase with peptide charge and with the amphipathic polar angle. 
We also argue that the transition of peptides from the adsorbed to the inserted (membrane pore) state is cooperative and thus occurs rather abruptly upon a change in ambient conditions. PMID:12668433
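The statistical-thermodynamic step, treating pore sizes as an equilibrium population, can be caricatured with a Boltzmann-weighted sketch. The free-energy values below are hypothetical, and the real model also tracks the peptide chemical potential and two-dimensional translational entropy rather than simply normalizing weights:

```python
import math

def pore_size_distribution(free_energy_kT):
    """Relative equilibrium populations of pore aggregation numbers s,
    weighted by exp(-F(s)/kT). A toy version of the two-dimensional
    self-assembly model described in the abstract."""
    weights = {s: math.exp(-f) for s, f in free_energy_kT.items()}
    z = sum(weights.values())
    return {s: w / z for s, w in weights.items()}

# Hypothetical pore free energies (in units of kT), minimum at s = 6.
f = {4: 3.0, 5: 1.0, 6: 0.0, 7: 1.2, 8: 3.5}
p = pore_size_distribution(f)
```

A shallow free-energy minimum yields a broad, polydisperse size distribution; a deep one yields a sharply peaked distribution, which is the qualitative trend the paper reports as peptide charge and polar angle vary.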

  20. Percent area coverage through image analysis

    NASA Astrophysics Data System (ADS)

    Wong, Chung M.; Hong, Sung M.; Liu, De-Ling

    2016-09-01

    The notion of percent area coverage (PAC) has been used to characterize surface cleanliness levels in the spacecraft contamination control community. Due to the lack of detailed particle data, PAC has been conventionally calculated by multiplying the particle surface density in predetermined particle size bins by a set of coefficients per MIL-STD-1246C. In deriving the set of coefficients, the surface particle size distribution is assumed to follow a log-normal relation between particle density and particle size, while the cross-sectional area function is given as a combination of regular geometric shapes. For particles with irregular shapes, the cross-sectional area function cannot describe the true particle area and, therefore, may introduce error in the PAC calculation. Other errors may also be introduced by using the log-normal surface particle size distribution function, which depends strongly on the environmental cleanliness and cleaning process. In this paper, we present PAC measurements from silicon witness wafers that collected fallouts from a fabric material after vibration testing. PAC calculations were performed through analysis of microscope images and compared to values derived through the MIL-STD-1246C method. Our results showed that the MIL-STD-1246C method does provide a reasonable upper bound to the PAC values determined through image analysis, in particular for PAC values below 0.1.
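The binned PAC calculation described above can be sketched by assuming each counted particle is a circle at its bin diameter. This circular-area assumption stands in for the MIL-STD-1246C coefficients (the standard's actual cross-sectional-area function combines several geometric shapes), and the counts and wafer size below are hypothetical:

```python
import math

def pac_percent(bin_diameters_um, bin_counts, wafer_area_cm2):
    """Percent area coverage from binned particle counts, treating each
    particle as a circle at its bin's diameter. A simplified stand-in
    for the MIL-STD-1246C coefficient method."""
    um2_per_cm2 = 1e8
    covered_um2 = sum(n * math.pi * (d / 2.0) ** 2
                      for d, n in zip(bin_diameters_um, bin_counts))
    return 100.0 * covered_um2 / (wafer_area_cm2 * um2_per_cm2)

# Hypothetical fallout counts on a 100-mm wafer (area ~78.5 cm^2).
pac = pac_percent([5, 15, 25, 50], [4000, 900, 200, 20], 78.5)
```

Image analysis replaces the assumed per-bin area with each particle's measured projected area, which is why the two approaches diverge for irregular particles.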

  1. A note on power and sample size calculations for the Kruskal-Wallis test for ordered categorical data.

    PubMed

    Fan, Chunpeng; Zhang, Donghui

    2012-01-01

    Although the Kruskal-Wallis test has been widely used to analyze ordered categorical data, power and sample size methods for this test have been investigated to a much lesser extent when the underlying multinomial distributions are unknown. This article generalizes the power and sample size procedures proposed by Fan et al. ( 2011 ) for continuous data to ordered categorical data, when estimates from a pilot study are used in the place of knowledge of the true underlying distribution. Simulations show that the proposed power and sample size formulas perform well. A myelin oligodendrocyte glycoprotein (MOG) induced experimental autoimmune encephalomyelitis (EAE) mouse study is used to demonstrate the application of the methods.
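The idea of driving a power calculation from pilot multinomial estimates can be sketched by brute-force simulation rather than the article's formulas: draw ordered-categorical groups from the pilot probabilities, apply the Kruskal-Wallis test (re-implemented here with midrank tie correction), and count rejections. The pilot probabilities and group size are hypothetical:

```python
import random

def kruskal_wallis_h(groups):
    """Kruskal-Wallis H statistic with tie correction (midranks)."""
    pooled = sorted((v, gi) for gi, g in enumerate(groups) for v in g)
    n = len(pooled)
    ranks = [0.0] * n
    ties = 0
    i = 0
    while i < n:
        j = i
        while j < n and pooled[j][0] == pooled[i][0]:
            j += 1
        mid = (i + j + 1) / 2.0            # shared midrank, 1-based
        for k in range(i, j):
            ranks[k] = mid
        ties += (j - i) ** 3 - (j - i)
        i = j
    rank_sums = [0.0] * len(groups)
    for (val, gi), r in zip(pooled, ranks):
        rank_sums[gi] += r
    h = 12.0 / (n * (n + 1)) * sum(
        rs ** 2 / len(g) for rs, g in zip(rank_sums, groups)) - 3.0 * (n + 1)
    return h / (1.0 - ties / float(n ** 3 - n))

def simulated_power(pilot_probs, n_per_group, n_sim=400, seed=3):
    """Empirical power of the KW test when each group's data are drawn
    from pilot multinomial estimates. 5.991 is the chi-square(df=2)
    0.05 critical value appropriate for three groups."""
    rng = random.Random(seed)
    cats = list(range(len(pilot_probs[0])))
    hits = 0
    for _ in range(n_sim):
        groups = [rng.choices(cats, weights=p, k=n_per_group)
                  for p in pilot_probs]
        if kruskal_wallis_h(groups) > 5.991:
            hits += 1
    return hits / n_sim

# Hypothetical pilot estimates: 3 groups on a 4-level ordinal scale.
pilot = [[0.40, 0.30, 0.20, 0.10],
         [0.25, 0.25, 0.25, 0.25],
         [0.10, 0.20, 0.30, 0.40]]
power = simulated_power(pilot, n_per_group=30)
```

The article's contribution is a closed-form version of this calculation; the simulation is the benchmark such formulas are judged against.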

  2. Application of the graphics processor unit to simulate a near field diffraction

    NASA Astrophysics Data System (ADS)

    Zinchik, Alexander A.; Topalov, Oleg K.; Muzychenko, Yana B.

    2017-06-01

    For many years, computer modeling programs have been used for lecture demonstrations. Most of the existing commercial software, such as Virtual Lab by LightTrans GmbH, is quite expensive and has surplus capabilities for educational tasks. Demonstrating diffraction in the near zone is complex because of the large amount of calculation required to obtain the two-dimensional distribution of amplitude and phase. To date, there are no demonstrations that can show the resulting amplitude and phase distributions without a significant time delay. Even with Fast Fourier Transform (FFT) algorithms, calculating near-zone diffraction for input complex amplitude distributions larger than 2000 × 2000 pixels takes tens of seconds. Our program selects the appropriate propagation operator from a prescribed set of operators including Spectrum of Plane Waves propagation and Rayleigh-Sommerfeld propagation (using convolution). After implementation, we compared the calculation times for near-field diffraction on the GPU and CPU, showing that using the GPU to calculate diffraction patterns in the near zone does increase the overall speed of the algorithm for images of 2048 × 2048 sampling points and more. The modules are implemented as separate dynamic-link libraries and can be used for lecture demonstrations, workshops, self-study, and by students solving various problems such as the phase retrieval task.
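The Spectrum of Plane Waves (angular spectrum) operator mentioned above is easy to sketch in one dimension: FFT the input field, multiply by the propagation transfer function exp(i·k_z·z), and inverse FFT. This stdlib toy (with its own radix-2 FFT) is a 1D illustration of the method, not the paper's 2D GPU implementation; grid and wavelength values are illustrative:

```python
import cmath
import math

def fft(a):
    """Radix-2 Cooley-Tukey FFT; len(a) must be a power of two."""
    n = len(a)
    if n == 1:
        return [complex(a[0])]
    even, odd = fft(a[0::2]), fft(a[1::2])
    out = [0j] * n
    for k in range(n // 2):
        t = cmath.exp(-2j * math.pi * k / n) * odd[k]
        out[k] = even[k] + t
        out[k + n // 2] = even[k] - t
    return out

def ifft(a):
    n = len(a)
    return [x.conjugate() / n for x in fft([v.conjugate() for v in a])]

def angular_spectrum_1d(u0, dx, wavelength, z):
    """Near-field propagation by the plane-wave-spectrum method."""
    n = len(u0)
    k = 2.0 * math.pi / wavelength
    spec = fft(u0)
    for i in range(n):
        fx = (i if i < n // 2 else i - n) / (n * dx)  # spatial frequency
        kz2 = k * k - (2.0 * math.pi * fx) ** 2
        spec[i] *= cmath.exp(1j * cmath.sqrt(kz2) * z)  # decays if kz2 < 0
    return ifft(spec)

# A 64-sample slit illuminated by a unit plane wave of 0.5-um light,
# propagated 20 um.
n, dx = 64, 1e-6
u0 = [1.0 + 0j if 24 <= i < 40 else 0j for i in range(n)]
u1 = angular_spectrum_1d(u0, dx, 0.5e-6, 20e-6)
```

Because every step is an FFT or a pointwise complex multiply, the whole operator maps directly onto GPU kernels, which is the source of the speedup the paper reports.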

  3. Reynolds number scaling to predict droplet size distribution in dispersed and undispersed subsurface oil releases.

    PubMed

    Li, Pu; Weng, Linlu; Niu, Haibo; Robinson, Brian; King, Thomas; Conmy, Robyn; Lee, Kenneth; Liu, Lei

    2016-12-15

    This study was aimed at testing the applicability of modified Weber number scaling with Alaska North Slope (ANS) crude oil, and developing a Reynolds number scaling approach for oil droplet size prediction for high viscosity oils. Dispersant to oil ratio and empirical coefficients were also quantified. Finally, a two-step Rosin-Rammler scheme was introduced for the determination of droplet size distribution. This new approach appeared more advantageous in avoiding the inconsistency in interfacial tension measurements, and consequently delivered concise droplet size prediction. Calculated and observed data correlated well based on Reynolds number scaling. The relation indicated that chemical dispersant played an important role in reducing the droplet size of ANS under different seasonal conditions. The proposed Reynolds number scaling and two-step Rosin-Rammler approaches provide a concise, reliable way to predict droplet size distribution, supporting decision making in chemical dispersant application during an offshore oil spill. Copyright © 2016 Elsevier Ltd. All rights reserved.
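The Rosin-Rammler distribution used in the two-step scheme has the cumulative form F(d) = 1 − exp(−(d/d63)^n). In the paper's scheme the characteristic size comes from the Reynolds-number scaling and the spread is fitted afterwards; the parameter values in this sketch are illustrative, not ANS results:

```python
import math

def rosin_rammler_cdf(d, d_characteristic, spread):
    """Cumulative volume fraction of droplets below size d:
    F(d) = 1 - exp(-(d / d63)**n), where d63 is the size at which
    F = 1 - 1/e and n controls the width of the distribution."""
    return 1.0 - math.exp(-((d / d_characteristic) ** spread))

d63, n_spread = 200.0, 1.8    # microns; hypothetical values
# Median droplet size (d50) from inverting F = 0.5.
median = d63 * math.log(2.0) ** (1.0 / n_spread)
```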

  4. Size-distribution analysis of macromolecules by sedimentation velocity ultracentrifugation and Lamm equation modeling.

    PubMed

    Schuck, P

    2000-03-01

    A new method for the size-distribution analysis of polymers by sedimentation velocity analytical ultracentrifugation is described. It exploits the ability of Lamm equation modeling to discriminate between the spreading of the sedimentation boundary arising from sample heterogeneity and from diffusion. Finite element solutions of the Lamm equation for a large number of discrete noninteracting species are combined with maximum entropy regularization to represent a continuous size-distribution. As in the program CONTIN, the parameter governing the regularization constraint is adjusted by variance analysis to a predefined confidence level. Estimates of the partial specific volume and the frictional ratio of the macromolecules are used to calculate the diffusion coefficients, resulting in relatively high-resolution sedimentation coefficient distributions c(s) or molar mass distributions c(M). It can be applied to interference optical data that exhibit systematic noise components, and it does not require solution or solvent plateaus to be established. More details on the size-distribution can be obtained than from van Holde-Weischet analysis. The sensitivity to the values of the regularization parameter and to the shape parameters is explored with the help of simulated sedimentation data of discrete and continuous model size distributions, and by applications to experimental data of continuous and discrete protein mixtures.

  5. The size distributions of fragments ejected at a given velocity from impact craters

    NASA Technical Reports Server (NTRS)

    O'Keefe, John D.; Ahrens, Thomas J.

    1987-01-01

    The mass distribution of fragments that are ejected at a given velocity for impact craters is modeled to allow extrapolation of laboratory, field, and numerical results to large scale planetary events. The model is semi-empirical in nature and is derived from: (1) numerical calculations of cratering and the resultant mass versus ejection velocity, (2) observed ejecta blanket particle size distributions, (3) an empirical relationship between maximum ejecta fragment size and crater diameter, (4) measurements and theory of maximum ejecta size versus ejecta velocity, and (5) an assumption on the functional form for the distribution of fragments ejected at a given velocity. This model implies that for planetary impacts into competent rock, the distribution of fragments ejected at a given velocity is broad, e.g., 68 percent of the mass of the ejecta at a given velocity contains fragments having a mass less than 0.1 times the mass of the largest fragment moving at that velocity. The broad distribution suggests that in impact processes, additional comminution of ejecta occurs after the upward initial shock has passed in the process of the ejecta velocity vector rotating from an initially downward orientation. This additional comminution produces the broader size distribution in impact ejecta as compared to that obtained in simple brittle failure experiments.

  6. Self-organization of the magnetization in ferromagnetic nanowires

    NASA Astrophysics Data System (ADS)

    Ivanov, A. A.; Orlov, V. A.

    2017-10-01

    Using computer simulation of the magnetization in a polycrystalline ferromagnetic nanowire, we demonstrate the occurrence of a characteristic spatial scale in the magnetization distribution that is unrelated to the domain wall width or the crystallite size: the stochastic domain size. We show that this length enters the spectral density of the pinning force exerted on a domain wall by inhomogeneities of the crystallographic anisotropy. The effective anisotropy constant of a stochastic domain and the distribution of its easy-axis directions are calculated analytically.

  7. A catalogue of normalized intensity functions and polarization from a cloud of particles with a size distribution of alpha to the minus 4th power

    NASA Technical Reports Server (NTRS)

    Craven, P. D.; Gary, G. A.

    1972-01-01

    The Mie theory of light scattering by spheres was used to calculate the scattered intensity functions resulting from single scattering in a polydispersed collection of spheres. The distribution used behaves according to the inverse fourth power law; graphs and tables for the angular dependence of the intensity and polarization for this law are given. The effects of the particle size range and the integration increment are investigated.

  8. The Development of Midlatitude Cirrus Models for MODIS Using FIRE-I, FIRE-II, and ARM In Situ Data

    NASA Technical Reports Server (NTRS)

    Nasiri, Shaima L.; Baum, Bryan A.; Heymsfield, Andrew J.; Yang, Ping; Poellot, Michael R.; Kratz, David P.; Hu, Yong-Xiang

    2002-01-01

    Detailed in situ data from cirrus clouds have been collected during dedicated field campaigns, but the use of the size and habit distribution data has been lagging in the development of more realistic cirrus scattering models. In this study, the authors examine the use of in situ cirrus data collected during three field campaigns to develop more realistic midlatitude cirrus microphysical models. Data are used from the First International Satellite Cloud Climatology Project (ISCCP) Regional Experiment (FIRE)-I (1986) and FIRE-II (1991) campaigns and from a recent Atmospheric Radiation Measurement (ARM) Program campaign held in March-April of 2000. The microphysical models are based on measured vertical distributions of both particle size and particle habit and are used to develop new scattering models for a suite of Moderate Resolution Imaging Spectroradiometer (MODIS) bands spanning visible, near-infrared, and infrared wavelengths. The sensitivity of the resulting scattering properties to the underlying assumptions of the assumed particle size and habit distributions is examined. It is found that the near-infrared bands are sensitive not only to the discretization of the size distribution but also to the assumed habit distribution. In addition, the results indicate that the effective diameter calculated from a given size distribution tends to be sensitive to the number of size bins that are used to discretize the data and also to the ice-crystal habit distribution.
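The binning sensitivity of the effective diameter can be illustrated with a moment-ratio sketch. For spheres the effective diameter reduces to the ratio of the third to the second moment of the size distribution; real ice-crystal habits need habit-specific volume and projected-area relations, and the bin layouts below are invented:

```python
def effective_diameter(bin_centers_um, bin_counts):
    """Effective diameter as the ratio of the third to the second
    moment of the size distribution (exact for spheres; ice habits
    require habit-specific volume/area relations)."""
    m3 = sum(n * d ** 3 for d, n in zip(bin_centers_um, bin_counts))
    m2 = sum(n * d ** 2 for d, n in zip(bin_centers_um, bin_counts))
    return m3 / m2

# The same underlying population, discretized coarsely vs. finely:
coarse = effective_diameter([25, 75, 125], [800, 150, 50])
fine = effective_diameter([10, 30, 50, 70, 90, 110, 130, 150],
                          [500, 200, 100, 80, 50, 30, 25, 15])
```

The two discretizations yield noticeably different effective diameters, which is the binning sensitivity the study reports.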

  9. Correlating capacity and Li content in layered material for Li-ion battery using XRD and particle size distribution measurements

    NASA Astrophysics Data System (ADS)

    Al-Tabbakh, A. A. A.; Al-Zubaidi, A. B.; Kamarulzaman, N.

    2016-03-01

    A lithiated transition-metal oxide material was successfully synthesized by a combustion method for Li-ion battery. The material was characterized using thermogravimetric and particle size analyzers, scanning electron microscope and X-ray diffractometer. The calcined powders of the material exhibited a finite size distribution and a single phase of pure layered structure of space group Roverline{3} m . An innovative method was developed to calculate the material electrochemical capacity based on considerations of the crystal structure and contributions of Li ions from specified unit cells at the surfaces and in the interiors of the material particles. Results suggested that most of the Li ions contributing to the electrochemical current originated from the surface region of the material particles. It was possible to estimate the thickness of the most delithiated region near the particle surfaces at any delithiation depth accurately. Furthermore, results suggested that the core region of the particles remained electrochemically inaccessible in the conventional applied voltages. This result was justified by direct quantitative comparison of specific capacity values calculated from the particle size distribution with those measured experimentally. The present analysis is believed to be of some value for estimation of the failure mechanism in cathode compounds, thus assisting the development of Li-ion batteries.

  10. Three-dimensional radiochromic film dosimetry for volumetric modulated arc therapy using a spiral water phantom.

    PubMed

    Tanooka, Masao; Doi, Hiroshi; Miura, Hideharu; Inoue, Hiroyuki; Niwa, Yasue; Takada, Yasuhiro; Fujiwara, Masayuki; Sakai, Toshiyuki; Sakamoto, Kiyoshi; Kamikonya, Norihiko; Hirota, Shozo

    2013-11-01

    We validated 3D radiochromic film dosimetry for volumetric modulated arc therapy (VMAT) using a newly developed spiral water phantom. The phantom consists of a main body and an insert box, each of which has an acrylic wall thickness of 3 mm and is filled with water. The insert box includes a spiral film box used for dose-distribution measurement, and a film holder for positioning a radiochromic film. The film holder has two parallel walls whose facing inner surfaces are equipped with spiral grooves in a mirrored configuration. The film is inserted into the spiral grooves by its side edges and runs along them to be positioned on a spiral plane. Dose calculation was performed by applying clinical VMAT plans to the spiral water phantom using a commercial Monte Carlo-based treatment-planning system, Monaco, whereas dose was measured by delivering the VMAT beams to the phantom. The calculated dose distributions were resampled on the spiral plane, and the dose distributions recorded on the film were scanned. Comparisons between the calculated and measured dose distributions yielded an average gamma-index pass rate of 87.0% (range, 84.6-91.2%) in nine prostate VMAT plans under 3 mm/3% criteria with a dose-calculation grid size of 2 mm. The pass rates were increased beyond 90% (average, 91.1%; range, 90.1-92.0%) when the dose-calculation grid size was decreased to 1 mm. We have confirmed that 3D radiochromic film dosimetry using the spiral water phantom is a simple and cost-effective approach to VMAT dose verification.
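The gamma-index comparison used above can be sketched in one dimension. For each reference point, the gamma value is the minimum over evaluated points of the combined distance-to-agreement and dose-difference metric; a point passes when gamma ≤ 1. This is a simplified global-normalization 1D version, not the clinical 3D calculation, and the profiles are invented:

```python
import math

def gamma_index_1d(positions_mm, dose_ref, dose_eval,
                   dta_mm=3.0, dd_frac=0.03):
    """Per-point 1D gamma index under 3 mm / 3% (global) criteria."""
    d_max = max(dose_ref)                 # global normalization dose
    gammas = []
    for xr, dr in zip(positions_mm, dose_ref):
        best = min(
            math.sqrt(((xe - xr) / dta_mm) ** 2 +
                      ((de - dr) / (dd_frac * d_max)) ** 2)
            for xe, de in zip(positions_mm, dose_eval))
        gammas.append(best)
    return gammas

# Hypothetical measured vs. calculated dose profiles (arbitrary units).
x = [0, 1, 2, 3, 4, 5]
ref = [10, 40, 80, 100, 80, 40]
ev = [11, 42, 79, 101, 78, 41]
g = gamma_index_1d(x, ref, ev)
pass_rate = sum(1 for v in g if v <= 1.0) / len(g)
```

Tightening the dose grid shrinks interpolation error in the calculated profile, which is why the paper's pass rates improve at a 1-mm grid.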

  11. On the use of an analytic source model for dose calculations in precision image-guided small animal radiotherapy.

    PubMed

    Granton, Patrick V; Verhaegen, Frank

    2013-05-21

    Precision image-guided small animal radiotherapy is rapidly advancing through the use of dedicated micro-irradiation devices. However, precise modeling of these devices in model-based dose-calculation algorithms such as Monte Carlo (MC) simulations continue to present challenges due to a combination of very small beams, low mechanical tolerances on beam collimation, positioning and long calculation times. The specific intent of this investigation is to introduce and demonstrate the viability of a fast analytical source model (AM) for use in either investigating improvements in collimator design or for use in faster dose calculations. MC models using BEAMnrc were developed for circular and square fields sizes from 1 to 25 mm in diameter (or side) that incorporated the intensity distribution of the focal spot modeled after an experimental pinhole image. These MC models were used to generate phase space files (PSFMC) at the exit of the collimators. An AM was developed that included the intensity distribution of the focal spot, a pre-calculated x-ray spectrum, and the collimator-specific entrance and exit apertures. The AM was used to generate photon fluence intensity distributions (ΦAM) and PSFAM containing photons radiating at angles according to the focal spot intensity distribution. MC dose calculations using DOSXYZnrc in a water and mouse phantom differing only by source used (PSFMC versus PSFAM) were found to agree within 7% and 4% for the smallest 1 and 2 mm collimator, respectively, and within 1% for all other field sizes based on depth dose profiles. PSF generation times were approximately 1200 times faster for the smallest beam and 19 times faster for the largest beam. The influence of the focal spot intensity distribution on output and on beam shape was quantified and found to play a significant role in calculated dose distributions. 
Beam profile differences due to collimator misalignment were observed for both small and large collimators, which were sensitive to shifts of 1 mm with respect to the central axis.

  12. Size exclusion deep bed filtration: Experimental and modelling uncertainties

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Badalyan, Alexander, E-mail: alexander.badalyan@adelaide.edu.au; You, Zhenjiang; Aji, Kaiser

    A detailed uncertainty analysis associated with carboxyl-modified latex particle capture in glass bead-formed porous media enabled verification of two theoretical stochastic models for prediction of particle retention due to size exclusion. At the beginning of this analysis it is established that size exclusion is the dominant particle capture mechanism in the present study: the calculated significant repulsive Derjaguin-Landau-Verwey-Overbeek potential between latex particles and glass beads indicates their mutual repulsion, thus fulfilling the necessary condition for size exclusion. Applying the linear uncertainty propagation method in the form of a truncated Taylor series expansion, combined standard uncertainties (CSUs) in normalised suspended particle concentrations are calculated using CSUs in experimentally determined parameters such as the inlet volumetric flowrate of suspension, particle number in suspensions, particle concentrations in inlet and outlet streams, and particle and pore throat size distributions. Weathering of glass beads in highly alkaline solutions does not appreciably change the particle size distribution and is therefore not considered an additional contributor to the weighted mean particle radius and the corresponding weighted mean standard deviation. The weighted mean particle radius and the LogNormal mean pore throat radius are characterised by the highest CSUs among all experimental parameters, translating to a high CSU in the jamming ratio factor (dimensionless particle size). Normalised suspended particle concentrations calculated via the two theoretical models are characterised by higher CSUs than those for experimental data. The model accounting for the fraction of inaccessible flow as a function of latex particle radius predicts normalised suspended particle concentrations excellently over the whole range of jamming ratios.
The presented uncertainty analysis can also be used for comparison of intra- and inter-laboratory particle size exclusion data.
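
The truncated-Taylor-series (linear) propagation described above can be sketched minimally as follows. This is an illustrative two-input version for the normalised concentration S = C_out/C_in only; the study propagates many more inputs (flowrate, particle counts, size distributions), and the numbers below are hypothetical.

```python
import math

def normalised_concentration_csu(c_out, u_out, c_in, u_in):
    """Combined standard uncertainty (CSU) of S = c_out / c_in via a
    first-order (truncated Taylor series) expansion.
    Inputs: outlet/inlet concentrations and their standard uncertainties.
    """
    s = c_out / c_in
    # Sensitivity coefficients (partial derivatives of S)
    ds_dout = 1.0 / c_in
    ds_din = -c_out / c_in ** 2
    # Root-sum-square combination, assuming uncorrelated inputs
    u_s = math.sqrt((ds_dout * u_out) ** 2 + (ds_din * u_in) ** 2)
    return s, u_s

# Hypothetical particle counts per mL, each with 2.5% relative uncertainty
s, u_s = normalised_concentration_csu(8.0e4, 2.0e3, 1.0e5, 2.5e3)
```

For a pure ratio the relative uncertainties add in quadrature, so here u_s/S is about 3.5%.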

  13. Mass-specific scattering coefficient for natural minerogenic particle populations: particle size distribution effect and closure analyses.

    PubMed

    Peng, Feng; Effler, Steve W

    2012-05-01

    The relationship between the particulate scattering coefficient (b(p)) and the concentration of suspended particulate matter (SPM), as represented by the mass-specific scattering coefficient of particulates (b(p)*=b(p)/SPM), depends on particle size distribution (PSD). This dependence is quantified for minerogenic particle populations in this paper through calculations of b(p)* for common minerals as idealized populations (monodispersed spheres); contemporaneous measurements of b(p), SPM, and light-scattering attributes of mineral particles with scanning electron microscopy interfaced with automated image and x-ray analyses (SAX), for a connected stream-reservoir system where minerogenic particles dominate b(p); and estimates of b(p) and its size dependency (through SAX results-driven Mie theory calculations), particle volume concentration, and b(p)*. Modest changes in minerogenic PSDs are shown to result in substantial variations in b(p)*. Good closure of the SAX-based estimates of b(p) and particle volume concentration with bulk measurements is demonstrated. Converging relationships between b(p)* and particle size, developed from three approaches, were well described by power law expressions.
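
For the idealized monodisperse-sphere populations mentioned above, the mass-specific scattering coefficient follows directly from geometry: scattering cross section per unit particle mass gives b_p* = 3Q_b/(2ρd), which makes the inverse dependence on particle size explicit. A small sketch (the scattering efficiency and quartz-like density are illustrative assumptions, not values from the paper):

```python
def mass_specific_scattering(q_b, density, diameter):
    """b_p* for monodisperse spheres: scattering cross section
    (q_b * pi * d**2 / 4) divided by particle mass (density * pi * d**3 / 6),
    i.e. b_p* = 3 * q_b / (2 * density * diameter).
    density in kg/m^3, diameter in m; result converted from m^2/kg
    to the customary m^2/g.
    """
    return 3.0 * q_b / (2.0 * density * diameter) / 1000.0

# Illustrative: Q_b ~ 2 (large-particle limit), quartz-like density, 1 um sphere
b_star = mass_specific_scattering(2.0, 2650.0, 1.0e-6)
```

Halving the diameter doubles b_p*, which is why modest PSD shifts produce the substantial b_p* variations reported above.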

  14. Disentangling the major source areas for an intense aerosol advection in the Central Mediterranean on the basis of Potential Source Contribution Function modeling of chemical and size distribution measurements

    NASA Astrophysics Data System (ADS)

    Petroselli, Chiara; Crocchianti, Stefano; Moroni, Beatrice; Castellini, Silvia; Selvaggi, Roberta; Nava, Silvia; Calzolai, Giulia; Lucarelli, Franco; Cappelletti, David

    2018-05-01

    In this paper, we combined a Potential Source Contribution Function (PSCF) analysis of daily chemical aerosol composition data with hourly aerosol size distributions, with the aim of disentangling the major source areas during a complex and rapidly modulating advection event that impacted Central Italy in 2013. Chemical data include an ample set of metals obtained by Proton Induced X-ray Emission (PIXE), main soluble ions from ion chromatography, and elemental and organic carbon (EC, OC) obtained by thermo-optical measurements. Size distributions were recorded with an optical particle counter for eight calibrated size classes in the 0.27-10 μm range. We demonstrated the usefulness of the approach by the positive identification of two very different source areas impacting during the transport event. In particular, biomass burning from Eastern Europe and desert dust from Saharan sources were discriminated based on both chemistry and size distribution time evolution. Hourly back-trajectory (BT) calculations provided the best results in comparison to 6 h or 24 h based calculations.

  15. Effects of composition of grains of debris flow on its impact force

    NASA Astrophysics Data System (ADS)

    Tang, Jinbo; Hu, Kaiheng; Cui, Peng

    2017-04-01

    Debris flows are composed of solid material with a broad size distribution, from fine sand to boulders. The impact force exerted by debris flows is an important issue for protective engineering design and is strongly influenced by grain composition. However, this issue has not been studied in depth, and the effects of grain composition have not been considered in calculations of the impact force. In the present study, small-scale flume experiments with five grain compositions were carried out to study the effect of grain composition on debris flow impact force. The results show that the impact force of a debris flow increases with grain size, and the hydrodynamic pressure is calibrated using the normalization parameter dmax/d50, where dmax is the maximum grain size and d50 the median grain size. Furthermore, a log-logistic statistical distribution can describe the distribution of impact force magnitudes, with both the mean and the variance of the distribution increasing with grain size. The proposed distribution could be used in the reliability analysis of structures impacted by debris flows.
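
The log-logistic description of impact force magnitudes mentioned above can be sketched as follows; the scale and shape values are hypothetical placeholders, not parameters fitted to the flume data.

```python
import math

def loglogistic_cdf(x, scale, shape):
    """Log-logistic CDF: F(x) = 1 / (1 + (x/scale)**(-shape)), x > 0.
    The scale parameter equals the distribution median.
    """
    return 1.0 / (1.0 + (x / scale) ** (-shape))

def loglogistic_mean(scale, shape):
    """Mean of the log-logistic distribution; finite only for shape > 1."""
    b = math.pi / shape
    return scale * b / math.sin(b)

# Hypothetical fit: median impact pressure 10 kPa, shape parameter 4
median_force = 10.0
mean_force = loglogistic_mean(median_force, 4.0)
```

The heavy upper tail of the log-logistic form is what makes it attractive for reliability analysis of impacted structures: exceedance probabilities decay only polynomially.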

  16. Cell Size Regulation in Bacteria

    NASA Astrophysics Data System (ADS)

    Amir, Ariel

    2014-05-01

    Various bacteria, such as the canonical gram-negative Escherichia coli or the well-studied gram-positive Bacillus subtilis, divide symmetrically after they approximately double their volume. Their size at division is not constant, but is typically distributed over a narrow range. Here, we propose an analytically tractable model for cell size control, and calculate the cell size and interdivision time distributions, as well as the correlations between these variables. We suggest ways of extracting the model parameters from experimental data, and show that existing data for E. coli support partial size control, and a particular explanation: a cell attempts to add a constant volume from the time of initiation of DNA replication to the next initiation event. This hypothesis accounts for the experimentally observed correlations between mother and daughter cells as well as the exponential dependence of size on growth rate.
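
The "add a constant volume" (adder) mechanism described above can be illustrated with a toy simulation; the parameters are hypothetical, and this sketch adds the volume per generation rather than per replication cycle as in the paper. At steady state the mean birth size converges to the added volume Δ, and the birth-size distribution stays narrow.

```python
import random

def simulate_adder(delta=1.0, noise=0.1, n_generations=20000, seed=1):
    """Toy adder model: each cell grows from birth size s to s + delta
    (plus Gaussian noise) and divides symmetrically, so the next birth
    size is (s + delta + noise) / 2. Returns the series of birth sizes.
    """
    rng = random.Random(seed)
    s = delta  # start at the deterministic fixed point
    births = []
    for _ in range(n_generations):
        division_size = s + delta + rng.gauss(0.0, noise)
        s = division_size / 2.0
        births.append(s)
    return births

births = simulate_adder()
mean_birth_size = sum(births) / len(births)  # ~ delta at steady state
```

The factor 1/2 at each division pulls deviations back toward the fixed point, which is exactly the partial size control the abstract refers to: fluctuations are corrected over a few generations rather than instantly.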

  17. Novel fluorescence adjustable photonic crystal materials

    NASA Astrophysics Data System (ADS)

    Zhu, Cheng; Liu, Xiaoxia; Ni, Yaru; Fang, Jiaojiao; Fang, Liang; Lu, Chunhua; Xu, Zhongzi

    2017-11-01

    Novel photonic crystal materials (PCMs) with adjustable fluorescence were fabricated by distributing organic fluorescent powders of Yb0.2Er0.4Tm0.4(TTA)3Phen into the opal structures of self-assembled silica photonic crystals (PCs). By removing the silica solution at a constant speed, PCs with controllable thicknesses and different periodic sizes were obtained on glass slides. Yb0.2Er0.4Tm0.4(TTA)3Phen powders were subsequently distributed into the opal structures. The structures and optical properties of the prepared PCMs were investigated. Finite-difference time-domain (FDTD) calculation was used to further analyze the electric field distributions in PCs with different periodic sizes, and the relation between periodic sizes and fluorescent spectra of PCMs was discussed. The results showed that the emission color of the PCMs under irradiation by a 980 nm laser can be easily adjusted from green to blue by increasing the periodic size from 250 to 450 nm.

  18. Statistical Modeling of Robotic Random Walks on Different Terrain

    NASA Astrophysics Data System (ADS)

    Naylor, Austin; Kinnaman, Laura

    Issues of public safety, especially with crowd dynamics and pedestrian movement, have been modeled by physicists using methods from statistical mechanics over the last few years. Complex decision making of humans moving on different terrains can be modeled using random walks (RW) and correlated random walks (CRW). The effect of different terrains, such as a constant increasing slope, on RW and CRW was explored. LEGO robots were programmed to make RW and CRW with uniform step sizes. Level ground tests demonstrated that the robots had the expected step size distribution and correlation angles (for CRW). The mean square displacement was calculated for each RW and CRW on different terrains and matched expected trends. The step size distribution was determined to change based on the terrain; theoretical predictions for the step size distribution were made for various simple terrains.
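
The level-ground check described above (uniform step sizes, mean square displacement matching theory) can be sketched as follows. For an uncorrelated 2D random walk with fixed step length L, theory gives MSD = n·L² after n steps; the walker and step counts below are arbitrary choices for illustration.

```python
import math
import random

def msd_random_walk(n_steps, n_walkers=2000, step=1.0, seed=0):
    """Mean square displacement of a 2D random walk with fixed step
    length and uniformly random heading. Theory: MSD = n_steps * step**2.
    """
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_walkers):
        x = y = 0.0
        for _ in range(n_steps):
            theta = rng.uniform(0.0, 2.0 * math.pi)
            x += step * math.cos(theta)
            y += step * math.sin(theta)
        total += x * x + y * y
    return total / n_walkers

msd = msd_random_walk(50)  # theory predicts 50 * 1.0**2 = 50
```

A correlated walk would replace the uniform heading draw with a turning angle concentrated near zero, which inflates the MSD at short times relative to the uncorrelated case.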

  19. Stationary Size Distributions of Growing Cells with Binary and Multiple Cell Division

    NASA Astrophysics Data System (ADS)

    Rading, M. M.; Engel, T. A.; Lipowsky, R.; Valleriani, A.

    2011-10-01

    Populations of unicellular organisms that grow under constant environmental conditions are considered theoretically. The size distribution of these cells is calculated analytically, both for the usual process of binary division, in which one mother cell always produces two daughter cells, and for the more complex process of multiple division, in which one mother cell can produce 2^n daughter cells with n = 1, 2, 3, … . The latter mode of division is inspired by the unicellular alga Chlamydomonas reinhardtii. The uniform response of the whole population to different environmental conditions is encoded in the individual rates of growth and division of the cells. The analytical treatment of the problem is based on size-dependent rules for cell growth and stochastic transition processes for cell division. The comparison between binary and multiple division shows that these different division processes lead to qualitatively different results for the size distribution and the population growth rates.

  20. Comparison of Two Methods Used to Model Shape Parameters of Pareto Distributions

    USGS Publications Warehouse

    Liu, C.; Charpentier, R.R.; Su, J.

    2011-01-01

    Two methods are compared for estimating the shape parameters of Pareto field-size (or pool-size) distributions for petroleum resource assessment. Both methods assume mature exploration in which most of the larger fields have been discovered. Both methods use the sizes of larger discovered fields to estimate the numbers and sizes of smaller fields: (1) the tail-truncated method uses a plot of field size versus size rank, and (2) the log-geometric method uses data binned in field-size classes and the ratios of adjacent bin counts. Simulation experiments were conducted using discovered oil and gas pool-size distributions from four petroleum systems in Alberta, Canada, and using Pareto distributions generated by Monte Carlo simulation. The estimates of the shape parameters of the Pareto distributions, calculated by both the tail-truncated and log-geometric methods, generally stabilize where discovered pool numbers are greater than 100. However, with fewer than 100 discoveries, these estimates can vary greatly with each new discovery. The estimated shape parameters of the tail-truncated method are more stable and larger than those of the log-geometric method where the number of discovered pools is more than 100. Both methods, however, tend to underestimate the shape parameter. Monte Carlo simulation was also used to create sequences of discovered pool sizes by sampling from a Pareto distribution with a discovery process model using a defined exploration efficiency (in order to show how biased the sampling was in favor of larger fields being discovered first). A higher (more biased) exploration efficiency gives better estimates of the Pareto shape parameters. © 2011 International Association for Mathematical Geosciences.
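
The log-geometric idea, binning field sizes in geometric classes and reading the shape parameter off the ratios of adjacent bin counts, can be sketched on synthetic Pareto data. All names, bin choices, and values below are illustrative assumptions, not the paper's implementation.

```python
import math
import random

def pareto_sample(shape, x_min, n, seed=42):
    """Inverse-CDF sampling from a Pareto distribution: X = x_min / U**(1/shape)."""
    rng = random.Random(seed)
    return [x_min / rng.random() ** (1.0 / shape) for _ in range(n)]

def log_geometric_shape(sizes, x_min, n_bins=8, bin_ratio=2.0):
    """Estimate the Pareto shape parameter from ratios of adjacent
    geometric (factor bin_ratio) size-class counts. For a Pareto tail,
    N[k+1] / N[k] ~ bin_ratio**(-shape), so shape = -log(ratio) / log(bin_ratio).
    """
    counts = [0] * n_bins
    for s in sizes:
        k = int(math.log(s / x_min, bin_ratio))
        if 0 <= k < n_bins:
            counts[k] += 1
    estimates = [
        -math.log(counts[k + 1] / counts[k], bin_ratio)
        for k in range(n_bins - 1)
        if counts[k] > 0 and counts[k + 1] > 0
    ]
    return sum(estimates) / len(estimates)

shape_hat = log_geometric_shape(pareto_sample(1.0, 1.0, 20000), 1.0)
```

With far fewer samples the upper bins hold only a handful of pools and the ratio estimates swing widely, which mirrors the instability below ~100 discoveries reported above.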

  1. Precipitation growth in convective clouds. [hail

    NASA Technical Reports Server (NTRS)

    Srivastava, R. C.

    1981-01-01

    Analytical solutions to the equations of growth and motion of hailstones in updrafts, with cloud water contents varying linearly with height, were used to investigate hail growth in a model cloud. A strong correlation was found between the hail embryo starting position and its trajectory and final size. A simple model of the evolution of the particle size distribution by coalescence and spontaneous and binary disintegrations was formulated. Solutions for the mean mass of the distribution and the equilibrium size distribution were obtained for the case of constant collection kernel and disintegration parameters. Azimuthal scans of Doppler velocity at a number of elevation angles were used to calculate high resolution vertical profiles of particle speed and horizontal divergence (the vertical air velocity) in a region of widespread precipitation trailing a mid-latitude squall line.

  2. Confidence bounds for normal and lognormal distribution coefficients of variation

    Treesearch

    Steve Verrill

    2003-01-01

    This paper compares the so-called exact approach for obtaining confidence intervals on normal distribution coefficients of variation to approximate methods. Approximate approaches were found to perform less well than the exact approach for large coefficients of variation and small sample sizes. Web-based computer programs are described for calculating confidence...

  3. Exact Interval Estimation, Power Calculation, and Sample Size Determination in Normal Correlation Analysis

    ERIC Educational Resources Information Center

    Shieh, Gwowen

    2006-01-01

    This paper considers the problem of analysis of correlation coefficients from a multivariate normal population. A unified theorem is derived for the regression model with normally distributed explanatory variables and the general results are employed to provide useful expressions for the distributions of simple, multiple, and partial-multiple…

  4. Hydrogen-bonded ring closing and opening of protonated methanol clusters H(+)(CH3OH)(n) (n = 4-8) with the inert gas tagging.

    PubMed

    Li, Ying-Cheng; Hamashima, Toru; Yamazaki, Ryoko; Kobayashi, Tomohiro; Suzuki, Yuta; Mizuse, Kenta; Fujii, Asuka; Kuo, Jer-Lai

    2015-09-14

    The preferential hydrogen bond (H-bond) structures of protonated methanol clusters, H(+)(MeOH)n, in the size range of n = 4-8, were studied by size-selective infrared (IR) spectroscopy in conjunction with density functional theory calculations. The IR spectra of bare clusters were compared with those with the inert gas tagging by Ar, Ne, and N2, and remarkable changes in the isomer distribution with the tagging were found for clusters with n≥ 5. The temperature dependence of the isomer distribution of the clusters was calculated by the quantum harmonic superposition approach. The observed spectral changes with the tagging were well interpreted by the fall of the cluster temperature with the tagging, which causes the transfer of the isomer distribution from the open and flexible H-bond network types to the closed and rigid ones. Anomalous isomer distribution with the tagging, which has been recently found for protonated water clusters, was also found for H(+)(MeOH)5. The origin of the anomaly was examined by the experiments on its carrier gas dependence.

  5. The effectiveness of a new algorithm on a three-dimensional finite element model construction of bone trabeculae in implant biomechanics.

    PubMed

    Sato, Y; Teixeira, E R; Tsuga, K; Shindoi, N

    1999-08-01

    Improving the validity of finite element analysis (FEA) in implant biomechanics requires element downsizing; however, excessive downsizing demands more computer memory and calculation time. To evaluate the effectiveness of a new algorithm established for constructing more valid FEA models without downsizing, three-dimensional FEA bone trabeculae models with different element sizes (300, 150 and 75 micron) were constructed. Four algorithms with stepwise (1 to 4 ranks) assignment of Young's modulus according to the bone volume in each cubic element were used, and the stress distribution under vertical loading was analysed. The model with 300 micron element size and 4 ranks of Young's moduli assigned according to the bone volume in each element presented a stress distribution similar to that of the model with 75 micron element size. These results show that the new algorithm was effective, and the use of 300 micron elements for bone trabeculae representation is proposed, without critical changes in stress values and with possible savings in computer memory and calculation time.
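
The stepwise assignment can be sketched as a simple rank lookup. The cortical-bone modulus and the rank-to-modulus mapping below (interval midpoints) are assumptions for illustration, not the values used in the study.

```python
def assign_modulus(bone_volume_fraction, e_bone=13700.0, n_ranks=4):
    """Stepwise Young's modulus for a cubic element: the bone volume
    fraction in [0, 1] is mapped to one of n_ranks ranks, and each rank
    is assigned the modulus of its interval midpoint (MPa, hypothetical).
    """
    if not 0.0 <= bone_volume_fraction <= 1.0:
        raise ValueError("bone volume fraction must lie in [0, 1]")
    rank = min(int(bone_volume_fraction * n_ranks), n_ranks - 1)
    return e_bone * (rank + 0.5) / n_ranks

# Dense element (90% bone) vs. sparse element (10% bone)
e_dense = assign_modulus(0.9)
e_sparse = assign_modulus(0.1)
```

Coarse elements with graded moduli thus stand in for sub-element trabecular detail, which is why the 300 micron / 4-rank model can approximate the 75 micron model's stress field.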

  6. Efficient Computation of Coherent Synchrotron Radiation Taking into Account 6D Phase Space Distribution of Emitting Electrons

    NASA Astrophysics Data System (ADS)

    Chubar, O.; Couprie, M.-E.

    2007-01-01

    A CPU-efficient method for calculation of the frequency-domain electric field of Coherent Synchrotron Radiation (CSR), taking into account the 6D phase space distribution of electrons in a bunch, is proposed. As an application example, calculation results are presented for the CSR emitted by an electron bunch with small longitudinal and large transverse sizes. Such a situation can be realized in storage rings or ERLs by transverse deflection of the electron bunches in special crab-type RF cavities, i.e. using the technique proposed for the generation of femtosecond X-ray pulses (A. Zholents et al., 1999). The computation, performed for the parameters of the SOLEIL storage ring, shows that if the transverse size of the electron bunch is larger than the diffraction limit for single-electron SR at a given wavelength, the angular distribution of the CSR at this wavelength is affected and the coherent flux is reduced. Nevertheless, for transverse bunch dimensions up to several millimeters and a longitudinal bunch size smaller than a hundred micrometers, the resulting CSR flux in the far infrared spectral range is still many orders of magnitude higher than the flux of incoherent SR, and can therefore be considered for practical use.

  7. Boundary Layer Aerosol Composition over Sierra Nevada Mountains using 9.11- and 10.59-micron CW Lidars and Modeled Backscatter from Size Distribution Data

    NASA Technical Reports Server (NTRS)

    Cutten, D. R.; Jarzembski, M. A.; Srivastava, V.; Pueschel, R. F.; Howard, S. D.; McCaul, E. W., Jr.

    2003-01-01

    An inversion technique has been developed to determine volume fractions of an atmospheric aerosol composed primarily of ammonium sulfate, ammonium nitrate, and water, combined with a fixed concentration of elemental and organic carbon. It is based on measured aerosol backscatter obtained with 9.11- and 10.59-micron wavelength continuous wave CO2 lidars and modeled backscatter from aerosol size distribution data. The technique is demonstrated during a flight of the NASA DC-8 aircraft over the Sierra Nevada Mountain Range, California on 19 September 1995. The volume fraction of each component and the effective complex refractive index of the composite particle were determined assuming an internally mixed composite aerosol model. The volume fractions were also used to re-compute aerosol backscatter, providing good agreement with the lidar-measured data. The robustness of the technique for determining volume fractions was extended with a comparison of calculated 2.1-micron backscatter from size distribution data with the measured lidar data converted to 2.1-micron backscatter using an earlier derived algorithm, verifying the algorithm as well as the backscatter calculations.

  8. Shipboard Sunphotometer Measurements of Aerosol Optical Depth Spectra and Columnar Water Vapor During ACE-2, and Comparison with Selected Land, Ship, Aircraft, and Satellite Measurements

    NASA Technical Reports Server (NTRS)

    Livingston, John M.; Kapustin, Vladimir N.; Schmid, Beat; Russell, Philip B.; Quinn, Patricia K.; Bates, Timothy S.; Durkee, Philip A.; Smith, Peter J.; Freudenthaler, Volker; Wiegner, Matthias

    2000-01-01

    Analyses of aerosol optical depth (AOD) and columnar water vapor (CWV) measurements acquired with NASA Ames Research Center's six-channel Airborne Tracking Sunphotometer (AATS-6) operated aboard the R/V (research vessel) Professor Vodyanitskiy during the second Aerosol Characterization Experiment (ACE-2) are discussed. Data are compared with various in situ and remote measurements for selected cases. The focus is on 10 July, when the Pelican airplane flew within 70 km of the ship near the time of a NOAA (National Oceanic and Atmospheric Administration)-14/AVHRR (Advanced Very High Resolution Radiometer) satellite overpass, and AOD measurements with the 14-channel Ames Airborne Tracking Sunphotometer (AATS-14) above the marine boundary layer (MBL) permitted calculation of AOD within the MBL from the AATS-6 measurements. A detailed column closure test is performed for MBL AOD on 10 July by comparing the AATS-6 MBL AODs with corresponding values calculated by combining shipboard particle size distribution measurements with models of hygroscopic growth and radiosonde humidity profiles (plus assumptions on the vertical profile of the dry particle size distribution and composition). Large differences (30-80% in the mid-visible) between measured and reconstructed AODs are obtained, in large part because of the high sensitivity of the closure methodology to hygroscopic growth models, which vary considerably and have not been validated over the necessary range of particle size/composition distributions. The wavelength dependence of AATS-6 AODs is compared with the corresponding dependence of aerosol extinction calculated from shipboard measurements of aerosol size distribution and of total scattering measured by a shipboard integrating nephelometer for several days. Results are highly variable, illustrating further the great difficulty of deriving column values from point measurements.
AATS-6 CWV values are shown to agree well with corresponding values derived from radiosonde measurements during eight soundings on seven days and also with values calculated from measurements taken on 10 July with the AATS-14 and the University of Washington Passive Humidigraph aboard the Pelican.

  9. Evaluating the importance of grain size sensitive creep in terrestrial ice sheet rheology

    NASA Astrophysics Data System (ADS)

    Maaijwee, C. N. P. J.; de Bresser, J. H. P.

    2009-04-01

    The rheology of ice in terrestrial ice sheets is generally considered to be independent of the size of the grains (crystals), and appears well described by Glen's flow law. In recent years, however, new laboratory deformation experiments on ice as well as analysis of in situ measurements of deformation at glaciers have suggested that grain size and variations therein should not be discarded as important parameters in the deformation of ice in nature. Ice, just like crystalline rock materials, exhibits distributed grain sizes. Given that not only grain size insensitive (GSI; dislocation) mechanisms but also grain size sensitive (GSS; diffusion and/or grain boundary sliding) mechanisms may be operative in ice, variations in the shape of the distribution (e.g. its width) can be expected to affect the rheological behaviour. To evaluate this effect, we have derived a composite GSI+GSS flow law and combined this with full grain size distributions. The constitutive flow equations for end-member GSI and GSS creep of ice were taken from the work of Goldsby and Kohlstedt (2001, J. Geophys. Res., vol. 106). We used their description of grain boundary sliding controlled creep as representative of GSS creep. The grain size data largely came from published measurements from the top 800-1000 m of two Greenland ice cores (NorthGRIP and GRIP) and one Antarctic ice core (Epica, Dome Concordia). Temperature profiles were available for the core settings. The grain size data show a close to lognormal distribution in all three settings, with the median grain size increasing with depth. We constructed a synthetic grain size profile down to a depth of 3100 m (cf. GRIP) by allowing the median grain size and standard deviation of the distribution to increase linearly with depth. The percentage of GSS creep contributing to the total strain rate was calculated for a range of strain rates that were assumed constant along the ice core axes.
The results of our calculations show that at realistic strain rates on the order of 10^-11 to 10^-12 s^-1, GSS mechanisms can be expected to dominate creep in the parts of the ice sheets investigated (i.e. the top ~1000 m). In the synthetic core, the GSS contribution decreases with increasing depth (to ~2500 m), but increases again close to the contact with the bedrock (at 3100 m). Although many assumptions have been made in our approach, the results confirm the important role that grain size might play in ice sheet rheology. The application of full grain size distributions in composite flow equations helps achieve reliable extrapolation of laboratory data to nature.
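
The composite GSI+GSS calculation can be sketched as below. The rate constants and exponents are placeholders in arbitrary units, not the Goldsby and Kohlstedt (2001) values, and the lognormal grain size distribution is sampled by Monte Carlo rather than integrated analytically.

```python
import math
import random

def gss_fraction(median_d, sigma_ln, stress,
                 a_gsi=1e-4, n_gsi=4.0,
                 a_gss=1e-6, n_gss=1.8, p_gss=1.4,
                 n_samples=20000, seed=3):
    """Fraction of the total strain rate carried by grain-size-sensitive
    (GSS) creep for a lognormal grain size distribution, using a composite
    law: eps_total = a_gsi * stress**n_gsi + a_gss * stress**n_gss / d**p_gss.
    The GSS term is averaged over Monte Carlo draws of the grain size d.
    """
    rng = random.Random(seed)
    mu = math.log(median_d)
    eps_gsi = a_gsi * stress ** n_gsi  # grain-size-insensitive rate
    eps_gss = sum(
        a_gss * stress ** n_gss / rng.lognormvariate(mu, sigma_ln) ** p_gss
        for _ in range(n_samples)
    ) / n_samples
    return eps_gss / (eps_gss + eps_gsi)

f_fine = gss_fraction(0.01, 0.5, 1.0)    # fine-grained ice near the surface
f_coarse = gss_fraction(0.1, 0.5, 1.0)   # coarser ice deeper in the core
```

Because the GSS term scales as d^(-p), the fine-grained case carries a larger GSS fraction, reproducing the qualitative depth trend described above.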

  10. Optical and physical properties of stratospheric aerosols from balloon measurements in the visible and near-infrared domains. I. Analysis of aerosol extinction spectra from the AMON and SALOMON balloonborne spectrometers

    NASA Astrophysics Data System (ADS)

    Berthet, Gwenaël; Renard, Jean-Baptiste; Brogniez, Colette; Robert, Claude; Chartier, Michel; Pirre, Michel

    2002-12-01

    Aerosol extinction coefficients have been derived in the 375-700-nm spectral domain from measurements in the stratosphere since 1992, at night, at mid- and high latitudes from 15 to 40 km, by two balloonborne spectrometers, Absorption par les Minoritaires Ozone et NOx (AMON) and Spectroscopie d'Absorption Lunaire pour l'Observation des Minoritaires Ozone et NOx (SALOMON). Log-normal size distributions associated with the Mie-computed extinction spectra that best fit the measurements permit calculation of integrated properties of the distributions. Although measured extinction spectra that correspond to background aerosols can be reproduced by the Mie scattering model by use of monomodal log-normal size distributions, each flight reveals some large discrepancies between measurement and theory at several altitudes. The agreement between measured and Mie-calculated extinction spectra is significantly improved by use of bimodal log-normal distributions. Nevertheless, neither monomodal nor bimodal distributions permit correct reproduction of some of the measured extinction shapes, especially for the 26 February 1997 AMON flight, which exhibited spectral behavior attributed to particles from a polar stratospheric cloud event.

  11. A distribution model for the aerial application of granular agricultural particles

    NASA Technical Reports Server (NTRS)

    Fernandes, S. T.; Ormsbee, A. I.

    1978-01-01

    A model is developed to predict the shape of the distribution of granular agricultural particles applied by aircraft. The particle is assumed to have a random size and shape, and the model includes the effects of air resistance, distributor geometry, and aircraft wake. General requirements for maintaining similarity of the distribution in scale model tests are derived, with attention to the problem of a nongeneral drag law. It is shown that if the mean and variance of the particle diameter and density are scaled according to the scaling laws governing the system, the shape of the distribution will be preserved. Distributions are calculated numerically and show the effect of a random initial lateral position, particle size, and drag coefficient. A listing of the computer code is included.

  12. Evaluation of material heterogeneity dosimetric effects using radiochromic film for COMS eye plaques loaded with {sup 125}I seeds (model I25.S16)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Acar, Hilal; Chiu-Tsao, Sou-Tung; Oezbay, Ismail

    Purpose: (1) To measure absolute dose distributions in eye phantom for COMS eye plaques with {sup 125}I seeds (model I25.S16) using radiochromic EBT film dosimetry. (2) To determine the dose correction function for calculations involving the TG-43 formalism to account for the presence of the COMS eye plaque using the Monte Carlo (MC) method specific to this seed model. (3) To test the heterogeneous dose calculation accuracy of the new version of Plaque Simulator (v5.3.9) against the EBT film data for this seed model. Methods: Using EBT film, absolute doses were measured for {sup 125}I seeds (model I25.S16) in COMS eye plaques (1) along the plaque's central axis for (a) uniformly loaded plaques (14-20 mm in diameter) and (b) a 20 mm plaque with a single seed, and (2) in the off-axis direction at depths of 5 and 12 mm for all four plaque sizes. The EBT film calibration was performed at {sup 125}I photon energy. MC calculations using the MCNP5 code for a single seed at the center of a 20 mm plaque in homogeneous water and polystyrene medium were performed. The heterogeneity dose correction function was determined from the MC calculations. These function values at various depths were entered into PS software (v5.3.9) to calculate the heterogeneous dose distributions for the uniformly loaded plaques (of all four sizes). The dose distributions with homogeneous water assumptions were also calculated using PS for comparison. The EBT film measured absolute dose rate values (film) were compared with those calculated using PS with homogeneous assumption (PS Homo) and heterogeneity correction (PS Hetero). The values of dose ratio (film/PS Homo) and (film/PS Hetero) were obtained. Results: The central axis depth dose rate values for a single seed in a 20 mm plaque measured using EBT film and calculated with the MCNP5 code (both in polystyrene phantom) were compared, and agreement within 9% was found.
The dose ratio (film/PS Homo) values were substantially lower than unity (mostly between 0.8 and 0.9) for all four plaque sizes, indicating dose reduction by the COMS plaque compared with the homogeneous assumption. The dose ratio (film/PS Hetero) values were close to unity, indicating that the PS Hetero calculations agree with those from the film study. Conclusions: A substantial heterogeneity effect on the {sup 125}I dose distributions in an eye phantom for COMS plaques was verified using radiochromic EBT film dosimetry. The calculated doses for uniformly loaded plaques using PS with the heterogeneity correction option enabled were corroborated by the EBT film measurement data. Radiochromic EBT film dosimetry is feasible for measuring absolute dose distributions in eye phantom for COMS eye plaques loaded with single or multiple {sup 125}I seeds. Plaque Simulator is a viable tool for the calculation of dose distributions if one understands its limitations and uses the proper heterogeneity correction feature.

  13. Image analysis for the automated estimation of clonal growth and its application to the growth of smooth muscle cells.

    PubMed

    Gavino, V C; Milo, G E; Cornwell, D G

    1982-03-01

    Image analysis was used for the automated measurement of colony frequency (f) and colony diameter (d) in cultures of smooth muscle cells. Initial studies with the inverted microscope showed that the number of cells (N) in a colony varied directly with d: log N = 1.98 log d - 3.469. Image analysis generated the complement of a cumulative distribution for f as a function of d. The number of cells in each segment of the distribution function was calculated by multiplying f and the average N for the segment. These data were displayed as a cumulative distribution function. The total number of colonies (fT) and the total number of cells (NT) were used to calculate the average colony size (NA). Population doublings (PD) were then expressed as log2 NA. Image analysis confirmed previous studies in which colonies were sized and counted with an inverted microscope. Thus, image analysis is a rapid and automated technique for the measurement of clonal growth.
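
    The arithmetic described above can be sketched directly. The fitted relation log N = 1.98 log d - 3.469 (base-10 logs) is from the abstract; the function names are mine, and the diameter unit is assumed to be micrometers:

    ```python
    import math

    def cells_per_colony(d_um):
        """Cell count N from colony diameter d via the fitted relation
        log N = 1.98 log d - 3.469 (base-10 logs; d assumed in micrometers)."""
        return 10 ** (1.98 * math.log10(d_um) - 3.469)

    def clonal_growth(colony_diameters_um):
        """Average colony size NA = NT / fT and population doublings
        PD = log2(NA) from a list of measured colony diameters."""
        f_t = len(colony_diameters_um)                               # total colonies fT
        n_t = sum(cells_per_colony(d) for d in colony_diameters_um)  # total cells NT
        n_a = n_t / f_t                                              # average colony size NA
        return n_a, math.log2(n_a)

    # Illustrative diameters only:
    n_a, pd = clonal_growth([500.0, 1000.0, 2000.0])
    ```

    For example, a 1000 µm colony corresponds to roughly 10^2.47 ≈ 300 cells under this fit.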

  14. Analysis of surgical smoke produced by various energy-based instruments and effect on laparoscopic visibility.

    PubMed

    Weld, Kyle J; Dryer, Stephen; Ames, Caroline D; Cho, Kuk; Hogan, Chris; Lee, Myonghwa; Biswas, Pratim; Landman, Jaime

    2007-03-01

    We analyzed the smoke plume produced by various energy-based laparoscopic instruments and determined its effect on laparoscopic visibility. The Bipolar Macroforceps, Harmonic Scalpel, Floating Ball, and Monopolar Shears were applied in vitro to porcine psoas muscle. An Aerodynamic Particle Sizer and Electrostatic Classifier provided a size distribution of the plume for particles >500 nm and <500 nm, and a geometric mean particle size was calculated. A Condensation Particle Counter provided the total particle-number concentration. Electron microscopy was used to characterize particle size and shape further. Visibility was calculated using the measured size-distribution data and the Rayleigh and Mie light-scattering theories. The real-time instruments were successful in measuring aerosolized particle size distributions in two size ranges. Electron microscopy revealed smaller, homogeneous, spherical particles and larger, irregular particles consistent with cellular components. The aerosol produced by the Bipolar Macroforceps obscured visibility the least (relative visibility 0.887) among the instruments tested. Particles from the Harmonic Scalpel resulted in a relative visibility of 0.801. Monopolar-based instruments produced plumes responsible for the poorest relative visibility (Floating Ball 0.252; Monopolar Shears 0.026). Surgical smoke is composed of two distinct particle populations caused by the nucleation of vapors as they cool (the small particles) and the entrainment of tissue by mechanical action (the large particles). High concentrations of small particles are most responsible for the deterioration in laparoscopic vision. Bipolar and ultrasonic instruments generate a surgical plume that causes the least deterioration of visibility among the instruments tested.

  15. The Finite-Size Scaling Relation for the Order-Parameter Probability Distribution of the Six-Dimensional Ising Model

    NASA Astrophysics Data System (ADS)

    Merdan, Ziya; Karakuş, Özlem

    2016-11-01

    The six-dimensional Ising model with nearest-neighbor pair interactions has been simulated and verified numerically on the Creutz Cellular Automaton using five-bit demons near the infinite-lattice critical temperature with the linear dimensions L=4,6,8,10. The order-parameter probability distribution for the six-dimensional Ising model has been calculated at the critical temperature. The constants of the analytical function have been estimated by fitting to the probability function obtained numerically at the finite-size critical point.

  16. In situ measurements of plasma properties during gas-condensation of Cu nanoparticles

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Koten, M. A., E-mail: mark.koten@gmail.com; Shield, J. E.; Voeller, S. A.

    2016-03-21

    Since the mean, standard deviation, and modality of nanoparticle size distributions can vary greatly between similar input conditions (e.g., power and gas flow rate), plasma diagnostics were carried out in situ using a double-sided, planar Langmuir probe to determine the effect the plasma has on the heating of clusters and their final size distributions. The formation of Cu nanoparticles was analyzed using cluster-plasma physics, which relates the processes of condensation and evaporation to internal plasma properties (e.g., electron temperature and density). Monitoring these plasma properties while depositing Cu nanoparticles with different size distributions revealed a negative correlation between average particle size and electron temperature. Furthermore, the modality of the size distributions also correlated with the modality of the electron energy distributions. It was found that the maximum cluster temperature reached during plasma heating and the material's evaporation point regulate the growth process inside the plasma. In the case of Cu, size distributions with average sizes of 8.2, 17.3, and 24.9 nm in diameter were monitored with the Langmuir probe, and from the measurements made, the cluster temperatures for each deposition were calculated to be 1028, 1009, and 863 K. These values were then compared with the onset evaporation temperatures of particles of these sizes, which were estimated to be 1059, 1068, and 1071 K. Thus, when the cluster temperature is too close to the evaporation temperature, less particle growth occurs, resulting in the formation of smaller particles.

  17. Three-dimensional radiochromic film dosimetry for volumetric modulated arc therapy using a spiral water phantom

    PubMed Central

    Tanooka, Masao; Doi, Hiroshi; Miura, Hideharu; Inoue, Hiroyuki; Niwa, Yasue; Takada, Yasuhiro; Fujiwara, Masayuki; Sakai, Toshiyuki; Sakamoto, Kiyoshi; Kamikonya, Norihiko; Hirota, Shozo

    2013-01-01

    We validated 3D radiochromic film dosimetry for volumetric modulated arc therapy (VMAT) using a newly developed spiral water phantom. The phantom consists of a main body and an insert box, each of which has an acrylic wall thickness of 3 mm and is filled with water. The insert box includes a spiral film box used for dose-distribution measurement, and a film holder for positioning a radiochromic film. The film holder has two parallel walls whose facing inner surfaces are equipped with spiral grooves in a mirrored configuration. The film is inserted into the spiral grooves by its side edges and runs along them to be positioned on a spiral plane. Dose calculation was performed by applying clinical VMAT plans to the spiral water phantom using a commercial Monte Carlo-based treatment-planning system, Monaco, whereas dose was measured by delivering the VMAT beams to the phantom. The calculated dose distributions were resampled on the spiral plane, and the dose distributions recorded on the film were scanned. Comparisons between the calculated and measured dose distributions yielded an average gamma-index pass rate of 87.0% (range, 84.6–91.2%) in nine prostate VMAT plans under 3 mm/3% criteria with a dose-calculation grid size of 2 mm. The pass rates increased to over 90% (average, 91.1%; range, 90.1–92.0%) when the dose-calculation grid size was decreased to 1 mm. We have confirmed that 3D radiochromic film dosimetry using the spiral water phantom is a simple and cost-effective approach to VMAT dose verification. PMID:23685667

  18. Dust distributions in debris disks: effects of gravity, radiation pressure and collisions

    NASA Astrophysics Data System (ADS)

    Krivov, A. V.; Löhne, T.; Sremčević, M.

    2006-08-01

    We model a typical debris disk, treated as an idealized ensemble of dust particles, exposed to stellar gravity and direct radiation pressure and experiencing fragmenting collisions. Applying the kinetic method of statistical physics, written in orbital elements, we calculate size and spatial distributions expected in a steady-state disk, investigate timescales needed to reach the steady state, and calculate mass loss rates. Particular numerical examples are given for the debris disk around Vega. The disk should comprise a population of larger grains in bound orbits and a population of smaller particles in hyperbolic orbits. The cross-section area is dominated by the smallest grains that can still stay in bound orbits, for Vega about 10 μm in radius. The size distribution is wavy, implying secondary peaks at larger sizes. The radial profile of the pole-on surface density or the optical depth in the steady-state disk has a power-law index between about -1 and -2. It cannot be much steeper even if dust production is confined to a narrow planetesimal belt, because collisional grinding produces smaller and smaller grains, and radiation pressure pumps up their orbital eccentricities and spreads them outward, which flattens the radial profile. The timescales to reach a steady state depend on grain sizes and distance from the star. For Vega, they are about 1 Myr for grains up to a few hundred μm at 100 AU. The total mass of the Vega disk needed to produce the observed amount of micron- and submillimeter-sized dust does not exceed several Earth masses for an upper size limit of parent bodies of about 1 km. The collisional depletion of the disk occurs on Gyr timescales.

  19. Sample size determination for estimating antibody seroconversion rate under stable malaria transmission intensity.

    PubMed

    Sepúlveda, Nuno; Drakeley, Chris

    2015-04-03

    In the last decade, several epidemiological studies have demonstrated the potential of using seroprevalence (SP) and seroconversion rate (SCR) as informative indicators of malaria burden in low transmission settings or in populations on the cusp of elimination. However, most such studies are designed to control statistical inference for parasite rates rather than for these alternative malaria burden measures. SP is in essence a proportion and, thus, many methods exist for the respective sample size determination. In contrast, designing a study where SCR is the primary endpoint is not an easy task, because precision and statistical power are affected by the age distribution of a given population. Two sample size calculators for SCR estimation are proposed. The first one consists of transforming the confidence interval for SP into the corresponding one for SCR given a known seroreversion rate (SRR). The second calculator extends the previous one to the most common situation where SRR is unknown. In this situation, data simulation was used together with linear regression in order to study the expected relationship between sample size and precision. The performance of the first sample size calculator was studied in terms of the coverage of the confidence intervals for SCR. The results pointed to potential problems of under- or over-coverage for sample sizes ≤250 in very low and high malaria transmission settings (SCR ≤ 0.0036 and SCR ≥ 0.29, respectively). The correct coverage was obtained for the remaining transmission intensities with sample sizes ≥ 50. Sample size determination was then carried out for cross-sectional surveys using realistic SCRs from past sero-epidemiological studies and typical age distributions from African and non-African populations. For SCR < 0.058, African studies require a larger sample size than their non-African counterparts in order to obtain the same precision. The opposite happens for the remaining transmission intensities. 
With respect to the second sample size calculator, simulation revealed the likelihood of not having enough information to estimate SRR in low transmission settings (SCR ≤ 0.0108). In that case, the respective estimates tend to underestimate the true SCR. This problem is minimized by sample sizes of no less than 500 individuals. The sample sizes determined by this second method highlighted the prior expectation that, when SRR is not known, sample sizes are increased in relation to the situation of a known SRR. In contrast to the first sample size calculation, African studies would now require fewer individuals than their counterparts conducted elsewhere, irrespective of the transmission intensity. Although the proposed sample size calculators can be instrumental in designing future cross-sectional surveys, the choice of a particular sample size must be seen as a much broader exercise that involves weighing statistical precision against ethical issues, available human and economic resources, and possible time constraints. Moreover, if the sample size determination is carried out on varying transmission intensities, as done here, the respective sample sizes can also be used in studies comparing sites with different malaria transmission intensities. In conclusion, the proposed sample size calculators are a step towards the design of better sero-epidemiological studies. Their basic ideas show promise for application to the planning of alternative sampling schemes that may target or oversample specific age groups.
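
    The first calculator's core idea, mapping a confidence interval for SP into one for SCR when SRR is known, can be sketched with the standard reverse catalytic model, SP(a) = SCR/(SCR+SRR) · (1 − e^−(SCR+SRR)·a). This is a simplified single-age version (the paper's calculators account for the full age distribution of the population), and all names and numeric values below are illustrative:

    ```python
    import math

    def seroprevalence(scr, srr, age):
        """Expected seroprevalence at a given age under the reverse
        catalytic model with stable transmission intensity:
        SP(a) = SCR/(SCR+SRR) * (1 - exp(-(SCR+SRR)*a))."""
        k = scr + srr
        return scr / k * (1.0 - math.exp(-k * age))

    def scr_from_sp(sp, srr, age, lo=1e-8, hi=10.0):
        """Invert SP -> SCR by bisection; SP is monotone increasing in SCR."""
        for _ in range(200):
            mid = 0.5 * (lo + hi)
            if seroprevalence(mid, srr, age) < sp:
                lo = mid
            else:
                hi = mid
        return 0.5 * (lo + hi)

    # Transform a 95% CI for SP into the corresponding CI for SCR,
    # given a known SRR (all numbers illustrative):
    srr, age = 0.01, 20.0
    ci_sp = (0.30, 0.40)
    ci_scr = tuple(scr_from_sp(sp, srr, age) for sp in ci_sp)
    ```

    Because SP is monotone in SCR, transforming both interval endpoints preserves the interval's ordering and (approximately) its coverage.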

  20. Spatial Variability of CCN Sized Aerosol Particles

    NASA Astrophysics Data System (ADS)

    Asmi, A.; Väänänen, R.

    2014-12-01

    Computational limitations restrict the grid size used in GCMs, and for many cloud types the grids are too large compared to the scale of the cloud formation processes. Several parameterizations for, e.g., convective cloud formation exist, but the spatial subgrid variation of the concentration of cloud condensation nuclei (CCN)-sized aerosol is not known. We quantify this variation as a function of spatial scale using datasets from airborne aerosol measurement campaigns around the world, including EUCAARI LONGREX, ATAR, INCA, INDOEX, CLAIRE, PEGASOS and several regional airborne campaigns in Finland. The typical shapes of the distributions are analyzed. When possible, we use information obtained by CCN counters; in other cases, we approximate the CCN concentration from particle size distributions measured by, for example, an SMPS. Other instruments used include optical particle counters and condensation particle counters. In GCMs, the CCN concentration in each grid box is often taken to be either flat or the arithmetic mean of the concentration inside the grid box. However, the aircraft data show that the concentration values are often lognormally distributed. This, combined with subgrid variations in land use and atmospheric properties, may cause the aerosol-cloud interactions calculated from mean values to differ significantly from the true effects, both temporally and spatially, which in turn can introduce non-linear biases into the GCMs. We calculate the CCN aerosol concentration distribution as a function of spatial scale. The measurements allow us to study the variation of these distributions from hundreds of meters up to hundreds of kilometers. This is used to quantify the potential error when mean values are used in GCMs.
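
    A small simulation illustrates why representing a grid box by the arithmetic mean of lognormally distributed concentrations can be misleading. The distribution parameters below are illustrative, not taken from the campaigns:

    ```python
    import math
    import random

    random.seed(42)
    # Simulated subgrid CCN concentrations (cm^-3): lognormal, as the
    # aircraft data suggest (mu and sigma are illustrative values).
    mu, sigma = math.log(200.0), 0.8
    samples = [random.lognormvariate(mu, sigma) for _ in range(100_000)]

    arith_mean = sum(samples) / len(samples)
    geo_mean = math.exp(sum(math.log(x) for x in samples) / len(samples))
    # For a lognormal, the arithmetic mean exp(mu + sigma^2/2) exceeds the
    # median exp(mu), so a flat grid-box mean overstates the typical
    # concentration that most of the subgrid actually sees.
    ```

    With sigma = 0.8 the arithmetic mean is roughly 40% above the median, and any non-linear cloud-activation scheme fed the mean will be biased accordingly.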

  1. Spatial distribution and yield of DNA double-strand breaks induced by 3-7 MeV helium ions in human fibroblasts

    NASA Technical Reports Server (NTRS)

    Rydberg, Bjorn; Heilbronn, Lawrence; Holley, William R.; Lobrich, Markus; Zeitlin, Cary; Chatterjee, Aloke; Cooper, Priscilla K.

    2002-01-01

    Accelerated helium ions with mean energies at the target location of 3-7 MeV were used to simulate alpha-particle radiation from radon daughters. The experimental setup and calibration procedure allowed determination of the helium-ion energy distribution and dose in the nuclei of irradiated cells. Using this system, the induction of DNA double-strand breaks and their spatial distributions along DNA were studied in irradiated human fibroblasts. It was found that the apparent number of double-strand breaks as measured by a standard pulsed-field gel assay (FAR assay) decreased with increasing LET in the range 67-120 keV/microm (corresponding to the energy of 7-3 MeV). On the other hand, the generation of small and intermediate-size DNA fragments (0.1-100 kbp) increased with LET, indicating an increased intratrack long-range clustering of breaks. The fragment size distribution was measured in several size classes down to the smallest class of 0.1-2 kbp. When the clustering was taken into account, the actual number of DNA double-strand breaks (separated by at least 0.1 kbp) could be calculated and was found to be in the range 0.010-0.012 breaks/Mbp Gy(-1). This is two- to threefold higher than the apparent yield obtained by the FAR assay. The measured yield of double-strand breaks as a function of LET is compared with theoretical Monte Carlo calculations that simulate the track structure of energy depositions from helium ions as they interact with the 30-nm chromatin fiber. When the calculation is performed to include fragments larger than 0.1 kbp (to correspond to the experimental measurements), there is good agreement between experiment and theory.

  2. Dose computation for therapeutic electron beams

    NASA Astrophysics Data System (ADS)

    Glegg, Martin Mackenzie

    The accuracy of electron dose calculations performed by two commercially available treatment planning computers, Varian Cadplan and Helax TMS, has been assessed. Measured values of absorbed dose delivered by a Varian 2100C linear accelerator, under a wide variety of irradiation conditions, were compared with doses calculated by the treatment planning computers. Much of the motivation for this work was provided by a requirement to verify the accuracy of calculated electron dose distributions in situations encountered clinically at Glasgow's Beatson Oncology Centre. Calculated dose distributions are required in a significant minority of electron treatments, usually in cases involving treatment to the head and neck. Here, therapeutic electron beams are subject to factors which may cause non-uniformity in the distribution of dose, and which may complicate the calculation of dose. The beam shape is often irregular, the beam may enter the patient at an oblique angle or at an extended source to skin distance (SSD), tissue inhomogeneities can alter the dose distribution, and tissue equivalent material (such as wax) may be added to reduce dose to critical organs. Technological advances have allowed the current generation of treatment planning computers to implement dose calculation algorithms with the ability to model electron beams in these complex situations. These calculations have, however, yet to be verified by measurement. This work has assessed the accuracy of calculations in a number of specific instances. Chapter two contains a comparison of measured and calculated planar electron isodose distributions. Three situations were considered: oblique incidence, incidence on an irregular surface (such as would arise from the use of wax to reduce dose to the spinal cord), and incidence on a phantom containing a small air cavity. Calculations were compared with measurements made by thermoluminescent dosimetry (TLD) in a WTe electron solid water phantom. 
Chapter three assesses the planning computers' ability to model electron beam penumbra at extended SSD. Calculations were compared with diode measurements in a water phantom. Further measurements assessed doses in the junction region produced by abutting an extended SSD electron field with opposed photon fields. Chapter four describes an investigation of the size and shape of the region enclosed by the 90% isodose line when produced by limiting the electron beam with square and elliptical apertures. The 90% isodose line was chosen because clinical treatments are often prescribed such that a given volume receives at least 90% dose. Calculated and measured dose distributions were compared in a plane normal to the beam central axis. Measurements were made by film dosimetry. While chapters two to four examine relative doses, chapter five assesses the accuracy of absolute dose (or output) calculations performed by the planning computers. Output variation with SSD and field size was examined. Two further situations already assessed for the distribution of relative dose were also considered: an obliquely incident field, and a field incident on an irregular surface. The accuracy of calculations was assessed against criteria stipulated by the International Commission on Radiation Units and Measurement (ICRU). The Varian Cadplan and Helax TMS treatment planning systems produce acceptable accuracy in the calculation of relative dose from therapeutic electron beams in most commonly encountered situations. When interpreting clinical dose distributions, however, knowledge of the limitations of the calculation algorithm employed by each system is required in order to identify the minority of situations where results are not accurate. The calculation of absolute dose is too inaccurate to implement in a clinical environment. (Abstract shortened by ProQuest.).

  3. Estimating probable flaw distributions in PWR steam generator tubes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gorman, J.A.; Turner, A.P.L.

    1997-02-01

    This paper describes methods for estimating the number and size distributions of flaws of various types in PWR steam generator tubes. These estimates are needed when calculating the probable primary to secondary leakage through steam generator tubes under postulated accidents such as severe core accidents and steam line breaks. The paper describes methods for two types of predictions: (1) the numbers of tubes with detectable flaws of various types as a function of time, and (2) the distributions in size of these flaws. Results are provided for hypothetical severely affected, moderately affected and lightly affected units. Discussion is provided regarding uncertainties and assumptions in the data and analyses.

  4. Pore size distribution calculation from 1H NMR signal and N2 adsorption-desorption techniques

    NASA Astrophysics Data System (ADS)

    Hassan, Jamal

    2012-09-01

    The pore size distribution (PSD) of nano-material MCM-41 is determined using two different approaches: N2 adsorption-desorption and the 1H NMR signal of water confined in the silica nano-pores of MCM-41. The first approach is based on the recently modified Kelvin equation [J.V. Rocha, D. Barrera, K. Sapag, Top. Catal. 54 (2011) 121-134], which addresses the known underestimation of pore size distribution for mesoporous materials such as MCM-41 by introducing a correction factor to the classical Kelvin equation. The second method employs the Gibbs-Thomson equation, using NMR, for the melting point depression of liquid in confined geometries. The results show that the two approaches give broadly similar pore size distributions, and that the NMR technique can be considered an alternative direct method for obtaining quantitative results, especially for mesoporous materials. The pore diameter estimated for the nano-material used in this study was about 35 and 38 Å for the modified Kelvin and NMR methods, respectively. A comparison between these methods and the classical Kelvin equation is also presented.
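
    The NMR route described above rests on the Gibbs-Thomson melting-point depression, often written ΔT = k_GT / d for cylindrical pores. A minimal sketch follows, assuming a literature-style constant of roughly 49.5 K·nm for water in silica pores; that value is an assumption and must be calibrated for the actual probe liquid and pore geometry:

    ```python
    def pore_diameter_gibbs_thomson(delta_t_k, k_gt=49.5):
        """Pore diameter (nm) from the measured melting-point depression
        delta_t_k (K) via the Gibbs-Thomson relation delta_T = k_gt / d.
        k_gt ~ 49.5 K*nm is an assumed, literature-style constant for water
        confined in silica pores; calibrate it for the actual system."""
        return k_gt / delta_t_k

    # A depression of ~13 K maps to a diameter near 3.8 nm (38 Å),
    # i.e. of the order of the MCM-41 value reported above.
    d_nm = pore_diameter_gibbs_thomson(13.0)
    ```

    Repeating this conversion across the measured melting-signal intensity as a function of temperature yields the full PSD rather than a single diameter.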

  5. Reliability of confidence intervals calculated by bootstrap and classical methods using the FIA 1-ha plot design

    Treesearch

    H. T. Schreuder; M. S. Williams

    2000-01-01

    In simulation sampling from forest populations using sample sizes of 20, 40, and 60 plots, respectively, confidence intervals based on the bootstrap (accelerated, percentile, and t-distribution based) were calculated and compared with classical t confidence intervals for mapped populations and subdomains within those populations. A 68.1 ha mapped...
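
    A minimal sketch of one of the interval types compared in the study, the percentile bootstrap confidence interval; the function name and the plot values are invented for illustration:

    ```python
    import random

    def bootstrap_percentile_ci(sample, stat=lambda xs: sum(xs) / len(xs),
                                n_boot=2000, alpha=0.05, seed=1):
        """Percentile bootstrap confidence interval for a statistic
        (default: the mean) from a single sample of plot values."""
        rng = random.Random(seed)
        n = len(sample)
        # Resample with replacement, compute the statistic, and sort.
        boots = sorted(stat([rng.choice(sample) for _ in range(n)])
                       for _ in range(n_boot))
        lo = boots[int(alpha / 2 * n_boot)]
        hi = boots[int((1 - alpha / 2) * n_boot) - 1]
        return lo, hi

    # e.g. a hypothetical 20-plot sample of per-plot volumes:
    plots = [12.0, 8.5, 15.2, 9.9, 11.3, 7.8, 14.1, 10.6, 13.4, 9.2,
             16.0, 8.1, 12.7, 10.0, 11.9, 9.5, 13.0, 7.5, 14.8, 10.4]
    ci = bootstrap_percentile_ci(plots)
    ```

    The accelerated (BCa) variant adjusts these percentile cut points for bias and skewness, which is why the study compares the variants against classical t intervals.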

  6. Inference and sample size calculation for clinical trials with incomplete observations of paired binary outcomes.

    PubMed

    Zhang, Song; Cao, Jing; Ahn, Chul

    2017-02-20

    We investigate the estimation of intervention effect and sample size determination for experiments where subjects are supposed to contribute paired binary outcomes with some incomplete observations. We propose a hybrid estimator to appropriately account for the mixed nature of observed data: paired outcomes from those who contribute complete pairs of observations and unpaired outcomes from those who contribute either pre-intervention or post-intervention outcomes. We theoretically prove that if incomplete data are evenly distributed between the pre-intervention and post-intervention periods, the proposed estimator will always be more efficient than the traditional estimator. A numerical study shows that when the distribution of incomplete data is unbalanced, the proposed estimator will be superior when there is moderate-to-strong positive within-subject correlation. We further derive a closed-form sample size formula to help researchers determine how many subjects need to be enrolled in such studies. Simulation results suggest that the calculated sample size maintains the empirical power and type I error under various design configurations. We demonstrate the proposed method using a real application example. Copyright © 2016 John Wiley & Sons, Ltd.

  7. Particle size analysis of some water/oil/water multiple emulsions.

    PubMed

    Ursica, L; Tita, D; Palici, I; Tita, B; Vlaia, V

    2005-04-29

    Particle size analysis gives useful information about the structure and stability of multiple emulsions, which are important characteristics of these systems. It also enables observation of the growth process of particles dispersed in multiple emulsions and, accordingly, the evolution of their dimensions over time. The size of multiple particles in seven water/oil/water (W/O/W) emulsions was determined by measuring the particle sizes observed during microscopic examination. In order to describe the distribution of the sizes of multiple particles, the values of two parameters that define particle size were calculated: the arithmetic mean diameter and the median diameter. The results of the particle size analysis of the seven W/O/W multiple emulsions studied are presented as histograms of the distribution density immediately after preparation and 1 and 3 months after the preparation of each emulsion, as well as through the mean and median particle diameters. The comparative study of the distribution histograms and of the mean and median diameters of W/O/W multiple particles indicates that the prepared emulsions are fine and very fine dispersions that are stable but show growth of the above-mentioned diameters during the study.
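
    The two descriptive parameters used above can be computed directly from a list of measured diameters. A sketch; the function name and units are mine:

    ```python
    def mean_and_median_diameter(diameters_um):
        """Arithmetic mean and median diameter of measured multiple-emulsion
        particles (diameters in micrometers)."""
        xs = sorted(diameters_um)
        n = len(xs)
        mean = sum(xs) / n
        # Median: middle value for odd n, average of the two middle
        # values for even n.
        median = xs[n // 2] if n % 2 else 0.5 * (xs[n // 2 - 1] + xs[n // 2])
        return mean, median
    ```

    For a right-skewed droplet population such as [1, 2, 3, 10] µm the mean (4.0 µm) sits well above the median (2.5 µm), which is why the abstract tracks both.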

  8. Size distributions of polycyclic aromatic hydrocarbons in urban atmosphere: sorption mechanism and source contributions to respiratory deposition

    NASA Astrophysics Data System (ADS)

    Lv, Yan; Li, Xiang; Xu, Ting Ting; Cheng, Tian Tao; Yang, Xin; Chen, Jian Min; Iinuma, Yoshiteru; Herrmann, Hartmut

    2016-03-01

    In order to better understand the particle size distribution of polycyclic aromatic hydrocarbons (PAHs) and their source contributions to the human respiratory system, size-resolved PAHs have been studied in ambient aerosols at a megacity site in Shanghai during a 1-year period (2012-2013). The results showed that the PAHs had a bimodal distribution, with one mode peak in the fine-particle size range (0.4-2.1 µm) and another in the coarse-particle size range (3.3-9.0 µm). Along with the increase in ring number of the PAHs, the intensity of the fine-mode peak increased, while that of the coarse-mode peak decreased. Plotting log(PAH / PM) against log(Dp) showed that all slope values were above -1, suggesting that multiple mechanisms (adsorption and absorption) control the particle size distribution of PAHs. The total deposition flux of PAHs in the respiratory tract was calculated to be 8.8 ± 2.0 ng h-1. The highest lifetime cancer risk (LCR) was estimated at 1.5 × 10-6, which exceeds the unit risk of 10-6. The LCR values presented here were mainly influenced by accumulation-mode PAHs, which came from biomass burning (24 %), coal combustion (25 %), and vehicular emission (27 %). The present study provides a mechanistic understanding of the particle size distribution of PAHs and their transport in the human respiratory system, which can help develop better source-control strategies.

  9. [Five years experience in the treatment of urinary lithiasis with the extracorporal lithotriptor MFL-5000].

    PubMed

    Montesino, M; Santiago, A; Millán, J A; Jiménez, J; Grasa, V; Sebastián, J L; Cía, T

    1998-01-01

    We present a review of the treatments of urinary calculi carried out in the Lithotripsy Unit during its first five years of operation with the Dornier MFL-5000 lithotriptor. We describe the location and size of the calculi, the distribution of the patients by age and sex, and the energy applied and time employed, and we compare our retreatment rate and the number of sessions per calculus with those published by other authors.

  10. Evolution of Particle Size Distributions in Fragmentation Over Time

    NASA Astrophysics Data System (ADS)

    Charalambous, C. A.; Pike, W. T.

    2013-12-01

    We present a new model of fragmentation based on a probabilistic calculation of the repeated fracture of a particle population. The resulting continuous solution, which is in closed form, gives the evolution of fragmentation products from an initial block, through a scale-invariant power-law relationship, to a final comminuted powder. Models for the fragmentation of particles have been developed separately in two main disciplines: the continuous integro-differential equations of batch mineral grinding (Reid, 1965) and the fractal analysis of geophysics (Turcotte, 1986), based on a discrete model with a single probability of fracture. The former gives a time-dependent development of the particle-size distribution but has resisted a closed-form solution, while the latter leads to the scale-invariant power laws but with no time dependence. Bird (2009) recently introduced a bridge between these two approaches with a step-wise iterative calculation of the fragmentation products. The development of the particle-size distribution occurs in discrete steps: during each fragmentation event, the particles repeatedly fracture probabilistically, cascading down the length scales to a final size distribution reached after all particles have failed to further fragment. We have identified this process as equivalent to a sequence of trials for each particle with a fixed probability of fragmentation. Although the resulting distribution is discrete, it can be reformulated as a continuous distribution in maturity over time and particle size. In our model, Turcotte's power-law distribution emerges at a unique maturation index that defines a regime boundary. Up to this index, the fragmentation is in an erosional regime with the initial particle size setting the scaling. Fragmentation beyond this index is in a regime of comminution with rebreakage of the particles down to the size limit of fracture. 
The maturation index can increment continuously, for example under grinding conditions, or in discrete steps, such as with impact events. In both cases our model gives the energy associated with the fragmentation in terms of the developing surface area of the population. We show the agreement of our model with the evolution of particle size distributions associated with episodic and continuous fragmentation, and how the evolution of some popular fractals may be represented using this approach. C. A. Charalambous and W. T. Pike (2013). Multi-Scale Particle Size Distributions of Mars, Moon and Itokawa based on a time-maturation dependent fragmentation model. Abstract submitted to the AGU 46th Fall Meeting. Bird, N. R. A., Watts, C. W., Tarquis, A. M., & Whitmore, A. P. (2009). Modeling dynamic fragmentation of soil. Vadose Zone Journal, 8(1), 197-201. Reid, K. J. (1965). A solution to the batch grinding equation. Chemical Engineering Science, 20(11), 953-963. Turcotte, D. L. (1986). Fractals and fragmentation. Journal of Geophysical Research: Solid Earth, 91(B2), 1921-1926.
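
    A minimal sketch of the probabilistic repeated-fracture idea described above: each particle fragments with a fixed probability per trial, cascading down the length scales until every particle fails to fragment or reaches the size limit of fracture. Each fracture here splits a particle into eight half-size fragments so that volume is conserved (8 · (s/2)³ = s³); all parameter values are illustrative, not the calibrated model of the abstract:

    ```python
    import random

    def fragmentation_event(sizes, p_frac=0.3, n_pieces=8, size_limit=1e-2,
                            rng=None):
        """One fragmentation event: every particle fractures with fixed
        probability p_frac into n_pieces fragments of half the linear size,
        and each fragment is retried in turn, cascading down the length
        scales until all particles have failed to fragment or have reached
        the size limit of fracture."""
        rng = rng or random.Random(0)
        survivors, stack = [], list(sizes)
        while stack:
            s = stack.pop()
            if s > size_limit and rng.random() < p_frac:
                stack.extend([s / 2.0] * n_pieces)  # cascade to the next scale
            else:
                survivors.append(s)                 # failed to fragment
        return survivors

    # Maturing a population from a single unit block via repeated events
    # (discrete increments of the maturation index, e.g. impact events):
    population = [1.0]
    for _ in range(3):
        population = fragmentation_event(population)
    ```

    Because each fracture conserves volume, the total s³ of the population stays constant while the size distribution migrates toward the fracture limit, mirroring the erosion-to-comminution progression described in the abstract.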

  11. Preparation of gold nanoparticles and determination of their particles size via different methods

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Iqbal, Muhammad; Usanase, Gisele; Oulmi, Kafia

    Graphical abstract: Preparation of gold nanoparticles via the NaBH{sub 4} reduction method, and determination of their particle size, size distribution and morphology by using different techniques. - Highlights: • Gold nanoparticles were synthesized by the NaBH{sub 4} reduction method. • An excess of reducing agent leads to a tendency toward aggregation. • The particle size, size distribution and morphology were investigated. • Particle size was determined both experimentally and theoretically. - Abstract: Gold nanoparticles have been used in various applications, including electronics, biosensors, in vivo biomedical imaging and in vitro biomedical diagnosis. As a general requirement, gold nanoparticles should be easy to prepare on a large scale and easy to functionalize with chemical compounds or with specific ligands or biomolecules. In this study, gold nanoparticles were prepared by using different concentrations of reducing agent (NaBH{sub 4}) in various formulations, and the effect on particle size, size distribution and morphology was investigated. Moreover, special attention has been dedicated to comparing the particle sizes measured by various techniques, such as light scattering, transmission electron microscopy, the UV spectrum with a standard curve, and the particle size calculated by applying Mie theory to the UV spectrum of the gold nanoparticle dispersion. The particle sizes determined by the various techniques can be correlated for monodisperse particles, and an excess of reducing agent leads to an increase in particle size.

  12. Aerosol size and chemical composition measurements at the Polar Environment Atmospheric Research Lab (PEARL) in Eureka, Nunavut

    NASA Astrophysics Data System (ADS)

    Hayes, P. L.; Tremblay, S.; Chang, R. Y. W.; Leaitch, R.; Kolonjari, F.; O'Neill, N. T.; Chaubey, J. P.; AboEl Fetouh, Y.; Fogal, P.; Drummond, J. R.

    2016-12-01

    This study presents observations of aerosol chemical composition and particle number size distribution at the Polar Environment Atmospheric Research Laboratory (PEARL) in the Canadian High Arctic (80N, 86W). The current aerosol measurement program at PEARL has been ongoing for more than a year, providing long-term observations of Arctic aerosol size distributions for both coarse and fine modes. Particle nucleation events were frequently observed during the summers of 2015 and 2016. The size distribution data are also compared against similar measurements taken at the Alert Global Atmospheric Watch Observatory (82N, 62W) for July and August 2015. The nucleation events are correlated at the two sites, despite a distance of approximately 500 km, suggesting regional conditions favorable for particle nucleation and growth during this period. Size-resolved chemical composition measurements were also carried out using an aerosol mass spectrometer. The smallest measured particles, between 40 and 60 nm, are almost entirely organic aerosol (OA), indicating that the condensation of organic vapors is responsible for particle growth events and possibly particle nucleation. This conclusion is further supported by the relatively high oxygen content of the OA, which is consistent with secondary formation of OA via atmospheric oxidation. Lastly, surface measurements of the aerosol scattering coefficient are compared against the coefficient values calculated using Mie theory and the measured aerosol size distribution. Both the measured and the calculated scattering coefficients are then compared to sun photometer measurements to understand the relationship between surface and columnar aerosol optical properties.
The measurements at PEARL provide a unique combination of surface and columnar data sets on aerosols in the High Arctic, a region where such measurements are scarce despite the important impact of aerosols on Arctic climate. PEARL research is supported by the Natural Sciences and Engineering Research Council (NSERC) of Canada, the Canadian Space Agency (CSA), and Environment and Climate Change Canada (ECCC). In addition, the Alert GAW Observatory is supported by ECCC.
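
    The closure described above, computing a scattering coefficient from a measured size distribution, can be sketched as a sum over size bins. As a simplifying assumption, the scattering efficiency is fixed at the large-particle limit Q_sca = 2 rather than computed from full Mie theory, and the bins and concentrations below are hypothetical, not data from PEARL.

```python
import math

def scattering_coefficient(radii_m, number_conc_m3, q_sca=2.0):
    """Estimate the aerosol scattering coefficient (1/m) as a sum over
    size bins of Q_sca * geometric cross-section * number concentration.
    q_sca = 2 is the large-particle (extinction paradox) limit; a full
    Mie calculation would supply Q_sca(r, wavelength) for each bin."""
    return sum(q_sca * math.pi * r ** 2 * n
               for r, n in zip(radii_m, number_conc_m3))

# hypothetical coarse-mode bins: radius (m) and number concentration (1/m^3)
radii = [0.5e-6, 1.0e-6, 2.0e-6]
conc = [5.0e7, 2.0e7, 0.5e7]
b_sca = scattering_coefficient(radii, conc)
print(b_sca)
```

    Replacing the constant q_sca with a per-bin Mie efficiency is the step that turns this sketch into the closure calculation the abstract describes.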

  13. Effect of particle size distribution on permeability in the randomly packed porous media

    NASA Astrophysics Data System (ADS)

    Markicevic, Bojan

    2017-11-01

    How porous medium heterogeneity influences permeability remains an open question: both increases and decreases in the permeability value have been reported. A numerical procedure is used to generate a randomly packed porous material consisting of spherical particles. Six different particle size distributions are used, including mono-, bi- and tri-disperse particles, as well as uniform, normal and log-normal particle size distributions, with the maximum to minimum particle size ratio ranging from three to eight across the distributions. In all six cases, the average particle size is kept the same. For all media generated, the stochastic homogeneity is checked from the distributions of the three coordinates of the particle centers, where uniform distributions of the x-, y- and z-positions are found. The medium surface area remains essentially constant except for the bi-modal distribution, in which the medium area decreases, while no changes in the porosity are observed (around 0.36). The fluid flow is solved in such a domain, and after checking that the pressure varies linearly along the axis, the permeability is calculated from the Darcy law. The permeability comparison reveals that the permeability of the mono-disperse medium is smallest, and the permeability of all poly-disperse samples is less than ten percent higher. For bi-modal particles, the permeability is about a quarter higher than in the other media, which can be explained by the volumetric contribution of larger particles and the larger passages available for fluid flow.
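
    The final step of the workflow above, extracting permeability from the Darcy law, is a one-line rearrangement. The numbers below are illustrative placeholders, not values from the study.

```python
def darcy_permeability(flow_rate, viscosity, length, area, dp):
    """Permeability k (m^2) from the Darcy law Q = k * A * dP / (mu * L):
    Q volumetric flow rate (m^3/s), mu dynamic viscosity (Pa s),
    L sample length (m), A cross-section (m^2), dP pressure drop (Pa)."""
    return flow_rate * viscosity * length / (area * dp)

# illustrative values (not taken from the study)
k = darcy_permeability(flow_rate=1.0e-6, viscosity=1.0e-3,
                       length=0.01, area=1.0e-4, dp=1.0e3)
print(k)
```

    The check that pressure is linear along the axis matters because this formula assumes a uniform gradient dP/L over the sample.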

  14. Atmospheric particulate matter size distribution and concentration in West Virginia coal mining and non-mining areas.

    PubMed

    Kurth, Laura M; McCawley, Michael; Hendryx, Michael; Lusk, Stephanie

    2014-07-01

    People who live in Appalachian areas where coal mining is prominent have increased health problems compared with people in non-mining areas of Appalachia. Coal mines and related mining activities result in the production of atmospheric particulate matter (PM) that is associated with human health effects. There is a gap in research regarding particle size concentration and distribution to determine respiratory dose around coal mining and non-mining areas. Mass- and number-based size distributions were determined with an Aerodynamic Particle Sizer and a Scanning Mobility Particle Sizer to calculate lung deposition around mining and non-mining areas of West Virginia. Particle number concentrations and deposited lung dose were significantly greater around mining areas compared with non-mining areas, demonstrating elevated risks to humans. The greater dose was correlated with elevated disease rates in the West Virginia mining areas. Number concentrations in the mining areas were comparable to a previously documented urban area where number concentration was associated with respiratory and cardiovascular disease.

  15. Dispersion and sampling of adult Dermacentor andersoni in rangeland in Western North America.

    PubMed

    Rochon, K; Scoles, G A; Lysyk, T J

    2012-03-01

    A fixed precision sampling plan was developed for off-host populations of the adult Rocky Mountain wood tick, Dermacentor andersoni (Stiles), based on data collected by dragging at 13 locations in Alberta, Canada; Washington; and Oregon. In total, 222 site-date combinations were sampled. Each site-date combination was considered a sample, and each sample ranged in size from 86 to 250 quadrats of 10 m2. Analysis of simulated quadrats ranging in size from 10 to 50 m2 indicated that the most precise sample unit was the 10 m2 quadrat. Samples taken when abundance was < 0.04 ticks per 10 m2 were more likely not to depart significantly from statistical randomness than samples taken when abundance was greater. Data were grouped into ten abundance classes and assessed for fit to the Poisson and negative binomial distributions. The Poisson distribution fit only data in abundance classes < 0.02 ticks per 10 m2, while the negative binomial distribution fit data from all abundance classes. A negative binomial distribution with common k = 0.3742 fit data in eight of the ten abundance classes. Both the Taylor and Iwao mean-variance relationships were fit and used to predict sample sizes for a fixed level of precision. Sample sizes predicted using the Taylor model tended to underestimate actual sample sizes, while sample sizes estimated using the Iwao model tended to overestimate actual sample sizes. Using a negative binomial with common k provided estimates of required sample sizes closest to empirically calculated sample sizes.
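
    The Taylor-model sample-size prediction mentioned above follows from Taylor's power law s^2 = a * m^b: for a fixed relative precision D (standard error divided by the mean), the required number of quadrats is n = a * m^(b-2) / D^2. The coefficients below are hypothetical, chosen only to illustrate the formula, not the fitted values from this study.

```python
def taylor_sample_size(mean, a, b, precision):
    """Number of quadrats needed for a fixed relative precision D
    (standard error / mean) under Taylor's power law s^2 = a * m^b:
    n = a * m^(b - 2) / D^2."""
    return a * mean ** (b - 2) / precision ** 2

# hypothetical Taylor coefficients, for illustration only
n = taylor_sample_size(mean=0.05, a=2.0, b=1.3, precision=0.25)
print(round(n))
```

    Since b < 2 for aggregated populations, the required sample size grows as abundance falls, which is why sparse tick populations demand the largest sampling effort.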

  16. Development of an ejecta particle size measurement diagnostic based on Mie scattering

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Schauer, Martin Michael; Buttler, William Tillman; Frayer, Daniel K.

    The goal of this work is to determine the feasibility of extracting the size of particles ejected from shocked metal surfaces (ejecta) from the angular distribution of light scattered by a cloud of such particles. The basis of the technique is the Mie theory of scattering, and implicit in this approach are the assumptions that the scattering particles are spherical and that single scattering conditions prevail. The meaning of this latter assumption, as far as experimental conditions are concerned, will become clear later. The solution to Maxwell’s equations for spherical particles illuminated by a plane electromagnetic wave was derived by Gustav Mie more than 100 years ago, but several modern treatises discuss this solution in great detail. The solution is a complicated series expansion of the scattered electric field, as well as the field within the particle, from which the total scattering and absorption cross sections as well as the angular distribution of scattered intensity can be calculated numerically. The detailed nature of the scattering is determined by the complex index of refraction of the particle material as well as the particle size parameter x, which is the product of the wavenumber of the incident light and the particle radius, i.e. x = 2πr/λ. Figure 1 shows the angular distribution of scattered light for different particle size parameters and two orthogonal incident light polarizations as calculated using the Mie solution. It is obvious that the scattering pattern is strongly dependent on the particle size parameter, becoming more forward-directed and less polarization-dependent as the particle size parameter increases. This trend forms the basis for the diagnostic design.
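
    The size parameter defined above is simple to compute; the radius and wavelength below are illustrative values, not parameters of the diagnostic.

```python
import math

def size_parameter(radius, wavelength):
    """Mie size parameter x = 2 * pi * r / lambda (same length units)."""
    return 2.0 * math.pi * radius / wavelength

# a 0.5 micron radius particle illuminated at 532 nm
x = size_parameter(0.5e-6, 532e-9)
print(round(x, 3))  # 5.905
```

    Values of x well above 1, as here, are the regime where the scattering pattern becomes strongly forward-directed, the trend the diagnostic exploits.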

  17. Comparison of results of experimental research with numerical calculations of a model one-sided seal

    NASA Astrophysics Data System (ADS)

    Joachimiak, Damian; Krzyślak, Piotr

    2015-06-01

    This paper presents the results of experimental and numerical research on a model segment of a labyrinth seal at different levels of wear. The analysis covers the extent of leakage and the distribution of static pressure in the seal chambers and in the planes upstream and downstream of the segment. The measurement data have been compared with the results of numerical calculations obtained using commercial software. Based on the flow conditions occurring in the area subjected to calculations, the mesh resolution, characterized by the parameter y+, has been analyzed and the selection of the turbulence model has been described. The numerical calculations were based on the measurable thermodynamic parameters in the seal segments of steam turbines. The work contains a comparison of the mass flow and the distribution of static pressure in the seal chambers obtained during the measurements and calculated numerically for a model seal segment at different levels of wear.

  18. Colour dependence of zodiacal light models

    NASA Technical Reports Server (NTRS)

    Giese, R. H.; Hanner, M. S.; Leinert, C.

    1973-01-01

    Colour models of the zodiacal light in the ecliptic have been calculated for both dielectric and metallic particles in the sub-micron and micron size range. Two colour ratios were computed, a blue ratio and a red ratio. The models with a size distribution proportional to s^-2.5 ds (where s is the particle radius) generally show a colour close to the solar colour and almost independent of elongation. Especially in the blue colour ratio there is generally no significant dependence on the lower cutoff size (0.1-1 micron). The main feature of absorbing particles is a reddening at small elongations. The models for size distributions proportional to s^-4 ds show larger departures from solar colour and more variation with model parameters. Colour measurements, including red and near infra-red, therefore are useful to distinguish between flat and steep size spectra and to verify the presence of slightly absorbing particles.

  19. The decline and fall of Type II error rates

    Treesearch

    Steve Verrill; Mark Durst

    2005-01-01

    For general linear models with normally distributed random errors, the probability of a Type II error decreases exponentially as a function of sample size. This potentially rapid decline reemphasizes the importance of performing power calculations.
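
    The exponential decline described above can be made concrete with the standard normal approximation for a two-sided one-sample z-test: the Type II error is beta = 1 - Phi(d * sqrt(n) - z_crit), whose Gaussian tail shrinks roughly exponentially in n. The effect size 0.5 and the sample sizes below are illustrative assumptions.

```python
import math

def phi(z):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def type_ii_error(effect, n, alpha_z=1.959963984540054):
    """Normal-approximation Type II error (beta) for a two-sided
    one-sample z-test with standardized effect size `effect` and
    sample size n; alpha_z is the 0.975 normal quantile (alpha = 0.05)."""
    return 1.0 - phi(effect * math.sqrt(n) - alpha_z)

# beta collapses rapidly as n grows, illustrating the abstract's point
for n in (10, 40, 160):
    print(n, type_ii_error(0.5, n))
```

    Running this shows beta falling from roughly two-thirds at n = 10 to a negligible value at n = 160, which is exactly why power calculations before a study are worthwhile.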

  20. Correlation functions in first-order phase transitions

    NASA Astrophysics Data System (ADS)

    Garrido, V.; Crespo, D.

    1997-09-01

    Most of the physical properties of systems underlying first-order phase transitions can be obtained from the spatial correlation functions. In this paper, we obtain expressions that allow us to calculate all the correlation functions from the droplet size distribution. Nucleation and growth kinetics are considered, and exact solutions are obtained for the case of isotropic growth by using self-similarity properties. The calculation is performed by using the particle size distribution obtained by a recently developed model (populational Kolmogorov-Johnson-Mehl-Avrami model). Since this model is less restrictive than those used in previously existing theories, the correlation functions can be obtained for any dependence of the kinetic parameters. The validity of the method is tested by comparison with the exact correlation functions, which had been obtained in the available cases by the time-cone method. Finally, the correlation functions corresponding to the microstructure developed in partitioning transformations are obtained.

  1. Publication Bias in Psychology: A Diagnosis Based on the Correlation between Effect Size and Sample Size

    PubMed Central

    Kühberger, Anton; Fritz, Astrid; Scherndl, Thomas

    2014-01-01

    Background The p value obtained from a significance test provides no information about the magnitude or importance of the underlying phenomenon. Therefore, additional reporting of effect size is often recommended. Effect sizes are theoretically independent from sample size. Yet this may not hold true empirically: non-independence could indicate publication bias. Methods We investigate whether effect size is independent from sample size in psychological research. We randomly sampled 1,000 psychological articles from all areas of psychological research. We extracted p values, effect sizes, and sample sizes of all empirical papers, and calculated the correlation between effect size and sample size, and investigated the distribution of p values. Results We found a negative correlation of r = −.45 [95% CI: −.53; −.35] between effect size and sample size. In addition, we found an inordinately high number of p values just passing the boundary of significance. Additional data showed that neither implicit nor explicit power analysis could account for this pattern of findings. Conclusion The negative correlation between effect size and sample size, and the biased distribution of p values, indicate pervasive publication bias in the entire field of psychology. PMID:25192357

  2. Publication bias in psychology: a diagnosis based on the correlation between effect size and sample size.

    PubMed

    Kühberger, Anton; Fritz, Astrid; Scherndl, Thomas

    2014-01-01

    The p value obtained from a significance test provides no information about the magnitude or importance of the underlying phenomenon. Therefore, additional reporting of effect size is often recommended. Effect sizes are theoretically independent from sample size. Yet this may not hold true empirically: non-independence could indicate publication bias. We investigate whether effect size is independent from sample size in psychological research. We randomly sampled 1,000 psychological articles from all areas of psychological research. We extracted p values, effect sizes, and sample sizes of all empirical papers, and calculated the correlation between effect size and sample size, and investigated the distribution of p values. We found a negative correlation of r = -.45 [95% CI: -.53; -.35] between effect size and sample size. In addition, we found an inordinately high number of p values just passing the boundary of significance. Additional data showed that neither implicit nor explicit power analysis could account for this pattern of findings. The negative correlation between effect size and sample size, and the biased distribution of p values, indicate pervasive publication bias in the entire field of psychology.
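
    The key statistic in the study above is a Pearson correlation between effect size and sample size. A minimal sketch with toy data mimicking the reported pattern (larger studies reporting smaller effects) follows; the data points are invented for illustration and do not reproduce the r = -.45 result.

```python
import math

def pearson_r(xs, ys):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sxx = sum((x - mx) ** 2 for x in xs)
    syy = sum((y - my) ** 2 for y in ys)
    return sxy / math.sqrt(sxx * syy)

# toy data mimicking the reported pattern: larger studies, smaller effects
sample_sizes = [20, 35, 50, 80, 120, 200, 400]
effect_sizes = [0.90, 0.70, 0.65, 0.50, 0.45, 0.30, 0.20]
r = pearson_r(sample_sizes, effect_sizes)
print(round(r, 2))
```

    Under publication bias, small studies only reach print when they happen to find large effects, which is what produces a negative r of this kind.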

  3. Field-size dependence of doses of therapeutic carbon beams.

    PubMed

    Kusano, Yohsuke; Kanai, Tatsuaki; Yonai, Shunsuke; Komori, Masataka; Ikeda, Noritoshi; Tachikawa, Yuji; Ito, Atsushi; Uchida, Hirohisa

    2007-10-01

    To estimate the physical dose at the center of spread-out Bragg peaks (SOBP) for various conditions of the irradiation system, a semiempirical approach was applied. The dose at the center of the SOBP depends on the field size because of large-angle scattering particles in the water phantom. For a small field of 5 x 5 cm2, the dose was reduced to 99.2%, 97.5%, and 96.5% of the dose used for the open field in the case of 290, 350, and 400 MeV/n carbon beams, respectively. Based on the three-Gaussian form of the lateral dose distributions of the carbon pencil beam, which has previously been shown to be effective for describing scattered carbon beams, we reconstructed the dose distributions of the SOBP beam. The reconstructed lateral dose distribution reproduced the measured lateral dose distributions very well. The field-size dependencies calculated using the reconstructed lateral dose distribution of the therapeutic carbon beam agreed with the measured dose dependency very well. The reconstructed beam was also used for irregularly shaped fields. The resultant dose distribution agreed with the measured dose distribution. The reconstructed beams were found to be applicable to the treatment-planning system.

  4. Are power calculations useful? A multicentre neuroimaging study

    PubMed Central

    Suckling, John; Henty, Julian; Ecker, Christine; Deoni, Sean C; Lombardo, Michael V; Baron-Cohen, Simon; Jezzard, Peter; Barnes, Anna; Chakrabarti, Bhismadev; Ooi, Cinly; Lai, Meng-Chuan; Williams, Steven C; Murphy, Declan GM; Bullmore, Edward

    2014-01-01

    There are now many reports of imaging experiments with small cohorts of typical participants that precede large-scale, often multicentre studies of psychiatric and neurological disorders. Data from these calibration experiments are sufficient to make estimates of statistical power and predictions of sample size and minimum observable effect sizes. In this technical note, we suggest how previously reported voxel-based power calculations can support decision making in the design, execution and analysis of cross-sectional multicentre imaging studies. The choice of MRI acquisition sequence, distribution of recruitment across acquisition centres, and changes to the registration method applied during data analysis are considered as examples. The consequences of modification are explored in quantitative terms by assessing the impact on sample size for a fixed effect size and detectable effect size for a fixed sample size. The calibration experiment dataset used for illustration was a precursor to the now complete Medical Research Council Autism Imaging Multicentre Study (MRC-AIMS). Validation of the voxel-based power calculations is made by comparing the predicted values from the calibration experiment with those observed in MRC-AIMS. The effect of non-linear mappings during image registration to a standard stereotactic space on the prediction is explored with reference to the amount of local deformation. In summary, power calculations offer a validated, quantitative means of making informed choices on important factors that influence the outcome of studies that consume significant resources. PMID:24644267

  5. Ionic Size Effects: Generalized Boltzmann Distributions, Counterion Stratification, and Modified Debye Length.

    PubMed

    Liu, Bo; Liu, Pei; Xu, Zhenli; Zhou, Shenggao

    2013-10-01

    Near a charged surface, counterions of different valences and sizes cluster, and their concentration profiles stratify. At a distance from such a surface larger than the Debye length, the electric field is screened by counterions. Recent studies by a variational mean-field approach that includes ionic size effects and by Monte Carlo simulations both suggest that the counterion stratification is determined by the ionic valence-to-volume ratios. Central in the mean-field approach is a free-energy functional of ionic concentrations in which the ionic size effects are included through the entropic effect of solvent molecules. The corresponding equilibrium conditions define the generalized Boltzmann distributions relating the ionic concentrations to the electrostatic potential. This paper presents a detailed analysis and numerical calculations of such a free-energy functional to understand the dependence of the ionic charge density on the electrostatic potential through the generalized Boltzmann distributions, the role of ionic valence-to-volume ratios in the counterion stratification, and the modification of Debye length due to the effect of ionic sizes.
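
    The flavor of a size-modified Boltzmann distribution can be sketched with the simpler Bikerman-style variant, in which all ions share one volume v; this is a stand-in for the paper's more general nonuniform-size functional, and the reduced units, potential, and concentrations below are all illustrative assumptions.

```python
import math

def size_modified_boltzmann(psi, bulk, valences, v, kT=1.0, e=1.0):
    """Bikerman-style size-modified Boltzmann concentrations near a
    charged surface (uniform ion volume v, reduced units): the common
    denominator caps the total packing fraction, so counterion
    concentrations saturate near 1/v instead of diverging."""
    boltz = [c * math.exp(-z * e * psi / kT) for c, z in zip(bulk, valences)]
    denom = 1.0 + sum(v * (b - c) for b, c in zip(boltz, bulk))
    return [b / denom for b in boltz]

# 1:1 electrolyte at a strongly negative surface potential
c = size_modified_boltzmann(psi=-10.0, bulk=[0.1, 0.1],
                            valences=[+1, -1], v=0.5)
print(c[0])
```

    The bare Boltzmann factor at psi = -10 would predict a counterion concentration of 0.1 * e^10, about 2200; the steric denominator caps it just below 1/v = 2, the saturation behavior that underlies counterion stratification.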

  6. Ionic Size Effects: Generalized Boltzmann Distributions, Counterion Stratification, and Modified Debye Length

    PubMed Central

    Liu, Bo; Liu, Pei; Xu, Zhenli; Zhou, Shenggao

    2013-01-01

    Near a charged surface, counterions of different valences and sizes cluster, and their concentration profiles stratify. At a distance from such a surface larger than the Debye length, the electric field is screened by counterions. Recent studies by a variational mean-field approach that includes ionic size effects and by Monte Carlo simulations both suggest that the counterion stratification is determined by the ionic valence-to-volume ratios. Central in the mean-field approach is a free-energy functional of ionic concentrations in which the ionic size effects are included through the entropic effect of solvent molecules. The corresponding equilibrium conditions define the generalized Boltzmann distributions relating the ionic concentrations to the electrostatic potential. This paper presents a detailed analysis and numerical calculations of such a free-energy functional to understand the dependence of the ionic charge density on the electrostatic potential through the generalized Boltzmann distributions, the role of ionic valence-to-volume ratios in the counterion stratification, and the modification of Debye length due to the effect of ionic sizes. PMID:24465094

  7. Reciprocal-space mapping of epitaxic thin films with crystallite size and shape polydispersity.

    PubMed

    Boulle, A; Conchon, F; Guinebretière, R

    2006-01-01

    A development is presented that allows the simulation of reciprocal-space maps (RSMs) of epitaxic thin films exhibiting fluctuations in the size and shape of the crystalline domains over which diffraction is coherent (crystallites). Three different crystallite shapes are studied, namely parallelepipeds, trigonal prisms and hexagonal prisms. For each shape, two cases are considered. Firstly, the overall size is allowed to vary but with a fixed thickness/width ratio. Secondly, the thickness and width are allowed to vary independently. The calculations are performed assuming three different size probability density functions: the normal distribution, the lognormal distribution and a general histogram distribution. In all cases considered, the computation of the RSM only requires a two-dimensional Fourier integral and the integrand has a simple analytical expression, i.e. there is no significant increase in computing times by taking size and shape fluctuations into account. The approach presented is compatible with most lattice disorder models (dislocations, inclusions, mosaicity, ...) and allows a straightforward account of the instrumental resolution. The applicability of the model is illustrated with the case of an yttria-stabilized zirconia film grown on sapphire.

  8. Classification of spray nozzles based on droplet size distributions and wind tunnel tests.

    PubMed

    De Schamphelerie, M; Spanoghe, P; Nuyttens, D; Baetens, K; Cornelis, W; Gabriels, D; Van der Meeren, P

    2006-01-01

    Droplet size distribution of a pesticide spray is recognised as a main factor affecting spray drift. As a first approximation, nozzles can be classified based on their droplet size spectrum. However, the risk of drift for a given droplet size distribution is also a function of spray structure, droplet velocities and entrained air conditions. Wind tunnel tests to determine the actual drift potentials of different nozzles have been proposed as a method of adding an indication of the risk of spray drift to the existing classification based on droplet size distributions (Miller et al., 1995). In this research, wind tunnel tests were performed in the wind tunnel of the International Centre for Eremology (I.C.E.), Ghent University, to determine the drift potential of different types and sizes of nozzles at various spray pressures. Flat Fan (F) nozzles Hardi ISO 110 02, 110 03, 110 04, 110 06; Low-Drift (LD) nozzles Hardi ISO 110 02, 110 03, 110 04; and Injet Air Inclusion (AI) nozzles Hardi ISO 110 02, 110 03, 110 04 were tested at spray pressures of 2, 3 and 4 bar. The droplet size spectra of the F and the LD nozzles were measured with a Malvern Mastersizer at spray pressures of 2, 3 and 4 bar. The Malvern spectra were used to calculate the Volume Median Diameters (VMD) of the sprays.
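
    The VMD calculation mentioned above finds the diameter below which half the spray volume lies, by interpolating on the cumulative volume distribution. The binned spectrum below is a hypothetical Malvern-style example, not measured data from this study.

```python
def volume_median_diameter(diameters_um, volume_fractions):
    """Volume Median Diameter (Dv50): the droplet diameter below which
    half of the spray volume lies, found by linear interpolation on the
    cumulative volume distribution. volume_fractions should sum to 1."""
    cum = 0.0
    prev_d, prev_cum = diameters_um[0], 0.0
    for d, f in zip(diameters_um, volume_fractions):
        cum += f
        if cum >= 0.5:
            # interpolate between the previous and current bin edge
            return prev_d + (d - prev_d) * (0.5 - prev_cum) / (cum - prev_cum)
        prev_d, prev_cum = d, cum
    return diameters_um[-1]

# hypothetical binned spectrum: upper bin edges (um) and volume fractions
d_bins = [50, 100, 150, 200, 300, 400]
v_frac = [0.05, 0.15, 0.25, 0.25, 0.20, 0.10]
vmd = volume_median_diameter(d_bins, v_frac)
print(vmd)  # approximately 160 um
```

    Sprays with a smaller VMD contain proportionally more fine, drift-prone droplets, which is why VMD anchors the nozzle classification.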

  9. Pore water colloid properties in argillaceous sedimentary rocks.

    PubMed

    Degueldre, Claude; Cloet, Veerle

    2016-11-01

    The focus of this work is to evaluate the colloid nature, concentration and size distribution in the pore water of Opalinus Clay and other sedimentary host rocks identified for a potential radioactive waste repository in Switzerland. Because colloids could not be measured in representative undisturbed porewater of these host rocks, predictive modelling based on data from field and laboratory studies is applied. This approach allowed estimating the nature, concentration and size distributions of the colloids in the pore water of these host rocks. As a result of field campaigns, groundwater colloid concentrations are investigated on the basis of their size distribution quantified experimentally using single particle counting techniques. The colloid properties are estimated considering data gained from analogue hydrogeochemical systems ranging from mylonite features in crystalline fissures to sedimentary formations. The colloid concentrations were analysed as a function of the alkaline and alkaline earth element concentrations. Laboratory batch results on clay colloid generation from compacted pellets in quasi-stagnant water are also reported. Experiments with colloids in batch containers indicate that the size distribution of a colloidal suspension evolves toward a common particle size distribution independently of initial conditions. The final suspension size distribution was found to be a function of the attachment factor of the colloids. Finally, calculations were performed using a novel colloid distribution model based on colloid generation, aggregation and sedimentation rates to predict under in-situ conditions what makes colloid concentrations and size distributions batch- or fracture-size dependent. The data presented so far are compared with the field and laboratory data. The colloid occurrence, stability and mobility have been evaluated for the water of the considered potential host rocks. 
In the pore water of the considered sedimentary host rocks, the clay colloid concentration is expected to be very low (<1 ppb for 10-100 nm), which restricts their relevance for radionuclide transport.

  10. Morphologically controlled synthesis of ferric oxide nano/micro particles and their catalytic application in dry and wet media: a new approach.

    PubMed

    Janjua, Muhammad Ramzan Saeed Ashraf; Jamil, Saba; Jahan, Nazish; Khan, Shanza Rauf; Mirza, Saima

    2017-05-31

    Morphologically controlled synthesis of ferric oxide nano/micro particles has been carried out by using a solvothermal route. Structural characterization shows that the predominant morphologies are porous hollow spheres, microspheres, micro rectangular platelets, octahedral and irregularly shaped particles. It is also observed that the solvent has a significant effect on morphology, namely the shape and size of the particles. All the morphologies obtained by using different solvents are nearly uniform with a narrow size distribution range. The values of full width at half maximum (FWHM) of all the products were calculated to compare their size distributions. The FWHM value varies with particle size: for example, small particles show polydispersity whereas large particles show monodispersity. The size of the particles increases with decreasing solvent polarity, and their shape changes from spherical to rectangular/irregular with decreasing solvent polarity. The catalytic activities of all the products were investigated for both dry and wet processes, namely the thermal decomposition of ammonium perchlorate (AP) and the reduction of 4-nitrophenol in aqueous media. The results indicate that each product has a tendency to act as a catalyst. The porous hollow spheres decrease the thermal decomposition temperature of AP by 140 °C and the octahedral Fe3O4 particles decrease it by 30 °C. The value of the apparent rate constant (k_app) for the reduction of 4-NP has also been calculated.
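
    The apparent rate constant mentioned above is conventionally extracted from the pseudo-first-order decay ln(A_t/A_0) = -k_app * t of the 4-nitrophenolate absorbance. The sketch below fits that slope through the origin on synthetic data; the time points and the k = 0.01 1/s used to generate them are illustrative assumptions, not values from the paper.

```python
import math

def apparent_rate_constant(times_s, absorbances):
    """Pseudo-first-order k_app (1/s) from the decay of absorbance,
    ln(A_t / A_0) = -k_app * t, fitted by least squares through the
    origin (the usual treatment of 4-nitrophenol reduction)."""
    a0 = absorbances[0]
    num = sum(t * math.log(a / a0) for t, a in zip(times_s, absorbances))
    den = sum(t * t for t in times_s)
    return -num / den

# synthetic absorbance decay generated with k = 0.01 1/s
times = [0, 60, 120, 180, 240]
absorb = [1.0 * math.exp(-0.01 * t) for t in times]
k_app = apparent_rate_constant(times, absorb)
print(round(k_app, 4))  # 0.01
```

    With real spectra the same fit is applied to the 400 nm absorbance trace, and a larger k_app indicates a more active catalyst.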

  11. Organ and effective dose rate coefficients for submersion exposure in occupational settings

    DOE PAGES

    Veinot, K. G.; Y-12 National Security Complex, Oak Ridge, TN; Dewji, S. A.; ...

    2017-08-24

    External dose coefficients for environmental exposure scenarios are often computed using assumptions of infinite or semi-infinite radiation sources. For example, in the case of a person standing on contaminated ground, the source is assumed to be distributed at a given depth (or between various depths) and extending outwards to an essentially infinite distance. In the case of exposure to contaminated air, the person is modeled as standing within a cloud of infinite, or semi-infinite, source distribution. However, these scenarios do not mimic common workplace environments, where scatter off walls and ceilings may significantly alter the energy spectrum and dose coefficients. In this study, dose rate coefficients were calculated using the International Commission on Radiological Protection (ICRP) reference voxel phantoms positioned in rooms of three sizes representing an office, laboratory, and warehouse. For each room size, calculations using the reference phantoms were performed for photons, electrons, and positrons as the source particles to derive mono-energetic dose rate coefficients. Since the voxel phantoms lack the resolution to perform dose calculations at the sensitive depth for the skin, a mathematical phantom was developed and calculations were performed in each room size with the three source particle types. Coefficients for the noble gas radionuclides of ICRP Publication 107 (e.g., Ne, Ar, Kr, Xe, and Rn) were generated by folding the corresponding photon, electron, and positron emissions over the mono-energetic dose rate coefficients. The results indicate that the smaller room sizes have a significant impact on the dose rate per unit air concentration compared to the semi-infinite cloud case. For example, for Kr-85 the warehouse dose rate coefficient is 7% higher than the office dose rate coefficient, while it is 71% higher for Xe-133.

  12. Theoretical Infrared Spectra for Polycyclic Aromatic Hydrocarbon Neutrals, Cations and Anions

    NASA Technical Reports Server (NTRS)

    Langhoff, Stephen R.

    1995-01-01

Calculations are carried out using density functional theory (DFT) to determine the harmonic frequencies and intensities of the neutrals and cations of thirteen polycyclic aromatic hydrocarbons (PAHs) up to the size of ovalene. Calculations are also carried out for a few PAH anions. The DFT harmonic frequencies, when uniformly scaled by a factor of 0.958 to account primarily for anharmonicity, agree with the matrix isolation fundamentals to within an average error of about 10 cm^-1. Electron correlation is found to significantly reduce the intensities of many of the cation harmonics, bringing them into much better agreement with the available experimental data. While the theoretical infrared spectra agree well with the experimental data for the neutral systems and for many of the cations, there are notable discrepancies with the experimental matrix isolation data for some PAH cations that are difficult to explain in terms of limitations in the calculations. In agreement with previous theoretical work, the present calculations show that the relative intensities for the astronomical unidentified infrared (UIR) bands agree reasonably well with those for a distribution of polycyclic aromatic hydrocarbon (PAH) cations, but not with a distribution of PAH neutrals. We also observe that the infrared spectra of highly symmetrical cations such as coronene agree much better with astronomical observations than do those of, for example, the polyacenes such as tetracene and pentacene. The total integrated intensities for the neutral species are found to increase linearly with size, while the total integrated intensities are much larger for the cations and scale more nearly quadratically with size. We conclude that emission from moderate-sized highly symmetric PAH cations such as coronene and larger could account for the UIR bands.

  13. Organ and effective dose rate coefficients for submersion exposure in occupational settings

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Veinot, K. G.; Y-12 National Security Complex, Oak Ridge, TN; Dewji, S. A.

External dose coefficients for environmental exposure scenarios are often computed using assumptions of infinite or semi-infinite radiation sources. For example, in the case of a person standing on contaminated ground, the source is assumed to be distributed at a given depth (or between various depths) and extending outwards to an essentially infinite distance. In the case of exposure to contaminated air, the person is modeled as standing within a cloud of infinite, or semi-infinite, source distribution. However, these scenarios do not mimic common workplace environments, where scatter off walls and ceilings may significantly alter the energy spectrum and dose coefficients. In this study, dose rate coefficients were calculated using the International Commission on Radiological Protection (ICRP) reference voxel phantoms positioned in rooms of three sizes representing an office, a laboratory, and a warehouse. For each room size, calculations using the reference phantoms were performed for photons, electrons, and positrons as the source particles to derive mono-energetic dose rate coefficients. Since the voxel phantoms lack the resolution to perform dose calculations at the sensitive depth for the skin, a mathematical phantom was developed and calculations were performed in each room size with the three source particle types. Coefficients for the noble gas radionuclides of ICRP Publication 107 (e.g., Ne, Ar, Kr, Xe, and Rn) were generated by folding the corresponding photon, electron, and positron emissions over the mono-energetic dose rate coefficients. Results indicate that the smaller room sizes have a significant impact on the dose rate per unit air concentration compared to the semi-infinite cloud case. For example, for Kr-85 the warehouse dose rate coefficient is 7% higher than the office dose rate coefficient, while it is 71% higher for Xe-133.
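The folding step described here — combining a radionuclide's emission spectrum with the mono-energetic dose rate coefficients — amounts to a yield-weighted interpolation. A minimal sketch; all energies, yields, and coefficient magnitudes below are illustrative placeholders, not values from the study:

```python
import numpy as np

# Hypothetical mono-energetic photon dose rate coefficients for one room size
# (energies in MeV; coefficient magnitudes are placeholders, not study values)
energies = np.array([0.1, 0.5, 1.0, 2.0])
coeffs = np.array([2.1e-16, 9.8e-16, 1.9e-15, 3.6e-15])

# Hypothetical emission spectrum: (photon energy in MeV, yield per decay)
emissions = [(0.514, 0.00434), (1.2, 0.001)]

# Fold the spectrum over the mono-energetic coefficients
nuclide_coeff = sum(y * np.interp(e, energies, coeffs) for e, y in emissions)
```

In the study this folding would be done per particle type (photons, electrons, positrons) and per room size, then summed per nuclide.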

  14. Technique for active measurement of atmospheric transmittance using an imaging system: implementation at 10.6-μm wavelength

    NASA Astrophysics Data System (ADS)

    Sadot, Dan; Zaarur, O.; Zaarur, S.; Kopeika, Norman S.

    1994-10-01

An active method is presented for measuring atmospheric transmittance with an imaging system. In comparison to other measurement methods, this method has the advantage of immunity to background noise, independence of atmospheric conditions such as solar radiation, and an improved capability to evaluate effects of turbulence on the measurements. Other significant advantages are integration over all particulate size distribution effects, including very small and very large particulates whose concentration is hard to measure, and the fact that this method is a path-integrated measurement. In this implementation, attenuation deriving from molecular absorption and from small- and large-particulate scattering and absorption, together with their weather dependences, is separated out. Preliminary results indicate high correlation with direct transmittance calculations via particle size distribution measurement, and that even at 10.6 μm wavelength atmospheric transmission depends noticeably on aerosol size distribution and concentration.

  15. A technique for active measurement of atmospheric transmittance using an imaging system: implementation at 10.6 μm wavelength

    NASA Astrophysics Data System (ADS)

    Sadot, D.; Zaarur, O.; Zaarur, S.

    1995-12-01

An active method is presented for measuring atmospheric transmittance with an imaging system. In comparison to other measurement methods, this method has the advantage of immunity to background noise, independence of atmospheric conditions such as solar radiation, and an improved capability to evaluate effects of turbulence on the measurements. Other significant advantages are integration over all particulate size distribution effects, including very small and very large particulates whose concentration is hard to measure, and the fact that this method is a path-integrated measurement. Attenuation deriving from molecular absorption and from small- and large-particulate scattering and absorption, together with their weather dependences, is separated out. Preliminary results indicate high correlation with direct transmittance calculations via particle size distribution measurement, and that even at 10.6 μm wavelength atmospheric transmission depends noticeably on aerosol size distribution and concentration.

  16. An improved stereologic method for three-dimensional estimation of particle size distribution from observations in two dimensions and its application.

    PubMed

    Xu, Yi-Hua; Pitot, Henry C

    2003-09-01

Single enzyme-altered hepatocytes, altered hepatic foci (AHF), and nodular lesions have been implicated, respectively, in the processes of initiation, promotion, and progression in rodent hepatocarcinogenesis. Qualitative and quantitative analyses of such lesions have been utilized both to identify and to determine the potency of initiating, promoting, and progressor agents in rodent liver. Of a number of possible parameters determined in the study of such lesions, estimation of the number of foci or nodules in the liver is very important. The method of Saltykov has been used for estimating the number of AHF in rat liver. However, in practice, the Saltykov calculation has at least two weak points: (a) the size class range is limited to 12, which in many instances is too narrow to cover the range of AHF data obtained; and (b) under some conditions, the Saltykov equation generates negative values in several size classes, an obvious impossibility in the real world. In order to overcome these limitations, a study of the particle size distribution in a wide-range, polydispersed sphere system was performed. A stereologic method, termed the 25F Association method, was developed from this study. This method offers 25 association factors that are derived from the frequency of different-sized transections obtained from transecting a spherical particle, thus expanding the size class range up to 25, which is sufficiently wide to encompass all rat AHF found in most cases. This method exhibits greater flexibility, allowing adjustments to be made within the calculation process when NA((k,k)), the net number of transections from same-size spheres, is found to be a negative value, which is not possible in real situations. The reliability of the 25F Association method was tested thoroughly by computer simulation in both monodispersed and polydispersed sphere systems.
The test results were compared with the original Saltykov method. We found that the 25F Association method yielded a better estimate of the total number of spheres in the three-dimensional tissue sample as well as the detailed size distribution information. Although the 25F Association method was derived from the study of a polydispersed sphere system, it can be used for continuous size distribution sphere systems. Application of this method to the estimation of parameters of preneoplastic foci in rodent liver is presented as an example of its utility. An application software program, 3D_estimation.exe, which uses the 25F Association method to estimate the number of AHF in rodent liver, has been developed and is now available at the website of this laboratory.
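The Saltykov-style unfolding that the 25F Association method extends can be sketched as back-substitution on a triangular matrix relating circle-section counts per unit area to sphere counts per unit volume. A minimal sketch under simplified assumptions (each sphere class is represented by its upper bin edge; negative classes are simply clipped, the situation the 25F method handles more carefully):

```python
import numpy as np

def unfold_sphere_sizes(NA, d_edges):
    """
    NA[i]: circle sections per unit area with diameter in [d_edges[i], d_edges[i+1]].
    Returns NV[j]: spheres per unit volume with diameter in class j (same edges),
    assuming each sphere class is represented by its upper bin edge.
    """
    k = len(NA)
    D = d_edges[1:]                          # representative sphere diameters
    P = np.zeros((k, k))
    for j in range(k):                       # sphere class j
        for i in range(j + 1):               # can only produce sections of class <= j
            lo, hi = d_edges[i], min(d_edges[i + 1], D[j])
            # D[j] * P(random plane section of a D[j] sphere falls in class i):
            P[i, j] = np.sqrt(D[j] ** 2 - lo ** 2) - np.sqrt(D[j] ** 2 - hi ** 2)
    NV = np.zeros(k)
    for j in reversed(range(k)):             # back-substitute from the largest class
        NV[j] = (NA[j] - P[j, j + 1:] @ NV[j + 1:]) / P[j, j]
    return np.clip(NV, 0.0, None)            # clip negative classes (cf. the 25F fix)
```

For a monodisperse population of unit-diameter spheres this recovers all spheres in the top class and none in the lower one.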

  17. Gap length effect on electron energy distribution in capacitive radio frequency discharges

    NASA Astrophysics Data System (ADS)

    You, S. J.; Kim, S. S.; Kim, Jung-Hyung; Seong, Dae-Jin; Shin, Yong-Hyeon; Chang, H. Y.

    2007-11-01

A study on the dependence of electron energy distribution function (EEDF) on discharge gap size in capacitive rf discharges was conducted. The evolution of the EEDF over a gap size range from 2.5 to 7 cm in 65 mTorr Ar discharges was investigated both experimentally and theoretically. The measured EEDFs exhibited typical bi-Maxwellian forms with low energy electron groups. A significant depletion in the low energy portion of the bi-Maxwellian was found with decreasing gap size. The results show that electron heating by bulk electric fields, which is the main heating process of the low-energy electrons, is greatly enhanced as the gap size decreases, resulting in the abrupt change of the EEDF. The calculated EEDFs based on nonlocal kinetic theory are in good agreement with the experiments.

  18. The Effect of Microstructure on Fretting Fatigue Behavior of Nickel Alloy IN-100

    DTIC Science & Technology

    2007-03-01

microstructure there are grains with an average size of 6 microns (Milligan et al.) [16]. The large globular particles are Ni3Al (Padula, Milligan et al.) [17] ... had better crack propagation resistance. Padula, Milligan et al. [17] studied the effect of grain size and precipitate distribution on the ... threshold of endurance strength with an increase in grain size. Finally, Padula could not find a calculation method of CK1Δ that matched his data even

  19. Numerical analysis of fundamental mode selection of a He-Ne laser by a circular aperture

    NASA Astrophysics Data System (ADS)

    He, Xin; Zhang, Bin

    2011-11-01

    In the He-Ne laser with an integrated cavity made of zerodur, the inner face performance of the gain tube is limited by the machining techniques, which tends to influence the beam propagation and transverse mode distribution. In order to improve the beam quality and select out the fundamental mode, an aperture is usually introduced in the cavity. In the process of laser design, the Fresnel-Kirchhoff diffraction integral equation is adopted to calculate the optical field distributions on each interface. The transit matrix is obtained based on self-reproducing principle and finite element method. Thus, optical field distribution on any interface and field loss of each transverse mode could be acquired by solving the eigenvalue and eigenvector of the transit matrix. For different-sized apertures in different positions, we could get different matrices and corresponding calculation results. By comparing these results, the optimal size and position of the aperture could be obtained. As a result, the feasibility of selecting fundamental mode in a zerodur He-Ne laser by a circular aperture has been verified theoretically.

  20. Packing Optimization of Sorbent Bed Containing Dissimilar and Irregular Shaped Media

    NASA Technical Reports Server (NTRS)

    Holland, Nathan; Guttromson, Jayleen; Piowaty, Hailey

    2011-01-01

The Fire Cartridge is a packed bed air filter with two different and separate layers of media designed to provide respiratory protection from combustion products after a fire event on the International Space Station (ISS). The first layer of media is a carbon monoxide catalyst and the second layer is universal carbon. During development of Fire Cartridge prototypes, the two media beds were noticed to have shifted inside the cartridge. Movement of media within the cartridge can cause mixing of the bed layers, air voids, and channeling, which could cause preferential air flow and allow contaminants to pass through without removal. An optimally packed bed mitigates these risks and ensures effective removal of contaminants from the air. In order to optimally pack each layer, vertical, horizontal, and orbital agitations were investigated and a packed bulk density was calculated for each method. Packed bulk density must be calculated for each media type to accommodate variations in particle size, shape, and density. Additionally, the optimal vibration parameters must be re-evaluated for each batch of media due to variations in particle size distribution between batches. For this application it was determined that orbital vibration achieves an optimal pack density and that the two media layers can be packed by the same method. Another finding was that media with a broader particle size distribution achieve an optimal bed pack more easily than media with a narrower size distribution.

  1. SU-E-T-454: Impact of Calculation Grid Size On Dosimetry and Radiobiological Parameters for Head and Neck IMRT

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Srivastava, S; Das, I; Indiana University Health Methodist Hospital, Indianapolis, IN

    2014-06-01

Purpose: IMRT has become the standard of care for complex treatments to optimize dose to the target and spare normal tissues. However, the impact of calculation grid size, especially on dose distribution, tumor control probability (TCP), and normal tissue complication probability (NTCP), is not widely known and is investigated in this study. Methods: Ten head and neck IMRT patients treated with 6 MV photons were chosen for this study. Using the Eclipse TPS, treatment plans were generated for different grid sizes in the range 1–5 mm for the same optimization criterion with specific dose-volume constraints. The dose volume histogram (DVH) was calculated for all IMRT plans and dosimetric data were compared. ICRU-83 dose points such as D2%, D50%, and D98%, as well as the homogeneity and conformity indices (HI, CI), were calculated. In addition, TCP and NTCP were calculated from DVH data. Results: The PTV mean dose and TCP decrease with increasing grid size, with an average decrease in mean dose of 2% and in TCP of 3%. Increasing the grid size from 1 mm to 5 mm, the average mean dose and NTCP for the left parotid increased by 6.0% and 8.0%, respectively. Similar patterns were observed for other OARs such as the cochlea, parotids, and spinal cord. The HI increased by up to 60% and the CI decreased on average by 3.5% between 1 and 5 mm grids, which resulted in decreased TCP and increased NTCP values. The number of points meeting the gamma criteria of ±3% dose difference and ±3 mm DTA was higher with a 1 mm grid (97.2% on average) than with a 5 mm grid (91.3%). Conclusion: A smaller calculation grid provides superior dosimetry with improved TCP and reduced NTCP values. The effect is more pronounced for smaller OARs. Thus, the smallest possible grid size should be used for accurate dose calculation, especially in head and neck planning.
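The ±3%/±3 mm gamma comparison used for the QA analysis can be illustrated with a minimal 1-D global gamma computation (a simplified sketch; clinical tools interpolate the evaluated dose and work on 2-D/3-D grids):

```python
import numpy as np

def gamma_index_1d(x_ref, d_ref, x_eval, d_eval, dta=3.0, dd=0.03):
    """Global 1-D gamma: dta in mm, dd as a fraction of the max reference dose."""
    norm = dd * d_ref.max()
    gammas = []
    for xr, dr in zip(x_ref, d_ref):
        # squared gamma against every evaluated point; keep the minimum
        g2 = ((x_eval - xr) / dta) ** 2 + ((d_eval - dr) / norm) ** 2
        gammas.append(np.sqrt(g2.min()))
    return np.array(gammas)

def pass_rate(g):
    """Fraction of reference points meeting the usual gamma <= 1 criterion."""
    return float((g <= 1.0).mean())
```

Comparing a profile against itself gives gamma = 0 everywhere and a 100% pass rate.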

  2. Selection of the effect size for sample size determination for a continuous response in a superiority clinical trial using a hybrid classical and Bayesian procedure.

    PubMed

    Ciarleglio, Maria M; Arendt, Christopher D; Peduzzi, Peter N

    2016-06-01

    When designing studies that have a continuous outcome as the primary endpoint, the hypothesized effect size ([Formula: see text]), that is, the hypothesized difference in means ([Formula: see text]) relative to the assumed variability of the endpoint ([Formula: see text]), plays an important role in sample size and power calculations. Point estimates for [Formula: see text] and [Formula: see text] are often calculated using historical data. However, the uncertainty in these estimates is rarely addressed. This article presents a hybrid classical and Bayesian procedure that formally integrates prior information on the distributions of [Formula: see text] and [Formula: see text] into the study's power calculation. Conditional expected power, which averages the traditional power curve using the prior distributions of [Formula: see text] and [Formula: see text] as the averaging weight, is used, and the value of [Formula: see text] is found that equates the prespecified frequentist power ([Formula: see text]) and the conditional expected power of the trial. This hypothesized effect size is then used in traditional sample size calculations when determining sample size for the study. The value of [Formula: see text] found using this method may be expressed as a function of the prior means of [Formula: see text] and [Formula: see text], [Formula: see text], and their prior standard deviations, [Formula: see text]. We show that the "naïve" estimate of the effect size, that is, the ratio of prior means, should be down-weighted to account for the variability in the parameters. An example is presented for designing a placebo-controlled clinical trial testing the antidepressant effect of alprazolam as monotherapy for major depression. 
Through this method, we are able to formally integrate prior information on the uncertainty and variability of both the treatment effect and the common standard deviation into the design of the study while maintaining a frequentist framework for the final analysis. Solving for the effect size which the study has a high probability of correctly detecting based on the available prior information on the difference [Formula: see text] and the standard deviation [Formula: see text] provides a valuable, substantiated estimate that can form the basis for discussion about the study's feasibility during the design phase. © The Author(s) 2016.
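The core of the hybrid procedure — averaging the classical power curve over priors on the mean difference and the common SD, then backing out the hypothesized effect size — can be sketched with a normal-approximation power formula. All prior parameters below are illustrative, not the paper's alprazolam example:

```python
import numpy as np
from math import erf

rng = np.random.default_rng(1)

# Priors from historical data (all numbers illustrative)
delta = rng.normal(3.0, 1.0, 20_000)            # prior draws for the mean difference
sigma = np.abs(rng.normal(7.5, 1.0, 20_000))    # prior draws for the common SD

ZA, ZB = 1.959964, 0.841621                     # z_{0.975} and z_{0.80}
Phi = np.vectorize(lambda x: 0.5 * (1.0 + erf(x / np.sqrt(2.0))))

def cep(n):
    """Conditional expected power of a two-arm trial with n subjects per arm."""
    return Phi(np.abs(delta) / sigma * np.sqrt(n / 2.0) - ZA).mean()

# Bisect for the smallest per-arm n whose conditional expected power reaches 80%
lo, hi = 2, 10_000
while hi - lo > 1:
    mid = (lo + hi) // 2
    lo, hi = (mid, hi) if cep(mid) < 0.80 else (lo, mid)
n_star = hi

# Hypothesized effect size implied by n_star in the classical two-sample formula
es_star = (ZA + ZB) * np.sqrt(2.0 / n_star)
```

Consistent with the abstract, the resulting effect size is down-weighted relative to the naive ratio of prior means (0.4 here), and the implied sample size is larger than the classical one.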

  3. Evaluation of char combustion models: measurement and analysis of variability in char particle size and density

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Maloney, Daniel J; Monazam, Esmail R; Casleton, Kent H

Char samples representing a range of combustion conditions and extents of burnout were obtained from a well-characterized laminar flow combustion experiment. Individual particles from the parent coal and char samples were characterized to determine distributions in particle volume, mass, and density at different extents of burnout. The data were then compared with predictions from a comprehensive char combustion model referred to as the char burnout kinetics model (CBK). The data clearly reflect the particle-to-particle heterogeneity of the parent coal and show a significant broadening in the size and density distributions of the chars resulting from both devolatilization and combustion. Data for chars prepared in a lower oxygen content environment (6% oxygen by vol.) are consistent with zone II type combustion behavior, where most of the combustion occurs near the particle surface. At higher oxygen contents (12% by vol.), the data show indications of more burning occurring in the particle interior. The CBK model does a good job of predicting the general nature of the development of size and density distributions during burning, but the input distribution of particle size and density is critical to obtaining good predictions. A significant reduction in particle size was observed to occur as a result of devolatilization. For comprehensive combustion models to provide accurate predictions, this size reduction phenomenon needs to be included in devolatilization models so that representative char distributions are carried through the calculations.

  4. Effect of data gaps on correlation dimension computed from light curves of variable stars

    NASA Astrophysics Data System (ADS)

    George, Sandip V.; Ambika, G.; Misra, R.

    2015-11-01

Observational data, especially astrophysical data, are often limited by gaps that arise from lack of observations for a variety of reasons. Such inadvertent gaps are usually smoothed over using interpolation techniques. However, the smoothing techniques can introduce artificial effects, especially when non-linear analysis is undertaken. We investigate how gaps can affect the computed values of the correlation dimension of the system, without using any interpolation. For this we introduce gaps artificially in synthetic data derived from standard chaotic systems, like the Rössler and Lorenz, with the frequency of occurrence and size of missing data drawn from two Gaussian distributions. Then we study the changes in correlation dimension with change in the distributions of position and size of gaps. We find that for a considerable range of mean gap frequency and size, the value of the correlation dimension is not significantly affected, indicating that in such specific cases the calculated values can still be reliable and acceptable. Thus our study introduces a method of checking the reliability of computed correlation dimension values by calculating the distribution of gaps with respect to size and position. This is illustrated for data from the light curves of three variable stars: R Scuti, U Monocerotis, and SU Tauri. We also demonstrate how a cubic spline interpolation can cause a time series of Gaussian noise with missing data to be misinterpreted as being chaotic in origin. This is demonstrated for the non-chaotic light curve of the variable star SS Cygni, which gives a saturated D2 value when interpolated using a cubic spline. We also find that a careful choice of binning, in addition to reducing noise, can help in shifting the gap distribution to the reliable range for D2 values.
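The correlation dimension D2 discussed here is estimated from the Grassberger-Procaccia correlation sum C(r), whose log-log slope over a scaling range gives D2. A minimal 1-D sketch (real analyses first embed the time series in higher dimensions):

```python
import numpy as np

def correlation_sum(x, r):
    """Grassberger-Procaccia C(r): fraction of distinct point pairs closer than r."""
    d = np.abs(x[:, None] - x[None, :])           # pairwise distances (1-D data)
    mask = ~np.eye(len(x), dtype=bool)            # exclude self-pairs
    return (d[mask] < r).mean()

rng = np.random.default_rng(0)
x = rng.uniform(size=2000)                        # noise filling a line: D2 ~ 1
rs = np.logspace(-2, -0.5, 8)
cs = [correlation_sum(x, r) for r in rs]

# D2 estimate: slope of log C(r) vs log r over the scaling range
slope = np.polyfit(np.log(rs), np.log(cs), 1)[0]
```

A saturating slope as the embedding dimension grows is the usual signature of low-dimensional chaos; uniform noise on an interval yields a slope near 1, as here.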

  5. Particle size and composition distribution analysis of automotive brake abrasion dusts for the evaluation of antimony sources of airborne particulate matter

    NASA Astrophysics Data System (ADS)

    Iijima, Akihiro; Sato, Keiichi; Yano, Kiyoko; Tago, Hiroshi; Kato, Masahiko; Kimura, Hirokazu; Furuta, Naoki

Abrasion dusts from three types of commercially available non-steel brake pads were generated by a brake dynamometer at disk temperatures of 200, 300 and 400 °C. The number concentration of the abrasion dusts and their aerodynamic diameters (Dp) were measured by using an aerodynamic particle sizer (APS) spectrometer with high temporal and size resolution. Simultaneously, the abrasion dusts were also collected based on their size by using an Andersen low-volume sampler, and the concentrations of metallic elements (K, Ti, Fe, Cu, Zn, Sb and Ba) in the size-classified dusts were measured by ICP-AES and ICP-MS. The number distributions of the brake abrasion dusts had a peak at Dp values between 1 and 2 μm; this peak shifted to the coarse side with an increase in the disk temperature. The mass distributions calculated from the number distributions had peaks at Dp values between 3 and 6 μm. The shapes of the elemental mass distributions (Ti, Fe, Cu, Zn, Sb and Ba) in the size-classified dusts were very similar to the total mass distributions of the brake abrasion dusts. These experimental results indicated that the properties of brake abrasion dusts were consistent with the characteristics of Sb-enriched fine airborne particulate matter. Based on these findings and statistical data, the estimation of Sb emission as airborne particulate matter from friction brakes was also discussed.
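The conversion from a number distribution to a mass distribution weights each size bin by Dp^3 (assuming constant particle density, which cancels in the normalized shape), so a number peak near 1 μm maps to a mass peak at coarser sizes. A small sketch with illustrative bin values, not the paper's data:

```python
import numpy as np

# Illustrative APS-style number concentrations per size bin (not the paper's data)
dp = np.array([0.7, 1.0, 2.0, 3.0, 6.0, 10.0])          # aerodynamic diameter Dp, um
n = np.array([200.0, 900.0, 800.0, 300.0, 60.0, 5.0])   # particles per cm^3 per bin

# Mass per bin is proportional to n * Dp^3; normalize for the mass distribution
mass = n * dp ** 3
mass_frac = mass / mass.sum()
```

With these values the number distribution peaks at 1 μm while the mass distribution peaks at 6 μm, mirroring the shift reported in the abstract.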

  6. Calculation of the force acting on a micro-sized particle with optical vortex array laser beam tweezers

    NASA Astrophysics Data System (ADS)

    Kuo, Chun-Fu; Chu, Shu-Chun

    2013-03-01

Optical vortices possess several special properties, including carrying orbital angular momentum (OAM) and exhibiting zero intensity at the vortex core. Vortex array laser beams have attracted much interest due to their special mesh field distributions, which show great potential in the application of multiple optical traps and dark optical traps. A previous study developed an Ince-Gaussian mode (IGM)-based vortex array laser beam [1]. This study develops a simulation model based on the discrete dipole approximation (DDA) method for calculating the resultant force acting on a micro-sized spherical dielectric particle situated at the beam waist of an IGM-based vortex array laser beam [1].

  7. Bivariate mass-size relation as a function of morphology as determined by Galaxy Zoo 2 crowdsourced visual classifications

    NASA Astrophysics Data System (ADS)

    Beck, Melanie; Scarlata, Claudia; Fortson, Lucy; Willett, Kyle; Galloway, Melanie

    2016-01-01

It is well known that the mass-size distribution evolves as a function of cosmic time and that this evolution is different between passive and star-forming galaxy populations. However, the devil is in the details, and the precise evolution is still a matter of debate, since this requires careful comparison between similar galaxy populations over cosmic time while simultaneously taking into account changes in image resolution, rest-frame wavelength, and surface brightness dimming, in addition to properly selecting representative morphological samples. Here we present the first step in an ambitious undertaking to calculate the bivariate mass-size distribution as a function of time and morphology. We begin with a large sample (~3 × 10^5) of SDSS galaxies at z ~ 0.1. Morphologies for this sample have been determined by Galaxy Zoo crowdsourced visual classifications, and we split the sample not only into disk- and bulge-dominated galaxies but also into finer morphology bins such as bulge strength. Bivariate distribution functions are the only way to properly account for biases and selection effects. In particular, we quantify the mass-size distribution with a version of the parametric maximum likelihood estimator which has been modified to account for measurement errors as well as upper limits on galaxy sizes.

  8. Sample Size Requirements and Study Duration for Testing Main Effects and Interactions in Completely Randomized Factorial Designs When Time to Event is the Outcome

    PubMed Central

    Moser, Barry Kurt; Halabi, Susan

    2013-01-01

    In this paper we develop the methodology for designing clinical trials with any factorial arrangement when the primary outcome is time to event. We provide a matrix formulation for calculating the sample size and study duration necessary to test any effect with a pre-specified type I error rate and power. Assuming that a time to event follows an exponential distribution, we describe the relationships between the effect size, the power, and the sample size. We present examples for illustration purposes. We provide a simulation study to verify the numerical calculations of the expected number of events and the duration of the trial. The change in the power produced by a reduced number of observations or by accruing no patients to certain factorial combinations is also described. PMID:25530661
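For a single two-arm comparison under exponential survival, the relationship among effect size (log hazard ratio), power, and the required number of events is commonly expressed with the Schoenfeld approximation — a simpler special case than the matrix formulation developed in the paper:

```python
import numpy as np

ZA, ZB = 1.959964, 0.841621      # z_{0.975} (alpha = 0.05 two-sided) and z_{0.80}

def required_events(hr, za=ZA, zb=ZB):
    """Events needed to detect hazard ratio hr with 1:1 allocation (Schoenfeld)."""
    return int(np.ceil(4.0 * (za + zb) ** 2 / np.log(hr) ** 2))

def required_sample(hr, p_event):
    """Total sample size, given the probability a subject has an event on study."""
    return int(np.ceil(required_events(hr) / p_event))
```

For example, detecting HR = 0.7 with 80% power requires 247 events, or about 412 subjects if 60% of them are expected to have an event during the trial.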

  9. SU-F-T-428: An Optimization-Based Commissioning Tool for Finite Size Pencil Beam Dose Calculations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Li, Y; Tian, Z; Song, T

Purpose: Finite size pencil beam (FSPB) algorithms are commonly used to pre-calculate the beamlet dose distribution for IMRT treatment planning. FSPB commissioning, which usually requires fine tuning of the FSPB kernel parameters, is crucial to dose calculation accuracy and hence plan quality. Yet due to the large number of beamlets, FSPB commissioning can be very tedious. This abstract reports an optimization-based FSPB commissioning tool we have developed in MatLab to facilitate the commissioning. Methods: A FSPB dose kernel generally contains two types of parameters: the profile parameters determining the dose kernel shape, and a 2D array of scaling factors accounting for longitudinal and off-axis corrections. The former were fitted using the penumbra of a reference broad beam's dose profile with the Levenberg-Marquardt algorithm. Since the dose distribution of a broad beam is simply a linear superposition of the dose kernels of the beamlets, calculated with the fitted profile parameters and scaled using the scaling factors, these factors can be determined by solving an optimization problem that minimizes the discrepancies between the calculated dose of broad beams and the reference dose. Results: We have commissioned a FSPB algorithm for three linac photon beams (6 MV, 15 MV and 6 MV FFF). Doses for four field sizes (6×6, 10×10, 15×15 and 20×20 cm²) were calculated and compared with the reference dose exported from the Eclipse TPS. For depth dose curves, the differences are less than 1% of the maximum dose beyond the depth of maximum dose for most cases. For lateral dose profiles, the differences are less than 2% of the central dose in inner-beam regions. The differences in the output factors are within 1% for all three beams. Conclusion: We have developed an optimization-based commissioning tool for FSPB algorithms to facilitate the commissioning, providing sufficient accuracy of beamlet dose calculation for IMRT optimization.
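Because a broad beam's dose is a linear superposition of scaled beamlet kernels, the scaling factors can be recovered by linear least squares. A toy 1-D sketch with an assumed Gaussian kernel (the actual tool fits 2-D factors against reference doses from the TPS):

```python
import numpy as np

x = np.linspace(-10.0, 10.0, 201)               # off-axis position, cm
centers = np.arange(-8.0, 8.1, 2.0)             # toy beamlet centers

def kernel(c, width=1.2):
    """Assumed Gaussian lateral kernel (profile parameters taken as already fitted)."""
    return np.exp(-0.5 * ((x - c) / width) ** 2)

A = np.stack([kernel(c) for c in centers], axis=1)   # broad-beam dose = A @ scale
true_scale = 1.0 + 0.05 * np.abs(centers) / 8.0      # illustrative off-axis variation
d_ref = A @ true_scale                               # stands in for the reference dose

scale, *_ = np.linalg.lstsq(A, d_ref, rcond=None)    # recovered scaling factors
```

With noise-free data the least-squares solve recovers the scaling factors exactly; with measured reference doses it returns the best fit in the least-squares sense.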

  10. A novel approach for fit analysis of thermal protective clothing using three-dimensional body scanning.

    PubMed

    Lu, Yehu; Song, Guowen; Li, Jun

    2014-11-01

Garment fit plays an important role in protective performance, comfort and mobility. The purpose of this study is to quantify the air gap in order to characterize three-dimensional (3-D) garment fit using a 3-D body scanning technique. A method for processing the scanned data was developed to investigate the air gap size and distribution between the clothing and the human body. The mesh models formed from the nude and clothed body scans were aligned, superimposed and sectioned using Rapidform software. The air gap size and distribution over the body surface were analyzed, and the total air volume was also calculated. The effects of fabric properties and garment size on air gap distribution were explored. The results indicated that the average air gap of well-fitting clothing was around 25-30 mm and the overall air gap distribution was similar across garments. The air gap was unevenly distributed over the body and was strongly associated with body part, fabric properties and garment size. The research will help in understanding overall clothing fit and its association with protection, thermal comfort and movement comfort, and it provides guidelines for clothing engineers to improve thermal performance and reduce physiological burden. Copyright © 2014 Elsevier Ltd and The Ergonomics Society. All rights reserved.

  11. Local-Field Distribution of Two Dielectric Inclusions at Small Separation

    NASA Astrophysics Data System (ADS)

    Siu, Yuet-Lun; Yu, Kin-Wah

    2001-03-01

    When two dielectric inclusions approach each other in a composite medium, significant mutual polarization effects occur. These effects are multipolar in nature and are difficult to treat from first principles (J. D. Jackson, Classical Electrodynamics, 2nd edition, Wiley, New York, 1975). In this work, we employ the discrete-dipole theory (B. T. Draine and P. J. Flatau, J. Opt. Soc. Am. A 11, 1491 (1994)) to account for the mutual polarization effects by dividing the inclusions into many small subparts. We begin the calculation at small inclusion sizes and large separation, where the point-dipole limit is valid, and proceed to larger inclusion sizes and small separation, for which the mutual polarization effect becomes important. We then apply the theory to determine the dipole moment of each subpart self-consistently: each dipole moment yields a local electric field, which in turn polarizes the neighboring dipoles. Our results indicate that convergence is achieved with moderate computational effort. The results provide valuable information about the local electric field distribution, which is relevant to optical absorption due to surface phonon-polaritons of ionic microcrystals.

  12. Particle size distribution of mainstream tobacco and marijuana smoke. Analysis using the electrical aerosol analyzer.

    PubMed

    Anderson, P J; Wilson, J D; Hiller, F C

    1989-07-01

    Accurate measurement of the cigarette smoke particle size distribution is important for estimating lung deposition. Most prior investigators have reported a mass median diameter (MMD) in the size range of 0.3 to 0.5 micron, with a small geometric standard deviation (GSD), indicating few ultrafine (less than 0.1 micron) particles. A few studies, however, have suggested the presence of ultrafine particles by reporting a smaller count median diameter (CMD). Part of this disparity may be due to the inefficiency of previous sizing methods in measuring the ultrafine size range. We used the electrical aerosol analyzer (EAA) to evaluate the size distribution of smoke from standard research cigarettes, commercial filter cigarettes, and marijuana cigarettes with different delta 9-tetrahydrocannabinol contents. Four 35-cm3, 2-s puffs were generated at 60-s intervals, rapidly diluted, and passed through a charge neutralizer and into a 240-L chamber. The size distribution for six cigarettes of each type was measured, the CMD and GSD were determined from a computer-generated log probability plot, and the MMD was calculated. The size distribution parameters obtained were similar for all cigarettes tested, with an average CMD of 0.1 micron, an MMD of 0.38 micron, and a GSD of 2.0. The MMD found using the EAA is similar to that previously reported, but the CMD is distinctly smaller and the GSD larger, indicating the presence of many more ultrafine particles. These results may explain the disparity of CMD values found in existing data. Ultrafine particles are of toxicologic importance because their respiratory tract deposition is significantly higher than that of particles of 0.3 to 0.5 micron and because their large surface area facilitates adsorption and delivery of potentially toxic gases to the lung.
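    For a lognormal aerosol, the reported CMD, GSD, and MMD can be cross-checked with the standard Hatch-Choate relation MMD = CMD·exp(3·ln²GSD). This sketch illustrates that textbook formula only; it is not the study's own analysis:

```python
import math

def mmd_from_cmd(cmd, gsd):
    # Hatch-Choate relation for a lognormal size distribution:
    # convert count median diameter (CMD) to mass median diameter (MMD).
    return cmd * math.exp(3.0 * math.log(gsd) ** 2)

mmd = mmd_from_cmd(0.1, 2.0)   # microns, using the abstract's CMD and GSD
```

    With CMD = 0.1 micron and GSD = 2.0 this yields about 0.42 micron, close to the reported MMD of 0.38 micron, i.e. the three reported parameters are roughly consistent with a lognormal distribution.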

  13. The Influence of Microphysical Cloud Parameterization on Microwave Brightness Temperatures

    NASA Technical Reports Server (NTRS)

    Skofronick-Jackson, Gail M.; Gasiewski, Albin J.; Wang, James R.; Zukor, Dorothy J. (Technical Monitor)

    2000-01-01

    The microphysical parameterization of clouds and rain cells plays a central role in atmospheric forward radiative transfer models used in calculating passive microwave brightness temperatures. The absorption and scattering properties of a hydrometeor-laden atmosphere are governed by particle phase, size distribution, aggregate density, shape, and dielectric constant. This study identifies the sensitivity of brightness temperatures with respect to the microphysical cloud parameterization. Cloud parameterizations for wideband (6-410 GHz) observations of baseline brightness temperatures were studied for four evolutionary stages of an oceanic convective storm using a five-phase hydrometeor model in a planar-stratified scattering-based radiative transfer model. Five other microphysical cloud parameterizations were compared to the baseline calculations to evaluate brightness temperature sensitivity to gross changes in the hydrometeor size distributions and the ice-air-water ratios in the frozen or partly frozen phase. The comparison shows that enlarging the rain drop size or adding water to the partly frozen hydrometeor mix warms brightness temperatures by up to 0.55 K at 6 GHz. The cooling signature caused by ice scattering intensifies with increasing ice concentrations and at higher frequencies. An additional comparison to measured Convection and Moisture Experiment (CAMEX-3) brightness temperatures shows that in general all but two parameterizations produce calculated T(sub B)'s that fall within the observed clear-air minima and maxima. The exceptions are parameterizations that enhance the scattering characteristics of frozen hydrometeors.

  14. Time-evolution of grain size distributions in random nucleation and growth crystallization processes

    NASA Astrophysics Data System (ADS)

    Teran, Anthony V.; Bill, Andreas; Bergmann, Ralf B.

    2010-02-01

    We study the time dependence of the grain size distribution N(r,t) during crystallization of a d-dimensional solid. A partial differential equation, including a source term for nuclei and a growth law for grains, is solved analytically for any dimension d. We discuss solutions obtained for processes described by the Kolmogorov-Avrami-Mehl-Johnson model for random nucleation and growth (RNG). Nucleation and growth are set on the same footing, which leads to a time-dependent decay of both effective rates. We analyze in detail how the model parameters, the dimensionality of the crystallization process, and time influence the shape of the distribution. The calculations show that the dynamics of the effective nucleation and effective growth rates play an essential role in determining the final form of the distribution obtained at full crystallization. We demonstrate that for one class of nucleation and growth rates, the distribution evolves in time into the logarithmic-normal (lognormal) form discussed earlier by Bergmann and Bill [J. Cryst. Growth 310, 3135 (2008)]. We also obtain an analytical expression for the finite maximal grain size at all times. The theory allows for the description of a variety of RNG crystallization processes in thin films and bulk materials. Expressions useful for experimental data analysis are presented for the grain size distribution and the moments in terms of fundamental and measurable parameters of the model.
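    As background to the moments mentioned above: for the lognormal form the raw moments have the closed form E[r^n] = exp(nμ + n²σ²/2). The sketch below illustrates that standard identity with arbitrary parameter values; it is not taken from the paper:

```python
import math

def lognormal_moment(n, mu, sigma):
    # n-th raw moment of a lognormal grain-radius distribution:
    # E[r^n] = exp(n*mu + n^2*sigma^2 / 2)
    return math.exp(n * mu + 0.5 * n ** 2 * sigma ** 2)

# Illustrative parameters (not fitted to any data in the paper).
mean = lognormal_moment(1, 0.0, 0.5)
second = lognormal_moment(2, 0.0, 0.5)
variance = second - mean ** 2
```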

  15. The Physics of Protoplanetary Dust Agglomerates. X. High-velocity Collisions between Small and Large Dust Agglomerates as a Growth Barrier

    NASA Astrophysics Data System (ADS)

    Schräpler, Rainer; Blum, Jürgen; Krijt, Sebastiaan; Raabe, Jan-Hendrik

    2018-01-01

    In a protoplanetary disk, dust aggregates in the μm to mm size range possess mean collision velocities of 10–60 m s‑1 with respect to dm- to m-sized bodies. We performed laboratory collision experiments to explore this parameter regime and found a size- and velocity-dependent threshold between erosion and growth. Using a local Monte Carlo coagulation calculation along with a simple semi-analytical timescale approach, we show that erosion considerably limits particle growth in protoplanetary disks and leads to a steady-state dust-size distribution from μm- to dm-sized particles.

  16. Precipitating Condensation Clouds in Substellar Atmospheres

    NASA Technical Reports Server (NTRS)

    Ackerman, Andrew S.; Marley, Mark S.; Gore, Warren J. (Technical Monitor)

    2000-01-01

    We present a method to calculate vertical profiles of particle size distributions in condensation clouds of giant planets and brown dwarfs. The method assumes a balance between turbulent diffusion and precipitation in horizontally uniform cloud decks. Calculations for the Jovian ammonia cloud are compared with previous methods. An adjustable parameter describing the efficiency of precipitation allows the new model to span the range of predictions from previous models. Calculations for the Jovian ammonia cloud are found to be consistent with observational constraints. Example calculations are provided for water, silicate, and iron clouds on brown dwarfs and on a cool extrasolar giant planet.

  17. Parameterization of Shortwave Cloud Optical Properties for a Mixture of Ice Particle Habits for use in Atmospheric Models

    NASA Technical Reports Server (NTRS)

    Chou, Ming-Dah; Lee, Kyu-Tae; Yang, Ping; Lau, William K. M. (Technical Monitor)

    2002-01-01

    Based on single-scattering optical properties pre-computed with an improved geometric optics method, the bulk absorption coefficient, single-scattering albedo, and asymmetry factor of ice particles have been parameterized as functions of the effective particle size of a mixture of ice habits, the ice water amount, and the spectral band. The parameterization has been applied to computing fluxes for sample clouds with various particle size distributions and assumed mixtures of particle habits. It is found that flux calculations are not overly sensitive to the assumed particle habits if the definition of the effective particle size is consistent with the particle habits on which the parameterization is based. Otherwise, the error in the flux calculations could reach a magnitude unacceptable for climate studies. Unlike many previous studies, the parameterization requires only an effective particle size representing all ice habits in a cloud layer, not the effective size of each individual ice habit.

  18. A general approach for sample size calculation for the three-arm 'gold standard' non-inferiority design.

    PubMed

    Stucke, Kathrin; Kieser, Meinhard

    2012-12-10

    In the three-arm 'gold standard' non-inferiority design, an experimental treatment, an active reference, and a placebo are compared. This design is becoming increasingly popular, and it is, whenever feasible, recommended for use by regulatory guidelines. We provide a general method to calculate the required sample size for clinical trials performed in this design. As special cases, the situations of continuous, binary, and Poisson distributed outcomes are explored. Taking into account the correlation structure of the involved test statistics, the proposed approach leads to considerable savings in sample size as compared with application of ad hoc methods for all three scale levels. Furthermore, optimal sample size allocation ratios are determined that result in markedly smaller total sample sizes as compared with equal assignment. As optimal allocation makes the active treatment groups larger than the placebo group, implementation of the proposed approach is also desirable from an ethical viewpoint. Copyright © 2012 John Wiley & Sons, Ltd.
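    For orientation, the familiar two-arm non-inferiority sample size formula for normally distributed outcomes can be sketched as below. This is textbook background with invented inputs, not the paper's three-arm method, which additionally exploits the correlation structure of the test statistics and optimizes the allocation ratio:

```python
from math import ceil
from statistics import NormalDist

def ni_sample_size(sigma, margin, alpha=0.025, power=0.8):
    # Per-group sample size for a two-arm non-inferiority comparison of
    # means, assuming the true difference is zero:
    # n = 2 * sigma^2 * (z_{1-alpha} + z_{power})^2 / margin^2
    z_a = NormalDist().inv_cdf(1.0 - alpha)
    z_b = NormalDist().inv_cdf(power)
    return ceil(2.0 * sigma ** 2 * (z_a + z_b) ** 2 / margin ** 2)

n = ni_sample_size(sigma=1.0, margin=0.5)   # illustrative values only
```

    The abstract's point is that naively applying such ad hoc two-arm formulas to each pairwise comparison in the three-arm design wastes sample size compared with the joint approach.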

  19. Global direct radiative forcing by process-parameterized aerosol optical properties

    NASA Astrophysics Data System (ADS)

    Kirkevåg, Alf; Iversen, Trond

    2002-10-01

    A parameterization of aerosol optical parameters is developed and implemented in an extended version of the Community Climate Model version 3.2 (CCM3) of the U.S. National Center for Atmospheric Research. Direct radiative forcing (DRF) by monthly averaged calculated concentrations of non-sea-salt sulfate and black carbon (BC) is estimated. Inputs are production-specific BC and sulfate from [2002] and the background aerosol size distribution and composition. The scheme interpolates between tabulated values to obtain the aerosol single scattering albedo, asymmetry factor, extinction coefficient, and specific extinction coefficient. The tables are constructed by full calculations of optical properties for an array of aerosol input values, for which size-distributed aerosol properties are estimated from theory for condensation and Brownian coagulation, an assumed distribution of cloud-droplet residuals from aqueous phase oxidation, and prescribed properties of the background aerosols. Humidity swelling is estimated from the Köhler equation, and Mie calculations finally yield spectrally resolved aerosol optical parameters for 13 solar bands. The scheme is shown to give excellent agreement with nonparameterized DRF calculations for a wide range of situations. Using IPCC emission scenarios for the years 2000 and 2100, calculations with an atmospheric global climate model (AGCM) yield a global net anthropogenic DRF of -0.11 and 0.11 W m-2, respectively, when 90% of BC from biomass burning is assumed anthropogenic. In the 2000 scenario, the individual DRF due to sulfate and BC has separately been estimated at -0.29 and 0.19 W m-2, respectively. Our estimates of DRF by BC per unit BC mass burden are lower than earlier published estimates. Some sensitivity tests are included to investigate to what extent uncertain assumptions may influence these results.

  20. Effect of magnetic soft phase on the magnetic properties of bulk anisotropic Nd2Fe14B/α-Fe nanocomposite permanent magnets

    NASA Astrophysics Data System (ADS)

    Li, Yuqing; Yue, Ming; Zhao, Guoping; Zhang, Hongguo

    2018-01-01

    The effects of the soft phase with different particle sizes and distributions on Nd2Fe14B/α-Fe nanocomposite magnets have been studied by micromagnetic simulation. The calculated results show that a smaller and/or more scattered distribution of the soft phase benefits the coercivity (H ci) of the nanocomposite magnets. The evolution of the magnetization during magnetic reversal is systematically analyzed. In addition, the magnetic properties of anisotropic Nd-Fe-B/α-Fe nanocomposite magnets prepared by hot pressing and hot deformation provide experimental support for the calculated results.

  1. Optimization of cooling strategy and seeding by FBRM analysis of batch crystallization

    NASA Astrophysics Data System (ADS)

    Zhang, Dejiang; Liu, Lande; Xu, Shijie; Du, Shichao; Dong, Weibing; Gong, Junbo

    2018-03-01

    A method is presented for optimizing the cooling strategy and seed loading simultaneously. Focused beam reflectance measurement (FBRM) was used to determine an approximately optimal cooling profile. Using these results in conjunction with a constant-growth-rate assumption, a modified Mullin-Nyvlt trajectory could be calculated. This trajectory can suppress secondary nucleation and has the potential to control the product's polymorph distribution. Compared with linear and two-step cooling, the modified Mullin-Nyvlt trajectory gave a larger size distribution and a better morphology. Based on the calculated results, an optimized seed loading policy was also developed. This policy could be useful for guiding batch crystallization processes.

  2. Computational studies of photoluminescence from disordered nanocrystalline systems

    NASA Astrophysics Data System (ADS)

    John, George

    2000-03-01

    The size (d) dependence of emission energies from semiconductor nanocrystallites has been shown to follow an effective exponent (d^-β) determined by the disorder in the system (V. Ranjan, V. A. Singh and G. C. John, Phys. Rev. B 58, 1158 (1998)). Our earlier calculation was based on a simple quantum confinement model assuming a normal distribution of crystallites. This model is now extended to study the effects of realistic systems with a lognormal distribution in particle size, accounting for carrier hopping and nonradiative transitions. Computer simulations of this model, performed using the Microcal Origin software, can explain several conflicting experimental results reported in the literature.

  3. Accurate Characterization of Rain Drop Size Distribution Using Meteorological Particle Spectrometer and 2D Video Disdrometer for Propagation and Remote Sensing Applications

    NASA Technical Reports Server (NTRS)

    Thurai, Merhala; Bringi, Viswanathan; Kennedy, Patrick; Notaros, Branislav; Gatlin, Patrick

    2017-01-01

    Accurate measurements of rain drop size distributions (DSD), with particular emphasis on small and tiny drops, are presented. Measurements were conducted in two very different climate regions, namely Northern Colorado and Northern Alabama. Both datasets reveal a combination of (i) a drizzle mode for drop diameters less than 0.7 mm and (ii) a precipitation mode for larger diameters. Scattering calculations using the DSDs are performed at S and X bands and compared with radar observations for the first location. Our accurate DSDs will improve radar-based rain rate estimates as well as propagation predictions.

  4. Anthropic prediction for a large multi-jump landscape

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Schwartz-Perlov, Delia, E-mail: delia@perlov.com

    2008-10-15

    The assumption of a flat prior distribution plays a critical role in the anthropic prediction of the cosmological constant. In a previous paper we analytically calculated the distribution for the cosmological constant, including the prior and anthropic selection effects, in a large toy 'single-jump' landscape model. We showed that it is possible for the fractal prior distribution that we found to behave as an effectively flat distribution in a wide class of landscapes, but only if the single-jump size is large enough. We extend this work here by investigating a large (N ≈ 10^500) toy 'multi-jump' landscape model. The jump sizes range over three orders of magnitude, and an overall free parameter c determines the absolute size of the jumps. We show that for 'large' c the distribution of probabilities of vacua in the anthropic range is effectively flat, and thus the successful anthropic prediction is validated. However, we argue that for small c, the distribution may not be smooth.

  5. Development of a sampling strategy and sample size calculation to estimate the distribution of mammographic breast density in Korean women.

    PubMed

    Jun, Jae Kwan; Kim, Mi Jin; Choi, Kui Son; Suh, Mina; Jung, Kyu-Won

    2012-01-01

    Mammographic breast density is a known risk factor for breast cancer. To design a survey estimating the distribution of mammographic breast density in Korean women, appropriate sampling strategies for a representative and efficient sampling design were evaluated through simulation. Using the target population from the National Cancer Screening Programme (NCSP) for breast cancer in 2009, we verified the distribution estimate by repeating the simulation 1,000 times, using stratified random sampling to investigate the distribution of breast density of 1,340,362 women. According to the simulation results, a sampling design stratifying the nation into three groups (metropolitan, urban, and rural), with a total sample size of 4,000, estimates the distribution of breast density in Korean women to within a 0.01% tolerance. Based on these results, a nationwide survey estimating the distribution of mammographic breast density among Korean women can be conducted efficiently.
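    The stratified design described above can be mimicked in a toy simulation. The stratum sizes and density prevalences below are invented for illustration and are not the NCSP data; only the overall structure (three strata, 4,000 samples, 1,000 repetitions, proportional allocation) follows the abstract:

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical population: three strata with different prevalences of
# "dense breast" status; sizes and prevalences are made up.
strata = {
    "metropolitan": (700_000, 0.55),
    "urban":        (450_000, 0.50),
    "rural":        (190_362, 0.45),
}
total = sum(size for size, _ in strata.values())
n_total = 4000

estimates = []
for _ in range(1000):                        # repeat the survey simulation
    est = 0.0
    for size, p in strata.values():
        n_h = round(n_total * size / total)  # proportional allocation
        sample = rng.random(n_h) < p         # Bernoulli draws within stratum
        est += (size / total) * sample.mean()
    estimates.append(est)

true_p = sum(size / total * p for size, p in strata.values())
err = abs(np.mean(estimates) - true_p)       # bias of the stratified estimator
```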

  6. Understanding asteroid collisional history through experimental and numerical studies

    NASA Technical Reports Server (NTRS)

    Davis, Donald R.; Ryan, Eileen V.; Weidenschilling, S. J.

    1991-01-01

    Asteroids can lose angular momentum through the so-called 'splash' effect, the analog of the drain effect for cratering impacts. A numerical code incorporating the splash effect was applied to study the simultaneous evolution of asteroid sizes and spins. Results are presented on the spin changes of asteroids due to the various physical effects incorporated in the described model. The goal was to understand the interplay between the evolution of sizes and spins over a wide and plausible range of model parameters. A single starting population was used for both the size distribution and the spin distribution of asteroids, and the changes in the spins were calculated over solar system history for different model parameters. It is shown that there is a strong coupling between the size and spin evolution, and that the observed relative spindown of asteroids of approximately 100 km diameter is likely to be the result of the angular momentum splash effect.

  7. Understanding asteroid collisional history through experimental and numerical studies

    NASA Astrophysics Data System (ADS)

    Davis, Donald R.; Ryan, Eileen V.; Weidenschilling, S. J.

    1991-06-01

    Asteroids can lose angular momentum through the so-called 'splash' effect, the analog of the drain effect for cratering impacts. A numerical code incorporating the splash effect was applied to study the simultaneous evolution of asteroid sizes and spins. Results are presented on the spin changes of asteroids due to the various physical effects incorporated in the described model. The goal was to understand the interplay between the evolution of sizes and spins over a wide and plausible range of model parameters. A single starting population was used for both the size distribution and the spin distribution of asteroids, and the changes in the spins were calculated over solar system history for different model parameters. It is shown that there is a strong coupling between the size and spin evolution, and that the observed relative spindown of asteroids of approximately 100 km diameter is likely to be the result of the angular momentum splash effect.

  8. Breath Figures under Electrowetting: Electrically Controlled Evolution of Drop Condensation Patterns

    NASA Astrophysics Data System (ADS)

    Baratian, Davood; Dey, Ranabir; Hoek, Harmen; van den Ende, Dirk; Mugele, Frieder

    2018-05-01

    We show that electrowetting (EW) with structured electrodes significantly modifies the distribution of drops condensing onto flat hydrophobic surfaces by aligning the drops and by enhancing coalescence. Numerical calculations demonstrate that drop alignment and coalescence are governed by the drop-size-dependent electrostatic energy landscape that is imposed by the electrode pattern and the applied voltage. Such EW-controlled migration and coalescence of condensate drops significantly alter the statistical characteristics of the ensemble of droplets. The evolution of the drop size distribution displays self-similar characteristics that significantly deviate from classical breath figures on homogeneous surfaces once the electrically induced coalescence cascades set in beyond a certain critical drop size. The resulting reduced surface coverage, coupled with earlier drop shedding under EW, enhances the net heat transfer.

  9. SIZE DISTRIBUTIONS OF ELEMENTAL CARBON IN ATMOSPHERIC AEROSOLS

    EPA Science Inventory

    Environmental problems caused by atmospheric aerosols are well documented in the specialized literature. Studies reporting on the role of dense clouds of soil particles in past mass extinctions of life on Earth and, more recently (Turco et al., 1983), on calculations of potential...

  10. A fast three-dimensional gamma evaluation using a GPU utilizing texture memory for on-the-fly interpolations.

    PubMed

    Persoon, Lucas C G G; Podesta, Mark; van Elmpt, Wouter J C; Nijsten, Sebastiaan M J J G; Verhaegen, Frank

    2011-07-01

    A widely accepted method to quantify differences in dose distributions is the gamma (γ) evaluation. Currently, almost all gamma implementations utilize the central processing unit (CPU). Recently, the graphics processing unit (GPU) has become a powerful platform for specific computing tasks. In this study, we describe the implementation of a 3D gamma evaluation on a GPU to improve calculation time. The gamma evaluation algorithm was implemented on an NVIDIA Tesla C2050 GPU using the compute unified device architecture (CUDA). First, several cubic virtual phantoms were simulated. These phantoms were tested with varying dose cube sizes and set-ups, introducing artificial dose differences. Second, to show applicability in clinical practice, five patient cases were evaluated using the 3D dose distribution from a treatment planning system as the reference and the delivered dose determined during treatment as the comparison. A calculation time comparison between the CPU and GPU was made with varying thread-block sizes, including the option of using texture or global memory. A GPU over CPU speed-up of 66 ± 12 was achieved for the virtual phantoms. For the patient cases, a speed-up of 57 ± 15 was obtained using the GPU. A thread-block size of 16 × 16 performed best in all cases. The use of texture memory improved the total calculation time, especially when interpolation was applied. Differences between the CPU and GPU gammas were negligible. The GPU and its features, such as texture memory, decreased the calculation time for gamma evaluations considerably without loss of accuracy.
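    The gamma evaluation itself (dose difference and distance-to-agreement combined) can be sketched in one dimension. This brute-force NumPy version is illustrative only; it ignores the GPU, texture-memory interpolation, and 3D aspects of the paper:

```python
import numpy as np

def gamma_1d(ref, comp, x, dose_tol, dist_tol):
    # Global gamma index per reference point (1D sketch of the 3D method):
    # gamma_i = min_j sqrt( ((D_c[j]-D_r[i])/dose_tol)^2
    #                      + ((x[j]-x[i])/dist_tol)^2 )
    dd = (comp[None, :] - ref[:, None]) / dose_tol
    dx = (x[None, :] - x[:, None]) / dist_tol
    return np.sqrt(dd ** 2 + dx ** 2).min(axis=1)

x = np.linspace(0.0, 30.0, 61)               # position [mm]
ref = np.exp(-((x - 15.0) / 6.0) ** 2)       # reference profile (max = 1)
comp = np.exp(-((x - 15.5) / 6.0) ** 2)      # comparison, shifted 0.5 mm
g = gamma_1d(ref, comp, x, dose_tol=0.03, dist_tol=3.0)   # 3%/3 mm criterion
pass_rate = float((g <= 1.0).mean())
```

    A 0.5 mm shift sits well inside a 3%/3 mm criterion, so every point passes here; the 3D GPU version evaluates the same minimization over a volumetric search neighborhood.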

  11. Snow particles extracted from X-ray computed microtomography imagery and their single-scattering properties

    NASA Astrophysics Data System (ADS)

    Ishimoto, Hiroshi; Adachi, Satoru; Yamaguchi, Satoru; Tanikawa, Tomonori; Aoki, Teruo; Masuda, Kazuhiko

    2018-04-01

    Sizes and shapes of snow particles were determined from X-ray computed microtomography (micro-CT) images, and their single-scattering properties were calculated at visible and near-infrared wavelengths using a Geometrical Optics Method (GOM). We analyzed seven snow samples, including fresh and aged artificial snow and natural snow obtained from field samples. Individual snow particles were numerically extracted, and the shape of each snow particle was defined by applying a rendering method. The size distribution and specific surface area distribution were estimated from the geometrical properties of the snow particles, and an effective particle radius was derived for each snow sample. The GOM calculations at wavelengths of 0.532 and 1.242 μm revealed that the realistic snow particles had scattering phase functions similar to those of previously modeled irregularly shaped particles. Furthermore, distinct dendritic particles had a characteristic scattering phase function and asymmetry factor. The single-scattering properties of particles of effective radius reff were compared with the size-averaged single-scattering properties. We found that particles of radius reff could be used as representative particles for calculating the average single-scattering properties of the snow. Furthermore, the single-scattering properties of the micro-CT particles were compared to those of the particle shape models used in our current snow retrieval algorithm. For the single-scattering phase function, the results of the micro-CT particles were consistent with those of a conceptual two-shape model. However, the particle size dependence differed for the single-scattering albedo and asymmetry factor.

  12. MUDMASTER: A Program for Calculating Crystalline Size Distributions and Strain from the Shapes of X-Ray Diffraction Peaks

    USGS Publications Warehouse

    Eberl, D.D.; Drits, V.A.; Środoń, Jan; Nüesch, R.

    1996-01-01

    Particle size may strongly influence the physical and chemical properties of a substance (e.g. its rheology, surface area, cation exchange capacity, solubility, etc.), and its measurement in rocks may yield geological information about ancient environments (sediment provenance, degree of metamorphism, degree of weathering, current directions, distance to shore, etc.). Therefore mineralogists, geologists, chemists, soil scientists, and others who deal with clay-size material would like to have a convenient method for measuring particle size distributions. Nano-size crystals generally are too fine to be measured by light microscopy. Laser scattering methods give only average particle sizes; therefore particle size cannot be measured in a particular crystallographic direction. Also, the particles measured by laser techniques may be composed of several different minerals, and may be agglomerations of individual crystals. Measurement by electron and atomic force microscopy is tedious, expensive, and time consuming. It is difficult to measure more than a few hundred particles per sample by these methods. This many measurements, often taking several days of intensive effort, may yield an accurate mean size for a sample, but may be too few to determine an accurate distribution of sizes. Measurement of size distributions by X-ray diffraction (XRD) overcomes these shortcomings. An X-ray scan of a sample runs automatically, taking a few minutes to a few hours. The resulting XRD peaks average diffraction effects from billions of individual nano-size crystals. The size that is measured by XRD may be related to the size of the individual crystals of the mineral in the sample, rather than to the size of particles formed from the agglomeration of these crystals. Therefore one can determine the size of a particular mineral in a mixture of minerals, and the sizes in a particular crystallographic direction of that mineral.
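    For contrast with the full peak-shape analysis MUDMASTER performs, the classical Scherrer relation estimates only a mean crystallite size from XRD peak broadening. The sketch below is that textbook formula with assumed Cu Kα inputs; it is not the program's algorithm, which extracts whole size distributions from peak shapes:

```python
import math

def scherrer_size(wavelength_nm, fwhm_deg, two_theta_deg, K=0.9):
    # Scherrer estimate of mean crystallite size from peak broadening:
    # D = K * lambda / (beta * cos(theta)), beta = FWHM in radians.
    beta = math.radians(fwhm_deg)
    theta = math.radians(two_theta_deg / 2.0)
    return K * wavelength_nm / (beta * math.cos(theta))

# Illustrative inputs: Cu K-alpha wavelength, a 0.5-degree-wide peak at
# 26.6 degrees two-theta (values chosen for the example, not from the text).
size = scherrer_size(0.15406, 0.5, 26.6)   # result in nm
```

    The Scherrer value is a single broadening-weighted mean; recovering the distribution (and strain) requires the Fourier/peak-shape methods the program implements.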

  13. Calculation of the equilibrium distribution for a deleterious gene by the finite Fourier transform.

    PubMed

    Lange, K

    1982-03-01

    In a population of constant size every deleterious gene eventually attains a stochastic equilibrium between mutation and selection. The individual probabilities of this equilibrium distribution can be computed by an application of the finite Fourier transform to an appropriate branching process formula. Specific numerical examples are discussed for the autosomal dominants, Huntington's chorea and chondrodystrophy, and for the X-linked recessive, Becker's muscular dystrophy.
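    The core inversion idea can be illustrated directly: evaluating a probability generating function (pgf) at the n-th roots of unity and applying the inverse finite Fourier transform recovers the individual probabilities. A Poisson pgf is used below purely to verify the inversion; the paper applies the same transform to a branching-process formula for the mutation-selection equilibrium:

```python
import numpy as np

def pmf_from_pgf(pgf, n):
    # Recover p_0..p_{n-1} of a nonnegative integer random variable from
    # its pgf phi(z) = sum_k p_k z^k evaluated at the n-th roots of unity:
    # p_k = (1/n) * sum_j phi(w^j) * w^(-j*k),  w = exp(2*pi*i/n).
    z = np.exp(2j * np.pi * np.arange(n) / n)
    values = np.array([pgf(zi) for zi in z])
    # np.fft.fft applies the e^{-2*pi*i*j*k/n} kernel, which is exactly
    # the inverse of the pgf's e^{+2*pi*i*j*k/n} sum (up to the 1/n factor).
    return np.fft.fft(values).real / n

# Example: Poisson(2), pgf phi(z) = exp(lam*(z-1)); mass beyond k=63 is
# negligible, so aliasing error is invisible at this n.
lam = 2.0
p = pmf_from_pgf(lambda z: np.exp(lam * (z - 1.0)), 64)
```

    The recovered p[0] and p[1] match e^-2 and 2e^-2 to machine precision, confirming the transform convention before applying it to a less tractable pgf.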

  14. Is distribution of health expenditure in Iran pro-poor?

    PubMed

    Emamgholipour, Sara; Agheli, Lotfali

    2018-05-03

    The size and distribution of households' health care expenditure indicate the financial burden on different income groups. Since the distribution of health expenditure reflects the performance of a health system, this study examines the health expenditure distribution among urban and rural households in Iran. The research was conducted on the distribution of health expenditure among urban and rural households in 2014. The effects of households' health expenditure on the distribution of personal incomes were measured using the Kakwani and Reynolds-Smolensky indices. In addition, the Theil T index was used to classify provinces by inequality in health expenditure distribution. The calculations were made using Excel. The Kakwani indices for urban and rural households were approximately -0.572 and -0.485, respectively. The Reynolds-Smolensky indices for urban and rural households were -0.038 and -0.031, respectively. Regardless of income distribution, the Theil T index shows that urban households face the most unequal distribution of health expenditure. Based on these calculations, the distribution of health expenditure works against poor households, and it is more regressive among urban than rural households. The Reynolds-Smolensky indices likewise indicate a more uneven income distribution after paying for health care, with larger inequality among urban than rural households. Based on this research, health policymaking priority should be given to the provinces with the highest inequality, and the expenditure burden of low-income households should be reduced by expanding insurance coverage. Copyright © 2018 John Wiley & Sons, Ltd.
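    The Kakwani index used above is the concentration index of health payments (with households ranked by income) minus the Gini index of income; a negative value marks regressive payments. A minimal sketch with invented numbers, not the study's data:

```python
import numpy as np

def gini_like(ranking_values, amounts):
    # Gini/concentration coefficient: twice the area between the diagonal
    # and the Lorenz (concentration) curve of `amounts`, with units ranked
    # by `ranking_values`. Trapezoid approximation over the n units.
    order = np.argsort(ranking_values)
    a = np.asarray(amounts, dtype=float)[order]
    cum = np.cumsum(a) / a.sum()
    n = a.size
    area = (np.concatenate(([0.0], cum[:-1])) + cum).sum() / (2 * n)
    return 1.0 - 2.0 * area

# Hypothetical five households: incomes and health payments (payments grow
# much more slowly than income, i.e. they are regressive).
incomes = np.array([10., 20., 30., 40., 100.])
health_payments = np.array([4., 5., 6., 6., 8.])

gini_income = gini_like(incomes, incomes)
conc_health = gini_like(incomes, health_payments)
kakwani = conc_health - gini_income          # negative => regressive
```

    Here the concentration index of payments falls well below the income Gini, so the Kakwani index is negative, the same qualitative pattern as the reported -0.572 and -0.485.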

  15. Probability density of aperture-averaged irradiance fluctuations for long range free space optical communication links.

    PubMed

    Lyke, Stephen D; Voelz, David G; Roggemann, Michael C

    2009-11-20

    The probability density function (PDF) of aperture-averaged irradiance fluctuations is calculated from wave-optics simulations of a laser after propagating through atmospheric turbulence to investigate the evolution of the distribution as the aperture diameter is increased. The simulation data distribution is compared to theoretical gamma-gamma and lognormal PDF models under a variety of scintillation regimes from weak to strong. Results show that under weak scintillation conditions both the gamma-gamma and lognormal PDF models provide a good fit to the simulation data for all aperture sizes studied. Our results indicate that in moderate scintillation the gamma-gamma PDF provides a better fit to the simulation data than the lognormal PDF for all aperture sizes studied. In the strong scintillation regime, the simulation data distribution is gamma-gamma for aperture sizes much smaller than the coherence radius rho0 and lognormal for aperture sizes on the order of rho0 and larger. Examples of how these results affect the bit-error rate of an on-off keyed free space optical communication link are presented.
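    For unit-mean normalized irradiance, both model PDFs have standard closed forms; a sketch is below (the gamma-gamma parameters alpha and beta and the log-irradiance variance would in practice be set from the scintillation conditions, which is an assumption here):

```python
import math
from scipy.special import gamma, kv  # kv: modified Bessel function K_v

def lognormal_pdf(I, sigma2):
    """Lognormal PDF for unit-mean normalized irradiance I;
    sigma2 is the log-irradiance variance."""
    return (1.0 / (I * math.sqrt(2.0 * math.pi * sigma2))
            * math.exp(-(math.log(I) + 0.5 * sigma2) ** 2 / (2.0 * sigma2)))

def gamma_gamma_pdf(I, alpha, beta):
    """Gamma-gamma PDF for unit-mean normalized irradiance I with
    large- and small-scale parameters alpha, beta."""
    ab = alpha * beta
    coef = 2.0 * ab ** (0.5 * (alpha + beta)) / (gamma(alpha) * gamma(beta))
    return coef * I ** (0.5 * (alpha + beta) - 1.0) * kv(alpha - beta, 2.0 * math.sqrt(ab * I))
```

    Both densities integrate to one over the positive axis, so fitted models can be compared directly to a normalized histogram of simulated aperture-averaged irradiance.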

  16. How the Emitted Size Distribution and Mixing State of Feldspar Affect Ice Nucleating Particles in a Global Model

    NASA Technical Reports Server (NTRS)

    Perlwitz, Jan P.; Fridlind, Ann M.; Knopf, Daniel A.; Miller, Ron L.; García-Pando, Carlos Perez

    2017-01-01

    The effect of aerosol particles on ice nucleation and, in turn, the formation of ice and mixed phase clouds is recognized as one of the largest sources of uncertainty in climate prediction. We apply an improved dust mineral specific aerosol module in the NASA GISS Earth System ModelE, which takes into account soil aggregates and their fragmentation at emission as well as the emission of large particles. We calculate ice nucleating particle concentrations from K-feldspar abundance for an active site parameterization for a range of activation temperatures and external and internal mixing assumptions. We find that the globally averaged INP concentration is reduced by a factor of two to three, compared to a simple assumption on the size distribution of emitted dust minerals. The decrease can amount to a factor of five in some geographical regions. The results vary little between external and internal mixing and different activation temperatures, except for the coldest temperatures. In the sectional size distribution, the size range 2-4 micrometers contributes the largest INP number.

  17. How the Emitted Size Distribution and Mixing State of Feldspar Affect Ice Nucleating Particles in a Global Model

    NASA Astrophysics Data System (ADS)

    Perlwitz, J. P.; Fridlind, A. M.; Knopf, D. A.; Miller, R. L.; Pérez García-Pando, C.

    2017-12-01

    The effect of aerosol particles on ice nucleation and, in turn, the formation of ice and mixed phase clouds is recognized as one of the largest sources of uncertainty in climate prediction. We apply an improved dust mineral specific aerosol module in the NASA GISS Earth System ModelE, which takes into account soil aggregates and their fragmentation at emission as well as the emission of large particles. We calculate ice nucleating particle concentrations from K-feldspar abundance for an active site parameterization for a range of activation temperatures and external and internal mixing assumptions. We find that the globally averaged INP concentration is reduced by a factor of two to three, compared to a simple assumption on the size distribution of emitted dust minerals. The decrease can amount to a factor of five in some geographical regions. The results vary little between external and internal mixing and different activation temperatures, except for the coldest temperatures. In the sectional size distribution, the size range 2-4 μm contributes the largest INP number.

  18. Viscous Particle Breakup within a Cooling Nuclear Fireball

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wilkinson, J. T.; Knight, K. B.; Dai, Z.

    2016-10-04

    Following the surface detonation of a nuclear weapon, the Earth’s crust and immediate surroundings are drawn into the fireball and form melts. Fallout is formed as these melts incorporate radioactive material from the bomb vapor and cool rapidly. The resultant fallout plume and dispersion of radioactive contamination are a function of several factors, including weather patterns and fallout particle shapes and size distributions. Accurate modeling of the size distributions of fallout forms an important data point for dispersion codes that calculate the aerial distribution of fallout. While morphological evidence for aggregation of molten droplets is well documented in fallout glass populations, the breakup of these molten droplets has not been similarly studied. This study documents evidence that quenched fallout populations preserve evidence of molten breakup mechanisms.

  19. Rank Diversity of Languages: Generic Behavior in Computational Linguistics

    PubMed Central

    Cocho, Germinal; Flores, Jorge; Gershenson, Carlos; Pineda, Carlos; Sánchez, Sergio

    2015-01-01

    Statistical studies of languages have focused on the rank-frequency distribution of words. Instead, we introduce here a measure of how word ranks change in time and call this distribution rank diversity. We calculate this diversity for books published in six European languages since 1800, and find that it follows a universal lognormal distribution. Based on the mean and standard deviation associated with the lognormal distribution, we define three different word regimes of languages: “heads” consist of words which almost do not change their rank in time, “bodies” are words of general use, while “tails” are comprised by context-specific words and vary their rank considerably in time. The heads and bodies reflect the size of language cores identified by linguists for basic communication. We propose a Gaussian random walk model which reproduces the rank variation of words in time and thus the diversity. Rank diversity of words can be understood as the result of random variations in rank, where the size of the variation depends on the rank itself. We find that the core size is similar for all languages studied. PMID:25849150

  20. Rank diversity of languages: generic behavior in computational linguistics.

    PubMed

    Cocho, Germinal; Flores, Jorge; Gershenson, Carlos; Pineda, Carlos; Sánchez, Sergio

    2015-01-01

    Statistical studies of languages have focused on the rank-frequency distribution of words. Instead, we introduce here a measure of how word ranks change in time and call this distribution rank diversity. We calculate this diversity for books published in six European languages since 1800, and find that it follows a universal lognormal distribution. Based on the mean and standard deviation associated with the lognormal distribution, we define three different word regimes of languages: "heads" consist of words which almost do not change their rank in time, "bodies" are words of general use, while "tails" are comprised by context-specific words and vary their rank considerably in time. The heads and bodies reflect the size of language cores identified by linguists for basic communication. We propose a Gaussian random walk model which reproduces the rank variation of words in time and thus the diversity. Rank diversity of words can be understood as the result of random variations in rank, where the size of the variation depends on the rank itself. We find that the core size is similar for all languages studied.
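    The rank-diversity measure introduced in this work can be sketched as below, assuming the natural normalization of the number of distinct words ever observed at a given rank divided by the number of time slices:

```python
from collections import defaultdict

def rank_diversity(rankings):
    """rankings: one rank list per time slice; rankings[t][k-1] is the word at
    rank k at time t. Returns d(k) = (# distinct words ever seen at rank k)
    divided by the number of time slices."""
    T = len(rankings)
    seen = defaultdict(set)
    for ranking in rankings:
        for k, word in enumerate(ranking, start=1):
            seen[k].add(word)
    return {k: len(words) / T for k, words in seen.items()}
```

    A stable "head" word that holds rank 1 in every slice yields d(1) near 1/T, while volatile low ranks approach d(k) = 1; the lognormal shape reported above describes how d(k) rises between these extremes.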

  1. A Sorting Statistic with Application in Neurological Magnetic Resonance Imaging of Autism.

    PubMed

    Levman, Jacob; Takahashi, Emi; Forgeron, Cynthia; MacDonald, Patrick; Stewart, Natalie; Lim, Ashley; Martel, Anne

    2018-01-01

    Effect size refers to the assessment of the extent of differences between two groups of samples on a single measurement. Assessing effect size in medical research is typically accomplished with Cohen's d statistic. Cohen's d statistic assumes that average values are good estimators of the position of a distribution of numbers and also assumes Gaussian (or bell-shaped) underlying data distributions. In this paper, we present an alternative evaluative statistic that can quantify differences between two data distributions in a manner that is similar to traditional effect size calculations; however, the proposed approach avoids making assumptions regarding the shape of the underlying data distribution. The proposed sorting statistic is compared with Cohen's d statistic and is demonstrated to be capable of identifying feature measurements of potential interest for which Cohen's d statistic implies the measurement would be of little use. This proposed sorting statistic has been evaluated on a large clinical autism dataset from Boston Children's Hospital, Harvard Medical School, demonstrating that it can potentially play a constructive role in future healthcare technologies.

  2. A Sorting Statistic with Application in Neurological Magnetic Resonance Imaging of Autism

    PubMed Central

    Takahashi, Emi; Lim, Ashley; Martel, Anne

    2018-01-01

    Effect size refers to the assessment of the extent of differences between two groups of samples on a single measurement. Assessing effect size in medical research is typically accomplished with Cohen's d statistic. Cohen's d statistic assumes that average values are good estimators of the position of a distribution of numbers and also assumes Gaussian (or bell-shaped) underlying data distributions. In this paper, we present an alternative evaluative statistic that can quantify differences between two data distributions in a manner that is similar to traditional effect size calculations; however, the proposed approach avoids making assumptions regarding the shape of the underlying data distribution. The proposed sorting statistic is compared with Cohen's d statistic and is demonstrated to be capable of identifying feature measurements of potential interest for which Cohen's d statistic implies the measurement would be of little use. This proposed sorting statistic has been evaluated on a large clinical autism dataset from Boston Children's Hospital, Harvard Medical School, demonstrating that it can potentially play a constructive role in future healthcare technologies. PMID:29796236
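    For reference, Cohen's d and one simple distribution-free alternative can be sketched as follows. Note that `rank_separation` is a generic rank-overlap illustration (a rescaled Mann-Whitney-type statistic), not the sorting statistic proposed in the paper, whose exact definition is not given in the abstract:

```python
import math

def cohens_d(a, b):
    """Cohen's d: standardized mean difference with pooled standard deviation."""
    ma, mb = sum(a) / len(a), sum(b) / len(b)
    va = sum((x - ma) ** 2 for x in a) / (len(a) - 1)
    vb = sum((x - mb) ** 2 for x in b) / (len(b) - 1)
    sp = math.sqrt(((len(a) - 1) * va + (len(b) - 1) * vb) / (len(a) + len(b) - 2))
    return (ma - mb) / sp

def rank_separation(a, b):
    """Illustrative distribution-free alternative (NOT the paper's statistic):
    probability that a random sample from a exceeds one from b, rescaled to
    [-1, 1]; makes no assumption about the shape of either distribution."""
    gt = sum(1 for x in a for y in b if x > y)
    eq = sum(1 for x in a for y in b if x == y)
    p = (gt + 0.5 * eq) / (len(a) * len(b))
    return 2.0 * p - 1.0
```

    Unlike Cohen's d, the rank-based quantity is insensitive to outliers and to non-Gaussian shapes, which is the general motivation the abstract describes.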

  3. Statistical characteristics of dynamics for population migration driven by the economic interests

    NASA Astrophysics Data System (ADS)

    Huo, Jie; Wang, Xu-Ming; Zhao, Ning; Hao, Rui

    2016-06-01

    Population migration typically occurs under some constraints, which can deeply affect the structure of a society and some other related aspects. Therefore, it is critical to investigate the characteristics of population migration. Data from the China Statistical Yearbook indicate that the regional gross domestic product per capita relates to the population size via a linear or power-law relation. In addition, the distribution of population migration sizes or relative migration strength introduced here is dominated by a shifted power-law relation. To reveal the mechanism that creates the aforementioned distributions, a dynamic model is proposed based on the population migration rule that migration is facilitated by higher financial gains and abated by fewer employment opportunities at the destination, considering the migration cost as a function of the migration distance. The calculated results indicate that the distribution of the relative migration strength is governed by a shifted power-law relation, and that the distribution of migration distances is dominated by a truncated power-law relation. These results suggest that using a power law to fit a distribution may not always be suitable. Additionally, from the modeling framework, one can infer that it is the randomness and determinacy that jointly create the scaling characteristics of the distributions. The calculation also demonstrates that the network formed by active nodes, representing the immigration and emigration regions, usually evolves from an ordered state with a non-uniform structure to a disordered state with a uniform structure, which is evidenced by the increasing structural entropy.
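    A shifted power law P(s) ∝ (s + s0)^(-α) of the kind reported above can be fitted, for illustration, by log-log least squares with a grid scan over the shift parameter (a sketch, not the authors' procedure):

```python
import math

def fit_shifted_power_law(sizes, counts, s0_grid):
    """Least-squares fit of log(count) = log C - alpha * log(size + s0),
    scanning a grid of candidate shifts s0 and keeping the best fit.
    Returns (alpha, s0)."""
    best = None
    for s0 in s0_grid:
        xs = [math.log(s + s0) for s in sizes]
        ys = [math.log(c) for c in counts]
        n = len(xs)
        mx, my = sum(xs) / n, sum(ys) / n
        sxx = sum((x - mx) ** 2 for x in xs)
        sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
        slope = sxy / sxx
        resid = sum((y - (my + slope * (x - mx))) ** 2 for x, y in zip(xs, ys))
        if best is None or resid < best[0]:
            best = (resid, -slope, s0)
    _, alpha, s0 = best
    return alpha, s0
```

    Scanning s0 = 0 recovers a plain power-law fit, so the residual comparison itself indicates whether the shifted form is genuinely the better description.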

  4. Percolation technique for galaxy clustering

    NASA Technical Reports Server (NTRS)

    Klypin, Anatoly; Shandarin, Sergei F.

    1993-01-01

    We study percolation in mass and galaxy distributions obtained in 3D simulations of the CDM, C + HDM, and the power law (n = -1) models in the Omega = 1 universe. Percolation statistics is used here as a quantitative measure of the degree to which a mass or galaxy distribution is of a filamentary or cellular type. The very fast code used calculates the statistics of clusters along with the direct detection of percolation. We found that the two parameters mu(infinity), characterizing the size of the largest cluster, and mu-squared, characterizing the weighted mean size of all clusters excluding the largest one, are extremely useful for evaluating the percolation threshold. An advantage of using these parameters is their low sensitivity to boundary effects. We show that both the CDM and the C + HDM models are extremely filamentary both in mass and galaxy distribution. The percolation thresholds for the mass distributions are determined.
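    The two cluster statistics described above can be computed directly from a 3-D occupancy grid; the sketch below assumes face-adjacent connectivity and one common normalization of mu(infinity) (largest-cluster fraction of occupied cells):

```python
import numpy as np
from scipy import ndimage

def percolation_stats(occupied):
    """occupied: 3-D boolean array of occupied cells. Returns (mu_inf, mu_2):
    mu_inf = fraction of occupied cells in the largest cluster;
    mu_2   = weighted mean size of the remaining clusters, sum(s^2)/sum(s)."""
    labels, n = ndimage.label(occupied)      # face-adjacent connectivity
    if n == 0:
        return 0.0, 0.0
    sizes = np.bincount(labels.ravel())[1:]  # cluster sizes, skip background
    mu_inf = sizes.max() / sizes.sum()
    rest = np.delete(sizes, sizes.argmax())
    mu_2 = float((rest ** 2).sum() / rest.sum()) if rest.size else 0.0
    return float(mu_inf), mu_2
```

    Near the percolation threshold, mu_inf rises sharply while mu_2 peaks, which is why the pair is useful for locating the threshold; both are insensitive to boundary effects, as the abstract notes.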

  5. Biomagnification and tissue distribution of perfluoroalkyl substances (PFASs) in market-size rainbow trout (Oncorhynchus mykiss).

    PubMed

    Goeritz, Ina; Falk, Sandy; Stahl, Thorsten; Schäfers, Christoph; Schlechtriem, Christian

    2013-09-01

    The present study investigated the biomagnification potential as well as the substance and tissue-specific distribution of perfluoroalkyl substances (PFASs) in market-size rainbow trout (Oncorhynchus mykiss). Rainbow trout with an average body weight of 314 ± 21 g were exposed to perfluorobutane sulfonate (PFBS), perfluorohexane sulfonate (PFHxS), perfluorooctane sulfonate (PFOS), perfluorooctanoic acid (PFOA), and perfluorononanoic acid (PFNA) in the diet for 28 d. The accumulation phase was followed by a 28-d depuration phase, in which the test animals were fed with nonspiked trout feed. On days 0, 7, 14, 28, 31, 35, 42, and 56 of the present study, fish were sampled from the test basin for PFAS analysis. Biomagnification factors (BMFs) for all test compounds were determined based on a kinetic approach. Distribution factors were calculated for each test compound to illustrate the disposition of PFASs in rainbow trout after 28 d of exposure. Dietary exposure of market-size rainbow trout to PFASs did not result in biomagnification; BMF values were calculated as 0.42 for PFOS, >0.23 for PFNA, >0.18 for PFHxS, >0.04 for PFOA, and >0.02 for PFBS, which are below the biomagnification threshold of 1. Liver, blood, kidney, and skin were identified as the main target tissues for PFASs in market-size rainbow trout. It was shown that, despite relatively low PFAS contamination, the edible parts of the fish (the fillet and skin) can significantly contribute to the whole-body burden. Copyright © 2013 SETAC.
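    A kinetic BMF of the sort used above is typically the ratio of an uptake rate constant to a depuration rate constant, with the latter obtained from a log-linear fit to the depuration-phase concentrations; a sketch under that assumption (not the study's exact fitting code):

```python
import math

def fit_depuration_rate(times, concs):
    """Depuration rate constant k_d (1/day) from a log-linear least-squares fit
    of C(t) = C0 * exp(-k_d * t) to depuration-phase tissue concentrations."""
    xs, ys = times, [math.log(c) for c in concs]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return -slope

def kinetic_bmf(k_uptake, k_depuration):
    """Kinetic biomagnification factor; values < 1 indicate no biomagnification."""
    return k_uptake / k_depuration
```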

  6. A theoretical study of water equilibria: The cluster distribution versus temperature and pressure for (H2O)n, n=1-60, and ice

    NASA Astrophysics Data System (ADS)

    Lenz, Annika; Ojamäe, Lars

    2009-10-01

    The size distribution of water clusters at equilibrium is studied using quantum-chemical calculations in combination with statistical thermodynamics. The necessary energetic data is obtained by quantum-chemical B3LYP computations and through extrapolations from the B3LYP results for the larger clusters. Clusters with up to 60 molecules are included in the equilibrium computations. Populations of different cluster sizes are calculated using both an ideal gas model with noninteracting clusters and a model where a correction for the interaction energy is included analogous to the van der Waals law. In standard vapor the majority of the water molecules are monomers. For the ideal gas model at 1 atm large clusters [56-mer (0-120 K) and 28-mer (100-260 K)] dominate at low temperatures and separate to smaller clusters [21-22-mer (170-280 K) and 4-6-mer (270-320 K) and to monomers (300-350 K)] when the temperature is increased. At lower pressure the transition from clusters to monomers lies at lower temperatures and fewer cluster sizes are formed. The computed size distribution exhibits enhanced peaks for the clusters consisting of 21 and 28 water molecules; these sizes are for protonated water clusters often referred to as magic numbers. If cluster-cluster interactions are included in the model the transition from clusters to monomers is sharper (i.e., occurs over a smaller temperature interval) than when the ideal-gas model is used. Clusters with 20-22 molecules dominate in the liquid region. When a large icelike cluster is included it will dominate for temperatures up to 325 K for the noninteracting clusters model. Thermodynamic properties (Cp, ΔH) were calculated with in general good agreement with experimental values for the solid and gas phase. A formula for the number of H-bond topologies in a given cluster structure is derived. 
For the 20-mer it is shown that the number of topologies contributes to making the population of dodecahedron-shaped cluster larger than that of a lower-energy fused prism cluster at high temperatures.

  7. A theoretical study of water equilibria: the cluster distribution versus temperature and pressure for (H2O)n, n = 1-60, and ice.

    PubMed

    Lenz, Annika; Ojamäe, Lars

    2009-10-07

    The size distribution of water clusters at equilibrium is studied using quantum-chemical calculations in combination with statistical thermodynamics. The necessary energetic data is obtained by quantum-chemical B3LYP computations and through extrapolations from the B3LYP results for the larger clusters. Clusters with up to 60 molecules are included in the equilibrium computations. Populations of different cluster sizes are calculated using both an ideal gas model with noninteracting clusters and a model where a correction for the interaction energy is included analogous to the van der Waals law. In standard vapor the majority of the water molecules are monomers. For the ideal gas model at 1 atm large clusters [56-mer (0-120 K) and 28-mer (100-260 K)] dominate at low temperatures and separate to smaller clusters [21-22-mer (170-280 K) and 4-6-mer (270-320 K) and to monomers (300-350 K)] when the temperature is increased. At lower pressure the transition from clusters to monomers lies at lower temperatures and fewer cluster sizes are formed. The computed size distribution exhibits enhanced peaks for the clusters consisting of 21 and 28 water molecules; these sizes are for protonated water clusters often referred to as magic numbers. If cluster-cluster interactions are included in the model the transition from clusters to monomers is sharper (i.e., occurs over a smaller temperature interval) than when the ideal-gas model is used. Clusters with 20-22 molecules dominate in the liquid region. When a large icelike cluster is included it will dominate for temperatures up to 325 K for the noninteracting clusters model. Thermodynamic properties (Cp, ΔH) were calculated with in general good agreement with experimental values for the solid and gas phase. A formula for the number of H-bond topologies in a given cluster structure is derived.
For the 20-mer it is shown that the number of topologies contributes to making the population of dodecahedron-shaped cluster larger than that of a lower-energy fused prism cluster at high temperatures.
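    The equilibrium populations described above follow from the law of mass action for an ideal-gas mixture of clusters; a minimal sketch with hypothetical free-energy inputs (the paper's actual B3LYP energies are not reproduced here):

```python
import math

R = 8.314  # gas constant, J/(mol*K)

def cluster_populations(delta_g, p_atm, T):
    """Mole fractions x_n of n-mers in an ideal-gas mixture from the law of
    mass action: x_n = x_1^n * exp(-dG_n/(R*T)) * (p/p0)^(n-1), where dG_n
    is the standard free energy (J/mol) of forming the n-mer from n monomers
    at p0 = 1 atm (dG_1 = 0). Solves sum(x_n) = 1 for x_1 by bisection."""
    ns = sorted(delta_g)

    def total(x1):
        return sum(x1 ** n * math.exp(-delta_g[n] / (R * T)) * p_atm ** (n - 1)
                   for n in ns)

    lo, hi = 0.0, 1.0
    for _ in range(200):                # total(x1) is increasing in x1
        mid = 0.5 * (lo + hi)
        if total(mid) < 1.0:
            lo = mid
        else:
            hi = mid
    x1 = 0.5 * (lo + hi)
    return {n: x1 ** n * math.exp(-delta_g[n] / (R * T)) * p_atm ** (n - 1)
            for n in ns}
```

    Lowering the pressure p reduces every (p/p0)^(n-1) factor, shifting the equilibrium toward monomers, consistent with the trend reported in the abstract.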

  8. The effects of substrate size, surface area, and density on coat thickness of multi-particulate dosage forms.

    PubMed

    Heinicke, Grant; Matthews, Frank; Schwartz, Joseph B

    2005-01-01

    Drug-layering experiments were performed in a fluid bed fitted with a rotor granulator insert using diltiazem as a model drug. The drug was applied in various quantities to sugar spheres of different mesh sizes to give a series of drug-layered sugar spheres (cores) of different potency, size, and weight per particle. The drug presence lowered the bulk density of the cores in proportion to the quantity of added drug. Polymer coating of each core lot was performed in a fluid bed fitted with a Wurster insert. A series of polymer-coated cores (pellets) was removed from each coating experiment. The mean diameter of each core and each pellet sample was determined by image analysis. The rate of change of diameter on polymer addition was determined for each starting size of core and compared to calculated values. The core diameter was displaced from the line of best fit through the pellet diameter data. Cores of different potency with the same size distribution were made by layering increasing quantities of drug onto sugar spheres of decreasing mesh size. Equal quantities of polymer were applied to the same-sized core lots and coat thickness was measured. Weight/weight calculations predict equal coat thickness under these conditions, but measurable differences were found. Simple corrections to core charge weight in the Wurster insert were successfully used to manufacture pellets having the same coat thickness. The sensitivity of the image analysis technique in measuring particle size distributions (PSDs) was demonstrated by measuring a displacement in PSD after addition of 0.5% w/w talc to a pellet sample.
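    The weight/weight coat-thickness prediction referred to above amounts to spreading the applied polymer volume uniformly over spherical cores; a sketch for idealized monodisperse spheres (real beads have a size distribution, which is part of why measured thicknesses deviate):

```python
def coat_thickness(core_diam, core_mass, core_density, polymer_mass, polymer_density):
    """Predicted film thickness (same length unit as core_diam) when a batch of
    monodisperse spherical cores is coated uniformly: the coated-sphere volume
    is the core volume scaled by (1 + polymer volume / core volume)."""
    volume_ratio = (polymer_mass / polymer_density) / (core_mass / core_density)
    return 0.5 * core_diam * ((1.0 + volume_ratio) ** (1.0 / 3.0) - 1.0)
```

    At a fixed polymer-to-core mass ratio the predicted thickness scales linearly with core diameter, which is why equal %w/w coatings on different-sized cores do not give equal films.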

  9. In Situ Aerosol Profile Measurements and Comparisons with SAGE 3 Aerosol Extinction and Surface Area Profiles at 68 deg North

    NASA Technical Reports Server (NTRS)

    2005-01-01

    Under funding from this proposal three in situ profile measurements of stratospheric sulfate aerosol and ozone were completed from balloon-borne platforms. The measured quantities are aerosol size resolved number concentration and ozone. The one derived product is aerosol size distribution, from which aerosol moments, such as surface area, volume, and extinction can be calculated for comparison with SAGE III measurements and SAGE III derived products, such as surface area. The analysis of these profiles and comparison with SAGE III extinction measurements and SAGE III derived surface areas are provided in Yongxiao (2005), which comprised the research thesis component of Mr. Jian Yongxiao's M.S. degree in Atmospheric Science at the University of Wyoming. In addition, analysis continues on using principal component analysis (PCA) to derive aerosol surface area from the 9 wavelength extinction measurements available from SAGE III. This paper will present PCA components to calculate surface area from SAGE III measurements and compare these derived surface areas with those available directly from in situ size distribution measurements, as well as surface areas which would be derived from PCA and Thomason's algorithm applied to the four wavelength SAGE II extinction measurements.
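    The aerosol moments mentioned above follow directly from a binned number size distribution; a minimal sketch using a bin-midpoint approximation (an assumption; the actual retrieval bins differ):

```python
import math

def aerosol_moments(diameters_um, number_conc):
    """Surface-area (um^2/cm^3) and volume (um^3/cm^3) concentrations from a
    binned number size distribution: bin-midpoint diameters in um, number
    concentrations per cm^3."""
    area = sum(n * math.pi * d ** 2 for d, n in zip(diameters_um, number_conc))
    volume = sum(n * math.pi * d ** 3 / 6.0 for d, n in zip(diameters_um, number_conc))
    return area, volume
```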

  10. Level II scour analysis for Bridge 23 (WODSTH00180023) on Town Highway 18, crossing North Bridgewater Brook, Woodstock, Vermont

    USGS Publications Warehouse

    Olson, Scott A.; Weber, Matthew A.

    1996-01-01

    Scour depths and rock rip-rap sizes were computed using the general guidelines described in Hydraulic Engineering Circular 18 (Richardson and others, 1993). Scour depths were calculated assuming an infinite depth of erosive material and a homogeneous particle-size distribution. The scour analysis results are presented in tables 1 and 2 and a graph of the scour depths is presented in figure 8.
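    For reference, the pier-scour relation in Hydraulic Engineering Circular 18 (the CSU equation) has the form sketched below. The correction-factor defaults here are illustrative placeholders only; actual values and units must be taken from the circular itself:

```python
import math

def pier_scour_depth(y1, a, v1, k1=1.0, k2=1.0, k3=1.1, g=9.81):
    """CSU pier-scour equation as presented in HEC-18:
    ys / y1 = 2.0 * K1 * K2 * K3 * (a / y1)^0.65 * Fr^0.43,
    with Fr = v1 / sqrt(g * y1). Inputs (SI): y1 approach flow depth (m),
    a pier width (m), v1 approach velocity (m/s); K1-K3 are correction
    factors for nose shape, angle of attack, and bed condition."""
    fr = v1 / math.sqrt(g * y1)
    return y1 * 2.0 * k1 * k2 * k3 * (a / y1) ** 0.65 * fr ** 0.43
```

    The homogeneous particle-size assumption noted in the abstract enters through the bed-condition and armoring corrections, not through the basic equation form above.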

  11. Level II scour analysis for Bridge 22 (CRAFTH00180022) on Town Highway 18, crossing Black River, Craftsbury, Vermont

    USGS Publications Warehouse

    Ayotte, Joseph D.

    1996-01-01

    Scour depths and rock rip-rap sizes were computed using the general guidelines described in Hydraulic Engineering Circular 18 (Richardson and others, 1993). Scour depths were calculated assuming an infinite depth of erosive material and a homogeneous particle-size distribution. The scour analysis results are presented in tables 1 and 2 and a graph of the scour depths is presented in figure 8.

  12. Level II scour analysis for Bridge 31 (ALBATH00380031) on Town Highway 38, crossing the Black River, Albany, Vermont

    USGS Publications Warehouse

    Boehmler, Erick M.

    1996-01-01

    Scour depths and rock rip-rap sizes were computed using the general guidelines described in Hydraulic Engineering Circular 18 (Richardson and others, 1993). Scour depths were calculated assuming an infinite depth of erosive material and a homogeneous particle-size distribution. The scour analysis results are presented in tables 1 and 2 and a graph of the scour depths is presented in figure 8.

  13. Level II scour analysis for Bridge 25 (ALBATH00250030) on Town Highway 25, crossing the Black River, Albany, Vermont

    USGS Publications Warehouse

    Boehmler, Erick M.

    1996-01-01

    Scour depths and rock rip-rap sizes were computed using the general guidelines described in Hydraulic Engineering Circular 18 (Richardson and others, 1993). Scour depths were calculated assuming an infinite depth of erosive material and a homogeneous particle-size distribution. The scour analysis results are presented in tables 1 and 2 and a graph of the scour depths is presented in figure 8.

  14. Level II scour analysis for Bridge 6 (IRASTH00050006) on Town Highway 5, crossing the Black River, Irasburg, Vermont

    USGS Publications Warehouse

    Olson, Scott A.

    1996-01-01

    Scour depths and rock rip-rap sizes were computed using the general guidelines described in Hydraulic Engineering Circular 18 (Richardson and others, 1993). Scour depths were calculated assuming an infinite depth of erosive material and a homogeneous particle-size distribution. The scour analysis results are presented in tables 1 and 2 and a graph of the scour depths is presented in figure 8.

  15. Level II scour analysis for Bridge 2 (CRAFTH00590002) on Town Highway 59, crossing Black River, Craftsbury, Vermont

    USGS Publications Warehouse

    Ayotte, Joseph D.

    1996-01-01

    Scour depths and rock rip-rap sizes were computed using the general guidelines described in Hydraulic Engineering Circular 18 (Richardson and others, 1993). Scour depths were calculated assuming an infinite depth of erosive material and a homogeneous particle-size distribution. The scour analysis results are presented in tables 1 and 2 and a graph of the scour depths is presented in figure 8.

  16. Level II scour analysis for Bridge 26 (CRAFTH00250026) on Town Highway 25, crossing Black River, Craftsbury, Vermont

    USGS Publications Warehouse

    Ayotte, Joseph D.

    1996-01-01

    Scour depths and rock rip-rap sizes were computed using the general guidelines described in Hydraulic Engineering Circular 18 (Richardson and others, 1993). Scour depths were calculated assuming an infinite depth of erosive material and a homogeneous particle-size distribution. The scour analysis results are presented in tables 1 and 2 and a graph of the scour depths is presented in figure 8.

  17. SU-E-T-02: 90Y Microspheres Dosimetry Calculation with Voxel-S-Value Method: A Simple Use in the Clinic

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Maneru, F; Gracia, M; Gallardo, N

    2015-06-15

    Purpose: To present a simple and feasible method of voxel-S-value (VSV) dosimetry calculation for daily clinical use in radioembolization (RE) with 90Y microspheres. Dose distributions are obtained and visualized over CT images. Methods: Spatial dose distributions and doses in liver and tumor are calculated for RE patients treated with Sirtex Medical microspheres at our center. Data obtained from the previous simulation of the treatment were the basis for the calculations: a Tc-99m macroaggregated albumin SPECT-CT study on a gamma camera (Infinia, General Electric Healthcare). Attenuation correction and the ordered-subsets expectation maximization (OSEM) algorithm were applied. For VSV calculations, both SPECT and CT were exported from the gamma camera workstation and registered with the radiotherapy treatment planning system (Eclipse, Varian Medical Systems). Convolution of the activity matrix and a local dose deposition kernel (S values) was implemented with in-house developed software based on Python code. The kernel was downloaded from www.medphys.it. The final dose distribution was evaluated with the free software Dicompyler. Results: Liver mean dose is consistent with Partition method calculations (accepted as a good standard). Tumor dose has not been evaluated due to its high dependence on contouring: small lesion size, hot spots in healthy tissue, and blurred limits can strongly affect the dose distribution in tumors. Extra work includes: export and import of images and other DICOM files, creating and calculating a dummy external radiotherapy plan, the convolution calculation, and evaluation of the dose distribution with Dicompyler. Total time spent is less than 2 hours. Conclusion: VSV calculations do not require any extra appointment or any uncomfortable process for the patient. The total process is short enough to carry out on the same day as the simulation and to contribute to prescription decisions prior to treatment. Three-dimensional dose knowledge provides much more information than other methods of dose calculation usually applied in the clinic.
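    The core VSV operation, convolving the cumulated-activity map with a voxel S-value kernel, can be sketched as below (illustrative only; the authors used their own in-house Python tool with a kernel from www.medphys.it):

```python
import numpy as np
from scipy.signal import fftconvolve

def voxel_dose(activity_map, s_kernel):
    """Absorbed-dose map as the 3-D convolution of the cumulated-activity map
    (e.g. MBq*s per voxel) with a voxel S-value kernel (Gy per MBq*s),
    kernel centered on the source voxel; 'same' mode keeps the CT grid shape."""
    return fftconvolve(activity_map, s_kernel, mode="same")
```

    Organ mean doses then follow by averaging the dose map over a contoured mask, e.g. `dose[liver_mask].mean()` for a boolean liver mask.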

  18. Influence of CT contrast agent on dose calculation of intensity modulated radiation therapy plan for nasopharyngeal carcinoma.

    PubMed

    Lee, F K-H; Chan, C C-L; Law, C-K

    2009-02-01

    Contrast enhanced computed tomography (CECT) has been used for delineation of treatment target in radiotherapy. The altered Hounsfield units caused by the injected contrast agent may affect radiation dose calculation. We investigated this effect on intensity modulated radiotherapy (IMRT) of nasopharyngeal carcinoma (NPC). Dose distributions of 15 IMRT plans were recalculated on CECT. Dose statistics for organs at risk (OAR) and treatment targets were recorded for the plain CT-calculated and CECT-calculated plans. Statistical significance of the differences was evaluated. Correlations were also tested among the magnitude of the calculated dose difference, tumor size, and level of contrast enhancement. Differences in nodal mean/median dose were statistically significant, but small (approximately 0.15 Gy for a 66 Gy prescription). In the vicinity of the carotid arteries, the difference in calculated dose was also statistically significant, but only with a mean of approximately 0.2 Gy. We did not observe any significant correlation between the difference in the calculated dose and the tumor size or level of enhancement. The results implied that the calculated dose difference was clinically insignificant and may be acceptable for IMRT planning.

  19. Influence of the weighing bar position in vessel on measurement of cement’s particle size distribution by using the buoyancy weighing-bar method

    NASA Astrophysics Data System (ADS)

    Tambun, R.; Sihombing, R. O.; Simanjuntak, A.; Hanum, F.

    2018-02-01

    The buoyancy weighing-bar method is a new, simple, and cost-effective method to determine the particle size distribution of both settling and floating particles. In this method, the density change in a suspension due to particle migration is measured by weighing the buoyancy acting on a weighing bar hung in the suspension, and the particle size distribution is then calculated using the length of the bar and the time-course change in its mass. The apparatus consists of a weighing bar and an analytical balance with a hook for under-floor weighing. The weighing bar is used to detect the density change in the suspension. In this study we investigate the influence of the weighing-bar position in the vessel on settling particle size distribution measurements of cement by the buoyancy weighing-bar method. The vessel used in this experiment is a graduated cylinder with a diameter of 65 mm, and the weighing bar is placed either at the center or off center of the vessel. The diameter of the weighing bar is 10 mm, and kerosene is used as the dispersion liquid. The results show that the weighing-bar position in the vessel has no significant effect on the determination of the cement’s particle size distribution by the buoyancy weighing-bar method, and the results are comparable to those measured by the settling balance method.
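    The particle-size calculation underlying the buoyancy weighing-bar method rests on Stokes' law: after settling time t, only particles smaller than a cutoff diameter remain above the bar. A sketch of that relation (the property values in the usage example are assumed, approximate figures for cement in kerosene):

```python
import math

def stokes_diameter(t, h, mu, rho_p, rho_f, g=9.81):
    """Largest particle diameter still in suspension above depth h after
    settling time t, from Stokes' law:
    D = sqrt(18 * mu * h / ((rho_p - rho_f) * g * t)).
    SI units: t in s, h in m, mu in Pa*s, densities in kg/m^3; returns metres."""
    return math.sqrt(18.0 * mu * h / ((rho_p - rho_f) * g * t))
```

    Evaluating this cutoff at each weighing time converts the measured mass-versus-time curve of the bar into a cumulative particle size distribution.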

  20. Constraints on Particle Sizes in Saturn's G Ring from Ring Plane Crossing Observations

    NASA Astrophysics Data System (ADS)

    Throop, H. B.; Esposito, L. W.

    1996-09-01

    The ring plane crossings in 1995-96 allowed earth-based observations of Saturn's diffuse rings (Nicholson et al., Nature 272, 1996; De Pater et al., Icarus 121, 1996) at a phase angle of alpha ~ 5 deg. We calculate the G ring reflectance for steady-state distributions of dust to km-sized bodies from a range of physical models which track the evolution of the G ring from its initial formation following the disruption of a progenitor satellite (Canup & Esposito 1996, Icarus, in press). We model scattering from the ring's small particles using an exact T-matrix method for nonspherical, absorptive particles (Mishchenko et al. 1996, JGR Atmos., in press), large particles using the phase function and spectrum of Europa, and intermediate particles using a linear combination of the small and large limits. Two distinct particle size distributions from the CE96 model fit the observed spectrum. The first is that of a dusty ring, with the majority of ring reflectance in dust particles with a relatively shallow power-law size distribution exponent q ~ 2.5. The second has equal reflectances from a) dust in the range q ~ 3.5 -- 6.5 and b) macroscopic bodies > 1 mm. In this second case, the respective slightly blue and red components combine to form the observed relatively flat spectrum. Although light scattering in backscatter is not sufficient to completely constrain the G ring size distribution, the distributions predicted by the CE96 model can explain the earth-based observations.

  1. Volume and surface area size distribution, water mass and model fitting of GCE/CASE/WATOX marine aerosols

    NASA Astrophysics Data System (ADS)

    Kim, Y.; Sievering, H.; Boatman, J.

    1990-06-01

    As a part of the Global Change Expedition/Coordinated Air-Sea Experiment/Western Atlantic Ocean Experiment (GCE/CASE/WATOX), size distributions of marine aerosols were measured at two altitudes of about 2750 and 150 m above sea level (asl) over the size range 0.1-32 μm. Lognormal fitting was applied to the corrected aerosol size spectra to determine the volume and surface area size distributions of the CASE-WATOX marine aerosols. Each aerosol size distribution was fitted with three lognormal distributions representing fine-, large-, and giant-particle modes. The water volume fraction and dry particle size of each aerosol size distribution were also calculated using empirical formulas for particle size as a function of relative humidity and particle type. Because of the increased influence from anthropogenic sources in the continental United States, higher aerosol volume concentrations were observed in the fine-particle mode nearshore off the east coast: 2.11 and 3.63 μm³ cm⁻³ for the free troposphere (FT) and marine boundary layer (MBL), compared with the open-sea Bermuda area values: 0.13 and 0.74 μm³ cm⁻³ for FT and MBL. The large-particle mode exhibits the least variation in volume distributions between the east coast and the open-sea Bermuda area, having a volume geometric median diameter (VGMD) between 1.4 and 1.6 μm and a geometric standard deviation between 1.57 and 1.68. For the giant-particle mode, larger VGMD and volume concentrations were observed for marine aerosols nearshore off the east coast than in the open-sea Bermuda area because of higher relative humidity and higher surface wind speed conditions. Wet VGMD and aerosol water volume concentrations at the 15 m asl ship level were determined by extrapolating from those obtained by analysis of the CASE-WATOX aircraft aerosol data. The abundance of aerosol water in the MBL serves as an important pathway for heterogeneous conversion of SO2 in sea salt aerosol particles.
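
    The trimodal lognormal representation used here is straightforward to reproduce: each mode contributes dV/dlnD = V / (√(2π) ln σg) · exp(−(ln(D/Dg))² / (2 ln²σg)). A minimal sketch (the mode parameters in the test below are illustrative, not the measured values):

```python
import math

def lognormal_mode(D, V, Dg, sigma_g):
    """dV/dlnD for one lognormal mode: total volume V (um^3 cm^-3),
    volume geometric median diameter Dg (um), geometric std dev sigma_g."""
    ln_s = math.log(sigma_g)
    return (V / (math.sqrt(2.0 * math.pi) * ln_s)
            * math.exp(-(math.log(D / Dg)) ** 2 / (2.0 * ln_s ** 2)))

def trimodal(D, modes):
    """Sum of fine-, large- and giant-particle lognormal modes;
    modes is a list of (V, Dg, sigma_g) tuples."""
    return sum(lognormal_mode(D, V, Dg, s) for V, Dg, s in modes)
```

    Integrating dV/dlnD over lnD recovers the total volume concentration, which is a convenient consistency check on fitted modes.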

  2. Cosmic ray exposure ages of iron meteorites, complex irradiation and the constancy of cosmic ray flux in the past

    NASA Technical Reports Server (NTRS)

    Marti, K.; Lavielle, B.; Regnier, S.

    1984-01-01

    While previous calculations of potassium ages assumed a constant cosmic ray flux and a single-stage (no change in size) exposure of iron meteorites, the present calculations relax these assumptions, and the results reveal multistage irradiations for some 25% of the meteorites studied, implying multiple breakups in space. The distribution of exposure ages suggests several major collisions (based on chemical composition and structure), although the calibration of the age scales is not yet complete. It is concluded that shielding-corrected production rates (corrections which depend on the size of the meteoroid and the position of the sample) are consistent for the age bracket of 300 to 900 Myr. These production rates differ in a systematic way from those calculated for present-day fluxes of cosmic rays (such as obtained for the last few million years).

  3. RACORO aerosol data processing

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Elisabeth Andrews

    2011-10-31

    The RACORO aerosol data (cloud condensation nuclei (CCN), condensation nuclei (CN) and aerosol size distributions) need further processing to be useful for model evaluation (e.g., GCM droplet nucleation parameterizations) and other investigations. These tasks include: (1) identification and flagging of 'splash'-contaminated Twin Otter aerosol data; (2) calculation of actual supersaturation (SS) values in the two CCN columns flown on the Twin Otter; (3) interpolation of CCN spectra from SGP and the Twin Otter to 0.2% SS; (4) processing of data for spatial variability studies; and (5) calculation of light scattering from measured aerosol size distributions. Below we first briefly describe the measurements and then describe the results of several data processing tasks that have been completed, paving the way for the scientific analyses for which the campaign was designed. The end result of this research will be several aerosol data sets which can be used to achieve some of the goals of the RACORO mission, including enhanced understanding of cloud-aerosol interactions and improved cloud simulations in climate models.

  4. Sample Size Calculations for Population Size Estimation Studies Using Multiplier Methods With Respondent-Driven Sampling Surveys.

    PubMed

    Fearon, Elizabeth; Chabata, Sungai T; Thompson, Jennifer A; Cowan, Frances M; Hargreaves, James R

    2017-09-14

    While guidance exists for obtaining population size estimates using multiplier methods with respondent-driven sampling surveys, specific guidance for making sample size decisions is lacking. Our objective is to guide the design of multiplier-method population size estimation studies that use respondent-driven sampling surveys, so as to reduce the random error around the estimate obtained. The population size estimate is obtained by dividing the number of individuals receiving a service or the number of unique objects distributed (M) by the proportion of individuals in a representative survey who report receipt of the service or object (P). We have developed an approach to sample size calculation, interpreting methods to estimate the variance around estimates obtained using multiplier methods in conjunction with research into design effects and respondent-driven sampling. We describe an application to estimate the number of female sex workers in Harare, Zimbabwe. There is high variance in estimates. Random error around the size estimate reflects uncertainty from M and P, particularly when the estimate of P in the respondent-driven sampling survey is low. As expected, sample size requirements are higher when the design effect of the survey is assumed to be greater. We suggest a method for investigating the effects of sample size on the precision of a population size estimate obtained using multiplier methods and respondent-driven sampling. Uncertainty in the size estimate is high, particularly when P is small, so, balancing against other potential sources of bias, we advise researchers to consider longer service attendance reference periods and to distribute more unique objects, which is likely to result in a higher estimate of P in the respondent-driven sampling survey. ©Elizabeth Fearon, Sungai T Chabata, Jennifer A Thompson, Frances M Cowan, James R Hargreaves. Originally published in JMIR Public Health and Surveillance (http://publichealth.jmir.org), 14.09.2017.

  5. Light scattering by lunar-like particle size distributions

    NASA Technical Reports Server (NTRS)

    Goguen, Jay D.

    1991-01-01

    A fundamental input to models of light scattering from planetary regoliths is the mean phase function of the regolith particles. Using the known size distribution for typical lunar soils, the mean phase function and mean linear polarization for a regolith volume element of spherical particles of any composition were calculated from Mie theory. The two contour plots given here summarize the changes in the mean phase function and linear polarization with changes in the real part of the complex index of refraction, n - ik, for k equals 0.01, the visible wavelength 0.55 micrometers, and the particle size distribution of the typical mature lunar soil 72141. A second figure is a similar index-phase surface, except with k equals 0.1. The index-phase surfaces from this survey are a first order description of scattering by lunar-like regoliths of spherical particles of arbitrary composition. They form the basis of functions that span a large range of parameter-space.

  6. 3-D Simulations of the Inner Dust Comae for Comet 67P/Churyumov-Gerasimenko

    NASA Astrophysics Data System (ADS)

    Marschall, Raphael; Liao, Ying; Su, Cheng-Chin; Wu, Jong-Shinn; Thomas, Nicolas; Rubin, Martin; Lai, Ian Lin; Ip, Wing-Huen; Keller, Horst Uwe; Knollenberg, Jörg; Kührt, Ekkehard; Skorov, Yuri; Altwegg, Kathrin; Vincent, Jean-Baptiste; Gicquel, Adeline; Shi, Xian; Sierks, Holger; Naletto, Giampiero

    2015-04-01

    The aims of this study are to (1) model the gas flow field in the innermost coma for plausible activity distributions of ROSETTA's target comet 67P/Churyumov-Gerasimenko (67P) using the SHAP2 model, (2) compare this with the ROSINA/COPS gas density, (3) investigate the acceleration of dust by gas drag and the resulting dust distribution, (4) produce artificial images of the dust coma brightness as seen from different viewing geometries for a range of heliocentric distances, and (5) compare the artificial images quantitatively with observations by the OSIRIS imaging system. We calculate the dust distribution in the coma within the first ten kilometers of the nucleus by treating the dust as spherical test particles in the gas field without any back-coupling. The motion of the dust is driven by the drag force resulting from the gas flow. We assume a quadratic drag force with a velocity- and temperature-dependent drag coefficient. The gravitational force of a point nucleus on the dust is also taken into account, which determines, for example, the maximum liftable dust size. Surface cohesion is not included. 40 dust sizes in the range between 8 nm and 0.3 mm are considered. For every dust size the dust densities and velocities are calculated by tracking around one million simulation particles in the gas field. We assume that the distribution of dust with size follows a power law; specifically, the number of particles n of a particular radius r is given by n ~ r^-β, with usual values 3 ≤ β ≤ 4, where β = 3 corresponds to equal mass per size interval and β = 4 to a shift of the mass towards the small particles. For comparison with images from the high-resolution camera OSIRIS on board ESA's ROSETTA spacecraft, the viewing geometry of the camera can be specified and a line-of-sight integration through the dust density is performed. By means of Mie scattering on the particles the dust brightness can be determined. A variety of dust size distributions, gas-to-dust mass ratios, wavelengths and optical properties can thus be studied and compared with the data.
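
    Sampling dust radii from the assumed power law n ~ r^-β is a standard inverse-CDF exercise. A minimal sketch (the 8 nm to 0.3 mm limits follow the abstract; the seed and the β value used below are illustrative):

```python
import random

def sample_powerlaw(rng, r_min, r_max, beta):
    """Draw one dust radius from n(r) ~ r**-beta on [r_min, r_max] (beta != 1)
    by inverting the truncated power-law CDF:
    r = (r_min**a + u * (r_max**a - r_min**a))**(1/a), with a = 1 - beta."""
    u = rng.random()
    a = 1.0 - beta
    return (r_min ** a + u * (r_max ** a - r_min ** a)) ** (1.0 / a)
```

    For β = 3.5 on these limits the samples cluster near the small-size cutoff, consistent with most of the particle number residing in the finest grains.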

  7. Prediction of the size distributions of methanol-ethanol clusters detected in VUV laser/time-of-flight mass spectrometry.

    PubMed

    Liu, Yi; Consta, Styliani; Shi, Yujun; Lipson, R H; Goddard, William A

    2009-06-25

    The size distributions and geometries of vapor clusters equilibrated with methanol-ethanol (Me-Et) liquid mixtures were recently studied by vacuum ultraviolet (VUV) laser time-of-flight (TOF) mass spectrometry and density functional theory (DFT) calculations (Liu, Y.; Consta, S.; Ogeer, F.; Shi, Y. J.; Lipson, R. H. Can. J. Chem. 2007, 85, 843-852). On the basis of the mass spectra recorded, it was concluded that the formation of neutral tetramers is particularly prominent. Here we develop grand canonical Monte Carlo (GCMC) and molecular dynamics (MD) frameworks to compute cluster size distributions in vapor mixtures that allow a direct comparison with experimental mass spectra. Using the all-atom optimized potential for liquid simulations (OPLS-AA) force field, we systematically examined the neutral cluster size distributions as functions of pressure and temperature. These neutral cluster distributions were then used to derive ionized cluster distributions for direct comparison with the experiments. The simulations suggest that supersaturation at 12 to 16 times the equilibrium vapor pressure at 298 K, or supercooling to temperatures of 240 to 260 K at the equilibrium vapor pressure, can lead to the relatively abundant tetramer population observed in the experiments. Our simulations capture the most distinct features observed in the experimental TOF mass spectra: Et(3)H(+) at m/z = 139 in the vapor corresponding to the 10:90% Me-Et liquid mixture, and Me(3)H(+) at m/z = 97 in the vapors corresponding to the 50:50% and 90:10% Me-Et liquid mixtures. The hybrid GCMC scheme developed in this work extends the capability of studying the size distributions of neat clusters to mixed species and provides a useful tool for studying environmentally important systems such as atmospheric aerosols.

  8. Power-law tails in the distribution of order imbalance

    NASA Astrophysics Data System (ADS)

    Zhang, Ting; Gu, Gao-Feng; Xu, Hai-Chuan; Xiong, Xiong; Chen, Wei; Zhou, Wei-Xing

    2017-10-01

    We investigate the probability distribution of order imbalance calculated from the order flow data of 43 Chinese stocks traded on the Shenzhen Stock Exchange. Two definitions of order imbalance are considered, based on the order number and on the order size. We find that the order imbalance distributions of individual stocks have power-law tails. However, the tail index fluctuates remarkably from stock to stock. We also investigate the distributions of aggregated order imbalance of all stocks at different timescales Δt. We find no clear trend in the tail index with respect to Δt. All the analyses suggest that the distributions of order imbalance are asymmetric.

  9. Analysis of intergranular fission-gas bubble-size distributions in irradiated uranium-molybdenum alloy fuel

    NASA Astrophysics Data System (ADS)

    Rest, J.; Hofman, G. L.; Kim, Yeon Soo

    2009-04-01

    An analytical model for the nucleation and growth of intra- and intergranular fission-gas bubbles is used to characterize fission-gas bubble development in low-enriched U-Mo alloy fuel irradiated in the Advanced Test Reactor in Idaho as part of the Reduced Enrichment for Research and Test Reactors (RERTR) program. Fuel burnup was limited to less than ˜7.8 at.% U in order to capture the fuel-swelling stage prior to irradiation-induced recrystallization. The model couples the calculation of the time evolution of the average intergranular bubble radius and number density to the calculation of the intergranular bubble-size distribution, based on differential growth rate and sputtering coalescence processes. Recent TEM analyses of intragranular bubbles in U-Mo were used to set the irradiation-induced diffusivity and re-solution rate in the bubble-swelling model. Using these values, good agreement was obtained for the intergranular bubble distribution compared against measured post-irradiation examination (PIE) data, using grain-boundary diffusion enhancement factors of 15-125, depending on the Mo concentration. This range of enhancement factors is consistent with values obtained in the literature.

  10. Latent uncertainties of the precalculated track Monte Carlo method

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Renaud, Marc-André; Seuntjens, Jan; Roberge, David

    Purpose: While significant progress has been made in speeding up Monte Carlo (MC) dose calculation methods, they remain too time-consuming for the purpose of inverse planning. To achieve clinically usable calculation speeds, a precalculated Monte Carlo (PMC) algorithm for proton and electron transport was developed to run on graphics processing units (GPUs). The algorithm utilizes pregenerated particle track data from conventional MC codes for different materials such as water, bone, and lung to produce dose distributions in voxelized phantoms. While PMC methods have been described in the past, an explicit quantification of the latent uncertainty arising from the limited number of unique tracks in the pregenerated track bank is missing from the literature. With a proper uncertainty analysis, an optimal number of tracks in the pregenerated track bank can be selected for a desired dose calculation uncertainty. Methods: Particle tracks were pregenerated for electrons and protons using EGSnrc and GEANT4 and saved in a database. The PMC algorithm for track selection, rotation, and transport was implemented on the Compute Unified Device Architecture (CUDA) 4.0 programming framework. PMC dose distributions were calculated in a variety of media and compared to benchmark dose distributions simulated from the corresponding general-purpose MC codes in the same conditions. A latent uncertainty metric was defined and analysis was performed by varying the pregenerated track bank size and the number of simulated primary particle histories and comparing dose values to a “ground truth” benchmark dose distribution calculated to 0.04% average uncertainty in voxels with dose greater than 20% of Dmax. Efficiency metrics were calculated against benchmark MC codes on a single CPU core with no variance reduction.
Results: Dose distributions generated using PMC and benchmark MC codes were compared and found to be within 2% of each other in voxels with dose values greater than 20% of the maximum dose. In proton calculations, a small (≤1 mm) distance-to-agreement error was observed at the Bragg peak. Latent uncertainty was characterized for electrons and found to follow a Poisson distribution with the number of unique tracks per energy. A track bank of 12 energies and 60,000 unique tracks per pregenerated energy in water had a size of 2.4 GB and achieved a latent uncertainty of approximately 1% at an optimal efficiency gain over DOSXYZnrc. Larger track banks produced a lower latent uncertainty at the cost of increased memory consumption. Using an NVIDIA GTX 590, efficiency analysis showed an 807× efficiency increase over DOSXYZnrc for 16 MeV electrons in water and 508× for 16 MeV electrons in bone. Conclusions: The PMC method can calculate dose distributions for electrons and protons to a statistical uncertainty of 1% with a large efficiency gain over conventional MC codes. Before performing clinical dose calculations, models to calculate dose contributions from uncharged particles must be implemented. Following the successful implementation of these models, the PMC method will be evaluated as a candidate for inverse planning of modulated electron radiation therapy and scanned proton beams.
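
    The dependence of latent uncertainty on the number of unique tracks can be illustrated with a toy model: however many histories are simulated, reusing a finite track bank means the bank mean, not the true mean, bounds the achievable accuracy, and its spread falls as 1/√N. A sketch with per-track dose deposits modelled as exponential (an assumption for illustration, not the paper's data):

```python
import random
import statistics

def latent_sd(bank_size, trials=400, rng=random.Random(1)):
    """Monte Carlo estimate of the latent error: the spread of the bank-mean
    dose around the true mean over many independently drawn track banks.
    Per-track deposits ~ Exp(1), whose true mean is 1."""
    errs = []
    for _ in range(trials):
        bank = [rng.expovariate(1.0) for _ in range(bank_size)]
        errs.append(statistics.fmean(bank) - 1.0)
    return statistics.pstdev(errs)
```

    Growing the bank 16-fold should shrink the latent spread roughly 4-fold, mirroring the paper's observation that larger track banks trade memory for lower latent uncertainty.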

  11. Latent uncertainties of the precalculated track Monte Carlo method.

    PubMed

    Renaud, Marc-André; Roberge, David; Seuntjens, Jan

    2015-01-01

    While significant progress has been made in speeding up Monte Carlo (MC) dose calculation methods, they remain too time-consuming for the purpose of inverse planning. To achieve clinically usable calculation speeds, a precalculated Monte Carlo (PMC) algorithm for proton and electron transport was developed to run on graphics processing units (GPUs). The algorithm utilizes pregenerated particle track data from conventional MC codes for different materials such as water, bone, and lung to produce dose distributions in voxelized phantoms. While PMC methods have been described in the past, an explicit quantification of the latent uncertainty arising from the limited number of unique tracks in the pregenerated track bank is missing from the literature. With a proper uncertainty analysis, an optimal number of tracks in the pregenerated track bank can be selected for a desired dose calculation uncertainty. Particle tracks were pregenerated for electrons and protons using EGSnrc and GEANT4 and saved in a database. The PMC algorithm for track selection, rotation, and transport was implemented on the Compute Unified Device Architecture (CUDA) 4.0 programming framework. PMC dose distributions were calculated in a variety of media and compared to benchmark dose distributions simulated from the corresponding general-purpose MC codes in the same conditions. A latent uncertainty metric was defined and analysis was performed by varying the pregenerated track bank size and the number of simulated primary particle histories and comparing dose values to a "ground truth" benchmark dose distribution calculated to 0.04% average uncertainty in voxels with dose greater than 20% of Dmax. Efficiency metrics were calculated against benchmark MC codes on a single CPU core with no variance reduction. Dose distributions generated using PMC and benchmark MC codes were compared and found to be within 2% of each other in voxels with dose values greater than 20% of the maximum dose. 
In proton calculations, a small (≤1 mm) distance-to-agreement error was observed at the Bragg peak. Latent uncertainty was characterized for electrons and found to follow a Poisson distribution with the number of unique tracks per energy. A track bank of 12 energies and 60,000 unique tracks per pregenerated energy in water had a size of 2.4 GB and achieved a latent uncertainty of approximately 1% at an optimal efficiency gain over DOSXYZnrc. Larger track banks produced a lower latent uncertainty at the cost of increased memory consumption. Using an NVIDIA GTX 590, efficiency analysis showed an 807× efficiency increase over DOSXYZnrc for 16 MeV electrons in water and 508× for 16 MeV electrons in bone. The PMC method can calculate dose distributions for electrons and protons to a statistical uncertainty of 1% with a large efficiency gain over conventional MC codes. Before performing clinical dose calculations, models to calculate dose contributions from uncharged particles must be implemented. Following the successful implementation of these models, the PMC method will be evaluated as a candidate for inverse planning of modulated electron radiation therapy and scanned proton beams.

  12. Simulation of particle size distributions in Polar Mesospheric Clouds from Microphysical Models

    NASA Astrophysics Data System (ADS)

    Thomas, G. E.; Merkel, A.; Bardeen, C.; Rusch, D. W.; Lumpe, J. D.

    2009-12-01

    The size distribution of ice particles is perhaps the most important observable aspect of microphysical processes in Polar Mesospheric Cloud (PMC) formation and evolution. A conventional technique to derive such information is from optical observation of scattering, either passive solar scattering from photometric or spectrometric techniques, or active backscattering by lidar. We present simulated size distributions from two state-of-the-art models using CARMA sectional microphysics: WACCM/CARMA, in which CARMA is interactively coupled with WACCM3 (Bardeen et al, 2009), and stand-alone CARMA forced by WACCM3 meteorology (Merkel et al, this meeting). Both models provide well-resolved size distributions of ice particles as a function of height, location and time for realistic high-latitude summertime conditions. In this paper we present calculations of the UV scattered brightness at multiple scattering angles as viewed by the AIM Cloud Imaging and Particle Size (CIPS) satellite experiment. These simulations are then considered discretely-sampled “data” for the scattering phase function, which are inverted using a technique (Lumpe et al, this meeting) to retrieve particle size information. We employ a T-matrix scattering code which applies to a wide range of non-sphericity of the ice particles, using the conventional idealized prolate/oblate spheroidal shape. This end-to-end test of the relatively new scattering phase function technique provides insight into both the retrieval accuracy and the information content in passive remote sensing of PMC.

  13. Economic optimization of the energy transport component of a large distributed solar power plant

    NASA Technical Reports Server (NTRS)

    Turner, R. H.

    1976-01-01

    A solar thermal power plant with a field of collectors, each locally heating some transport fluid, requires a pipe network for eventual delivery of energy to the power generation equipment. For a given collector distribution and pipe network geometry, a technique is developed here which manipulates basic cost information and physical data in order to design an energy transport system with minimized cost, constrained by a calculated technical performance. For a given transport fluid and collector conditions, the method determines the network's pipe diameter, pipe thickness, and insulation thickness distributions associated with minimum system cost; these relative distributions are unique. Transport losses, including pump work and heat leak, are calculated as operating expenses and impact the total system cost. The minimum-cost system is readily selected. The technique is demonstrated on six candidate transport fluids to emphasize which parameters dominate the system cost and to provide basic decision data. Three different power plant output sizes are evaluated in each case to determine the severity of any diseconomy of scale.
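
    The trade-off described here, capital cost rising with pipe diameter while pumping losses fall steeply with it, can be sketched with a one-dimensional cost scan. All coefficients below are illustrative placeholders, not the paper's data:

```python
import math

def pipe_cost(D, L=100.0, m_dot=10.0, rho=800.0,
              c_pipe=200.0, c_energy=0.05, hours=8760 * 20, f=0.02):
    """Total cost (arbitrary currency) of one pipe run of diameter D (m):
    capital cost grows with diameter, while the Darcy-Weisbach pressure drop,
    and hence the lifetime pumping-energy cost, falls roughly as D**-5."""
    capital = c_pipe * D * L
    Q = m_dot / rho                            # volumetric flow, m^3/s
    v = Q / (math.pi * D ** 2 / 4.0)           # mean velocity, m/s
    dp = f * (L / D) * 0.5 * rho * v ** 2      # pressure drop, Pa
    pump_power = dp * Q                        # ideal pump power, W
    return capital + c_energy * pump_power * hours / 1000.0  # energy billed per kWh

def best_diameter(d_lo=0.05, d_hi=0.5, n=400):
    """Grid search for the diameter minimizing total cost."""
    ds = [d_lo + i * (d_hi - d_lo) / n for i in range(n + 1)]
    return min(ds, key=pipe_cost)
```

    The minimum sits strictly between the endpoints: thin pipes are cheap to buy but expensive to pump through, thick pipes the reverse, which is the essence of the optimization the abstract describes.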

  14. Size Evolution and Stochastic Models: Explaining Ostracod Size through Probabilistic Distributions

    NASA Astrophysics Data System (ADS)

    Krawczyk, M.; Decker, S.; Heim, N. A.; Payne, J.

    2014-12-01

    The biovolume of animals has functioned as an important benchmark for measuring evolution throughout geologic time. In our project, we examined the observed average body size of ostracods over time in order to understand the mechanism of size evolution in these marine organisms. The body size of ostracods has varied since the beginning of the Ordovician, when the first true ostracods appeared. We created a stochastic branching model to generate possible evolutionary trees of ostracod size. Using stratigraphic ranges for ostracods compiled from over 750 genera in the Treatise on Invertebrate Paleontology, we calculated overall speciation and extinction rates for our model. At each timestep in our model, new lineages can evolve or existing lineages can become extinct. Newly evolved lineages are assigned sizes based on their parent genera. We parameterized our model to generate neutral and directional changes in ostracod size to compare with the observed data. New sizes were chosen via a normal distribution: the neutral model selected size differentials centered on zero, allowing an equal chance of larger or smaller ostracods at each speciation, whereas the directional model centered the distribution on a negative value, giving a larger chance of smaller ostracods. Our data strongly suggest that ostracod evolution has followed a model that directionally pushes mean ostracod size down, rather than a neutral model. Our model was able to match the magnitude of the size decrease, but it produced a constant linear decrease, while the observed data show a much more rapid initial decrease followed by a roughly constant size. This nuance in the observed trends ultimately suggests a more complex mechanism of size evolution. In conclusion, probabilistic methods can provide valuable insight into the evolutionary mechanisms determining size evolution in ostracods.
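
    A branching model of the kind described can be sketched compactly; the speciation/extinction rates, population cap and seed below are illustrative placeholders, not the rates calibrated from the Treatise data:

```python
import random

def evolve(drift, steps=200, spec=0.2, ext=0.15, sd=0.05, rng=None):
    """One run of a stochastic branching model: each timestep, a lineage may go
    extinct, persist, or speciate; a child's log-size is the parent's plus
    Normal(drift, sd). drift = 0 is the neutral model, drift < 0 directional.
    Returns the mean final log-size (0 = starting size)."""
    rng = rng or random.Random(7)
    sizes = [0.0] * 50                  # 50 founding lineages at log-size 0
    for _ in range(steps):
        nxt = []
        for s in sizes:
            r = rng.random()
            if r < ext:
                continue                # extinction
            nxt.append(s)               # lineage persists
            if r > 1.0 - spec:          # speciation: add a child lineage
                nxt.append(s + rng.gauss(drift, sd))
        sizes = nxt[:2000]              # cap the population for tractability
        if not sizes:
            break
    return sum(sizes) / len(sizes) if sizes else float("nan")
```

    With the same seed, the directional run reproduces the neutral run shifted down by the accumulated drift, so the comparison between the two regimes is deterministic.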

  15. Assessing the failure of continuum formula for solid-solid drag force using discrete element method in large size ratios

    NASA Astrophysics Data System (ADS)

    Jalali, Payman; Hyppänen, Timo

    2017-06-01

    In loose or moderately dense particle mixtures, the contact forces between particles due to successive collisions create an average volumetric solid-solid drag force between different granular phases (of different particle sizes). The derivation of the mathematical formula for this drag force is based on the homogeneity of the mixture within the computational control volume. This assumption fails especially when the size ratio of the particles grows to a value of 10 or greater. This size-driven inhomogeneity is responsible for the deviation of the intergranular force from the continuum formula. In this paper, we have implemented discrete element method (DEM) simulations to obtain the volumetric mean force exchanged between granular phases with size ratios greater than 10. First, the force is calculated directly from DEM, averaged over a proper time window. Second, the continuum formula is applied to calculate the drag forces using the DEM quantities. We show that the two volumetric forces are in good agreement as long as the homogeneity condition is maintained. However, the relative motion of larger particles in a cloud of finer particles induces an inhomogeneous distribution of the finer particles around the larger ones. We present correction factors to the volumetric force obtained from the continuum formula.

  16. A hybrid analytical model for open-circuit field calculation of multilayer interior permanent magnet machines

    NASA Astrophysics Data System (ADS)

    Zhang, Zhen; Xia, Changliang; Yan, Yan; Geng, Qiang; Shi, Tingna

    2017-08-01

    Due to the complicated rotor structure and the nonlinear saturation of the rotor bridges, it is difficult to build a fast and accurate analytical field-calculation model for multilayer interior permanent magnet (IPM) machines. In this paper, a hybrid analytical model suitable for the open-circuit field calculation of multilayer IPM machines is proposed by coupling the magnetic equivalent circuit (MEC) method with the subdomain technique. In the proposed model, the rotor magnetic field is calculated by the MEC method based on Kirchhoff's laws, while the field in the stator slots, slot openings and air gap is calculated by the subdomain technique based on Maxwell's equations. To solve for the whole field distribution of a multilayer IPM machine, coupled boundary conditions on the rotor surface are deduced to link the rotor MEC with the analytical field distribution of the stator slots, slot openings and air gap. The hybrid analytical model can be used to calculate the open-circuit air-gap field distribution, back electromotive force (EMF) and cogging torque of multilayer IPM machines. Compared with finite element analysis (FEA), it offers faster modeling, lower computational cost and shorter run times, while achieving comparable accuracy. The analytical model is helpful and applicable for the open-circuit field calculation of multilayer IPM machines of any size and pole/slot number combination.

  17. Extracting Micro-Doppler Radar Signatures from Rotating Targets Using Fourier-Bessel Transform and Time-Frequency Analysis

    DTIC Science & Technology

    2014-10-16

    Keywords: time-frequency analysis, short-time Fourier transform, Wigner-Ville distribution, Fourier-Bessel transform, fractional Fourier transform. I. INTRODUCTION: The most widely used time-frequency transforms are the short-time Fourier transform (STFT) and the Wigner-Ville distribution (WVD). In the STFT, the time and frequency resolutions are limited by the size of the window function used in calculating the STFT. For mono-component signals, the WVD gives the best time and frequency resolution.
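
    The window-size trade-off mentioned for the STFT can be demonstrated numerically: a longer analysis window narrows the spectral peak of a sinusoid (better frequency resolution) at the cost of time localization. A minimal DFT-based sketch (all lengths and the test frequency are illustrative):

```python
import cmath
import math

def stft_peak_width(window_len, sig_len=512, freq=0.25):
    """Magnitude spectrum of one rectangular-windowed frame of a sinusoid,
    zero-padded to sig_len; returns the number of DFT bins above half the
    peak height. Longer windows give a narrower main lobe."""
    frame = [math.sin(2.0 * math.pi * freq * n) if n < window_len else 0.0
             for n in range(sig_len)]
    spec = [abs(sum(frame[n] * cmath.exp(-2j * math.pi * k * n / sig_len)
                    for n in range(sig_len)))
            for k in range(sig_len // 2)]
    half = max(spec) / 2.0
    return sum(1 for s in spec if s > half)
```

    Quadrupling the window length shrinks the half-power width of the peak by roughly the same factor, which is exactly the resolution limit the introduction refers to.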

  18. Characterization of magnetic nanoparticle by dynamic light scattering

    PubMed Central

    2013-01-01

    Here we provide a complete review on the use of dynamic light scattering (DLS) to study the size distribution and colloidal stability of magnetic nanoparticles (MNPs). The mathematical analysis involved in obtaining size information from the correlation function and the calculation of Z-average are introduced. Contributions from various variables, such as surface coating, size differences, and concentration of particles, are elaborated within the context of measurement data. Comparison with other sizing techniques, such as transmission electron microscopy and dark-field microscopy, revealed both the advantages and disadvantages of DLS in measuring the size of magnetic nanoparticles. The self-assembly process of MNP with anisotropic structure can also be monitored effectively by DLS. PMID:24011350
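
    The correlation-function analysis summarized above reduces, in the simplest single-exponential (cumulant) case, to converting a measured decay rate into a diffusion coefficient and then into a hydrodynamic diameter via the Stokes-Einstein relation. A minimal sketch, assuming a 633 nm laser, backscatter detection at 173°, and water at 25 °C (illustrative defaults, not values from the review):

```python
import numpy as np

def hydrodynamic_diameter(gamma, wavelength=633e-9, angle_deg=173.0,
                          n_medium=1.33, temp=298.15, viscosity=0.89e-3):
    """Convert a DLS decay rate into a hydrodynamic diameter (metres).

    gamma: decay rate of the field correlation function g1(t) = exp(-gamma*t),
    in 1/s, as obtained from a first-order cumulant fit.
    """
    k_B = 1.380649e-23  # Boltzmann constant, J/K
    theta = np.deg2rad(angle_deg)
    # magnitude of the scattering vector
    q = 4.0 * np.pi * n_medium * np.sin(theta / 2.0) / wavelength
    # translational diffusion coefficient from gamma = D * q^2
    D = gamma / q**2
    # Stokes-Einstein relation: d = k_B T / (3 pi eta D)
    return k_B * temp / (3.0 * np.pi * viscosity * D)
```

    For polydisperse samples the cumulant fit yields an intensity-weighted mean decay rate, which is what makes the resulting Z-average diameter biased toward larger particles, as the review discusses.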

  19. Rock magnetic properties estimated from coercivity - blocking temperature diagram: application to recent volcanic rocks

    NASA Astrophysics Data System (ADS)

    Terada, T.; Sato, M.; Mochizuki, N.; Yamamoto, Y.; Tsunakawa, H.

    2013-12-01

    Magnetic properties of ferromagnetic minerals generally depend on their chemical composition, crystal structure, size, and shape. In a typical paleomagnetic study, we use a bulk sample that is an assemblage of magnetic minerals showing broad distributions of various magnetic properties. Microscopic and Curie-point observations of the bulk sample enable us to identify the constituent magnetic minerals, while other measurements, for example stepwise thermal and/or alternating field demagnetizations (ThD, AFD), make it possible to estimate the size, shape and domain state of the constituent magnetic grains. However, estimation based on stepwise demagnetization has the limitation that magnetic grains with the same coercivity Hc (or blocking temperature Tb) are identified as a single population even though they could have different sizes and shapes. Dunlop and West (1969) carried out mapping of grain size and coercivity (Hc) using pTRM. However, their mapping method is basically applicable only to natural rocks containing SD grains, since the grain sizes are estimated on the basis of single domain theory (Neel, 1949). In addition, it is impossible to check thermal alteration due to laboratory heating in their experiment. In the present study we propose a new experimental method that makes it possible to estimate the distribution of size and shape of magnetic minerals in a bulk sample. The method consists of simple procedures: (1) imparting ARM to a bulk sample, (2) ThD at a certain temperature, (3) stepwise AFD on the remaining ARM, and (4) repeating steps (1)-(3) with ThD at temperatures elevated up to the Curie temperature of the sample. After completion of the whole procedure, ARM spectra are calculated and mapped on the Hc-Tb plane (hereafter called the Hc-Tb diagram).
We analyze the Hc-Tb diagrams as follows: (1) For uniaxial SD populations, a theoretical curve for a given grain size (or shape anisotropy) is drawn on the Hc-Tb diagram. The curves are calculated using single domain theory, since the coercivity and blocking temperature of uniaxial SD grains can be expressed as functions of size and shape. (2) The boundary between SD and MD grains is calculated and drawn on the Hc-Tb diagram according to the theory of Butler and Banerjee (1975). (3) The theoretical predictions from (1) and (2) are compared with the obtained ARM spectra to estimate the quantitative distribution of size, shape and domain state of magnetic grains in the sample. This mapping method has been applied to three samples: a Hawaiian basaltic lava extruded in 1995, the Ueno basaltic lava formed during the Matuyama chron, and the Oshima basaltic lava extruded in 1986. We will discuss the physical states of the magnetic grains (size, shape, domain state, etc.) and their possible origins.

  20. Modeling growth and dissolution of inclusions during fusion welding of steels

    NASA Astrophysics Data System (ADS)

    Hong, Tao

    The characteristics of inclusions in weld metals are critical factors determining the structure, properties and performance of weldments. The research in this thesis applied computational modeling to study inclusion behavior, considering the thermodynamics and kinetics of nucleation, growth and dissolution of an inclusion along its trajectory, calculated from a heat transfer and fluid flow model of the weld pool. The objective of this research is to predict the characteristics of inclusions, such as composition, size distribution, and number density, in the weld metal for different welding parameters and steel compositions. To synthesize the knowledge of the thermodynamics and kinetics of nucleation, growth and dissolution of inclusions in liquid metal, a set of time-temperature-transformation (TTT) diagrams is constructed to represent the effects of time and temperature on the isothermal growth and dissolution behavior of fourteen types of individual inclusions. The non-isothermal growth and dissolution behavior of inclusions is predicted from the isothermal behavior by constructing continuous-cooling-transformation (CCT) diagrams using the Scheil additive rule. A well-verified fluid flow and heat transfer model developed at Penn State is used to calculate the temperature and velocity fields in the weld pool for different welding processes. A turbulence model considering enhanced viscosity and thermal conductivity (k-ε model) is applied. The calculations show that there is vigorous circulation of metal in the weld pool. The heat transfer and fluid flow model provides not only an understanding of the fundamentals of the physical phenomena during welding, but also the basis for studying the growth and dissolution of inclusions. Particle-tracking calculations for thousands of inclusions show that most inclusions undergo complex gyrations and thermal cycles in the weld pool. The inclusions experience both growth and dissolution during their lifetime.
Thermal cycles of thousands of inclusions nucleated in the liquid region are tracked, and their growth and dissolution are calculated to estimate the final size distribution and number density of inclusions statistically. The calculations show that welding conditions and weld metal compositions affect the inclusion characteristics significantly. Good agreement between the computed and the experimentally observed inclusion size distributions indicates that inclusion behavior in the weld pool can be understood from the fundamentals of transport phenomena and transformation kinetics.

  1. Calculation Of Clinopyroxene And Olivine Growth Rates Using Plagioclase Residence Time

    NASA Astrophysics Data System (ADS)

    Kilinc, A. I.; Borell, A.; Leu, A.

    2012-12-01

    According to crystal size distribution (CSD) theory, a plot of the logarithm of the number of crystals of a given size range per unit volume, ln(n), against crystal size, L, yields a straight line. The slope of that line is -1/(G*tau), where tau is the crystal residence time and G is the crystal growth rate. Therefore, if tau is known, G can be calculated. We used thin sections of the Kilauea basalt from Hawaii, where olivine, clinopyroxene and plagioclase crystallized within a small temperature range and the crystal growth rate of plagioclase is known. Assuming that the crystal residence times of these three minerals are the same, we plotted ln(n) against L and, using the slopes and the known crystal growth rate of plagioclase, calculated the crystal growth rates of clinopyroxene and olivine. For the clinopyroxene growth rate we report 10^-10.9 cm/sec, which is in good agreement with Congdon's value of 10^-10 cm/sec. We also calculated the growth rate of olivine in a basaltic melt as 10^-8.5 cm/sec, which is comparable to the range of 10^-10 to 10^-7 cm/sec given by Donaldson and Jambon.
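
    The procedure above can be sketched numerically: fit the slope of ln(n) versus L for each mineral, then use slope = -1/(G*tau) and the shared residence time to transfer the known plagioclase growth rate to the other phases. A minimal sketch with synthetic data (not the Kilauea measurements):

```python
import numpy as np

def csd_slope(sizes, counts):
    """Least-squares slope of ln(n) versus crystal size L for a CSD plot."""
    return np.polyfit(sizes, np.log(counts), 1)[0]

def growth_rate_from_reference(slope_target, slope_ref, g_ref):
    """Under a shared residence time tau, slope = -1/(G*tau) for each phase,
    so tau cancels and G_target = G_ref * slope_ref / slope_target."""
    return g_ref * slope_ref / slope_target
```

    Because tau cancels in the ratio, only the reference growth rate and the two fitted slopes are needed, which is exactly why a known plagioclase growth rate suffices to recover those of clinopyroxene and olivine.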

  2. Strategy Guideline: HVAC Equipment Sizing

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Burdick, A.

    The heating, ventilation, and air conditioning (HVAC) system is arguably the most complex system installed in a house and is a substantial component of the total house energy use. A right-sized HVAC system will provide the desired occupant comfort and will run efficiently. This Strategy Guideline discusses the information needed to initially select the equipment for a properly designed HVAC system. Right-sizing of an HVAC system involves the selection of equipment and the design of the air distribution system to meet the accurate predicted heating and cooling loads of the house. Right-sizing the HVAC system begins with an accurate understanding of the heating and cooling loads on a space; however, a full HVAC design involves more than just the load estimate calculation - the load calculation is the first step of the iterative HVAC design procedure. This guide describes the equipment selection of a split system air conditioner and furnace for an example house in Chicago, IL as well as a heat pump system for an example house in Orlando, Florida. The required heating and cooling load information for the two example houses was developed in the Department of Energy Building America Strategy Guideline: Accurate Heating and Cooling Load Calculations.

  3. Direct measurements of temperature-dependent laser absorptivity of metal powders

    DOE PAGES

    Rubenchik, A.; Wu, S.; Mitchell, S.; ...

    2015-08-12

    Here, a compact system is developed to measure laser absorptivity for a variety of powder materials (metals, ceramics, etc.) with different powder size distributions and thicknesses. The measured results for several metal powders are presented. The results are consistent with those from ray tracing calculations.

  4. Direct measurements of temperature-dependent laser absorptivity of metal powders

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rubenchik, A.; Wu, S.; Mitchell, S.

    Here, a compact system is developed to measure laser absorptivity for a variety of powder materials (metals, ceramics, etc.) with different powder size distributions and thicknesses. The measured results for several metal powders are presented. The results are consistent with those from ray tracing calculations.

  5. Recurrence time statistics for finite size intervals

    NASA Astrophysics Data System (ADS)

    Altmann, Eduardo G.; da Silva, Elton C.; Caldas, Iberê L.

    2004-12-01

    We investigate the statistics of recurrences to finite size intervals for chaotic dynamical systems. We find that the typical distribution presents an exponential decay for almost all recurrence times except for a few short times affected by a kind of memory effect. We interpret this effect as being related to the unstable periodic orbits inside the interval. Although it is restricted to a few short times, it changes the whole distribution of recurrences. We show that for systems with strong mixing properties the exponential decay converges to Poissonian statistics as the width of the interval goes to zero. However, we caution that special attention to the size of the interval is required in order to guarantee that the short-time memory effect is negligible when one is interested in numerically or experimentally calculated Poincaré recurrence time statistics.
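
    The recurrence statistics discussed above are easy to reproduce for a simple strongly mixing system. The sketch below uses the fully chaotic logistic map (an illustrative choice, not one of the paper's systems) and checks the sampled mean recurrence time against Kac's lemma, which predicts a mean return time of 1/mu(I) for an interval of invariant measure mu(I):

```python
import numpy as np

def recurrence_times(a, b, n_steps=200_000, x0=0.1234):
    """Times between successive visits of the logistic map x -> 4x(1-x)
    to the interval [a, b]."""
    x = x0
    times, last = [], None
    for t in range(n_steps):
        x = 4.0 * x * (1.0 - x)
        if a <= x <= b:
            if last is not None:
                times.append(t - last)
            last = t
    return np.array(times)

def kac_prediction(a, b):
    """Kac's lemma for the logistic map: mean return time = 1/mu([a,b]),
    using the closed-form invariant measure mu([a,b]) = (2/pi) *
    (arcsin(sqrt(b)) - arcsin(sqrt(a)))."""
    mu = (2.0 / np.pi) * (np.arcsin(np.sqrt(b)) - np.arcsin(np.sqrt(a)))
    return 1.0 / mu
```

    For the interval [0.2, 0.3] the Kac prediction is about 13.5 iterations, and the empirical mean agrees to within a few percent; a histogram of the sampled times shows the roughly exponential tail described in the abstract, with deviations at the shortest times.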

  6. The origin of dispersion of magnetoresistance of a domain wall spin valve

    NASA Astrophysics Data System (ADS)

    Sato, Jun; Matsushita, Katsuyoshi; Imamura, Hiroshi

    2010-01-01

    We theoretically study the current-perpendicular-to-plane magnetoresistance of a domain wall confined in a nanocontact, experimentally fabricated as a current-confined-path (CCP) structure in a nano-oxide layer (NOL). We solve the non-collinear spin diffusion equation using the finite element method and calculate the MR ratio by evaluating the additional voltage drop due to spin accumulation. We investigate the origin of the dispersion of magnetoresistance by considering the effect of randomness in the size and spatial distribution of the nanocontacts in the NOL. We observe that the effect of randomness in contact size is much larger than that of the contact distribution. Our results suggest that the origin of the dispersion of magnetoresistance observed in experiments is the randomness of the size of the nanocontacts in the NOL.

  7. Vessel Sampling and Blood Flow Velocity Distribution With Vessel Diameter for Characterizing the Human Bulbar Conjunctival Microvasculature.

    PubMed

    Wang, Liang; Yuan, Jin; Jiang, Hong; Yan, Wentao; Cintrón-Colón, Hector R; Perez, Victor L; DeBuc, Delia C; Feuer, William J; Wang, Jianhua

    2016-03-01

    This study determined (1) how many vessels (i.e., the vessel sampling) are needed to reliably characterize the bulbar conjunctival microvasculature and (2) whether characteristic information can be obtained from the distribution histograms of blood flow velocity and vessel diameter. A functional slit-lamp biomicroscope was used to image hundreds of venules per subject. The bulbar conjunctiva in five healthy human subjects was imaged at six different locations in the temporal bulbar conjunctiva. Histograms of diameter and velocity were plotted to examine whether the distributions were normal. Standard errors were calculated from the standard deviation and vessel sample size. The ratio of the standard error of the mean over the population mean was used to determine the sample size cutoff. Velocity was plotted as a function of vessel diameter to display the joint distribution of diameter and velocity. The results showed that the required sampling size was approximately 15 vessels, which generated a standard error equivalent to 15% of the population mean of the total vessel population. The distributions of diameter and velocity were unimodal but somewhat positively skewed and not normal. Blood flow velocity was related to vessel diameter (r=0.23, P<0.05). This was the first study to determine the sampling size of the vessels and the distribution histograms of blood flow velocity and vessel diameter, which may lead to a better understanding of the human microvascular system of the bulbar conjunctiva.
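
    The sampling-size criterion above (standard error within 15% of the population mean) follows directly from SE = SD/sqrt(n). A minimal sketch; the coefficient of variation of 0.58 in the example below is an illustrative assumption chosen to reproduce the ~15-vessel figure, not a value reported in the study:

```python
import math

def sample_size_for_relative_se(cv, rel_se=0.15):
    """Smallest n with (SD/sqrt(n))/mean <= rel_se.

    cv is the coefficient of variation SD/mean of the measurements.
    From SE/mean = cv/sqrt(n) <= rel_se we get n >= (cv/rel_se)^2.
    """
    return math.ceil((cv / rel_se) ** 2)
```

    With cv = 0.58 the rule gives n = 15, consistent with the reported cutoff; measurements with a tighter spread would need proportionally fewer vessels.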

  8. Sample size calculation for stepped wedge and other longitudinal cluster randomised trials.

    PubMed

    Hooper, Richard; Teerenstra, Steven; de Hoop, Esther; Eldridge, Sandra

    2016-11-20

    The sample size required for a cluster randomised trial is inflated compared with an individually randomised trial because outcomes of participants from the same cluster are correlated. Sample size calculations for longitudinal cluster randomised trials (including stepped wedge trials) need to take account of at least two levels of clustering: the clusters themselves and times within clusters. We derive formulae for sample size for repeated cross-section and closed cohort cluster randomised trials with normally distributed outcome measures, under a multilevel model allowing for variation between clusters and between times within clusters. Our formulae agree with those previously described for special cases such as crossover and analysis of covariance designs, although simulation suggests that the formulae could underestimate required sample size when the number of clusters is small. Whether using a formula or simulation, a sample size calculation requires estimates of nuisance parameters, which in our model include the intracluster correlation, cluster autocorrelation, and individual autocorrelation. A cluster autocorrelation less than 1 reflects a situation where individuals sampled from the same cluster at different times have less correlated outcomes than individuals sampled from the same cluster at the same time. Nuisance parameters could be estimated from time series obtained in similarly clustered settings with the same outcome measure, using analysis of variance to estimate variance components. Copyright © 2016 John Wiley & Sons, Ltd.
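
    For intuition about the inflation described above, the classic parallel-group cluster randomised trial uses the design effect 1 + (m - 1)*rho, where m is the cluster size and rho the intracluster correlation. This simple cross-sectional formula (a special case, not the paper's full multilevel formulae with cluster and individual autocorrelations) can be sketched as:

```python
import math

def cluster_trial_size(n_individual, cluster_size, icc):
    """Inflate an individually randomised sample size by the design effect
    1 + (m - 1) * icc for a parallel cluster randomised trial.

    Returns (number of clusters, total participants), both rounded up.
    """
    deff = 1.0 + (cluster_size - 1.0) * icc
    n_total = n_individual * deff
    n_clusters = math.ceil(n_total / cluster_size)
    return n_clusters, math.ceil(n_total)
```

    For example, an individually randomised requirement of 128 participants with clusters of 20 and ICC = 0.05 inflates to 250 participants in 13 clusters. Longitudinal designs modify this further through the between-period correlation structure, which is precisely what the paper's formulae capture.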

  9. The Impact of Aerosols on Cloud and Precipitation Processes: Cloud-Resolving Model Simulations

    NASA Technical Reports Server (NTRS)

    Tao, Wei-Kuo; Khain, A.; Simpson, S.; Johnson, D.; Li, X.; Remer, L.

    2003-01-01

    Cloud microphysics are inevitably affected by the smoke particle (CCN, cloud condensation nuclei) size distributions below the clouds. Therefore, size distributions parameterized as spectral bin microphysics are needed to explicitly study the effect of atmospheric aerosol concentration on cloud development, rainfall production, and rainfall rates for convective clouds. Recently, two detailed spectral-bin microphysical schemes were implemented into the Goddard Cumulus Ensemble (GCE) model. The formulation for the explicit spectral-bin microphysical processes is based on solving stochastic kinetic equations for the size distribution functions of water droplets (i.e., cloud droplets and raindrops) and several types of ice particles [i.e., pristine ice crystals (columnar and plate-like), snow (dendrites and aggregates), graupel and frozen drops/hail]. Each type is described by a special size distribution function containing many categories (i.e., 33 bins). Atmospheric aerosols are also described using number density size-distribution functions. A spectral-bin microphysical model is very expensive from a computational point of view and has only been implemented into the 2D version of the GCE at the present time. The model is tested by studying the evolution of deep cloud systems in the west Pacific warm pool region and in the mid-latitudes using identical thermodynamic conditions but with different concentrations of CCN: a low "clean" concentration and a high "dirty" concentration. Besides the initial differences in aerosol concentration, preliminary results indicate that the low CCN concentration case produces rainfall at the surface sooner than the high CCN case but has less cloud water mass aloft. Because the spectral-bin model explicitly calculates and allows for the examination of both the mass and number concentration of species in each size category, a detailed analysis of the instantaneous size spectrum can be obtained for the two cases.
It is shown that since the low CCN case produces fewer droplets, larger sizes develop due to greater condensational and collectional growth, leading to a broader size spectrum in comparison to the high CCN case.

  10. The Impact of Aerosols on Cloud and Precipitation Processes: Cloud-Resolving Model Simulations

    NASA Technical Reports Server (NTRS)

    Tao, Wei-Kuo; Khain, A.; Simpson, S.; Johnson, D.; Li, X.; Remer, L.

    2003-01-01

    Cloud microphysics are inevitably affected by the smoke particle (CCN, cloud condensation nuclei) size distributions below the clouds. Therefore, size distributions parameterized as spectral bin microphysics are needed to explicitly study the effects of atmospheric aerosol concentration on cloud development, rainfall production, and rainfall rates for convective clouds. Recently, two detailed spectral-bin microphysical schemes were implemented into the Goddard Cumulus Ensemble (GCE) model. The formulation for the explicit spectral-bin microphysical processes is based on solving stochastic kinetic equations for the size distribution functions of water droplets (i.e., cloud droplets and raindrops) and several types of ice particles [i.e., pristine ice crystals (columnar and plate-like), snow (dendrites and aggregates), graupel and frozen drops/hail]. Each type is described by a special size distribution function containing many categories (i.e., 33 bins). Atmospheric aerosols are also described using number density size-distribution functions. A spectral-bin microphysical model is very expensive from a computational point of view and has only been implemented into the 2D version of the GCE at the present time. The model is tested by studying the evolution of deep tropical clouds in the west Pacific warm pool region using identical thermodynamic conditions but with different concentrations of CCN: a low "clean" concentration and a high "dirty" concentration. Besides the initial differences in aerosol concentration, preliminary results indicate that the low CCN concentration case produces rainfall at the surface sooner than the high CCN case but has less cloud water mass aloft. Because the spectral-bin model explicitly calculates and allows for the examination of both the mass and number concentration of species in each size category, a detailed analysis of the instantaneous size spectrum can be obtained for the two cases.
It is shown that since the low CCN case produces fewer droplets, larger sizes develop due to the greater condensational and collectional growth, leading to a broader size spectrum in comparison to the high CCN case.

  11. Radiation Field Forming for Industrial Electron Accelerators Using Rare-Earth Magnetic Materials

    NASA Astrophysics Data System (ADS)

    Ermakov, A. N.; Khankin, V. V.; Shvedunov, N. V.; Shvedunov, V. I.; Yurov, D. S.

    2016-09-01

    The article describes a radiation field forming system for industrial electron accelerators that produces a uniform distribution of linear charge density on the surface of the item being irradiated, perpendicular to the direction of its motion. Its main element is a non-linear quadrupole lens made with the use of rare-earth magnetic materials. The proposed system has a number of advantages over traditional beam scanning systems that use electromagnets, including easier product irradiation planning, lower instantaneous local dose rate, smaller size, and lower cost. Provided are the calculation results for a 10 MeV industrial electron accelerator, as well as measurement results for the current distribution in a prototype built based on the calculations.

  12. Apical stress distribution under vertical compaction of gutta-percha and occlusal loads in canals with varying apical sizes: a three-dimensional finite element analysis.

    PubMed

    Yuan, K; Niu, C; Xie, Q; Jiang, W; Gao, L; Ma, R; Huang, Z

    2018-02-01

    To investigate and compare the effects of two apical canal instrumentation protocols on apical stress distribution at the root apex under vertical compaction of gutta-percha and occlusal loads using finite element analysis. Three finite element analysis models of a mandibular first premolar were reconstructed: an original canal model, a size 35, .04 taper apical canal enlargement model and a Lightspeed size 60 apical canal enlargement model. A 15 N compaction force was applied vertically to the gutta-percha 5 mm from the apex. A 175 N occlusal load in two directions (vertical and at 45° to the longitudinal axis of the tooth) was simulated. Stresses in the apical 2 mm of the root were calculated and compared among the three models. Under vertical compaction, stresses in the apical canal instrumented with Lightspeed size 60 (maximum 3.3 MPa) were higher than those in the size 35, .04 taper model (maximum 1.3 MPa). For the two occlusal loads, the Lightspeed size 60 apical enlargement was associated with the greatest stress distribution in the apical region. The greatest stress, and the most obvious stress difference between the models, appeared at the tip of the root when occlusal and vertical compaction loads were applied. Apical enlargement caused stress distribution changes in the apical region of roots. A larger apical size led to higher stress concentration at the root apex. © 2017 International Endodontic Journal. Published by John Wiley & Sons Ltd.

  13. Statistical computation of tolerance limits

    NASA Technical Reports Server (NTRS)

    Wheeler, J. T.

    1993-01-01

    Based on a new theory, two computer codes were developed specifically to calculate exact statistical tolerance limits for normal distributions with unknown means and variances, for the one-sided and two-sided cases, in terms of the tolerance factor k. The quantity k is defined equivalently in terms of the noncentral t-distribution by the probability equation. Two of the four mathematical methods employ the theory developed for the numerical simulation. Several algorithms for numerically integrating and iteratively root-solving the working equations are written to augment the program simulation. The program codes generate tables of k associated with varying values of the proportion and sample size for each given probability, to show the accuracy obtained for small sample sizes.
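
    The probability equation defining k via the noncentral t-distribution has a well-known closed form in the one-sided case: k = t'_{n-1, delta}(gamma) / sqrt(n), where delta = z_p * sqrt(n), p is the coverage proportion and gamma the confidence level. A sketch using SciPy (a modern reimplementation of the standard result, not the report's codes):

```python
import numpy as np
from scipy.stats import nct, norm

def one_sided_k(n, coverage=0.90, confidence=0.95):
    """Exact one-sided normal tolerance factor.

    k = t'_{n-1, delta}(confidence) / sqrt(n), with noncentrality
    delta = z_coverage * sqrt(n); the upper limit xbar + k*s then covers
    the stated proportion of the population with the stated confidence.
    """
    delta = norm.ppf(coverage) * np.sqrt(n)
    return nct.ppf(confidence, df=n - 1, nc=delta) / np.sqrt(n)
```

    For n = 10, 90% coverage and 95% confidence this gives k ≈ 2.355, matching standard tolerance-factor tables; as n grows, k decreases toward z_p. The two-sided case has no closed form of this kind, which is where numerical integration and root-solving of the sort described in the report come in.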

  14. Assessment of analytical techniques for predicting solid propellant exhaust plumes

    NASA Technical Reports Server (NTRS)

    Tevepaugh, J. A.; Smith, S. D.; Penny, M. M.

    1977-01-01

    The calculation of solid propellant exhaust plume flow fields is addressed. Two major areas covered are: (1) the applicability of empirical data currently available to define particle drag coefficients, heat transfer coefficients, mean particle size and particle size distributions, and (2) thermochemical modeling of the gaseous phase of the flow field. Comparisons of experimentally measured and analytically predicted data are made. The experimental data were obtained for subscale solid propellant motors with aluminum loadings of 2, 10 and 15%. Analytical predictions were made using a fully coupled two-phase numerical solution. Data comparisons will be presented for radial distributions at plume axial stations of 5, 12, 16 and 20 diameters.

  15. Ejecta Production and Properties

    NASA Astrophysics Data System (ADS)

    Williams, Robin

    2017-06-01

    The interaction of an internal shock with the free surface of a dense material leads to the production of jets of particulate material from the surface into its environment. Understanding the processes which control the production of these jets -- both their occurrence, and properties such as the mass, velocity, and particle size distribution of material injected -- has been a topic of active research at AWE for over 50 years. I will discuss the effect of material physics, such as strength and spall, on the production of ejecta, drawing on experimental history and recent calculations, and consider the processes which determine the distribution of particle sizes which result as ejecta jets break up. British Crown Owned Copyright 2017/AWE.

  16. Time Dependent Density Functional Theory Calculations of Large Compact PAH Cations: Implications for the Diffuse Interstellar Bands

    NASA Technical Reports Server (NTRS)

    Weisman, Jennifer L.; Lee, Timothy J.; Salama, Farid; Head-Gordon, Martin; Kwak, Dochan (Technical Monitor)

    2002-01-01

    We investigate the electronic absorption spectra of several maximally pericondensed polycyclic aromatic hydrocarbon radical cations with time dependent density functional theory calculations. We find interesting trends in the vertical excitation energies and oscillator strengths for this series containing pyrene through circumcoronene, the largest species containing more than 50 carbon atoms. We discuss the implications of these new results for the size and structure distribution of the diffuse interstellar band carriers.

  17. Discussion about the use of the volume specific surface area (VSSA) as a criterion to identify nanomaterials according to the EU definition. Part two: experimental approach.

    PubMed

    Lecloux, André J; Atluri, Rambabu; Kolen'ko, Yury V; Deepak, Francis Leonard

    2017-10-12

    The first part of this study was dedicated to the modelling of the influence of particle shape, porosity and particle size distribution on the volume specific surface area (VSSA) values in order to check the applicability of this concept to the identification of nanomaterials according to the European Commission Recommendation. In this second part, experimental VSSA values are obtained for various samples from nitrogen adsorption isotherms and these values were used as a screening tool to identify and classify nanomaterials. These identification results are compared to the identification based on the 50% of particles with a size below 100 nm criterion applied to the experimental particle size distributions obtained by analysis of electron microscopy images on the same materials. It is concluded that the experimental VSSA values are able to identify nanomaterials, without false negative identification, if they have a mono-modal particle size, if the adsorption data cover the relative pressure range from 0.001 to 0.65 and if a simple, qualitative image of the particles by transmission or scanning electron microscopy is available to define their shape. The experimental conditions to obtain reliable adsorption data as well as the way to analyze the adsorption isotherms are described and discussed in some detail in order to help the reader in using the experimental VSSA criterion. To obtain the experimental VSSA values, the BET surface area can be used for non-porous particles, but for porous, nanostructured or coated nanoparticles, only the external surface of the particles, obtained by a modified t-plot approach, should be considered to determine the experimental VSSA and to avoid false positive identification of nanomaterials, only the external surface area being related to the particle size. 
Finally, the availability of experimental VSSA values together with particle size distributions obtained by electron microscopy gave the opportunity to check the representativeness of the two models described in the first part of this study. The models were also used to calculate VSSA values, and these calculated values were compared with the experimental results. For narrow particle size distributions, both models give similar VSSA values quite comparable to the experimental ones. But when the particle size distribution broadens or becomes bimodal or multimodal, as theoretically predicted, one model leads to VSSA values higher than the experimental ones while the other most often leads to VSSA values lower than the experimental ones. The experimental VSSA approach thus appears to be a reliable, simple screening tool to distinguish nano- from non-nanomaterials. The modelling approach cannot be used as a formal identification tool but could be useful for screening potential effects of shape, polydispersity and size, for example to compare various possible nanoforms.
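
    For orientation, the sphere model underlying the VSSA criterion is simple: for monodisperse spheres VSSA = 6/d, so the European Commission threshold of 60 m²/cm³ corresponds exactly to d = 100 nm, and for a polydisperse set the surface and volume sums are weighted by d² and d³ respectively. A minimal sketch (an idealized sphere model for illustration, not the two models of Part 1):

```python
import numpy as np

def vssa_spheres(diameters_nm, counts=None):
    """Volume specific surface area (m^2/cm^3) of polydisperse spheres.

    VSSA = 6 * sum(n_i d_i^2) / sum(n_i d_i^3) with d in metres gives
    m^2 per m^3; dividing by 1e6 converts to the customary m^2/cm^3.
    """
    d = np.asarray(diameters_nm, dtype=float) * 1e-9
    n = np.ones_like(d) if counts is None else np.asarray(counts, dtype=float)
    vssa_m2_per_m3 = 6.0 * np.sum(n * d**2) / np.sum(n * d**3)
    return vssa_m2_per_m3 / 1e6
```

    Because the ratio is surface-to-volume weighted, a few large particles in a broad distribution pull the VSSA down sharply, which is one reason the abstract reports model deviations for broad and multimodal distributions.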

  18. MCNP-based computational model for the Leksell gamma knife.

    PubMed

    Trnka, Jiri; Novotny, Josef; Kluson, Jaroslav

    2007-01-01

    We have focused on the usage of MCNP code for calculation of Gamma Knife radiation field parameters with a homogenous polystyrene phantom. We have investigated several parameters of the Leksell Gamma Knife radiation field and compared the results with other studies based on EGS4 and PENELOPE code as well as the Leksell Gamma Knife treatment planning system Leksell GammaPlan (LGP). The current model describes all 201 radiation beams together and simulates all the sources in the same time. Within each beam, it considers the technical construction of the source, the source holder, collimator system, the spherical phantom, and surrounding material. We have calculated output factors for various sizes of scoring volumes, relative dose distributions along basic planes including linear dose profiles, integral doses in various volumes, and differential dose volume histograms. All the parameters have been calculated for each collimator size and for the isocentric configuration of the phantom. We have found the calculated output factors to be in agreement with other authors' works except the case of 4 mm collimator size, where averaging over the scoring volume and statistical uncertainties strongly influences the calculated results. In general, all the results are dependent on the choice of the scoring volume. The calculated linear dose profiles and relative dose distributions also match independent studies and the Leksell GammaPlan, but care must be taken about the fluctuations within the plateau, which can influence the normalization, and accuracy in determining the isocenter position, which is important for comparing different dose profiles. The calculated differential dose volume histograms and integral doses have been compared with data provided by the Leksell GammaPlan. The dose volume histograms are in good agreement as well as integral doses calculated in small calculation matrix volumes. 
However, deviations in integral doses of up to 50% can be observed for large volumes such as the total skull volume. The differences in the treatment of scattered radiation between the MC method and the LGP may be important in this case. We have also studied the influence of differential direction sampling of primary photons and have found that, due to the anisotropic sampling, doses around the isocenter deviate from each other by up to 6%. Provided care is taken with the details of the calculation settings, it is possible to employ the MCNP Monte Carlo code for independent verification of the Leksell Gamma Knife radiation field properties.

  19. Sample size determination for mediation analysis of longitudinal data.

    PubMed

    Pan, Haitao; Liu, Suyu; Miao, Danmin; Yuan, Ying

    2018-03-27

Sample size planning for longitudinal data is crucial when designing mediation studies, because sufficient statistical power is not only required in grant applications and peer-reviewed publications, but is essential to reliable research results. However, sample size determination is not straightforward for mediation analysis of longitudinal designs. To facilitate planning the sample size for longitudinal mediation studies with a multilevel mediation model, this article provides the sample size required to achieve 80% power, obtained by simulation under various sizes of the mediation effect, within-subject correlations and numbers of repeated measures. The sample size calculation is based on three commonly used mediation tests: Sobel's method, the distribution of the product method and the bootstrap method. Among the three methods of testing the mediation effects, Sobel's method required the largest sample size to achieve 80% power. Bootstrapping and the distribution of the product method performed similarly and were more powerful than Sobel's method, as reflected by the relatively smaller sample sizes. For all three methods, the sample size required to achieve 80% power depended on the value of the ICC (i.e., the within-subject correlation): a larger ICC typically required a larger sample size. Simulation results also illustrated the advantage of the longitudinal study design. Sample size tables for the scenarios most often encountered in practice are also provided for convenient use. Extensive simulation studies showed that the distribution of the product method and the bootstrapping method outperform Sobel's method; the product method is recommended in practice because of its lower computational cost compared with bootstrapping. An R package has been developed for product-method sample size determination in longitudinal mediation study design.
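The power comparison described above can be sketched by direct simulation. The following is a minimal cross-sectional illustration of Sobel's test (the paper's model is multilevel and longitudinal, with within-subject correlation, which this sketch omits); the effect sizes, simulation counts, and function names are illustrative assumptions:

```python
import numpy as np

def _slope(x, y):
    """OLS slope of y on x (with intercept) and its standard error."""
    x = x - x.mean()
    y = y - y.mean()
    beta = (x @ y) / (x @ x)
    resid = y - beta * x
    se = np.sqrt((resid @ resid) / (len(x) - 2) / (x @ x))
    return beta, se

def sobel_power(n, a=0.3, b=0.3, n_sims=2000, seed=1):
    """Monte Carlo power of Sobel's test for the indirect effect a*b in a
    simple mediation model X -> M -> Y (two-sided alpha = 0.05)."""
    rng = np.random.default_rng(seed)
    hits = 0
    for _ in range(n_sims):
        x = rng.normal(size=n)
        m = a * x + rng.normal(size=n)
        y = b * m + rng.normal(size=n)
        a_hat, sa = _slope(x, m)
        b_hat, sb = _slope(m, y)
        # Sobel z statistic for the indirect effect a_hat * b_hat
        z = (a_hat * b_hat) / np.sqrt(a_hat**2 * sb**2 + b_hat**2 * sa**2)
        hits += abs(z) > 1.96
    return hits / n_sims
```

Power rises with n, so inverting this curve (e.g. by bisection over n) yields the sample size needed for 80% power, which is how the paper's tables are built.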

  20. Exponential blocking-temperature distribution in ferritin extracted from magnetization measurements

    NASA Astrophysics Data System (ADS)

    Lee, T. H.; Choi, K.-Y.; Kim, G.-H.; Suh, B. J.; Jang, Z. H.

    2014-11-01

We developed a direct method to extract the zero-field, zero-temperature anisotropy energy barrier distribution of magnetic particles in the form of a blocking-temperature distribution. The key idea is to modify the measurement procedures slightly to make nonequilibrium magnetization calculations (including the time evolution of magnetization) easier. We applied this method to the biomagnetic molecule ferritin and successfully reproduced the field-cooled magnetization by using the extracted distribution. We find that the resulting distribution is closer to an exponential form and cannot be related simply to the widely known log-normal particle-size distribution. The method also allows us to determine the values of the zero-temperature coercivity and the Bloch coefficient, which are in good agreement with those determined by other techniques.

  1. Distributed Agent-Based Networks in Support of Advanced Marine Corps Command and Control Concept

    DTIC Science & Technology

    2012-09-01

    clusters of managers and clients that form a hierarchical management framework (Figure 14). However, since it is SNMP-based, due to the size and...that are much less computationally intensive than other proposed approaches such as multivariate calculations of Pareto boundaries (Bordetsky and

  2. Angle-Resolved Photoemission of Solvated Electrons in Sodium-Doped Clusters.

    PubMed

    West, Adam H C; Yoder, Bruce L; Luckhaus, David; Saak, Clara-Magdalena; Doppelbauer, Maximilian; Signorell, Ruth

    2015-04-16

    Angle-resolved photoelectron spectroscopy of the unpaired electron in sodium-doped water, methanol, ammonia, and dimethyl ether clusters is presented. The experimental observations and the complementary calculations are consistent with surface electrons for the cluster size range studied. Evidence against internally solvated electrons is provided by the photoelectron angular distribution. The trends in the ionization energies seem to be mainly determined by the degree of hydrogen bonding in the solvent and the solvation of the ion core. The onset ionization energies of water and methanol clusters do not level off at small cluster sizes but decrease slightly with increasing cluster size.

  3. The Effect of Rain on Air-Water Gas Exchange

    NASA Technical Reports Server (NTRS)

    Ho, David T.; Bliven, Larry F.; Wanninkhof, Rik; Schlosser, Peter

    1997-01-01

The relationship between gas transfer velocity and rain rate was investigated at NASA's Rain-Sea Interaction Facility (RSIF) using several SF6 evasion experiments. During each experiment, a water tank below the rain simulator was supersaturated with SF6, a synthetic gas, and the gas transfer velocities were calculated from the measured decrease in SF6 concentration with time. The results from experiments with 15 different rain rates (7 to 10 mm/h) and 1 of 2 drop sizes (2.8 or 4.2 mm diameter) confirm a significant and systematic enhancement of air-water gas exchange by rainfall. The gas transfer velocities derived from our experiments were related to the kinetic energy flux calculated from the rain rate and drop size. The relationship obtained for mono-drop-size rain at the RSIF was extrapolated to natural rain using the kinetic energy flux of natural rain calculated from the Marshall-Palmer raindrop size distribution. Results of the laboratory experiments at RSIF were compared to field observations made during a tropical rainstorm in Miami, Florida, and show good agreement between laboratory and field data.
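The kinetic energy flux of natural rain can be estimated from the Marshall-Palmer distribution by numerical integration. The sketch below assumes the standard M-P parameters (N0 = 8000 m^-3 mm^-1, Lambda = 4.1 R^-0.21 mm^-1) and an Atlas-type empirical terminal-velocity relation; both relations are textbook assumptions, not values taken from this experiment:

```python
import numpy as np

def rain_ke_flux(rain_rate_mm_h):
    """Kinetic energy flux (W m^-2) of rain reaching a horizontal surface,
    integrated over the Marshall-Palmer (1948) drop size distribution
    N(D) = N0 * exp(-Lambda * D), with an Atlas-type terminal velocity
    v(D) = 9.65 - 10.3 * exp(-0.6 * D) (D in mm)."""
    D = np.linspace(0.1, 7.0, 2000)                  # drop diameter, mm
    dD = D[1] - D[0]
    N0 = 8000.0                                      # m^-3 mm^-1
    lam = 4.1 * rain_rate_mm_h ** (-0.21)            # mm^-1
    N = N0 * np.exp(-lam * D)                        # concentration density
    v = np.clip(9.65 - 10.3 * np.exp(-0.6 * D), 0.0, None)  # fall speed, m/s
    mass = 1000.0 * np.pi / 6.0 * (D * 1e-3) ** 3    # drop mass, kg
    # number flux through the surface is N*v; each drop carries m*v^2/2
    return float(np.sum(N * v * 0.5 * mass * v**2) * dD)
```

Because heavier rain shifts the exponential toward larger, faster drops, the flux grows faster than linearly with rain rate, which is why kinetic energy flux rather than rain rate alone correlates with gas exchange.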

  4. Power/Sample Size Calculations for Assessing Correlates of Risk in Clinical Efficacy Trials

    PubMed Central

    Gilbert, Peter B.; Janes, Holly E.; Huang, Yunda

    2016-01-01

In a randomized controlled clinical trial that assesses treatment efficacy, a common objective is to assess the association of a measured biomarker response endpoint with the primary study endpoint in the active treatment group, using a case-cohort, case-control, or two-phase sampling design. Methods for power and sample size calculations for such biomarker association analyses typically do not account for the level of treatment efficacy, precluding interpretation of the biomarker association results in terms of biomarker effect modification of treatment efficacy, with the detriment that the power calculations may tacitly and inadvertently assume that the treatment harms some study participants. We develop power and sample size methods accounting for this issue, and the methods also account for inter-individual variability of the biomarker that is not biologically relevant (e.g., due to technical measurement error). We focus on a binary study endpoint and on a biomarker subject to measurement error that is normally distributed or categorical with two or three levels. We illustrate the methods with preventive HIV vaccine efficacy trials, and include an R package implementing the methods. PMID:27037797
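The role of non-biological biomarker variability can be illustrated with a standard normal-approximation sample size formula (Hsieh-style) for logistic regression on a standardized normal biomarker: under classical measurement error the observed per-SD log odds ratio is attenuated by the square root of the reliability, so the required n inflates by its inverse. This is a hedged sketch, not the paper's method; the function name and formula choice are assumptions:

```python
from statistics import NormalDist

def n_for_biomarker_association(log_or_per_sd, p_case, reliability,
                                alpha=0.05, power=0.80):
    """Approximate n to detect a log odds ratio per SD of a normally
    distributed biomarker against a binary endpoint (normal-approximation
    formula for logistic regression). Classical measurement error
    attenuates the observed per-SD effect by sqrt(reliability), so the
    required n scales as 1/reliability."""
    z_a = NormalDist().inv_cdf(1.0 - alpha / 2.0)   # two-sided alpha
    z_b = NormalDist().inv_cdf(power)
    attenuated = log_or_per_sd * reliability ** 0.5
    n = (z_a + z_b) ** 2 / (p_case * (1.0 - p_case) * attenuated ** 2)
    return int(n) + 1
```

With log_or_per_sd = 0.5, p_case = 0.1 and a perfectly measured biomarker this gives 349 participants; halving the reliability doubles the requirement to 698, showing why technical measurement error must enter the power calculation.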

  5. IN VITRO QUANTIFICATION OF THE SIZE DISTRIBUTION OF INTRASACCULAR VOIDS LEFT AFTER ENDOVASCULAR COILING OF CEREBRAL ANEURYSMS.

    PubMed

    Sadasivan, Chander; Brownstein, Jeremy; Patel, Bhumika; Dholakia, Ronak; Santore, Joseph; Al-Mufti, Fawaz; Puig, Enrique; Rakian, Audrey; Fernandez-Prada, Kenneth D; Elhammady, Mohamed S; Farhat, Hamad; Fiorella, David J; Woo, Henry H; Aziz-Sultan, Mohammad A; Lieber, Baruch B

    2013-03-01

Endovascular coiling of cerebral aneurysms remains limited by coil compaction and associated recanalization. Recent coil designs which effect higher packing densities may be far from optimal because hemodynamic forces causing compaction are not well understood, since detailed data regarding the location and distribution of coil masses are unavailable. We present an in vitro methodology to characterize coil masses deployed within aneurysms by quantifying intra-aneurysmal void spaces. Eight identical aneurysms were packed with coils by both balloon- and stent-assist techniques. The samples were embedded, sequentially sectioned and imaged. Empty spaces between the coils were numerically filled with circles (2D) in the planar images and with spheres (3D) in the three-dimensional composite images. The 2D and 3D void size histograms were analyzed for local variations and by fitting theoretical probability distribution functions. Balloon-assist packing densities (31±2%) were lower (p = 0.04) than the stent-assist group (40±7%). The maximum and average 2D and 3D void sizes were higher (p = 0.03 to 0.05) in the balloon-assist group as compared to the stent-assist group. None of the void size histograms were normally distributed; theoretical probability distribution fits suggest that the histograms are most probably exponentially distributed with decay constants of 6-10 mm. Significant (p ≤ 0.001 to p = 0.03) spatial trends were noted with the void sizes but correlation coefficients were generally low (absolute r ≤ 0.35). The methodology we present can provide valuable input data for numerical calculations of hemodynamic forces impinging on intra-aneurysmal coil masses and be used to compare and optimize coil configurations as well as coiling techniques.

  6. Assessment of an Euler-Interacting Boundary Layer Method Using High Reynolds Number Transonic Flight Data

    NASA Technical Reports Server (NTRS)

    Bonhaus, Daryl L.; Maddalon, Dal V.

    1998-01-01

    Flight-measured high Reynolds number turbulent-flow pressure distributions on a transport wing in transonic flow are compared to unstructured-grid calculations to assess the predictive ability of a three-dimensional Euler code (USM3D) coupled to an interacting boundary layer module. The two experimental pressure distributions selected for comparative analysis with the calculations are complex and turbulent but typical of an advanced technology laminar flow wing. An advancing front method (VGRID) was used to generate several tetrahedral grids for each test case. Initial calculations left considerable room for improvement in accuracy. Studies were then made of experimental errors, transition location, viscous effects, nacelle flow modeling, number and placement of spanwise boundary layer stations, and grid resolution. The most significant improvements in the accuracy of the calculations were gained by improvement of the nacelle flow model and by refinement of the computational grid. Final calculations yield results in close agreement with the experiment. Indications are that further grid refinement would produce additional improvement but would require more computer memory than is available. The appendix data compare the experimental attachment line location with calculations for different grid sizes. Good agreement is obtained between the experimental and calculated attachment line locations.

  7. The effect of dispersed Petrobaltic oil droplet size on photosynthetically active radiation in marine environment.

    PubMed

    Haule, Kamila; Freda, Włodzimierz

    2016-04-01

Oil pollution in seawater, primarily visible on the sea surface, becomes dispersed through wave mixing as well as chemical dispersant treatment, and forms spherical oil droplets. In this study, we examined the influence of the oil droplet size of highly dispersed Petrobaltic crude on the underwater visible light flux and the inherent optical properties (IOPs) of seawater, including the absorption, scattering, backscattering and attenuation coefficients. On the basis of measured data and Mie theory, we calculated the IOPs of dispersed Petrobaltic crude oil at constant concentration but different log-normal size distributions. We also performed a radiative transfer analysis to evaluate the influence on the downwelling irradiance Ed, remote sensing reflectance Rrs and diffuse reflectance R, using in situ data from the Baltic Sea. We found that during dispersion a boundary size distribution arises, characterized by a peak diameter d0 = 0.3 μm, causing a maximum Ed increase of 40% within 0.5-m depth and a maximum Ed decrease of 100% at depths below 5 m. Moreover, we showed that the impact of the size distribution on the "blue to green" ratios of Rrs and R varies from a 24% increase to a 27% decrease at the same crude oil concentration.
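A log-normal number distribution with a prescribed peak diameter, as used above, can be constructed by shifting the lognormal location parameter so that the mode lands at d0. A short sketch (the function name and the geometric standard deviation are illustrative assumptions, not the paper's values):

```python
import numpy as np

def lognormal_number_dist(d, d_peak, sigma_g):
    """Log-normal number size distribution n(d), parameterized by its peak
    (modal) diameter d_peak and geometric standard deviation sigma_g.
    A lognormal pdf in d peaks at exp(mu - s**2) with s = ln(sigma_g),
    so choosing mu = ln(d_peak) + s**2 places the mode at d_peak."""
    s = np.log(sigma_g)
    mu = np.log(d_peak) + s ** 2
    return (np.exp(-((np.log(d) - mu) ** 2) / (2.0 * s ** 2))
            / (d * s * np.sqrt(2.0 * np.pi)))
```

Scaling such a normalized distribution by the droplet number concentration, and weighting Mie efficiencies by it, is the usual route from a size distribution to bulk IOPs.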

  8. Size effect in Quincke rotation: a numerical study.

    PubMed

    Peters, F; Lobry, L; Khayari, A; Lemaire, E

    2009-05-21

This paper deals with the Quincke rotation of small insulating particles. This dc electrorotation of insulating objects immersed in a slightly conducting liquid is usually explained by the action of the free charges present in the liquid. Under the effect of the dc electric field, the charges accumulate at the surface of the insulating particle, which, in turn, acquires a dipole moment in the direction opposite to that of the field and begins to rotate in order to flip its dipole moment. In the classical Quincke model, the charge distribution around the rotor is supposed to be purely superficial. A consequence of this assumption is that the angular velocity does not depend on the rotor size. Nevertheless, this hypothesis holds only if the rotor size is much larger than the characteristic ion layer thickness around the particle. In the opposite case, we show by means of numerical calculations that the bulk charge distribution has to be accounted for to predict the electromechanical behavior of the rotor. We consider the case of an infinite insulating cylinder whose axis is perpendicular to the dc electric field. We use the finite element method to solve the conservation equations for the positive and negative ions coupled with the Navier-Stokes and Poisson equations. In doing so, we compute the bulk charge distribution and the velocity field in the liquid surrounding the cylinder. For sufficiently small cylinders, we show that the smaller the cylinder, the smaller its angular velocity when subjected to a dc electric field. This size effect is shown to originate both in ion diffusion and electromigration in the charge layer. Finally, we propose a simple analytical model that allows calculation of the angular velocity of the rotor when electromigration is present but weak and diffusion can be neglected.
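For reference, the classical surface-charge Quincke model discussed above predicts a size-independent rotation rate. A minimal sketch (the threshold field and relaxation time are supplied as parameters rather than derived from material constants, which keeps the assumptions explicit):

```python
import math

def quincke_omega(E, E_c, tau_mw):
    """Steady rotor angular velocity in the classical surface-charge
    Quincke model: zero below the threshold field E_c, and
    Omega = sqrt((E / E_c)**2 - 1) / tau_mw above it, where tau_mw is
    the Maxwell-Wagner charge relaxation time. No rotor radius appears:
    this is the size independence that the paper's bulk-charge
    numerics show breaks down for small cylinders."""
    if E <= E_c:
        return 0.0
    return math.sqrt((E / E_c) ** 2 - 1.0) / tau_mw
```

The pitchfork bifurcation at E_c is the signature of Quincke rotation; the paper's contribution is the correction to this curve when the ion layer thickness is comparable to the rotor size.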

  9. Size effect in Quincke rotation: A numerical study

    NASA Astrophysics Data System (ADS)

    Peters, F.; Lobry, L.; Khayari, A.; Lemaire, E.

    2009-05-01

This paper deals with the Quincke rotation of small insulating particles. This dc electrorotation of insulating objects immersed in a slightly conducting liquid is usually explained by the action of the free charges present in the liquid. Under the effect of the dc electric field, the charges accumulate at the surface of the insulating particle, which, in turn, acquires a dipole moment in the direction opposite to that of the field and begins to rotate in order to flip its dipole moment. In the classical Quincke model, the charge distribution around the rotor is supposed to be purely superficial. A consequence of this assumption is that the angular velocity does not depend on the rotor size. Nevertheless, this hypothesis holds only if the rotor size is much larger than the characteristic ion layer thickness around the particle. In the opposite case, we show by means of numerical calculations that the bulk charge distribution has to be accounted for to predict the electromechanical behavior of the rotor. We consider the case of an infinite insulating cylinder whose axis is perpendicular to the dc electric field. We use the finite element method to solve the conservation equations for the positive and negative ions coupled with the Navier-Stokes and Poisson equations. In doing so, we compute the bulk charge distribution and the velocity field in the liquid surrounding the cylinder. For sufficiently small cylinders, we show that the smaller the cylinder, the smaller its angular velocity when subjected to a dc electric field. This size effect is shown to originate both in ion diffusion and electromigration in the charge layer. Finally, we propose a simple analytical model that allows calculation of the angular velocity of the rotor when electromigration is present but weak and diffusion can be neglected.

  10. Black carbon's contribution to aerosol absorption optical depth over S. Korea

    NASA Astrophysics Data System (ADS)

    Lamb, K.; Perring, A. E.; Beyersdorf, A. J.; Anderson, B. E.; Segal-Rosenhaimer, M.; Redemann, J.; Holben, B. N.; Schwarz, J. P.

    2017-12-01

    Aerosol absorption optical depth (AAOD) monitored by ground-based sites (AERONET, SKYNET, etc.) is used to constrain climate radiative forcing from black carbon (BC) and other absorbing aerosols in global models, but few validation studies between in situ aerosol measurements and ground-based AAOD exist. AAOD is affected by aerosol size distributions, composition, mixing state, and morphology. Megacities provide appealing test cases for this type of study due to their association with very high concentrations of anthropogenic aerosols. During the KORUS-AQ campaign in S. Korea, which took place in late spring and early summer of 2016, in situ aircraft measurements over the Seoul Metropolitan Area and Taehwa Research Forest (downwind of Seoul) were repeated three times per flight over a 6 week period, providing significant temporal coverage of vertically resolved aerosol properties influenced by different meteorological conditions and sources. Measurements aboard the NASA DC-8 by the NOAA Humidified Dual Single Particle Soot Photometers (HD-SP2) quantified BC mass, size distributions, mixing state, and the hygroscopicity of BC containing aerosols. The in situ BC mass vertical profiles are combined with estimated absorption enhancement calculated from observed optical size and hygroscopicity using Mie theory, and then integrated over the depth of the profile to calculate BC's contribution to AAOD. Along with bulk aerosol size distributions and hygroscopicity, bulk absorbing aerosol optical properties, and on-board sky radiance measurements, these measurements are compared with ground-based AERONET site measurements of AAOD to evaluate closure between in situ vertical profiles of BC and AAOD measurements. This study will provide constraints on the relative importance of BC (including lensing and hygroscopicity effects) and non-BC components to AAOD over S. Korea.

  11. Space Shuttle ice nuclei

    NASA Astrophysics Data System (ADS)

    Turco, R. P.; Toon, O. B.; Whitten, R. C.; Cicerone, R. J.

    1982-08-01

    Estimates are made showing that, as a consequence of rocket activity in the earth's upper atmosphere in the Shuttle era, average ice nuclei concentrations in the upper atmosphere could increase by a factor of two, and that an aluminum dust layer weighing up to 1000 tons might eventually form in the lower atmosphere. The concentrations of Space Shuttle ice nuclei (SSIN) in the upper troposphere and lower stratosphere were estimated by taking into account the composition of the particles, the extent of surface poisoning, and the size of the particles. Calculated stratospheric size distributions at 20 km with Space Shuttle particulate injection, calculated SSIN concentrations at 10 and 20 km altitude corresponding to different water vapor/ice supersaturations, and predicted SSIN concentrations in the lower stratosphere and upper troposphere are shown.

  12. Unleashing the Power of Distributed CPU/GPU Architectures: Massive Astronomical Data Analysis and Visualization Case Study

    NASA Astrophysics Data System (ADS)

    Hassan, A. H.; Fluke, C. J.; Barnes, D. G.

    2012-09-01

Upcoming and future astronomy research facilities will systematically generate terabyte-sized data sets, moving astronomy into the petascale data era. While such facilities will provide astronomers with unprecedented levels of accuracy and coverage, the increases in dataset size and dimensionality will pose serious computational challenges for many current astronomy data analysis and visualization tools. With such data sizes, even simple data analysis tasks (e.g. calculating a histogram or computing data minimum/maximum) may not be achievable without access to a supercomputing facility. To effectively handle such dataset sizes, which exceed today's single machine memory and processing limits, we present a framework that exploits the distributed power of GPUs and many-core CPUs, with a goal of providing data analysis and visualization tasks as a service for astronomers. By mixing shared and distributed memory architectures, our framework effectively utilizes the underlying hardware infrastructure, handling both batched and real-time data analysis and visualization tasks. Offering such functionality as a service in a “software as a service” manner will reduce the total cost of ownership, provide an easy to use tool to the wider astronomical community, and enable a more optimized utilization of the underlying hardware infrastructure.

  13. Size distributions and failure initiation of submarine and subaerial landslides

    USGS Publications Warehouse

    ten Brink, Uri S.; Barkan, R.; Andrews, B.D.; Chaytor, J.D.

    2009-01-01

Landslides are often viewed together with other natural hazards, such as earthquakes and fires, as phenomena whose size distribution obeys an inverse power law. Inverse power law distributions are the result of additive avalanche processes, in which the final size cannot be predicted at the onset of the disturbance. Volume and area distributions of submarine landslides along the U.S. Atlantic continental slope follow a lognormal distribution and not an inverse power law. Using Monte Carlo simulations, we generated area distributions of submarine landslides that show a characteristic size, with few much smaller or larger areas, which can be described well by a lognormal distribution. To generate these distributions we assumed that the area of slope failure depends on earthquake magnitude, i.e., that failure occurs simultaneously over the area affected by horizontal ground shaking, and does not cascade from nucleating points. Furthermore, the downslope movement of displaced sediments does not entrain significant amounts of additional material. Our simulations fit the area distribution of landslide sources along the Atlantic continental margin well, if we assume that the slope has been subjected to earthquakes of magnitude ≥ 6.3. Regions of submarine landslides, whose area distributions obey inverse power laws, may be controlled by different generation mechanisms, such as the gradual development of fractures in the headwalls of cliffs. The observation of a large number of small subaerial landslides being triggered by a single earthquake is also compatible with the hypothesis that failure occurs simultaneously in many locations within the area affected by ground shaking. Unlike submarine landslides, which are found on large uniformly-dipping slopes, a single large landslide scarp cannot form on land because of the heterogeneous morphology and short slope distances of tectonically-active subaerial regions. 
However, for a given earthquake magnitude, the total area affected by subaerial landslides is comparable to that calculated by slope stability analysis for submarine landslides. The area distribution of subaerial landslides from a single event may be determined by the size distribution of the morphology of the affected area, not by the initiation process. © 2009 Elsevier B.V.
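The statistical contrast drawn in this record (additive avalanche processes give power laws, while the observed areas are lognormal) can be illustrated numerically: a quantity formed as the product of many independent random factors is lognormally distributed, because its logarithm is a sum of independent terms. The factors below are purely hypothetical:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical failure areas built as products of 12 independent
# multiplicative factors (e.g. local strength, shaking amplitude).
# The log of each area is then a sum of independent terms, so by the
# central limit theorem the areas are lognormal: a characteristic size
# with few much smaller or larger events, unlike a power-law tail.
factors = rng.lognormal(mean=0.0, sigma=0.3, size=(20000, 12))
areas = factors.prod(axis=1)

logs = np.log(areas)
# For a lognormal sample the log-areas are normal: near-zero skewness,
# with variance equal to the sum of per-factor variances (12 * 0.3**2).
skew = np.mean((logs - logs.mean()) ** 3) / logs.std() ** 3
```

An additive avalanche, by contrast, sums contributions whose count is itself random and unbounded, which is what produces the scale-free power-law tail the paper rules out for these slopes.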

  14. Dependence of the forward light scattering on the refractive index of particles

    NASA Astrophysics Data System (ADS)

    Guo, Lufang; Shen, Jianqi

    2018-05-01

In particle sizing techniques based on forward light scattering, the scattered light signal (SLS) is closely related to the relative refractive index (RRI) of the particles with respect to the surrounding medium, especially when the particles are transparent (or weakly absorbing) and small in size. The interference between diffraction (Diff) and the multiple internal reflections (MIR) of scattered light can lead to oscillation of the SLS with RRI and to abnormal intervals, especially for narrowly distributed small-particle systems. This makes the inverse problem more difficult. In order to improve the inverse results, a Tikhonov regularization algorithm with B-spline functions is proposed, in which each matrix element is calculated over a range of particle sizes instead of at the mean particle diameter of the size fraction. In this way, the influence of abnormal intervals on the inverse results can be eliminated. In addition, for measurements on narrowly distributed small particles, it is suggested to detect the SLS over a wider scattering angle to include more information.
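The proposed inversion can be sketched with zeroth-order Tikhonov regularization and a kernel matrix averaged over each size bin rather than evaluated at the bin's mean diameter. The kernel, bin edges, and helper names below are illustrative assumptions (the paper additionally uses B-spline basis functions, omitted here for brevity):

```python
import numpy as np

def bin_averaged_kernel(kernel, angles, bin_edges, n_sub=16):
    """Build the matrix column for each size bin by averaging the
    scattering kernel over n_sub sub-diameters within the bin, instead
    of evaluating it only at the bin's mean diameter."""
    cols = []
    for lo, hi in zip(bin_edges[:-1], bin_edges[1:]):
        d = np.linspace(lo, hi, n_sub)
        cols.append(kernel(angles[:, None], d[None, :]).mean(axis=1))
    return np.column_stack(cols)

def tikhonov_invert(A, b, lam):
    """Zeroth-order Tikhonov solution of A x = b:
    minimize ||A x - b||^2 + lam**2 * ||x||^2 via the regularized
    normal equations, then clip to non-negative size fractions."""
    n = A.shape[1]
    x = np.linalg.solve(A.T @ A + lam ** 2 * np.eye(n), A.T @ b)
    return np.clip(x, 0.0, None)
```

On a synthetic decaying-exponential kernel this recovers an assumed three-bin size distribution from noiseless signals; with real, noisy SLS data the regularization parameter lam must be chosen, e.g. by the L-curve criterion.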

  15. Dislocation, crystallite size distribution and lattice strain of magnesium oxide nanoparticles

    NASA Astrophysics Data System (ADS)

    Sutapa, I. W.; Wahid Wahab, Abdul; Taba, P.; Nafie, N. L.

    2018-03-01

Magnesium oxide nanoparticles were synthesized using the sol-gel method, and their structural properties were analysed. The functional groups of the nanoparticles were identified by Fourier Transform Infrared Spectroscopy (FT-IR). Dislocation density, average crystallite size, strain, stress, crystal energy density, crystallite size distribution and crystal texture were determined from X-ray diffraction profile analysis. The morphology of the crystals was analysed from SEM images. The crystallite size distribution was calculated under the assumption that the particle size follows a log-normal distribution. The preferred crystal orientations were determined from the texture of the X-ray diffraction profiles. FT-IR results showed the Mg-O-Mg stretching vibration mode as a broad band in the range of 400.11-525 cm-1. The average crystallite size of the resulting nanoparticles is 9.21 nm, with a dislocation density of 0.012 nm-2. The strain, stress and crystal energy density are 1.5 × 10^-4, 37.31 MPa and 0.72 MPa, respectively. The highest texture coefficient of the crystals is 0.98. This result is supported by morphological analysis using SEM, which shows mostly regular cubic-shaped crystals. The method is thus suitable as a simple and cost-effective route to MgO nanoparticles.
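Crystallite size and microstrain of the kind reported here are commonly extracted from XRD line broadening via the Scherrer equation and a Williamson-Hall fit. A sketch under assumed Cu K-alpha radiation and shape factor (the paper does not state its instrument parameters, so both constants are assumptions):

```python
import numpy as np

WAVELENGTH_NM = 0.15406  # Cu K-alpha wavelength, nm (assumed)
K = 0.9                  # Scherrer shape factor (assumed)

def scherrer_size(two_theta_deg, fwhm_deg):
    """Crystallite size (nm) from a single reflection:
    D = K * lambda / (beta * cos(theta)), with beta the FWHM in radians."""
    theta = np.radians(two_theta_deg / 2.0)
    beta = np.radians(fwhm_deg)
    return K * WAVELENGTH_NM / (beta * np.cos(theta))

def williamson_hall(two_theta_deg, fwhm_deg):
    """Separate size and microstrain from several reflections by fitting
    beta * cos(theta) = K*lambda/D + 4*eps*sin(theta) as a straight line
    (uniform deformation model). Returns (size_nm, strain)."""
    theta = np.radians(np.asarray(two_theta_deg) / 2.0)
    beta = np.radians(np.asarray(fwhm_deg))
    slope, intercept = np.polyfit(4.0 * np.sin(theta), beta * np.cos(theta), 1)
    return K * WAVELENGTH_NM / intercept, slope
```

The intercept of the Williamson-Hall line gives the size term and the slope the strain, which is how a single data set yields both of the quantities quoted in the abstract.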

  16. Bayesian assessment of uncertainty in aerosol size distributions and index of refraction retrieved from multiwavelength lidar measurements.

    PubMed

    Herman, Benjamin R; Gross, Barry; Moshary, Fred; Ahmed, Samir

    2008-04-01

We investigate the assessment of uncertainty in the inference of aerosol size distributions from backscatter and extinction measurements that can be obtained from a modern elastic/Raman lidar system with a Nd:YAG laser transmitter. To calculate the uncertainty, an analytic formula for the correlated probability density function (PDF) describing the error for an optical coefficient ratio is derived based on a normally distributed fractional error in the optical coefficients. Assuming a monomodal lognormal particle size distribution of spherical, homogeneous particles with a known index of refraction, we compare the assessment of uncertainty using a more conventional forward Monte Carlo method with that obtained from a Bayesian posterior PDF assuming a uniform prior PDF, and show that substantial differences between the two methods exist. In addition, we use the posterior PDF formalism, which was extended to include an unknown refractive index, to find credible sets for a variety of optical measurement scenarios. We find the uncertainty is greatly reduced by the addition of suitable extinction measurements, in contrast to the inclusion of extra backscatter coefficients, which we show to have a minimal effect, strengthening similar observations based on numerical regularization methods.

  17. Aircraft microwave observations and simulations of deep convection from 18 to 183 GHz. II - Model results

    NASA Technical Reports Server (NTRS)

    Yeh, Hwa-Young M.; Prasad, N.; Mack, Robert A.; Adler, Robert F.

    1990-01-01

In this June 29, 1986 case study, a radiative transfer model is used to simulate the aircraft multichannel microwave brightness temperatures presented in the Adler et al. (1990) paper and to study the convective storm structure. Ground-based radar data are used to derive hydrometeor profiles of the storm, from which the microwave upwelling brightness temperatures are calculated. Various vertical hydrometeor phase profiles and the Marshall and Palmer (M-P, 1948) and Sekhon and Srivastava (S-S, 1970) ice particle size distributions are tested in the model. The results are compared with the aircraft radiometric data. The comparison reveals that the M-P distribution represents the ice particle size distribution well, especially in the upper tropospheric portion of the cloud; that the S-S distribution appears to better simulate the ice particle size in the lower portion of the cloud, which has a greater effect on the low-frequency microwave upwelling brightness temperatures; and that, in deep convective regions, significant supercooled liquid water (about 0.5 g/cu m) may be present up to the -30 C layer, while in less convective areas, frozen hydrometeors are predominant above the -10 C level.

  18. On the size dependence of the scattering greenhouse effect of CO2 ice particles

    NASA Astrophysics Data System (ADS)

    Kitzmann, D.; Patzer, A. B. C.; Rauer, H.

    2011-10-01

In this contribution we study the potential greenhouse effect due to scattering by CO2 ice clouds for atmospheric conditions of terrestrial extrasolar planets. To this end, we calculate the scattering and absorption properties of CO2 ice particles using Mie theory for assumed particle size distributions with different effective radii and particle densities, to determine the scattering and absorption characteristics of such clouds. Implications, especially in view of a potential greenhouse warming of the planetary surface, are discussed.

  19. Influence of mantle viscosity structure and mineral grain size on fluid migration pathways in the mantle wedge.

    NASA Astrophysics Data System (ADS)

    Cerpa, N. G.; Wada, I.; Wilson, C. R.; Spiegelman, M. W.

    2016-12-01

We develop a 2D numerical porous flow model that incorporates both grain size distribution and matrix compaction to explore the fluid migration (FM) pathways in the mantle wedge. Melt generation for arc volcanism is thought to be triggered by slab-derived fluids that migrate into the hot overlying mantle and reduce its melting temperature. While the narrow location of the arcs relative to the top of the slab (~100±30 km) is a robust observation, the release of fluids is predicted to occur over a wide range of depths. Reconciling such observations and predictions remains a challenge for the geodynamic community. Fluid transport by porous flow depends on the permeability of the medium, which in turn depends on fluid fraction and mineral grain size. The grain size distribution in the mantle wedge predicted by laboratory-derived laws was found to be a possible mechanism for focusing fluids beneath the arcs [Wada and Behn, 2015]. The viscous resistance of the matrix to volumetric strain generates compaction pressure that affects fluid flow and can also focus fluids towards the arc [Wilson et al., 2014]. We have thus developed a 2D one-way coupled Darcy-Stokes flow model (solid flow independent of fluid flow) for the mantle wedge that combines both effects. For the solid flow calculation, we use a kinematic-dynamic approach where the system is driven by the prescribed slab velocity. The solid rheology accounts for both dislocation and diffusion creep, and we calculate the grain size distribution following Wada and Behn [2015]. In our fluid flow model, the permeability of the medium is grain-size dependent and the matrix bulk viscosity depends on the solid shear viscosity and fluid fraction. The fluid influx from the slab is imposed as a boundary condition at the base of the mantle wedge. We solve the discretized governing equations using the software package TerraFERMA. 
Applying a range of model parameter values, including slab age, slab dip, subduction rate, and fluid influx, we quantify the combined effects of grain size and compaction on fluid flow paths.
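A grain-size-dependent permeability of the kind such models rely on is commonly taken to be of Kozeny-Carman form, k = d²φⁿ/C. A minimal sketch, with illustrative constants and viscosity rather than the paper's calibration:

```python
# Kozeny-Carman-type permeability and the resulting Darcy flux.
# The exponent n, constant C, and fluid viscosity mu are illustrative
# values, not the calibration used in the model described above.

def permeability(grain_size_m, porosity, n=3, C=270.0):
    """k = d^2 * phi^n / C (m^2), for grain size d and fluid fraction phi."""
    return grain_size_m**2 * porosity**n / C

def darcy_flux(k, dP_dx, mu=1e-4):
    """Darcy flux q = -(k / mu) * dP/dx (m/s) for fluid viscosity mu (Pa s)."""
    return -k / mu * dP_dx
```

Doubling the grain size quadruples the permeability, which is why the computed grain size distribution exerts such strong control on the fluid pathways.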

  20. Adaptive grid generation in a patient-specific cerebral aneurysm

    NASA Astrophysics Data System (ADS)

    Hodis, Simona; Kallmes, David F.; Dragomir-Daescu, Dan

    2013-11-01

    Adapting grid density to flow behavior provides the advantage of increasing solution accuracy while decreasing the number of grid elements in the simulation domain, therefore reducing the computational time. One method for grid adaptation requires successive refinement of grid density based on observed solution behavior until the numerical errors between successive grids are negligible. However, such an approach is time consuming and is therefore often neglected by researchers. We present a technique to calculate the grid size distribution of an adaptive grid for computational fluid dynamics (CFD) simulations in a complex cerebral aneurysm geometry, based on the kinematic curvature and torsion calculated from the velocity field. The relationship between the kinematic characteristics of the flow and the element size of the adaptive grid leads to a mathematical equation for the grid size in different regions of the flow. The adaptive grid density is obtained such that it captures the more complex details of the flow with locally smaller grid size, while less complex flow characteristics are calculated on locally larger grid size. The current study shows that kinematic curvature and torsion calculated from the velocity field in a cerebral aneurysm can be used to find the locations of complex flow where the computational grid needs to be refined in order to obtain an accurate solution. We found that the complexity of the flow can be adequately described by the velocity, the vorticity, and the angle between the two vectors. For example, inside the aneurysm bleb, at the bifurcation, and at the major arterial turns the element size in the lumen needs to be less than 10% of the artery radius, while at the boundary layer, the element size should be smaller than 1% of the artery radius, for accurate results within a 0.5% relative approximation error. 
This technique of quantifying flow complexity and adaptive remeshing has the potential to improve results accuracy and reduce computational time for patient-specific hemodynamics simulations, which are used to help assess the likelihood of aneurysm rupture using CFD calculated flow patterns.
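The kinematic curvature driving such a criterion can be computed pointwise from the velocity and acceleration vectors; the curvature-to-element-size mapping below is a hypothetical stand-in for the paper's equation, with an assumed scaling constant:

```python
import numpy as np

def streamline_curvature(v, a):
    """Kinematic curvature kappa = |v x a| / |v|^3 at a point, from the
    velocity v and the convective acceleration a = (v . grad) v."""
    v, a = np.asarray(v, float), np.asarray(a, float)
    return float(np.linalg.norm(np.cross(v, a)) / np.linalg.norm(v)**3)

def target_element_size(kappa, h_max, alpha=0.1):
    """Refine where curvature is high: h = min(h_max, alpha / kappa).
    alpha is an illustrative scaling constant, not the paper's formula."""
    return min(h_max, alpha / kappa) if kappa > 0 else h_max
```

For uniform circular motion with unit speed and unit centripetal acceleration the curvature is 1, and the element size shrinks accordingly.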

  1. Impact of geometrical properties on permeability and fluid phase distribution in porous media

    NASA Astrophysics Data System (ADS)

    Lehmann, P.; Berchtold, M.; Ahrenholz, B.; Tölke, J.; Kaestner, A.; Krafczyk, M.; Flühler, H.; Künsch, H. R.

    2008-09-01

    To predict fluid phase distribution in porous media, the effect of geometric properties on flow processes must be understood. In this study, we analyze the effect of volume, surface, curvature and connectivity (the four Minkowski functionals) on the hydraulic conductivity and the water retention curve. For that purpose, we generated 12 artificial structures with 800³ voxels (the units of a 3D image) and compared them with a scanned sand sample of the same size. The structures were generated with a Boolean model based on a random distribution of overlapping ellipsoids whose size and shape were chosen to fulfill the criteria of the measured functionals. The pore structure of the sand material was mapped with synchrotron X-rays. To analyze the effect of geometry on water flow and fluid distribution, we carried out three types of analysis: Firstly, we computed geometrical properties such as chord length, distance from the solids, pore size distribution, and the Minkowski functionals as a function of pore size. Secondly, the fluid phase distribution as a function of the applied pressure was calculated with a morphological pore network model. Thirdly, the permeability was determined using a state-of-the-art lattice-Boltzmann method. For the simulated structure with the true Minkowski functionals, the pores were larger and the computed air-entry value of the artificial medium was reduced to 85% of the value obtained from the scanned sample. The computed permeability for the geometry with the four fitted Minkowski functionals was equal to the permeability of the scanned image. The permeability was much more sensitive to the volume and surface than to the curvature and connectivity of the medium. We conclude that the Minkowski functionals are not sufficient to characterize the geometrical properties of a porous structure that are relevant for the distribution of two fluid phases. 
Depending on the procedure to generate artificial structures with predefined Minkowski functionals, structures differing in pore size distribution can be obtained.
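The first two Minkowski functionals of a binary voxel image can be estimated by simple voxel and face counting; a rough sketch (production codes use better surface estimators, such as marching cubes):

```python
import numpy as np

def minkowski_volume(img, voxel=1.0):
    """Zeroth functional: solid volume, by counting solid voxels."""
    return float(np.sum(img)) * voxel**3

def minkowski_surface(img, voxel=1.0):
    """First functional: interface area, estimated by counting exposed
    voxel faces (solid/void transitions along each axis, boundaries
    treated as void)."""
    img = np.asarray(img, bool)
    faces = 0
    for axis in range(3):
        pad = np.zeros_like(img.take([0], axis=axis))
        below = np.concatenate([pad, img], axis=axis)   # neighbor on one side
        here = np.concatenate([img, pad], axis=axis)    # the voxel itself
        faces += int(np.sum(below != here))             # interfaces on this axis
    return faces * voxel**2
```

A single solid voxel gives volume 1 and surface 6, as expected for a unit cube.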

  2. Effect of particle size distribution of maize and soybean meal on the precaecal amino acid digestibility in broiler chickens.

    PubMed

    Siegert, W; Ganzer, C; Kluth, H; Rodehutscord, M

    2018-02-01

    1. Herein, it was investigated whether different particle size distributions of feed ingredients achieved by grinding through a 2- or 3-mm grid would have an effect on precaecal (pc) amino acid (AA) digestibility. Maize and soybean meal were used as the test ingredients. 2. Maize and soybean meal were ground with grid sizes of 2 or 3 mm. Nine diets were prepared. The basal diet contained 500 g/kg of maize starch. The other experimental diets contained the maize or soybean meal samples at concentrations of 250 and 500 g/kg, and 150 and 300 g/kg, respectively, instead of maize starch. Each diet was tested using 6 replicate groups of 10 birds each. The regression approach was applied to calculate the pc AA digestibility of the test ingredients. 3. The reduction of the grid size from 3 to 2 mm reduced the average particle size of both maize and soybean meal, mainly by reducing the proportion of coarse particles. Reducing the grid size significantly (P < 0.050) increased the pc digestibility of all AA in the soybean meal. In maize, reducing the grid size decreased the pc digestibility of all AA numerically, but not significantly (P > 0.050). The mean numerical differences in pc AA digestibility between the grid sizes were 0.045 and 0.055 in maize and soybean meal, respectively. 4. Future studies investigating pc AA digestibility should specify the particle size distribution and should investigate test ingredients ground similarly to how they are ground for practical applications.
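The regression approach mentioned above estimates the digestibility of a test ingredient as the slope of digested AA intake regressed on AA intake from that ingredient. A sketch with invented numbers, purely for illustration:

```python
import numpy as np

# Hypothetical data (not from the study): amino acid intake from the test
# ingredient (mg/bird) at three inclusion levels, and the corresponding
# precaecally digested amounts (mg/bird).
aa_intake = np.array([0.0, 150.0, 300.0])
aa_digested = np.array([5.0, 131.0, 257.0])

# The slope of the linear regression is the precaecal digestibility of the
# test ingredient; the intercept absorbs the contribution of the basal diet
# and basal endogenous losses.
slope, intercept = np.polyfit(aa_intake, aa_digested, 1)
```

With these invented data the slope is 0.84, i.e. an estimated 84% precaecal digestibility.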

  3. Independent Monte-Carlo dose calculation for MLC based CyberKnife radiotherapy

    NASA Astrophysics Data System (ADS)

    Mackeprang, P.-H.; Vuong, D.; Volken, W.; Henzen, D.; Schmidhalter, D.; Malthaner, M.; Mueller, S.; Frei, D.; Stampanoni, M. F. M.; Dal Pra, A.; Aebersold, D. M.; Fix, M. K.; Manser, P.

    2018-01-01

    This work aims to develop, implement and validate a Monte Carlo (MC)-based independent dose calculation (IDC) framework to perform patient-specific quality assurance (QA) for multi-leaf collimator (MLC)-based CyberKnife® (Accuray Inc., Sunnyvale, CA) treatment plans. The IDC framework uses an XML-format treatment plan as exported from the treatment planning system (TPS) and DICOM-format patient CT data, an MC beam model using phase spaces, CyberKnife MLC beam modifier transport using the EGS++ class library, a beam sampling and coordinate transformation engine, and dose scoring using DOSXYZnrc. The framework is validated against dose profiles and depth dose curves of single beams with varying field sizes in a water tank in units of cGy/Monitor Unit, and against a 2D dose distribution of a full prostate treatment plan measured with Gafchromic EBT3 (Ashland Advanced Materials, Bridgewater, NJ) film in a homogeneous water-equivalent slab phantom. The film measurement is compared to IDC results by gamma analysis using 2% (global)/2 mm criteria. Further, the dose distribution of the clinical treatment plan in the patient CT is compared to the TPS calculation by gamma analysis using the same criteria. Dose profiles from the IDC calculation in a homogeneous water phantom agree with measurements within 2.3% of the global maximum dose or 1 mm distance to agreement for all except the smallest field size. Comparing the film measurement to the calculated dose, 99.9% of all voxels pass gamma analysis; comparing dose calculated by the IDC framework to the TPS-calculated dose for the clinical prostate plan shows a 99.0% passing rate. IDC-calculated dose is found to be up to 5.6% lower than dose calculated by the TPS in this case, near metal fiducial markers. An MC-based modular IDC framework was successfully developed, implemented and validated against measurements and is now available to perform patient-specific QA by IDC.
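The gamma analysis used for these comparisons combines a dose-difference criterion with a distance-to-agreement criterion. A minimal 1D sketch with global normalization (clinical implementations operate on 2D/3D dose grids with interpolation):

```python
import numpy as np

def gamma_1d(x, d_ref, d_eval, dose_tol=0.02, dist_tol_mm=2.0):
    """Per-point gamma index: for each reference point, the minimum over all
    evaluated points of sqrt((dx/dist_tol)^2 + (dd/(dose_tol*Dmax))^2),
    with global normalization to the reference maximum Dmax.
    A point passes if gamma <= 1 (here: 2% global / 2 mm)."""
    d_max = float(np.max(d_ref))
    gamma = np.empty(len(x))
    for i in range(len(x)):
        dx = (x - x[i]) / dist_tol_mm
        dd = (d_eval - d_ref[i]) / (dose_tol * d_max)
        gamma[i] = float(np.sqrt(dx**2 + dd**2).min())
    return gamma
```

The pass rate reported in such studies is then simply the fraction of points with gamma ≤ 1, e.g. `np.mean(gamma <= 1)`.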

  4. Dynamics of photoexcited Ba+ cations in 4He nanodroplets

    NASA Astrophysics Data System (ADS)

    Leal, Antonio; Zhang, Xiaohang; Barranco, Manuel; Cargnoni, Fausto; Hernando, Alberto; Mateo, David; Mella, Massimo; Drabbels, Marcel; Pi, Martí

    2016-03-01

    We present a joint experimental and theoretical study on the desolvation of Ba+ cations in 4He nanodroplets excited via the 6p ← 6s transition. The experiments reveal an efficient desolvation process yielding mainly bare Ba+ cations and Ba+Hen exciplexes with n = 1 and 2. The speed distributions of the ions are well described by Maxwell-Boltzmann distributions with temperatures ranging from 60 to 178 K depending on the excitation frequency and Ba+Hen exciplex size. These results have been analyzed by calculations based on a time-dependent density functional description for the helium droplet combined with classical dynamics for the Ba+. In agreement with experiment, the calculations reveal the dynamical formation of exciplexes following excitation of the Ba+ cation. In contrast to experimental observation, the calculations do not reveal desolvation of excited Ba+ cations or exciplexes, even when relaxation pathways to lower lying states are included.
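The Maxwell-Boltzmann speed distribution used to describe the measured ion speeds can be written down directly; a sketch for a bare Ba+ ion at a temperature in the reported range (an exciplex would simply use a larger mass):

```python
import numpy as np

KB = 1.380649e-23                     # Boltzmann constant, J/K
M_BA = 137.327 * 1.66053906660e-27    # mass of Ba, kg

def mb_speed_pdf(v, T, m=M_BA):
    """Maxwell-Boltzmann speed distribution f(v) at temperature T (K):
    f(v) = 4*pi*v^2 * (m / (2*pi*kB*T))^(3/2) * exp(-m*v^2 / (2*kB*T))."""
    a = m / (2.0 * np.pi * KB * T)
    return 4.0 * np.pi * v**2 * a**1.5 * np.exp(-m * v**2 / (2.0 * KB * T))

v = np.linspace(0.0, 500.0, 20001)    # speed grid, m/s
pdf = mb_speed_pdf(v, T=100.0)
```

Fitting this form to a measured speed distribution yields the translational temperature quoted above.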

  5. Mobility particle size spectrometers: harmonization of technical standards and data structure to facilitate high quality long-term observations of atmospheric particle number size distributions

    NASA Astrophysics Data System (ADS)

    Wiedensohler, A.; Birmili, W.; Nowak, A.; Sonntag, A.; Weinhold, K.; Merkel, M.; Wehner, B.; Tuch, T.; Pfeifer, S.; Fiebig, M.; Fjäraa, A. M.; Asmi, E.; Sellegri, K.; Depuy, R.; Venzac, H.; Villani, P.; Laj, P.; Aalto, P.; Ogren, J. A.; Swietlicki, E.; Williams, P.; Roldin, P.; Quincey, P.; Hüglin, C.; Fierz-Schmidhauser, R.; Gysel, M.; Weingartner, E.; Riccobono, F.; Santos, S.; Grüning, C.; Faloon, K.; Beddows, D.; Harrison, R.; Monahan, C.; Jennings, S. G.; O'Dowd, C. D.; Marinoni, A.; Horn, H.-G.; Keck, L.; Jiang, J.; Scheckman, J.; McMurry, P. H.; Deng, Z.; Zhao, C. S.; Moerman, M.; Henzing, B.; de Leeuw, G.; Löschau, G.; Bastian, S.

    2012-03-01

    Mobility particle size spectrometers, often referred to as DMPS (Differential Mobility Particle Sizers) or SMPS (Scanning Mobility Particle Sizers), have found a wide range of applications in atmospheric aerosol research. However, comparability of measurements conducted world-wide is hampered by a lack of generally accepted technical standards and guidelines with respect to the instrumental set-up, measurement mode, data evaluation and quality control. Technical standards were developed as a minimum requirement for mobility size spectrometry to perform long-term atmospheric aerosol measurements. Technical recommendations include continuous monitoring of flow rates, temperature, pressure, and relative humidity for the sheath and sample air in the differential mobility analyzer. We compared commercial and custom-made inversion routines used to calculate the particle number size distributions from the measured electrical mobility distribution. All inversion routines are comparable within a few percent uncertainty for a given set of raw data. Furthermore, this work summarizes the results from several instrument intercomparison workshops conducted within the European infrastructure projects EUSAAR (European Supersites for Atmospheric Aerosol Research) and ACTRIS (Aerosols, Clouds, and Trace gases Research InfraStructure Network) to determine present uncertainties, especially of custom-built mobility particle size spectrometers. Under controlled laboratory conditions, the particle number size distributions from 20 to 200 nm determined by mobility particle size spectrometers of different design are within an uncertainty range of around ±10% after correcting internal particle losses, while below and above this size range the discrepancies increased. For particles larger than 200 nm, the uncertainty range increased to 30%, which could not be explained. 
The network reference mobility spectrometers of identical design agreed within ±4% in the peak particle number concentration when all settings were made carefully. These reference instruments were also consistent with total particle number concentration measurements to within 5%. Additionally, a new data structure for particle number size distributions was introduced to store and disseminate the data at EMEP (European Monitoring and Evaluation Program). This structure contains three levels: raw data, processed data, and final particle size distributions. Importantly, we recommend reporting raw measurements including all relevant instrument parameters as well as complete documentation of all data transformation and correction steps. These technical and data structure standards aim to enhance the quality of long-term size distribution measurements, their comparability between different networks and sites, and their transparency and traceability back to raw data.
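One of the correction steps above, compensating for size-dependent internal particle losses, amounts to dividing the measured distribution by a penetration-efficiency curve. A minimal sketch (the efficiency values below are hypothetical, not the recommended curves):

```python
import numpy as np

def correct_internal_losses(dN_dlogDp, penetration):
    """Loss-corrected distribution = measured / penetration, elementwise.
    penetration[i] is the fraction of particles in size bin i that survive
    transport through the instrument's internal plumbing."""
    return np.asarray(dN_dlogDp, float) / np.asarray(penetration, float)

# Hypothetical example: small particles suffer larger diffusional losses.
measured = np.array([100.0, 400.0, 300.0])     # dN/dlogDp per size bin
penetration = np.array([0.5, 0.8, 1.0])        # survival fraction per bin
corrected = correct_internal_losses(measured, penetration)
```

Reporting both the raw and corrected distributions, as recommended above, keeps this step traceable.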

  6. Sedimentology and geochemistry of mud volcanoes in the Anaximander Mountain Region from the Eastern Mediterranean Sea.

    PubMed

    Talas, Ezgi; Duman, Muhammet; Küçüksezgin, Filiz; Brennan, Michael L; Raineault, Nicole A

    2015-06-15

    Investigations were carried out on surface sediments collected from the Anaximander mud volcanoes in the Eastern Mediterranean Sea to determine their sedimentary and geochemical properties. The sediment grain size distribution and geochemical contents were determined by grain size analysis, organic carbon and carbonate contents, and element analysis. The element contents were compared to background levels of the Earth's crust. The factors that affect element distribution in sediments were calculated from the nine push-core samples taken from the surface of the mud volcanoes by the E/V Nautilus. The grain size of the samples varies from sand to sandy silt. Enrichment and contamination factor analysis showed that these analyses can also be used to evaluate deep-sea environmental and source parameters. It is concluded that biological and cold-seep effects are the main drivers of surface sediment characteristics at the Anaximander mud volcanoes. Copyright © 2015 Elsevier Ltd. All rights reserved.
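The enrichment and contamination factors referred to above are simple ratios against crustal background values; a sketch (using aluminium as the conservative reference element is a common convention and an assumption here, as the abstract does not state the paper's choice):

```python
def contamination_factor(c_sample, c_background):
    """CF = measured concentration / crustal background concentration."""
    return c_sample / c_background

def enrichment_factor(c_sample, ref_sample, c_background, ref_background):
    """EF = (C / C_ref)_sample / (C / C_ref)_background, normalized to a
    conservative reference element (e.g. Al) to correct for grain-size
    and dilution effects."""
    return (c_sample / ref_sample) / (c_background / ref_background)
```

Values well above 1 indicate enrichment relative to crustal background, e.g. from biological or cold-seep inputs.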

  7. Sequential associative memory with nonuniformity of the layer sizes.

    PubMed

    Teramae, Jun-Nosuke; Fukai, Tomoki

    2007-01-01

    Sequence retrieval has a fundamental importance in information processing by the brain, and has been studied extensively in neural network models. Most previous sequential associative memory models embed sequences of memory patterns of nearly equal sizes. It was recently shown that local cortical networks display many diverse yet repeatable precise temporal sequences of neuronal activities, termed "neuronal avalanches." Interestingly, these avalanches displayed size and lifetime distributions that obey power laws. Inspired by these experimental findings, here we consider an associative memory model of binary neurons that stores sequences of memory patterns with highly variable sizes. Our analysis includes the case where the statistics of these size variations obey the above-mentioned power laws. We study the retrieval dynamics of such memory systems by analytically deriving the equations that govern the time evolution of macroscopic order parameters. We calculate the critical sequence length beyond which the network cannot retrieve memory sequences correctly. As an application of the analysis, we show how the present variability in sequential memory patterns degrades the power-law lifetime distribution of retrieved neural activities.

  8. Abundance, size and polymer composition of marine microplastics ≥10μm in the Atlantic Ocean and their modelled vertical distribution.

    PubMed

    Enders, Kristina; Lenz, Robin; Stedmon, Colin A; Nielsen, Torkel G

    2015-11-15

    We studied the abundance, size and polymer type of microplastic down to 10 μm along a transect from the European Coast to the North Atlantic Subtropical Gyre (NASG) using an underway intake filtration technique and Raman micro-spectrometry. Concentrations ranged from 13 to 501 items m(-3). The highest concentrations were observed at the European coast, decreasing towards mid-Atlantic waters but elevated in the western NASG. We observed the highest numbers among particles in the 10-20 μm size fraction, whereas the total volume was highest in the 50-80 μm range. Based on a numerical model, size-dependent depth profiles of polyethylene microspheres in the range of 10-1000 μm were calculated and show a strong dispersal throughout the surface mixed layer for sizes smaller than 200 μm. From the model and field study results we conclude that small microplastic is ubiquitously distributed over the ocean surface layer and has a lower residence time than larger plastic debris in this compartment. Copyright © 2015 Elsevier Ltd. All rights reserved.
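A steady-state vertical profile of buoyant particles in the mixed layer follows from balancing the particle rise velocity against turbulent mixing, giving an exponential decay with depth. A sketch of this commonly used form (the parameter values below are illustrative, not those of the paper's model):

```python
import math

def depth_profile(z, n0, w_rise, a0):
    """Steady-state concentration N(z) = N0 * exp(-w_rise * z / A0),
    with depth z (m, positive down), particle rise velocity w_rise (m/s)
    and near-surface eddy diffusivity A0 (m^2/s). Smaller particles rise
    more slowly, so their profiles extend deeper."""
    return n0 * math.exp(-w_rise * z / a0)
```

Because the rise velocity grows with particle size, profiles of small (< 200 μm) particles decay slowly with depth, consistent with the strong dispersal through the mixed layer described above.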

  9. Calculation of broadband time histories of ground motion: Comparison of methods and validation using strong-ground motion from the 1994 Northridge earthquake

    USGS Publications Warehouse

    Hartzell, S.; Harmsen, S.; Frankel, A.; Larsen, S.

    1999-01-01

    This article compares techniques for calculating broadband time histories of ground motion in the near field of a finite fault by comparing synthetics with the strong-motion data set for the 1994 Northridge earthquake. Based on this comparison, a preferred methodology is presented. Ground-motion-simulation techniques are divided into two general methods: kinematic- and composite-fault models. Green's functions of three types are evaluated: stochastic, empirical, and theoretical. A hybrid scheme is found to give the best fit to the Northridge data. High frequencies (> 1 Hz) are calculated using a composite-fault model with a fractal subevent size distribution and stochastic, bandlimited, white-noise Green's functions. At frequencies below 1 Hz, theoretical elastic-wave-propagation synthetics introduce proper seismic-phase arrivals of body waves and surface waves. The 3D velocity structure more accurately reproduces record durations for the deep sedimentary basin structures found in the Los Angeles region. At frequencies above 1 Hz, scattering effects become important and wave propagation is more accurately represented by stochastic Green's functions. A fractal subevent size distribution for the composite fault model ensures an ω^-2 spectral shape over the entire frequency band considered (0.1-20 Hz).
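A fractal subevent size distribution of the kind used in such composite-fault models, with cumulative number N(>R) ∝ R^-D and D = 2, can be drawn by inverse-transform sampling; a sketch with illustrative radius bounds:

```python
import numpy as np

def sample_subevent_radii(n, r_min, r_max, D=2.0, seed=0):
    """Draw n subevent radii from a power law truncated to [r_min, r_max],
    with cumulative number N(>R) ~ R^-D (D = 2 is the choice that yields
    the omega^-2 spectral shape in composite-fault models)."""
    rng = np.random.default_rng(seed)
    u = rng.random(n)
    a, b = r_min**-D, r_max**-D
    # Invert the truncated CDF F(r) = (a - r^-D) / (a - b):
    return (a - u * (a - b)) ** (-1.0 / D)
```

Summing the contributions of subevents with this size statistics over the fault plane is what produces the ω^-2 source spectrum cited above.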

  10. Simulation of emission and propagation of coherent synchrotron radiation wave fronts using the methods of wave optics

    NASA Astrophysics Data System (ADS)

    Chubar, O.

    2006-09-01

    The paper describes methods for efficient calculation of spontaneous synchrotron radiation (SR) emitted by relativistic electrons in storage rings, and of the propagation of this radiation through optical elements and drift spaces of beamlines, using the principles of wave optics. In addition to the SR from one electron, incoherent and coherent synchrotron radiation (CSR) emitted by electron bunches is treated. A CPU-efficient CSR calculation method taking into account the 6D phase space distribution of electrons in a bunch is proposed. The properties of CSR emitted by electron bunches with small longitudinal and large transverse size are studied numerically (such a situation can be realized in storage rings, e.g., by transverse deflection of the electron bunches in special RF cavities). It is shown that if the transverse size of a bunch is much larger than the diffraction limit for single-electron SR at a given wavelength, it affects the angular distribution of the CSR at this wavelength and reduces the coherent flux. Nevertheless, for transverse bunch dimensions up to several millimeters and a longitudinal bunch size smaller than one hundred micrometers, the resulting CSR flux in the far infrared spectral range is still many orders of magnitude higher than the flux of incoherent SR.

  11. A fast integrated mobility spectrometer for rapid measurement of sub-micrometer aerosol size distribution, Part II: Experimental characterization

    DOE PAGES

    Wang, Jian; Pikridas, Michael; Pinterich, Tamara; ...

    2017-06-08

    A Fast Integrated Mobility Spectrometer (FIMS) with a wide dynamic size range has been developed for rapid aerosol size distribution measurements. The design and model evaluation of the FIMS are presented in the preceding paper (Paper I), and this paper focuses on the experimental characterization of the FIMS. Monodisperse aerosol with diameters ranging from 8 to 600 nm was generated using a Differential Mobility Analyzer (DMA) and was measured by the FIMS in parallel with a Condensation Particle Counter (CPC). The mean particle diameter measured by the FIMS is in good agreement with the DMA centroid diameter. Comparison of the particle concentrations measured by the FIMS and CPC indicates the FIMS detection efficiency is essentially 100% for particles with diameters of 8 nm or larger. For particles smaller than 20 nm or larger than 200 nm, the FIMS transfer function and resolution are well represented by those calculated from simulated particle trajectories in the FIMS. For particles between 20 and 200 nm, the FIMS transfer function is broader than the calculated one, likely due to non-ideality of the electric field, including edge effects near the end of the electrode, which are not represented by the 2-D electric field used to simulate particle trajectories.

  12. A fast integrated mobility spectrometer for rapid measurement of sub-micrometer aerosol size distribution, Part II: Experimental characterization

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wang, Jian; Pikridas, Michael; Pinterich, Tamara

    A Fast Integrated Mobility Spectrometer (FIMS) with a wide dynamic size range has been developed for rapid aerosol size distribution measurements. The design and model evaluation of the FIMS are presented in the preceding paper (Paper I), and this paper focuses on the experimental characterization of the FIMS. Monodisperse aerosol with diameters ranging from 8 to 600 nm was generated using a Differential Mobility Analyzer (DMA) and was measured by the FIMS in parallel with a Condensation Particle Counter (CPC). The mean particle diameter measured by the FIMS is in good agreement with the DMA centroid diameter. Comparison of the particle concentrations measured by the FIMS and CPC indicates the FIMS detection efficiency is essentially 100% for particles with diameters of 8 nm or larger. For particles smaller than 20 nm or larger than 200 nm, the FIMS transfer function and resolution are well represented by those calculated from simulated particle trajectories in the FIMS. For particles between 20 and 200 nm, the FIMS transfer function is broader than the calculated one, likely due to non-ideality of the electric field, including edge effects near the end of the electrode, which are not represented by the 2-D electric field used to simulate particle trajectories.

  13. Dose calculation accuracy of the Monte Carlo algorithm for CyberKnife compared with other commercially available dose calculation algorithms.

    PubMed

    Sharma, Subhash; Ott, Joseph; Williams, Jamone; Dickow, Danny

    2011-01-01

    Monte Carlo dose calculation algorithms have the potential for greater accuracy than traditional model-based algorithms. This enhanced accuracy is particularly evident in regions of lateral scatter disequilibrium, which can develop during treatments incorporating small field sizes and low-density tissue. A heterogeneous slab phantom was used to evaluate the accuracy of several commercially available dose calculation algorithms, including Monte Carlo dose calculation for CyberKnife, the Analytical Anisotropic Algorithm and Pencil Beam convolution for the Eclipse planning system, and convolution-superposition for the Xio planning system. The phantom accommodated slabs of varying density; comparisons between planned and measured dose distributions were accomplished with radiochromic film. The Monte Carlo algorithm provided the most accurate comparison between planned and measured dose distributions. In each phantom irradiation, the Monte Carlo predictions resulted in gamma analysis comparisons >97%, using acceptance criteria of 3% dose and 3-mm distance to agreement. In general, the gamma analysis comparisons for the other algorithms were <95%. The Monte Carlo dose calculation algorithm for CyberKnife provides more accurate dose distribution calculations in regions of lateral electron disequilibrium than commercially available model-based algorithms. This is primarily because of the ability of Monte Carlo algorithms to implicitly account for tissue heterogeneities; density scaling functions and/or effective depth correction factors are not required. Copyright © 2011 American Association of Medical Dosimetrists. Published by Elsevier Inc. All rights reserved.

  14. Connecting Aerosol Size Distributions at Three Arctic Stations

    NASA Astrophysics Data System (ADS)

    Freud, E.; Krejci, R.; Tunved, P.; Barrie, L. A.

    2015-12-01

    Aerosols play an important role in Earth's energy balance, mainly through interactions with solar radiation and cloud processes. There is a distinct annual cycle of arctic aerosols, with the greatest mass concentrations in spring and the lowest in summer, when effective wet removal allows new particle formation events to take place. Little is known about the spatial extent of these events, as no previous studies have directly compared and linked aerosol measurements from different arctic stations during the same times. Although the arctic stations are hardly affected by local pollution, it is normally assumed that their aerosol measurements are indicative of a rather large area. It is, however, not clear whether that assumption holds all the time, and how large that area may be. In this study, three datasets of aerosol size distributions, from Mt. Zeppelin in Svalbard, Station Nord in northern Greenland, and Alert in the Canadian arctic, are analyzed for the measurement period 2012-2013. The stations are 500 to 1000 km from each other, and the travel time from one station to another is typically between 2 and 5 days. The meteorological parameters along the calculated trajectories are analyzed in order to estimate their role in the modification of the aerosol size distribution while the air travels from one field station to another. In addition, the exposure of the sampled air to open water vs. frozen sea is assessed, because the different fluxes of heat, moisture, gases and particles are expected to affect the aerosol size distribution. The results show that the general characteristics of the aerosol size distributions and their annual variation are not very different at the three stations, with Alert and Station Nord being the most similar. This is more pronounced in the cases for which the trajectory calculations indicated that the air traveled from one of the latter stations to the other. 
The probable cause for the measurements at Mt. Zeppelin standing out is its greater exposure to ice-free water all year round. In addition, the air sampled at Mt. Zeppelin is sometimes decoupled from the air at sea level. This results in a greater potential contribution of long-range transport to the aerosols measured there, compared to the other, low-altitude stations.

  15. Determination of deuterium–tritium critical burn-up parameter by four temperature theory

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nazirzadeh, M.; Ghasemizad, A.; Khanbabei, B.

    Conditions for thermonuclear burn-up of an equimolar mixture of deuterium-tritium in non-equilibrium plasma have been investigated with four-temperature theory. The photon distribution shape significantly affects the nature of thermonuclear burn. In the three-temperature model, the photon distribution is Planckian, but in four-temperature theory the photon distribution has a pure Planck form below a certain cut-off energy and, for photon energies above this cut-off, makes a transition to a Bose-Einstein distribution with a finite chemical potential. The objective was to develop four-temperature theory in a plasma to calculate the critical burn-up parameter, which depends upon the initial density, the initial temperatures of the plasma components, and the hot-spot size. All results obtained from the four-temperature theory model are compared with the three-temperature model. It is shown that the values of the critical burn-up parameter calculated by four-temperature theory are smaller than those of the three-temperature model.

  16. Long-range Ising model for credit portfolios with heterogeneous credit exposures

    NASA Astrophysics Data System (ADS)

    Kato, Kensuke

    2016-11-01

    We propose the finite-size long-range Ising model as a model for heterogeneous credit portfolios held by a financial institution, from the viewpoint of econophysics. The model expresses the heterogeneity of the default probability and the default correlation by dividing a credit portfolio into multiple sectors characterized by credit rating and industry. The model also expresses the heterogeneity of the credit exposure, which is difficult to evaluate analytically, by applying the replica exchange Monte Carlo method to numerically calculate the loss distribution. To analyze the characteristics of the loss distribution for credit portfolios with heterogeneous credit exposures, we apply this model to various credit portfolios and evaluate their credit risk. As a result, we show that the tail of the loss distribution calculated by this model has characteristics different from those of the loss distributions of the standard models used in credit risk modeling. We also show that there is a possibility of different evaluations of credit risk according to the pattern of heterogeneity.
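The loss distribution of a portfolio with heterogeneous exposures can be illustrated with a plain Monte-Carlo sketch; defaults are drawn independently here, so this does not reproduce the paper's Ising couplings or its replica-exchange sampler:

```python
import random

def simulate_losses(exposures, default_probs, n_sims=10000, seed=1):
    """Monte-Carlo loss distribution: in each scenario, obligor i defaults
    with probability default_probs[i] and contributes exposures[i] to the
    portfolio loss. Returns the list of simulated portfolio losses."""
    rng = random.Random(seed)
    losses = []
    for _ in range(n_sims):
        loss = sum(e for e, p in zip(exposures, default_probs)
                   if rng.random() < p)
        losses.append(loss)
    return losses
```

Tail statistics such as value-at-risk are then read off the empirical distribution of the simulated losses; introducing sector couplings fattens that tail, which is the effect the Ising model captures.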

  17. Zipf's law in city size from a resource utilization model.

    PubMed

    Ghosh, Asim; Chatterjee, Arnab; Chakrabarti, Anindya S; Chakrabarti, Bikas K

    2014-10-01

    We study a resource utilization scenario characterized by intrinsic fitness. To describe the growth and organization of different cities, we consider a model for resource utilization where many restaurants compete, as in a game, to attract customers using an iterative learning process. Results for the case of restaurants with uniform fitness are reported. When fitness is uniformly distributed, it gives rise to a Zipf law for the number of customers. We perform an exact calculation for the utilization fraction for the case when choices are made independent of fitness. A variant of the model is also introduced where the fitness can be treated as an ability to stay in the business. When a restaurant loses customers, its fitness is replaced by a random fitness. The steady state fitness distribution is characterized by a power law, while the distribution of the number of customers still follows the Zipf law, implying the robustness of the model. Our model serves as a paradigm for the emergence of Zipf law in city size distribution.
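One round of the fitness-weighted choice dynamics can be sketched as follows; the paper's full iterative learning rule and its fitness-replacement variant are not reproduced here:

```python
import random

def one_round(n_agents, fitness, seed=0):
    """Each agent independently picks a restaurant with probability
    proportional to that restaurant's intrinsic fitness; returns the
    resulting customer counts per restaurant."""
    rng = random.Random(seed)
    total = sum(fitness)
    counts = [0] * len(fitness)
    for _ in range(n_agents):
        r = rng.uniform(0.0, total)
        acc = 0.0
        for i, f in enumerate(fitness):
            acc += f
            if r <= acc:
                counts[i] += 1
                break
    return counts
```

Iterating such rounds with a learning rule, and ranking restaurants by customer count, is the setting in which the Zipf law for the number of customers emerges.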

  18. Zipf's law in city size from a resource utilization model

    NASA Astrophysics Data System (ADS)

    Ghosh, Asim; Chatterjee, Arnab; Chakrabarti, Anindya S.; Chakrabarti, Bikas K.

    2014-10-01

    We study a resource utilization scenario characterized by intrinsic fitness. To describe the growth and organization of different cities, we consider a model for resource utilization where many restaurants compete, as in a game, to attract customers using an iterative learning process. Results for the case of restaurants with uniform fitness are reported. When fitness is uniformly distributed, it gives rise to a Zipf law for the number of customers. We perform an exact calculation for the utilization fraction for the case when choices are made independent of fitness. A variant of the model is also introduced where the fitness can be treated as an ability to stay in the business. When a restaurant loses customers, its fitness is replaced by a random fitness. The steady state fitness distribution is characterized by a power law, while the distribution of the number of customers still follows the Zipf law, implying the robustness of the model. Our model serves as a paradigm for the emergence of Zipf law in city size distribution.

  19. Cation distribution of Ni-Zn-Mn ferrite nanoparticles

    NASA Astrophysics Data System (ADS)

    Parvatheeswara Rao, B.; Dhanalakshmi, B.; Ramesh, S.; Subba Rao, P. S. V.

    2018-06-01

    Mn-substituted Ni-Zn ferrite nanoparticles, Ni0.4Zn0.6-xMnxFe2O4 (x = 0.00-0.25 in steps of 0.05), were prepared from metal nitrates by sol-gel autocombustion in a citric acid matrix. The samples were examined by X-ray diffraction and vibrating sample magnetometer techniques. Rietveld structural refinements using the XRD data were performed on the samples to consolidate various structural parameters such as phase (spinel), crystallite size (24.86-37.43 nm) and lattice constant (8.3764-8.4089 Å), and also to determine cation distributions based on profile matching and integrated intensity ratios. Saturation magnetization values (37.18-68.40 emu/g) were extracted from the measured M-H loops of these nanoparticles to estimate their magnetic moments. Experimental and calculated magnetic moments and lattice constants were used to confirm the cation distributions derived from Rietveld analysis. The results for these ferrite nanoparticles are discussed in terms of the compositional modifications, particle sizes and the corresponding cation distributions resulting from Mn substitution.
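
    The confirmation step, comparing experimental and calculated magnetic moments, rests on two standard formulas that can be sketched as follows. The ionic moments, cation distribution and molar mass in the example are assumed illustrative values, not the paper's:

```python
def experimental_magneton_number(ms_emu_per_g, molar_mass_g):
    """Experimental moment per formula unit in Bohr magnetons:
    n_B = M_w * M_s / 5585, the standard conversion from saturation
    magnetization in emu/g (5585 = N_A * mu_B / 1000)."""
    return molar_mass_g * ms_emu_per_g / 5585.0

def neel_magneton_number(moments_B_site, moments_A_site):
    """Neel two-sublattice model: n_B = M_B - M_A, with each sublattice
    moment the sum of the moments (in mu_B) of the cations occupying it."""
    return sum(moments_B_site) - sum(moments_A_site)
```

    For an assumed distribution (Zn0.6 Fe0.4)A [Ni0.4 Fe1.6]B with ionic moments Zn2+ = 0, Ni2+ = 2.3 and Fe3+ = 5 mu_B, the Neel moment is 0.4*2.3 + 1.6*5 - 0.4*5 = 6.92 mu_B; agreement with the experimental value supports the refined distribution.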

  20. Methodology of Calculating the Terminal Settling Velocity Distribution of Spherical Particles for High Values of the Reynolds Number

    NASA Astrophysics Data System (ADS)

    Surowiak, Agnieszka; Brożek, Marian

    2014-03-01

    The particle settling velocity is the separation feature in processes such as flowing classification and jigging. It characterizes material forwarded to the separation process and belongs to the so-called complex features because it is a function of particle density and size, i.e. a function of two simple features. The affiliation to a given subset is determined by the values of two properties, and the distribution of such a feature in a sample is a function of the distributions of particle density and size. Knowledge of the distribution of particle settling velocity in the jigging process is as important as knowledge of the particle size distribution in screening or the particle density distribution in dense media beneficiation. The paper presents a method of determining the distribution of settling velocity in a sample of spherical particles for turbulent particle motion, in which the settling velocity is expressed by the Newton formula. Because it depends on particle density and size, which are random variables with given distributions, the settling velocity is itself a random variable. Applying probability theorems concerning distribution functions of random variables, the authors present a general formula for the probability density function of settling velocity in turbulent motion and, in particular, calculate the probability density function for Weibull forms of the frequency functions of particle size and density. The settling velocity distribution is calculated numerically and presented in graphical form. The paper presents a simulation of the settling velocity distribution calculation based on real distributions of particle density and projective diameter, assuming spherical particles.
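
    A minimal Monte Carlo sketch of the procedure, assuming Newton-regime (turbulent) settling and independent Weibull variates for size and density; all parameter values are illustrative, not the paper's fitted distributions:

```python
import math
import random

def newton_settling_velocity(d, rho_s, rho_f=1000.0, g=9.81, cd=0.44):
    """Terminal velocity (m/s) of a sphere in the turbulent (Newton) regime:
    v = sqrt(4 g d (rho_s - rho_f) / (3 C_d rho_f)),  C_d ~ 0.44."""
    return math.sqrt(4.0 * g * d * (rho_s - rho_f) / (3.0 * cd * rho_f))

def settling_velocity_sample(n=10000, seed=3,
                             d_shape=2.0, d_scale=0.01,
                             r_shape=3.0, r_scale=800.0, r_min=2000.0):
    """Monte Carlo approximation of the settling-velocity distribution
    when particle diameter (m) and solid density (kg/m3) are independent
    Weibull variates.  Shape/scale values here are illustrative."""
    rng = random.Random(seed)
    out = []
    for _ in range(n):
        d = rng.weibullvariate(d_scale, d_shape)              # diameter
        rho_s = r_min + rng.weibullvariate(r_scale, r_shape)  # density
        out.append(newton_settling_velocity(d, rho_s))
    return out
```

    A histogram of the returned sample approximates the settling-velocity density that the paper derives analytically.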

  1. Reliability of stiffened structural panels: Two examples

    NASA Technical Reports Server (NTRS)

    Stroud, W. Jefferson; Davis, D. Dale, Jr.; Maring, Lise D.; Krishnamurthy, Thiagaraja; Elishakoff, Isaac

    1992-01-01

    The reliability of two graphite-epoxy stiffened panels that contain uncertainties is examined. For one panel, the effect of an overall bow-type initial imperfection is studied. The size of the bow is assumed to be a random variable. The failure mode is buckling. The benefits of quality control are explored by using truncated distributions. For the other panel, the effect of uncertainties in a strain-based failure criterion is studied. The allowable strains are assumed to be random variables. A geometrically nonlinear analysis is used to calculate a detailed strain distribution near an elliptical access hole in a wing panel that was tested to failure. Calculated strains are used to predict failure. Results are compared with the experimental failure load of the panel.
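
    The truncated-distribution idea (quality control rejecting panels whose bow exceeds a limit) can be sketched as follows. The knockdown function relating bow size to buckling load is hypothetical, standing in for the panel analysis of the paper:

```python
import random

def truncated_normal(rng, mu, sigma, upper):
    """Rejection-sample a normal truncated above at `upper` (quality
    control rejects panels with a larger bow)."""
    while True:
        x = rng.gauss(mu, sigma)
        if x <= upper:
            return x

def failure_probability(n=20000, mu=0.0, sigma=1.0, qc_limit=2.0,
                        applied=0.8, seed=5):
    """Monte Carlo failure probability for a panel whose buckling load is
    reduced by an overall bow e.  The knockdown P_cr = 1/(1 + 0.3|e|) is
    a hypothetical placeholder; failure occurs when P_cr < applied load
    (both normalized)."""
    rng = random.Random(seed)
    fails = 0
    for _ in range(n):
        e = truncated_normal(rng, mu, sigma, qc_limit)
        p_cr = 1.0 / (1.0 + 0.3 * abs(e))
        if p_cr < applied:
            fails += 1
    return fails / n
```

    Tightening the quality-control limit truncates the imperfection distribution harder and lowers the estimated failure probability, which is the benefit the abstract explores.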

  2. Simulation of alnico coercivity

    DOE PAGES

    Ke, Liqin; Skomski, Ralph; Hoffmann, Todd D.; ...

    2017-07-10

    Micromagnetic simulations of alnico show substantial deviations from Stoner-Wohlfarth behavior due to the unique size and spatial distribution of the rod-like Fe-Co phase formed during spinodal decomposition in an external magnetic field. Furthermore, the maximum coercivity is limited by single-rod effects, especially deviations from ellipsoidal shape, and by interactions between the rods. We consider both the exchange interaction between connected rods and the magnetostatic interaction between rods, and the results of our calculations show good agreement with recent experiments. Unlike systems dominated by magnetocrystalline anisotropy, coercivity in alnico is highly dependent on the size, shape, and geometric distribution of the Fe-Co phase, all factors that can be tuned with appropriate chemistry and thermal-magnetic annealing.
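
    The Stoner-Wohlfarth behavior that alnico deviates from is the single-particle astroid curve, which can be computed directly as a reference:

```python
import math

def stoner_wohlfarth_switching_field(theta_deg, h_k=1.0):
    """Stoner-Wohlfarth switching (astroid) field for a field applied at
    angle theta to the easy axis of a uniaxial single-domain particle:

        H_sw = H_K / (cos(theta)^(2/3) + sin(theta)^(2/3))^(3/2)

    in units of the anisotropy field H_K.  Alnico deviates from this
    because rod shape, size and inter-rod interactions matter; this
    gives the non-interacting reference curve."""
    t = math.radians(theta_deg)
    c, s = abs(math.cos(t)), abs(math.sin(t))
    return h_k / (c ** (2.0 / 3.0) + s ** (2.0 / 3.0)) ** 1.5
```

    The curve has its minimum H_K/2 at 45 degrees and returns to H_K along and perpendicular to the easy axis.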

  3. Computational methods for analyzing the transmission characteristics of a beta particle magnetic analysis system

    NASA Technical Reports Server (NTRS)

    Singh, J. J.

    1979-01-01

    Computational methods were developed to study the trajectories of beta particles (positrons) through a magnetic analysis system as a function of the spatial distribution of the radionuclides in the beta source, the size and shape of the source collimator, and the strength of the analyzer magnetic field. On the basis of these methods, the particle flux, energy spectrum, and source-to-target transit times have been calculated for Na-22 positrons as a function of the analyzer magnetic field and the size and location of the target. These data are useful in studies requiring parallel beams of positrons of uniform energy, such as measurement of the moisture distribution in composite materials. Computer programs for obtaining the various trajectories are included.
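
    A small helper for the kind of trajectory calculation involved: the radius of curvature of a relativistic positron in a uniform transverse field, r = p/(eB). The 0.545 MeV value used in the usage check is the approximate Na-22 beta endpoint energy:

```python
import math

def positron_gyroradius_m(kinetic_mev, b_tesla):
    """Radius of curvature of a positron in a uniform transverse magnetic
    field: r = p / (e B), with the relativistic momentum from
    pc = sqrt(T^2 + 2 T m_e c^2).  pc in MeV is converted to SI via
    1 MeV/c = 5.344286e-22 kg m/s."""
    me_c2 = 0.511                                   # electron rest energy, MeV
    pc_mev = math.sqrt(kinetic_mev ** 2 + 2.0 * kinetic_mev * me_c2)
    p_si = pc_mev * 5.344286e-22                    # kg m/s
    e = 1.602177e-19                                # C
    return p_si / (e * b_tesla)
```

    For example, a 0.545 MeV positron in a 0.1 T analyzer field bends on a radius of about 3 cm, which sets the scale of the analyzer geometry.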

  4. Comparative evaluation of distributed-collector solar thermal electric power plants

    NASA Technical Reports Server (NTRS)

    Fujita, T.; El Gabalawi, N.; Herrera, G. G.; Caputo, R. S.

    1978-01-01

    Distributed-collector solar thermal-electric power plants are compared by projecting power plant economics of selected systems to the 1990-2000 timeframe. The approach taken is to evaluate the performance of the selected systems under the same weather conditions. Capital and operational costs are estimated for each system. Energy costs are calculated for different plant sizes based on the plant performance and the corresponding capital and maintenance costs. Optimum systems are then determined as the systems with the minimum energy costs for a given load factor. The optimum system comprises the best combination of subsystems giving the minimum energy cost for every plant size. Sensitivity analysis is done around the optimum point for various plant parameters.
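
    Energy-cost comparisons of this kind typically annualize capital with a capital recovery factor; a minimal sketch, with illustrative (not the study's) rates and costs:

```python
def levelized_energy_cost(capital, om_per_year, annual_kwh,
                          discount_rate=0.08, years=30):
    """Levelized energy cost ($/kWh): annualize capital with the capital
    recovery factor CRF = r(1+r)^n / ((1+r)^n - 1), add annual O&M, and
    divide by yearly energy output.  Illustrative of the plant-size
    trade studies described in the abstract."""
    growth = (1 + discount_rate) ** years
    crf = discount_rate * growth / (growth - 1)
    return (capital * crf + om_per_year) / annual_kwh
```

    Evaluating this over a grid of plant sizes (each with its own capital, O&M and energy output) and picking the minimum reproduces the optimization logic the abstract describes.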

  5. Nanoparticle size detection limits by single particle ICP-MS for 40 elements.

    PubMed

    Lee, Sungyun; Bi, Xiangyu; Reed, Robert B; Ranville, James F; Herckes, Pierre; Westerhoff, Paul

    2014-09-02

    The quantification and characterization of natural, engineered, and incidental nano- to micro-size particles are beneficial to assessing a nanomaterial's performance in manufacturing, their fate and transport in the environment, and their potential risk to human health. Single particle inductively coupled plasma mass spectrometry (spICP-MS) can sensitively quantify the amount and size distribution of metallic nanoparticles suspended in aqueous matrices. To accurately obtain the nanoparticle size distribution, it is critical to have knowledge of the size detection limit (denoted as Dmin) using spICP-MS for a wide range of elements (other than the few already assessed) that have been or will be synthesized into engineered nanoparticles. Herein is described a method to estimate the size detection limit using spICP-MS, which is then applied to nanoparticles composed of 40 different elements. For the few elements whose detectable sizes are available in the literature, the calculated Dmin values correspond well. Assuming each nanoparticle sample is composed of one element, Dmin values vary substantially among the 40 elements: Ta, U, Ir, Rh, Th, Ce, and Hf showed the lowest Dmin values, ≤10 nm; Bi, W, In, Pb, Pt, Ag, Au, Tl, Pd, Y, Ru, Cd, and Sb had Dmin in the range of 11-20 nm; Dmin values of Co, Sr, Sn, Zr, Ba, Te, Mo, Ni, V, Cu, Cr, Mg, Zn, Fe, Al, Li, and Ti were located at 21-80 nm; and Se, Ca, and Si showed high Dmin values, greater than 200 nm. A range of parameters that influence the Dmin, such as instrument sensitivity, nanoparticle density, and background noise, is examined. It is observed that, when the background noise is low, the instrument sensitivity and nanoparticle density largely determine the Dmin. Approaches for reducing the Dmin, e.g., collision cell technology (CCT) and analyte isotope selection, are also discussed. 
To validate the Dmin estimation approach, size distributions for three engineered nanoparticle samples were obtained using spICP-MS. The use of this methodology confirms that the observed minimum detectable sizes are consistent with the calculated Dmin values. Overall, this work identifies the elements and nanoparticles to which current spICP-MS approaches can be applied, in order to enable quantification of very small nanoparticles at low concentrations in aqueous media.

  6. Physicochemical characterization of titanium dioxide pigments using various techniques for size determination and asymmetric flow field flow fractionation hyphenated with inductively coupled plasma mass spectrometry.

    PubMed

    Helsper, Johannes P F G; Peters, Ruud J B; van Bemmel, Margaretha E M; Rivera, Zahira E Herrera; Wagner, Stephan; von der Kammer, Frank; Tromp, Peter C; Hofmann, Thilo; Weigel, Stefan

    2016-09-01

    Seven commercial titanium dioxide pigments and two other well-defined TiO2 materials (TiMs) were physicochemically characterised using asymmetric flow field flow fractionation (aF4) for separation, various techniques to determine size distribution and inductively coupled plasma mass spectrometry (ICPMS) for chemical characterization. The aF4-ICPMS conditions were optimised and validated for linearity, limit of detection, recovery, repeatability and reproducibility, all indicating good performance. Multi-element detection with aF4-ICPMS showed that some commercial pigments contained zirconium co-eluting with titanium in aF4. The other two TiMs, NM103 and NM104, contained aluminium as an integral part of the titanium peak eluting in aF4. The materials were characterised using various size determination techniques: retention time in aF4, aF4 hyphenated with multi-angle laser light spectrometry (MALS), single particle ICPMS (spICPMS), scanning electron microscopy (SEM) and particle tracking analysis (PTA). PTA appeared inappropriate. For the other techniques, size distribution patterns were quite similar, i.e. high polydispersity with diameters from 20 to >700 nm, a modal peak between 200 and 500 nm and a shoulder at 600 nm. Number-based size distribution techniques such as spICPMS and SEM showed smaller modal diameters than aF4-UV, from which mass-based diameters are calculated. Light-scattering-based "diameters of gyration" (Øg) calculated with aF4-MALS are similar to hydrodynamic diameters (Øh) from aF4-UV analyses and diameters observed with SEM, but much larger than with spICPMS. A Øg/Øh ratio of about 1 indicates that the TiMs are oblate spheres or fractal aggregates. SEM observations confirm the latter structure. The rationale for differences in modal peak diameter is discussed.

  7. PHOTONICS AND NANOTECHNOLOGY Microscopic theory of optical properties of composite media with chaotically distributed nanoparticles

    NASA Astrophysics Data System (ADS)

    Shalin, A. S.

    2010-12-01

    The boundary problem of light reflection and transmission by a film with chaotically distributed nanoinclusions is considered. Based on the proposed microscopic approach, analytic expressions are derived for the field distributions inside and outside the nanocomposite medium. Good agreement of the results with exact calculations and (at low concentrations of nanoparticles) with the integral Maxwell-Garnett effective-medium theory is demonstrated. It is shown that at high nanoparticle concentrations, averaging the dielectric constant over volume, as is done within the framework of the effective-medium theory, yields overestimated values of the optical film density compared to the values yielded by the proposed microscopic approach. We also study the dependence of the reflectivity of a system of gold nanoparticles on their size and the size dependence of the plasmon resonance position along the wavelength scale, and demonstrate good agreement with experimental data.
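
    For reference, the Maxwell-Garnett mixing rule that the microscopic approach is compared against at low concentrations can be written directly:

```python
def maxwell_garnett(eps_inclusion, eps_host, fill_fraction):
    """Maxwell-Garnett effective permittivity of a host containing a
    volume fraction f of spherical inclusions:

        eps_eff = eps_h * (eps_i + 2 eps_h + 2 f (eps_i - eps_h))
                        / (eps_i + 2 eps_h -   f (eps_i - eps_h))

    Accepts real or complex permittivities.  The rule is accurate only
    at low f, which is where the abstract reports agreement."""
    d = eps_inclusion - eps_host
    num = eps_inclusion + 2 * eps_host + 2 * fill_fraction * d
    den = eps_inclusion + 2 * eps_host - fill_fraction * d
    return eps_host * num / den
```

    The formula interpolates correctly between the host permittivity at f = 0 and the inclusion permittivity at f = 1, but its volume-averaging character is exactly what overestimates the optical density at high concentrations.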

  8. Experimental and simulation studies on the behavior of signal harmonics in magnetic particle imaging.

    PubMed

    Murase, Kenya; Konishi, Takashi; Takeuchi, Yuki; Takata, Hiroshige; Saito, Shigeyoshi

    2013-07-01

    Our purpose in this study was to investigate the behavior of signal harmonics in magnetic particle imaging (MPI) by experimental and simulation studies. In the experimental studies, we made an apparatus for MPI in which both a drive magnetic field (DMF) and a selection magnetic field (SMF) were generated with a Maxwell coil pair. The MPI signals from magnetic nanoparticles (MNPs) were detected with a solenoid coil. The odd- and even-numbered harmonics were calculated by Fourier transformation with or without background subtraction. The particle size of the MNPs was measured by transmission electron microscopy (TEM), dynamic light-scattering, and X-ray diffraction methods. In the simulation studies, the magnetization and particle size distribution of MNPs were assumed to obey the Langevin theory of paramagnetism and a log-normal distribution, respectively. The odd- and even-numbered harmonics were calculated by Fourier transformation under various conditions of DMF and SMF and for three different particle sizes. The behavior of the harmonics largely depended on the size of the MNPs. When we used the particle size obtained from the TEM image, the simulation results were most similar to the experimental results. The similarity between the experimental and simulation results for the even-numbered harmonics was better than that for the odd-numbered harmonics. This was considered to be due to the fact that the odd-numbered harmonics were more sensitive to background subtraction than were the even-numbered harmonics. This study will be useful for a better understanding, optimization, and development of MPI and for designing MNPs appropriate for MPI.
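
    The simulation-study setup can be sketched by driving a Langevin magnetization with a sinusoid and reading harmonics off a discrete Fourier transform. Fields here are in dimensionless Langevin units and the amplitudes are illustrative; a symmetric drive yields only odd harmonics, while a selection-field offset makes even harmonics appear, which is the behavior the study exploits:

```python
import cmath
import math

def langevin(x):
    """Langevin function L(x) = coth(x) - 1/x (small-x safe)."""
    if abs(x) < 1e-6:
        return x / 3.0
    return 1.0 / math.tanh(x) - 1.0 / x

def magnetization_harmonics(drive_amplitude=8.0, offset=0.0, n=1024):
    """Magnitudes of DFT coefficients 0..5 of the Langevin response to a
    sinusoidal drive field (arguments in units of k_B*T / (mu_0*m)).
    spectrum[h] is about half the amplitude of the h-th harmonic."""
    samples = [langevin(offset + drive_amplitude * math.sin(2 * math.pi * k / n))
               for k in range(n)]
    return [abs(sum(s * cmath.exp(-2j * math.pi * h * k / n)
                    for k, s in enumerate(samples))) / n
            for h in range(6)]
```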

  9. On the validity of the Poisson assumption in sampling nanometer-sized aerosols

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Damit, Brian E; Wu, Dr. Chang-Yu; Cheng, Mengdawn

    2014-01-01

    A Poisson process is traditionally believed to apply to the sampling of aerosols. For a constant aerosol concentration, it is assumed that a Poisson process describes the fluctuation in the measured concentration because aerosols are stochastically distributed in space. Recent studies, however, have shown that sampling of micrometer-sized aerosols has non-Poissonian behavior with positive correlations. The validity of the Poisson assumption for nanometer-sized aerosols has not been examined and thus was tested in this study. Its validity was tested for four particle sizes - 10 nm, 25 nm, 50 nm and 100 nm - by sampling from indoor air with a DMA-CPC setup to obtain a time series of particle counts. Five metrics were calculated from the data: pair-correlation function (PCF), time-averaged PCF, coefficient of variation, probability of measuring a concentration at least 25% greater than average, and posterior distributions from Bayesian inference. To identify departures from Poissonian behavior, these metrics were also calculated for 1,000 computer-generated Poisson time series with the same mean as the experimental data. For nearly all comparisons, the experimental data fell within the range of 80% of the Poisson-simulation values. Essentially, the metrics for the experimental data were indistinguishable from a simulated Poisson process. The greater influence of Brownian motion for nanometer-sized aerosols may explain the Poissonian behavior observed for smaller aerosols. Although the Poisson assumption was found to be valid in this study, it must be carefully applied as the results here do not definitively prove applicability in all sampling situations.
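
    The Poisson benchmark that the metrics are compared against can be generated directly; for a Poisson process the coefficient of variation of repeated counts equals 1/sqrt(mean). A sketch of that comparison:

```python
import math
import random
import statistics

def poisson_sample(rng, lam):
    """Knuth's multiplication algorithm for a Poisson variate
    (adequate for modest lam; not vectorized)."""
    limit = math.exp(-lam)
    k, p = 0, 1.0
    while True:
        p *= rng.random()
        if p <= limit:
            return k
        k += 1

def coefficient_of_variation(counts):
    """CV of a count time series; for Poisson data this approaches
    1/sqrt(mean), the reference value departures are judged against."""
    return statistics.pstdev(counts) / statistics.mean(counts)
```

    Comparing the CV (and the other metrics) of a measured DMA-CPC time series against an ensemble of such simulated series with the same mean is the test logic the abstract describes.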

  10. Sample size calculation for studies with grouped survival data.

    PubMed

    Li, Zhiguo; Wang, Xiaofei; Wu, Yuan; Owzar, Kouros

    2018-06-10

    Grouped survival data arise often in studies where the disease status is assessed at regular visits to clinic. The time to the event of interest can only be determined to be between two adjacent visits or is right censored at one visit. In data analysis, replacing the survival time with the endpoint or midpoint of the grouping interval leads to biased estimators of the effect size in group comparisons. Prentice and Gloeckler developed a maximum likelihood estimator for the proportional hazards model with grouped survival data and the method has been widely applied. Previous work on sample size calculation for designing studies with grouped data is based on either the exponential distribution assumption or the approximation of variance under the alternative with variance under the null. Motivated by studies in HIV trials, cancer trials and in vitro experiments to study drug toxicity, we develop a sample size formula for studies with grouped survival endpoints that use the method of Prentice and Gloeckler for comparing two arms under the proportional hazards assumption. We do not impose any distributional assumptions, nor do we use any approximation of variance of the test statistic. The sample size formula only requires estimates of the hazard ratio and survival probabilities of the event time of interest and the censoring time at the endpoints of the grouping intervals for one of the two arms. The formula is shown to perform well in a simulation study and its application is illustrated in the three motivating examples. Copyright © 2018 John Wiley & Sons, Ltd.
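
    For orientation only, the classical continuous-time (Schoenfeld-style) required-event count for a proportional-hazards comparison is easy to state. Note this is not the grouped-data formula developed in the paper, which instead works from the Prentice-Gloeckler likelihood without distributional assumptions:

```python
import math
from statistics import NormalDist

def events_required(hazard_ratio, alpha=0.05, power=0.8):
    """Schoenfeld's approximation for the total number of events needed
    to detect a given hazard ratio with a two-sided level-alpha test and
    1:1 allocation:  d = 4 (z_{1-a/2} + z_{1-b})^2 / (ln HR)^2.
    Shown as a continuous-time reference point only; grouped data need
    the Prentice-Gloeckler-based formula of the paper."""
    z_a = NormalDist().inv_cdf(1 - alpha / 2)
    z_b = NormalDist().inv_cdf(power)
    return math.ceil(4 * (z_a + z_b) ** 2 / math.log(hazard_ratio) ** 2)
```

    The grouped-data formula replaces the (ln HR)^2 denominator with a variance computed from the interval-level survival and censoring probabilities, which is why it needs those estimates as inputs.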

  11. A Physical Model to Estimate Snowfall over Land using AMSU-B Observations

    NASA Technical Reports Server (NTRS)

    Kim, Min-Jeong; Weinman, J. A.; Olson, W. S.; Chang, D.-E.; Skofronick-Jackson, G.; Wang, J. R.

    2008-01-01

    In this study, we present an improved physical model to retrieve snowfall rate over land using brightness temperature observations from the National Oceanic and Atmospheric Administration's (NOAA) Advanced Microwave Sounder Unit-B (AMSU-B) at 89 GHz, 150 GHz, 183.3 +/- 1 GHz, 183.3 +/- 3 GHz, and 183.3 +/- 7 GHz. The retrieval model is applied to the New England blizzard of March 5, 2001 which deposited about 75 cm of snow over much of Vermont, New Hampshire, and northern New York. In this improved physical model, prior retrieval assumptions about snowflake shape, particle size distributions, environmental conditions, and optimization methodology have been updated. Here, single scattering parameters for snow particles are calculated with the Discrete-Dipole Approximation (DDA) method instead of assuming spherical shapes. Five different snow particle models (hexagonal columns, hexagonal plates, and three different kinds of aggregates) are considered. Snow particle size distributions are assumed to vary with air temperature and to follow aircraft measurements described by previous studies. Brightness temperatures at AMSU-B frequencies for the New England blizzard are calculated using these DDA calculated single scattering parameters and particle size distributions. The vertical profiles of pressure, temperature, relative humidity and hydrometeors are provided by MM5 model simulations. These profiles are treated as the a priori data base in the Bayesian retrieval algorithm. In algorithm applications to the blizzard data, calculated brightness temperatures associated with selected database profiles agree with AMSU-B observations to within about +/- 5 K at all five frequencies. Retrieved snowfall rates compare favorably with the near-concurrent National Weather Service (NWS) radar reflectivity measurements. 
The relationships between the NWS radar measured reflectivities Z(sub e) and retrieved snowfall rate R for a given snow particle model are derived by a histogram matching technique. All of these Z(sub e)-R relationships fall in the range of previously established Z(sub e)-R relationships for snowfall. This suggests that the current physical model developed in this study can reliably estimate the snowfall rate over land using the AMSU-B measured brightness temperatures.
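
    Applying a derived Ze-R relationship means inverting a power law Ze = a R^b; a sketch with placeholder coefficients chosen to lie in the range of published snow relationships, not the paper's fitted values:

```python
def snowfall_rate_from_reflectivity(ze_dbz, a=250.0, b=1.2):
    """Invert a Ze = a * R^b power law for snowfall rate R (mm/h liquid
    equivalent) from radar reflectivity in dBZ.  The coefficients a and b
    vary with the assumed snow particle model; the defaults here are
    illustrative placeholders."""
    ze_linear = 10.0 ** (ze_dbz / 10.0)          # dBZ -> mm^6 m^-3
    return (ze_linear / a) ** (1.0 / b)
```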

  12. Measurement of the bed material of gravel-bed rivers

    USGS Publications Warehouse

    Milhous, R.T.; ,

    2002-01-01

    The measurement of the physical properties of a gravel-bed river is important in the calculation of sediment transport and physical habitat values for aquatic animals. These properties are not always easy to measure. One recent report on flushing of fines from the Klamath River did not contain information on one location because the grain size distribution of the armour could not be measured on a dry river bar. The grain size distribution could have been measured using a barrel sampler and converting the measurements to those that would have been obtained if a dry bar existed at the site. In another recent paper the porosity was calculated from an average-value relation from the literature. The results of that paper may be sensitive to the actual value of porosity. Using the bulk density sampling technique based on a water displacement process presented in this paper, the porosity could have been calculated from the measured bulk density. The principal topics of this paper are the measurement of the size distribution of the armour and the measurement of the porosity of the substrate. The 'standard' method of sampling the armour is to do a Wolman-type count of the armour on a dry section of the river bed. When a dry bar does not exist, the armour in an area of the wet streambed is sampled and the measurements are transformed analytically into the same type of results that would have been obtained from the standard Wolman procedure. A comparison of the results for the San Miguel River in Colorado shows significant differences in the median size of the armour. The method used to determine the porosity is not 'high-tech', and there is a need to improve knowledge of the porosity because of its importance in the aquatic ecosystem. The technique is to measure the in-situ volume of a substrate sample by measuring the volume of a frame over the substrate and then repeating the volume measurement after the sample is obtained from within the frame. The difference in the volumes is the volume of the sample.
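
    The porosity calculation from the measured bulk density is a single relation; the quartz particle density used as the default below is an assumed typical value, not from the paper:

```python
def porosity_from_bulk_density(bulk_density, particle_density=2650.0):
    """Porosity of a substrate sample: phi = 1 - rho_bulk / rho_particle.
    rho_bulk (kg/m3) comes from sample dry mass over in-situ volume (the
    frame / water-displacement measurement described in the paper);
    2650 kg/m3 is a typical quartz grain density, assumed here."""
    return 1.0 - bulk_density / particle_density
```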

  13. Rhizosphere size

    NASA Astrophysics Data System (ADS)

    Kuzyakov, Yakov; Razavi, Bahar

    2017-04-01

    Estimation of the soil volume affected by roots - the rhizosphere - is crucial to assess the effects of plants on properties and processes in soils and the dynamics of nutrients, water, microorganisms and soil organic matter. The challenges in assessing rhizosphere size are: 1) the continuum of properties between the root surface and root-free soil, 2) differences in the distributions of various properties (carbon, microorganisms and their activities, various nutrients, enzymes, etc.) along and across the roots, 3) temporal changes of properties and processes. Thus, a holistic approach is necessary to describe the rhizosphere size and root effects. We collected literature and our own data on the rhizosphere gradients of a broad range of physico-chemical and biological properties: pH, CO2, oxygen, redox potential, water uptake, various nutrients (C, N, P, K, Ca, Mg, Mn and Fe), organic compounds (glucose, carboxylic acids, amino acids), and activities of enzymes of the C, N, P and S cycles. The collected data were obtained from destructive approaches (thin-layer slicing), rhizotron studies and in situ visualization techniques: optodes, zymography, sensitive gels, 14C and neutron imaging. The root effects extended from less than 0.5 mm (nutrients with slow diffusion) up to more than 50 mm (for gases); the most common extents were between 1 and 10 mm. Sharp gradients (e.g. for P, carboxylic acids, enzyme activities) allowed clear rhizosphere boundaries, and thus the soil volume affected by roots, to be calculated. First analyses were done to assess the effects of soil texture and moisture, as well as root system and age, on these gradients. Most properties can be described by two curve types: exponential saturation and S-curve, each with increasing or decreasing concentration profiles from the root surface. The gradient-based distribution functions were calculated and used to extrapolate to the whole soil depending on root density and rooting intensity. We conclude that, despite the specific effects of plants and soil on rhizosphere size, common distribution functions can be calculated for individual roots and extrapolated to the whole soil profile.
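
    One way to turn the exponential-saturation profile into a rhizosphere "boundary" is a fractional cutoff on the deviation from the bulk value; the 5% cutoff below is an illustrative choice, not a value from the abstract:

```python
import math

def gradient_profile(x_mm, c_root, c_bulk, decay_mm):
    """Exponential-saturation rhizosphere profile
    C(x) = c_bulk + (c_root - c_bulk) * exp(-x / decay_mm),
    where x is the distance from the root surface."""
    return c_bulk + (c_root - c_bulk) * math.exp(-x_mm / decay_mm)

def rhizosphere_extent_mm(decay_mm, threshold=0.05):
    """Distance at which the profile's deviation from the bulk value has
    fallen to `threshold` of its value at the root surface:
    x = decay_mm * ln(1 / threshold).  The 5% default is illustrative."""
    return decay_mm * math.log(1.0 / threshold)
```

    A decay length of 2 mm, for instance, gives an extent of about 6 mm, squarely inside the 1-10 mm range that the abstract reports as most common.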

  14. What is a species? A new universal method to measure differentiation and assess the taxonomic rank of allopatric populations, using continuous variables

    PubMed Central

    Donegan, Thomas M.

    2018-01-01

    Abstract Existing models for assigning species, subspecies, or no taxonomic rank to populations which are geographically separated from one another were analyzed. This was done by subjecting over 3,000 pairwise comparisons of vocal or biometric data based on birds to a variety of statistical tests that have been proposed as measures of differentiation. One current model which aims to test diagnosability (Isler et al. 1998) is highly conservative, applying a hard cut-off, which excludes from consideration differentiation below diagnosis. It also includes non-overlap as a requirement, a measure which penalizes increases to sample size. The “species scoring” model of Tobias et al. (2010) involves less drastic cut-offs, but unlike Isler et al. (1998), does not control adequately for sample size and attributes scores in many cases to differentiation which is not statistically significant. Four different models of assessing effect sizes were analyzed: using both pooled and unpooled standard deviations and controlling for sample size using t-distributions or omitting to do so. Pooled standard deviations produced more conservative effect sizes when uncontrolled for sample size but less conservative effect sizes when so controlled. Pooled models require assumptions to be made that are typically elusive or unsupported for taxonomic studies. Modifications to improve these frameworks are proposed, including: (i) introducing statistical significance as a gateway to attributing any weighting to findings of differentiation; (ii) abandoning non-overlap as a test; (iii) recalibrating Tobias et al. (2010) scores based on effect sizes controlled for sample size using t-distributions. A new universal method is proposed for measuring differentiation in taxonomy using continuous variables and a formula is proposed for ranking allopatric populations. 
This is based first on calculating effect sizes using unpooled standard deviations, controlled for sample size using t-distributions, for a series of different variables. All non-significant results are excluded by scoring them as zero. Distance between any two populations is calculated using Euclidean summation of non-zeroed effect size scores. If the score of an allopatric pair exceeds that of a related sympatric pair, then the allopatric population can be ranked as a species; if not, then at most subspecies rank should be assigned. A spreadsheet has been programmed and is being made available which allows this and other tests of differentiation and rank studied in this paper to be rapidly analyzed. PMID:29780266
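
    The proposed scoring pipeline can be sketched as follows. For brevity the significance gateway here uses a normal approximation rather than the t-distributions of the paper (an assumption), and the unpooled standardization averages the two standard deviations:

```python
import math
from statistics import NormalDist

def gated_effect_size(mean1, sd1, n1, mean2, sd2, n2, alpha=0.05):
    """Unpooled (Welch-style) effect size, zeroed when the difference is
    not significant -- the 'significance gateway' idea of the paper.
    The gate uses a normal approximation to the two-sample test; the
    paper itself uses t-distributions."""
    se = math.sqrt(sd1 ** 2 / n1 + sd2 ** 2 / n2)
    z = abs(mean1 - mean2) / se
    if 2 * (1 - NormalDist().cdf(z)) >= alpha:
        return 0.0                                  # not significant
    # unpooled standardized difference (average of the two SDs)
    return abs(mean1 - mean2) / ((sd1 + sd2) / 2.0)

def euclidean_distance(scores):
    """Combine per-variable effect-size scores by Euclidean summation."""
    return math.sqrt(sum(s * s for s in scores))
```

    The distance returned for an allopatric pair would then be compared against the same statistic for a related sympatric pair to decide the rank.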

  15. Size resolved ultrafine particles emission model--a continues size distribution approach.

    PubMed

    Nikolova, Irina; Janssen, Stijn; Vrancken, Karl; Vos, Peter; Mishra, Vinit; Berghmans, Patrick

    2011-08-15

    A new parameterization for size-resolved ultrafine particle (UFP) traffic emissions is proposed based on the results of the PARTICULATES project (Samaras et al., 2005). It includes the emission factors from the Emission Inventory Guidebook (2006) (total number of particles, #/km/veh), the shape of the corresponding particle size distribution given in PARTICULATES, and data for the traffic activity. The output of the model UFPEM (UltraFine Particle Emission Model) is a sum of continuous distributions of ultrafine particle emissions per vehicle type (passenger cars and heavy duty vehicles), fuel (petrol and diesel) and average speed representative of urban, rural and highway driving. The results from the parameterization are compared with the measured total number of ultrafine particles and size distributions in a tunnel in Antwerp (Belgium). The measured UFP concentration over the entire campaign shows a close relation to the traffic activity. The modelled concentration is found to be lower than the measured concentration. The average emission factor from the measurement is 4.29E+14 #/km/veh, whereas the calculated value is around 30% lower. Emission factors are also compared with the literature, and overall good agreement is found. For the size distributions it is found that the measured distributions consist of three modes - nucleation, Aitken and accumulation - and most of the ultrafine particles belong to the nucleation and Aitken modes. The modelled Aitken mode (peak around 0.04-0.05 μm) is found to be in good agreement both in peak amplitude and number of particles, whereas the modelled nucleation mode is shifted to smaller diameters and its peak is much lower than observed. Time scale analysis shows that at 300 m in the tunnel coagulation and deposition are slow and therefore neglected. The UFPEM emission model can be used as a source term in dispersion models. Copyright © 2011 Elsevier B.V. All rights reserved.
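
    A continuous size distribution of this kind is conventionally written as a sum of lognormal modes (e.g. nucleation + Aitken); a sketch with illustrative mode parameters, not the UFPEM values:

```python
import math

def lognormal_mode(d_um, total_number, gmd_um, gsd):
    """Number size distribution dN/dlnD of a single lognormal mode with
    geometric mean diameter gmd_um and geometric standard deviation gsd."""
    return (total_number / (math.log(gsd) * math.sqrt(2 * math.pi))
            * math.exp(-(math.log(d_um / gmd_um) ** 2)
                       / (2 * math.log(gsd) ** 2)))

def ufp_emission_spectrum(d_um, modes):
    """Sum of lognormal modes, the usual parameterization of a continuous
    traffic size distribution.  `modes` is a list of
    (total_number, gmd_um, gsd) tuples; values below are illustrative."""
    return sum(lognormal_mode(d_um, n, gmd, gsd) for n, gmd, gsd in modes)
```

    Fitting the measured tunnel spectrum would amount to adjusting the per-mode totals, geometric mean diameters and spreads; the abstract's nucleation-mode shift corresponds to a too-small fitted geometric mean diameter.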

  16. Relative efficiency and sample size for cluster randomized trials with variable cluster sizes.

    PubMed

    You, Zhiying; Williams, O Dale; Aban, Inmaculada; Kabagambe, Edmond Kato; Tiwari, Hemant K; Cutter, Gary

    2011-02-01

    The statistical power of cluster randomized trials depends on two sample size components: the number of clusters per group and the number of individuals within clusters (cluster size). Variable cluster sizes are common, and this variation alone may have a significant impact on study power. Previous approaches have taken this into account either by adjusting the total sample size using a designated design effect or by adjusting the number of clusters according to an assessment of the relative efficiency of unequal versus equal cluster sizes. This article defines a relative efficiency of unequal versus equal cluster sizes using noncentrality parameters, investigates properties of this measure, and proposes an approach for adjusting the required sample size accordingly. We focus on comparing two groups with normally distributed outcomes using a t-test, use the noncentrality parameter to define the relative efficiency of unequal versus equal cluster sizes, and show that statistical power depends only on this parameter for a given number of clusters. We calculate the sample size required for a trial with unequal cluster sizes to have the same power as one with equal cluster sizes. Relative efficiency based on the noncentrality parameter is straightforward to calculate and easy to interpret, and it connects the required mean cluster size directly to the required sample size with equal cluster sizes. Consequently, our approach first determines the sample size requirements with equal cluster sizes for a pre-specified study power and then calculates the required mean cluster size while keeping the number of clusters unchanged. Our approach allows adjustment in mean cluster size alone or simultaneous adjustment in mean cluster size and number of clusters, and is a flexible alternative to and a useful complement of existing methods. Comparison indicated that the relative efficiency defined here is greater than the measure in the literature under some conditions and may be smaller under others, in which case it underestimates the relative efficiency.
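    The first step of the two-step approach, sizing the trial under equal cluster sizes for a pre-specified power, can be sketched with the usual normal-approximation formula and the design effect 1 + (m - 1)·ICC. This is a generic textbook sketch, not the authors' noncentrality-based derivation, and `clusters_needed` is a hypothetical helper name.

```python
from math import ceil
from statistics import NormalDist

def clusters_needed(delta, sigma, m, icc, alpha=0.05, power=0.8):
    """Clusters per arm for a two-sample comparison of means with
    equal cluster sizes, using the normal approximation.

    delta : difference in means to detect
    sigma : outcome standard deviation
    m     : individuals per cluster (equal cluster sizes)
    icc   : intracluster correlation coefficient
    """
    z = NormalDist()
    z_a = z.inv_cdf(1.0 - alpha / 2.0)
    z_b = z.inv_cdf(power)
    deff = 1.0 + (m - 1) * icc              # design effect, equal clusters
    n_per_arm = 2.0 * (z_a + z_b) ** 2 * (sigma / delta) ** 2 * deff
    return ceil(n_per_arm / m)              # round up to whole clusters
```

    The paper's second step would then inflate the mean cluster size (keeping this number of clusters fixed) by the relative efficiency of the actual unequal-size design.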

  17. Short-Term Memory in Orthogonal Neural Networks

    NASA Astrophysics Data System (ADS)

    White, Olivia L.; Lee, Daniel D.; Sompolinsky, Haim

    2004-04-01

    We study the ability of linear recurrent networks obeying discrete time dynamics to store long temporal sequences that are retrievable from the instantaneous state of the network. We calculate this temporal memory capacity for both distributed shift register and random orthogonal connectivity matrices. We show that the memory capacity of these networks scales with system size.
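    A shift-register network illustrates why the memory capacity of such linear networks scales with system size: an N-unit register holds exactly the last N inputs in its instantaneous state. A minimal sketch (a plain shift register, simpler than the distributed shift register or random orthogonal matrices analyzed in the paper):

```python
import numpy as np

rng = np.random.default_rng(0)
N = 20                          # network size
# Shift-register connectivity: each step moves activity down one unit
W = np.zeros((N, N))
W[1:, :-1] = np.eye(N - 1)
v = np.zeros(N)
v[0] = 1.0                      # input feeds the first unit

x = np.zeros(N)
inputs = rng.standard_normal(50)
for s in inputs:
    x = W @ x + v * s           # discrete-time linear dynamics

# The instantaneous state now holds the last N inputs, newest first:
# x[k] == inputs[-1 - k], so the capacity equals the system size N.
```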

  18. Statistical aspects of genetic association testing in small samples, based on selective DNA pooling data in the arctic fox.

    PubMed

    Szyda, Joanna; Liu, Zengting; Zatoń-Dobrowolska, Magdalena; Wierzbicki, Heliodor; Rzasa, Anna

    2008-01-01

    We analysed data from a selective DNA pooling experiment with 130 arctic foxes (Alopex lagopus) originating from 2 types that differ in body size. The association between alleles of 6 selected unlinked molecular markers and body size was tested using univariate and multinomial logistic regression models, applying odds ratios and test statistics from the power divergence family. Owing to the small sample size and the resulting sparseness of the data table, hypothesis testing could not rely on the asymptotic distributions of the tests. Instead, we tried to account for data sparseness by (i) modifying the confidence intervals of the odds ratio; (ii) using a normal approximation of the asymptotic distribution of the power divergence tests, with different approaches for calculating the moments of the statistics; and (iii) assessing P values empirically, based on bootstrap samples. As a result, a significant association was observed for 3 markers. Furthermore, we used simulations to assess the validity of the normal approximation of the asymptotic distribution of the test statistics under conditions of small and sparse samples.
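    Step (iii), empirical P values from resampling, can be sketched for a sparse 2 x K count table: a power-divergence statistic (Pearson's chi-square at lambda = 1) is recomputed on tables simulated under the fitted null instead of relying on the asymptotic chi-square distribution. This is a generic sketch, not the authors' exact procedure; both function names are hypothetical.

```python
import numpy as np

def power_divergence(observed, expected, lam=1.0):
    """Power-divergence family statistic; lam=1 gives Pearson chi-square."""
    o = np.asarray(observed, float)
    e = np.asarray(expected, float)
    return 2.0 / (lam * (lam + 1.0)) * np.sum(o * ((o / e) ** lam - 1.0))

def bootstrap_pvalue(table, n_boot=2000, seed=1):
    """Empirical P value for independence in a 2 x K count table,
    resampling under the fitted null (multinomial with the product
    of margins), which sidesteps the unreliable asymptotics for
    small, sparse tables."""
    rng = np.random.default_rng(seed)
    table = np.asarray(table, float)
    n = table.sum()
    expected = np.outer(table.sum(1), table.sum(0)) / n
    t_obs = power_divergence(table, expected)
    p_null = (expected / n).ravel()
    count = 0
    for _ in range(n_boot):
        sim = rng.multinomial(int(n), p_null).reshape(table.shape)
        e_sim = np.outer(sim.sum(1), sim.sum(0)) / n
        e_sim = np.where(e_sim > 0, e_sim, 1e-12)  # guard sparse resamples
        count += power_divergence(sim, e_sim) >= t_obs
    return (count + 1) / (n_boot + 1)
```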

  19. Calculations of B1 Distribution, Specific Energy Absorption Rate, and Intrinsic Signal-to-Noise Ratio for a Body-Size Birdcage Coil Loaded with Different Human Subjects at 64 and 128 MHz.

    PubMed

    Liu, W; Collins, C M; Smith, M B

    2005-03-01

    A numerical model of a female body is developed to study the effects of different body types and coil drive methods on the radio-frequency magnetic (B1) field distribution, specific energy absorption rate (SAR), and intrinsic signal-to-noise ratio (ISNR) for a body-size birdcage coil at 64 and 128 MHz. The coil is loaded with either a larger, more muscular male body model (subject 1) or a newly developed female body model (subject 2), and driven with two-port (quadrature), four-port, or many (ideal) sources. Loading the coil with subject 1 results in a significantly less homogeneous B1 field, higher SAR, and lower ISNR than with subject 2 at both frequencies. This dependence of MR performance and safety measures on body type indicates a need for a variety of numerical models representative of a diverse population for future calculations. The different drive methods result in similar B1 field patterns, SAR, and ISNR in all cases.

  20. Stratospheric aerosols and climatic change

    NASA Technical Reports Server (NTRS)

    Toon, O. B.; Pollack, J. B.

    1978-01-01

    Stratospheric sulfuric acid particles scatter and absorb sunlight, and they scatter, absorb and emit terrestrial thermal radiation. These interactions play a role in the earth's radiation balance and therefore affect climate. The stratospheric aerosols are perturbed by volcanic injection of SO2 and ash, by aircraft injection of SO2, by rocket exhaust of Al2O3, and by tropospheric mixing of particles and pollutant SO2 and COS. In order to assess the effects of these perturbations on climate, the effects of the aerosols on the radiation balance must be understood; in order to understand the radiative effects, the properties of the aerosols must be known. The discussion covers the aerosols' effect on the radiation balance. It is shown that the aerosol size distribution controls whether the aerosols tend to warm or cool the earth's surface. Calculations of aerosol properties, including size distribution, for various perturbation sources are carried out on the basis of an aerosol model. Calculations are also presented of the climatic impact of perturbed aerosols due to volcanic eruptions and Space Shuttle flights.

  1. Size distribution of particle-phase molecular markers during a severe winter pollution episode.

    PubMed

    Kleeman, Michael J; Riddle, Sarah G; Jakober, Chris A

    2008-09-01

    Airborne particulate matter was collected using filter samplers and cascade impactors in six size fractions below 1.8 microm during a severe winter air pollution event at three sites in the Central Valley of California. The smallest size fraction analyzed was 0.056 < Dp < 0.1 microm particle diameter, which accounts for the majority of the mass in the ultrafine (PM0.1) size range. Separate samples were collected during the daytime (10 a.m. to 6 p.m. PST) and nighttime (8 p.m. to 8 a.m. PST) to characterize diurnal patterns. Each sample was extracted with organic solvents and analyzed using gas chromatography mass spectrometry for molecular markers that can be used for size-resolved source apportionment calculations. Colocated impactor and filter measurements were highly correlated (R(2) > 0.8) for retene, benzo[ghi]fluoranthene, chrysene, benzo[b]fluoranthene, benzo[k]fluoranthene, benzo[e]pyrene, benzo[a]pyrene, perylene, indeno[1,2,3-cd]pyrene, benzo[ghi]perylene, coronene, MW 302 polycyclic aromatic hydrocarbons (PAHs), 17beta(H)-21alpha(H)-30-norhopane, 17alpha(H)-21beta(H)-hopane, alphabetabeta-20R-C29-ethylcholestane, levoglucosan, and cholesterol. Of these compounds, levoglucosan was present in the highest concentration (60-2080 ng m(-3)) followed by cholesterol (6-35 ng m(-3)), PAHs (2-38 ng m(-3)), and hopanes and steranes (0-2 ng m(-3)). Nighttime concentrations were higher than daytime concentrations in all cases. Organic compound size distributions were generally similar to the total carbon size distributions during the nighttime but showed greater variability during the daytime. This may reflect the dominance of fresh emissions in the stagnant surface layer during the evening hours and the presence of aged organic aerosol at the surface during the daytime when the atmosphere is better mixed.
All of the measured organic compound particle size distributions had a single mode that peaked somewhere between 0.18 and 0.56 microm, but the width of each distribution varied by compound. Cholesterol generally had the broadest particle size distribution, while benzo[ghi]perylene and 17alpha(H)-21beta(H)-29-norhopane generally had sharper peaks. The difference between the size distributions of the various particle-phase organic compounds reflects the fact that these compounds exist in particles emitted from different sources. The results of the current study will prove useful for size-resolved source apportionment exercises.

  2. Features of Electron Density Distribution in Delafossite CuAlO2

    NASA Astrophysics Data System (ADS)

    Pogoreltsev, A. I.; Schmidt, S. V.; Gavrilenko, A. N.; Shulgin, D. A.; Korzun, B. V.; Matukhin, V. L.

    2015-07-01

    We have used pulsed 63,65Cu nuclear quadrupole resonance at room temperature to study the semiconductor compound CuAlO2 with a delafossite crystal structure, and we have determined the quadrupole frequency νQ = 28.12 MHz and the asymmetry parameter η ~ 0, which we used to study the features of the electron density distribution in the vicinity of the quadrupolar nucleus. In order to take into account the influence of correlation effects on the electric field gradient, we carried out ab initio calculations within the density functional theory (DFT) approximation using a set of correlation functionals VWN1RPA, VWN5, PW91LDA, CPW91, and B3LYP1. We mapped the electron density distribution in the vicinity of the quadrupolar copper nucleus for the Cu7Al6O14(-1) cluster and we calculated the size of the HOMO-LUMO gap, Δ ~ 3.33 eV. We established the anisotropy of the spatial electron density distribution. Based on analysis of the electron density distribution obtained, we suggest that the bond in CuAlO2 is not purely covalent.

  3. Beyond Gaussians: a study of single spot modeling for scanning proton dose calculation

    PubMed Central

    Li, Yupeng; Zhu, Ronald X.; Sahoo, Narayan; Anand, Aman; Zhang, Xiaodong

    2013-01-01

    Active spot scanning proton therapy is becoming increasingly adopted by proton therapy centers worldwide. Unlike passive-scattering proton therapy, active spot scanning proton therapy, especially intensity-modulated proton therapy, requires proper modeling of each scanning spot to ensure accurate computation of the total dose distribution contributed from a large number of spots. During commissioning of the spot scanning gantry at the Proton Therapy Center in Houston, it was observed that the long-range scattering protons in a medium may have been inadequately modeled for high-energy beams by a commercial treatment planning system, which could lead to incorrect prediction of field-size effects on dose output. In the present study, we developed a pencil-beam algorithm for scanning-proton dose calculation by focusing on properly modeling individual scanning spots. All modeling parameters required by the pencil-beam algorithm can be generated based solely on a few sets of measured data. We demonstrated that low-dose halos in single-spot profiles in the medium could be adequately modeled with the addition of a modified Cauchy-Lorentz distribution function to a double-Gaussian function. The field-size effects were accurately computed at all depths and field sizes for all energies, and good dose accuracy was also achieved for patient dose verification. The implementation of the proposed pencil beam algorithm also enabled us to study the importance of different modeling components and parameters at various beam energies. The results of this study may be helpful in improving dose calculation accuracy and simplifying beam commissioning and treatment planning processes for spot scanning proton therapy. PMID:22297324
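    The single-spot model described, a double-Gaussian core plus a Cauchy-Lorentz term for the low-dose halo, can be sketched as a radial profile whose slowly decaying tail governs how output grows with field size. The parameter values and units below are illustrative, not commissioned beam data, and the normalization choices are assumptions.

```python
import numpy as np

def spot_profile(r, w1, sigma1, w2, sigma2, w3, gamma):
    """Radial fluence/dose profile of one scanning spot: a weighted sum
    of two 2D Gaussians plus a 2D Cauchy-Lorentz halo term
    (weights w1 + w2 + w3 = 1; each term normalized over the plane)."""
    g1 = np.exp(-r ** 2 / (2.0 * sigma1 ** 2)) / (2.0 * np.pi * sigma1 ** 2)
    g2 = np.exp(-r ** 2 / (2.0 * sigma2 ** 2)) / (2.0 * np.pi * sigma2 ** 2)
    cl = gamma / (2.0 * np.pi * (r ** 2 + gamma ** 2) ** 1.5)
    return w1 * g1 + w2 * g2 + w3 * cl

def integral_dose_within(radius, params, n=4000):
    """Fraction of the spot's planar integral contained within `radius`:
    the halo tail outside a small field is what drives the field-size
    effect on dose output."""
    r = np.linspace(0.0, radius, n)
    f = spot_profile(r, *params) * 2.0 * np.pi * r
    return float(np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(r)))
```

    Summing `integral_dose_within` contributions over all spots in a field reproduces, in miniature, why a model that truncates the halo mispredicts output for small fields.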

  4. Insight on agglomerates of gold nanoparticles in glass based on surface plasmon resonance spectrum: study by multi-spheres T-matrix method

    NASA Astrophysics Data System (ADS)

    Avakyan, L. A.; Heinz, M.; Skidanenko, A. V.; Yablunovski, K. A.; Ihlemann, J.; Meinertz, J.; Patzig, C.; Dubiel, M.; Bugaev, L. A.

    2018-01-01

    The formation of a localized surface plasmon resonance (SPR) spectrum of randomly distributed gold nanoparticles in the surface layer of silicate float glass, generated and implanted by UV ArF-excimer laser irradiation of a thin gold layer sputter-coated on the glass surface, was studied by the T-matrix method, which enables particle agglomeration to be taken into account. The experimental technique used is promising for the production of submicron patterns of plasmonic nanoparticles (given by laser masks or gratings) without damage to the glass surface. Analysis of the applicability of the multi-spheres T-matrix (MSTM) method to the studied material was performed through calculations of SPR characteristics for differently arranged and structured gold nanoparticles (gold nanoparticles in solution, particle pairs, and core-shell silver-gold nanoparticles) for which either experimental data or results of modeling by other methods are available. For the studied gold nanoparticles in glass, it was revealed that the theoretical description of their SPR spectrum requires consideration of the plasmon coupling between particles, which can be done effectively by MSTM calculations. The obtained statistical distributions over particle sizes and over interparticle distances demonstrated saturation behavior with respect to the number of particles under consideration, which enabled us to determine the effective aggregate of particles sufficient to form the SPR spectrum. The suggested technique for fitting an experimental SPR spectrum of gold nanoparticles in glass, by varying the geometrical parameters of the particle aggregate in recurring calculations of the spectrum by the MSTM method, enabled us to determine statistical characteristics of the aggregate: the average distance between particles, the average size, and the size distribution of the particles.
The fitting strategy of the SPR spectrum presented here can be applied to nanoparticles of any nature and in various substances, and, in principle, can be extended for particles with non-spherical shapes, like ellipsoids, rod-like and other T-matrix-solvable shapes.

  5. Assessing hail risk for a building portfolio by generating stochastic events

    NASA Astrophysics Data System (ADS)

    Nicolet, Pierrick; Choffet, Marc; Demierre, Jonathan; Imhof, Markus; Jaboyedoff, Michel; Nguyen, Liliane; Voumard, Jérémie

    2015-04-01

    Among the natural hazards affecting buildings, hail is one of the most costly and is nowadays a major concern for building insurance companies. In Switzerland, several costly events have been reported in recent years, among them the July 2011 event, which cost the Aargauer public insurance company (north-western Switzerland) around 125 million EUR. This study presents new developments in a stochastic model that aims at evaluating the risk for a building portfolio. Thanks to insurance and meteorological radar data for the 2011 event, vulnerability curves are proposed by comparing the damage rate to the radar intensity (i.e. the maximum hailstone size reached during the event, deduced from the radar signal). From these data, vulnerability is defined by a two-step process. The first step defines the probability for a building to be affected (i.e. to claim damages), while the second, if the building is affected, attributes a damage rate to the building from a probability distribution specific to the intensity class. To assess the risk, stochastic events are then generated by summing a set of Gaussian functions with 6 random parameters (X and Y location, maximum hailstone size, standard deviation, eccentricity and orientation). The location of these functions is constrained by a general event shape and by the position of the previously defined functions of the same event. For each generated event, the total cost is calculated in order to obtain a distribution of event costs. The general event parameters (shape, size, …) as well as the distribution of the Gaussian parameters are inferred from two radar intensity maps, namely that of the aforementioned event and that of an event which occurred in 2009. After a large number of simulations, the hailstone size distribution obtained in different regions is compared to the distribution inferred from pre-existing hazard maps, built from a larger set of radar data.
The simulation parameters are then adjusted by trial and error in order to best reproduce the expected distributions. The mean annual risk obtained using the model is also compared to the mean annual risk calculated directly from the hazard maps. According to the first results, the return period of an event inducing a total damage cost equal to or greater than 125 million EUR for the Aargauer insurance company would be around 10 to 40 years.
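    The event generator described, a sum of Gaussian intensity cells with 6 random parameters each, can be sketched as follows. The parameter ranges are invented placeholders rather than values inferred from the radar data, and the general event-shape constraint on cell locations is omitted.

```python
import numpy as np

rng = np.random.default_rng(42)

def generate_event(n_cells, extent=50.0):
    """Draw one synthetic hail event as a list of anisotropic Gaussian
    intensity cells, each with random centre, amplitude (maximum
    hailstone size, cm), spread, eccentricity and orientation."""
    cells = []
    for _ in range(n_cells):
        x0, y0 = rng.uniform(0.0, extent, 2)   # cell centre (km)
        amp = rng.uniform(1.0, 4.0)            # peak hailstone size (cm)
        sx = rng.uniform(1.0, 5.0)             # major-axis std dev (km)
        sy = rng.uniform(0.3, 1.0) * sx        # eccentricity sets sy
        theta = rng.uniform(0.0, np.pi)        # orientation
        cells.append((x0, y0, amp, sx, sy, theta))
    return cells

def hailstone_size(x, y, cells):
    """Modelled intensity at (x, y): the sum of the Gaussian cells."""
    field = np.zeros(np.broadcast(np.asarray(x), np.asarray(y)).shape)
    for x0, y0, amp, sx, sy, th in cells:
        # rotate coordinates into the cell's principal axes
        u = (x - x0) * np.cos(th) + (y - y0) * np.sin(th)
        v = -(x - x0) * np.sin(th) + (y - y0) * np.cos(th)
        field = field + amp * np.exp(-0.5 * ((u / sx) ** 2 + (v / sy) ** 2))
    return field
```

    Evaluating `hailstone_size` over each insured building's location, then pushing the intensities through the two-step vulnerability model, yields one sample of the event-cost distribution.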

  6. A scattering methodology for droplet sizing of e-cigarette aerosols.

    PubMed

    Pratte, Pascal; Cosandey, Stéphane; Goujon-Ginglinger, Catherine

    2016-10-01

    Knowledge of the droplet size distribution of inhalable aerosols is important for predicting aerosol deposition yield at various respiratory tract locations in humans. Optical methodologies are usually preferred over the multi-stage cascade impactor for high-throughput measurements of aerosol particle/droplet size distributions. The objective was to evaluate Laser Aerosol Spectrometer technology, based on a polystyrene latex sphere (PSL) calibration curve, for the experimental determination of droplet size distributions in the diameter range typical of commercial e-cigarette aerosols (147-1361 nm). This calibration procedure was tested for a TSI Laser Aerosol Spectrometer (LAS) operating at a wavelength of 633 nm and assessed against model di-ethyl-hexyl-sebacat (DEHS) droplets and e-cigarette aerosols. The PSL size response was measured, and intra- and between-day standard deviations were calculated. DEHS droplet sizes were underestimated by 15-20% by the LAS when the PSL calibration curve was used; however, the intra- and between-day relative standard deviations were < 3%. This bias is attributed to the refractive index of the PSL calibration particles differing from that of the test aerosols. The 15-20% does not include the droplet evaporation component, which may reduce droplet size before a measurement is performed. Aerosol concentration was measured accurately, with a maximum uncertainty of 20%. Count median diameters and mass median aerodynamic diameters of selected e-cigarette aerosols ranged from 130 to 191 nm and from 225 to 293 nm, respectively, similar to published values. The LAS instrument can be used to measure e-cigarette aerosol droplet size distributions with a bias underestimating the expected value by 15-20% when using a precise PSL calibration curve.
Controlled variability of DEHS size measurements can be achieved with the LAS system; however, this method can only be applied to test aerosols having a refractive index close to that of PSL particles used for calibration.

  7. Reconstruction of sediment transport pathways in modern microtidal sand flat by multiple classification analysis

    NASA Astrophysics Data System (ADS)

    Yamashita, S.; Nakajo, T.; Naruse, H.

    2009-12-01

    In this study, we statistically classified the grain size distribution of the bottom surface sediment on a microtidal sand flat to analyze the depositional processes of the sediment. Multiple classification analysis revealed that two types of sediment populations exist in the bottom surface sediment. Then, we employed the sediment trend model developed by Gao and Collins (1992) for the estimation of sediment transport pathways. As a result, we found that statistical discrimination of the bottom surface sediment provides useful information for the sediment trend model when dealing with various types of sediment transport processes. The microtidal sand flat along the Kushida River estuary, Ise Bay, central Japan, was investigated, and 102 bottom surface sediment samples were obtained. Their grain size distributions were measured by the settling tube method, and each grain size distribution parameter (mud and gravel contents, mean grain size, coefficient of variation (CV), skewness, kurtosis, and the 5th, 25th, 50th, 75th, and 95th percentiles) was calculated. Here, CV is the normalized sorting value divided by the mean grain size. Two classical statistical methods, principal component analysis (PCA) and fuzzy cluster analysis, were applied. The results of PCA showed that the bottom surface sediment of the study area is mainly characterized by grain size (mean grain size and 5th-95th percentiles) and the CV value, these showing predominantly large absolute values of factor loadings in principal component (PC) 1. PC1 is interpreted as being indicative of the grain-size trend, in which a finer grain-size distribution indicates better size sorting. The frequency distribution of PC1 has a bimodal shape and suggests the existence of two types of sediment populations. Therefore, we applied fuzzy cluster analysis, the results of which revealed two groupings of the sediment (Cluster 1 and Cluster 2). Cluster 1 shows a lower value of PC1, indicating coarse and poorly sorted sediments.
Cluster 1 sediments are distributed around the branched channel from Kushida River and show an expanding distribution from the river mouth toward the northeast direction. Cluster 2 shows a higher value of PC1, indicating fine and well-sorted sediments; this cluster is distributed in a distant area from the river mouth, including the offshore region. Therefore, Cluster 1 and Cluster 2 are interpreted as being deposited by fluvial and wave processes, respectively. Finally, on the basis of this distribution pattern, the sediment trend model was applied in areas dominated separately by fluvial and wave processes. Resultant sediment transport patterns showed good agreement with those obtained by field observations. The results of this study provide an important insight into the numerical models of sediment transport.
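    The two statistical steps, PCA on standardized grain-size descriptors followed by fuzzy clustering, can be sketched as below. This is a generic implementation (a basic fuzzy c-means), not the authors' exact configuration; both function names are hypothetical.

```python
import numpy as np

def pca(X):
    """Principal components of standardized descriptors: returns the
    component scores and the fraction of variance each PC explains."""
    Z = (X - X.mean(0)) / X.std(0)
    evals, evecs = np.linalg.eigh(np.cov(Z, rowvar=False))
    order = np.argsort(evals)[::-1]          # largest eigenvalue first
    evals, evecs = evals[order], evecs[:, order]
    return Z @ evecs, evals / evals.sum()

def fuzzy_cmeans(X, c=2, m=2.0, n_iter=100, seed=0):
    """Basic fuzzy c-means: returns the (n_samples x c) membership
    matrix after alternating centre and membership updates."""
    rng = np.random.default_rng(seed)
    U = rng.dirichlet(np.ones(c), size=len(X))   # random memberships
    for _ in range(n_iter):
        W = U ** m
        centers = (W.T @ X) / W.sum(0)[:, None]
        d = np.linalg.norm(X[:, None, :] - centers[None], axis=2) + 1e-12
        # u_ik = 1 / sum_j (d_ik / d_ij)^(2/(m-1))
        U = 1.0 / np.sum((d[:, :, None] / d[:, None, :]) ** (2.0 / (m - 1.0)),
                         axis=2)
    return U
```

    Hard cluster labels, analogous to Cluster 1 and Cluster 2, fall out of `U.argmax(1)`.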

  8. SU-E-T-626: Accuracy of Dose Calculation Algorithms in MultiPlan Treatment Planning System in Presence of Heterogeneities

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Moignier, C; Huet, C; Barraux, V

    Purpose: Advanced stereotactic radiotherapy (SRT) treatments require accurate dose calculation for treatment planning, especially for treatment sites involving heterogeneous patient anatomy. The purpose of this study was to evaluate the accuracy of the dose calculation algorithms, Raytracing and Monte Carlo (MC), implemented in the MultiPlan treatment planning system (TPS) in the presence of heterogeneities. Methods: First, the LINAC of a CyberKnife radiotherapy facility was modeled with the PENELOPE MC code. A protocol for the measurement of dose distributions with EBT3 films was established and validated by comparing experimental dose distributions with dose distributions calculated by the MultiPlan Raytracing and MC algorithms, as well as with the PENELOPE MC model, for treatments planned with the homogeneous Easycube phantom. Finally, bone and lung inserts were used to set up a heterogeneous Easycube phantom. Treatment plans with the 10, 7.5 or 5 mm field sizes were generated in the MultiPlan TPS with different tumor localizations (in the lung and at the lung/bone/soft tissue interface). Experimental dose distributions were compared to the PENELOPE MC and MultiPlan calculations using the gamma index method. Results: Regarding the experiment in the homogeneous phantom, 100% of the points passed the 3%/3mm tolerance criteria. These criteria include the global error of the method (CT-scan resolution, EBT3 dosimetry, LINAC positioning …), and were used afterwards to estimate the accuracy of the MultiPlan algorithms in heterogeneous media. Comparison of the dose distributions obtained in the heterogeneous phantom is in progress. Conclusion: This work has led to the development of numerical and experimental dosimetric tools for small beam dosimetry. The Raytracing and MC algorithms implemented in the MultiPlan TPS were evaluated in heterogeneous media.
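    The gamma index method used for these comparisons can be sketched in 1D: each reference point is assigned the minimum combined dose-difference/distance-to-agreement metric over the evaluated distribution, and a point passes when gamma <= 1 (e.g. for 3%/3 mm criteria). A minimal sketch with a global dose normalization; a brute-force search rather than an optimized implementation.

```python
import numpy as np

def gamma_index_1d(x_ref, d_ref, x_eval, d_eval, dd=0.03, dta=3.0):
    """1D global gamma index. dd is the dose criterion as a fraction of
    the maximum reference dose; dta is the distance criterion in mm."""
    d_max = d_ref.max()
    gammas = np.empty_like(d_ref)
    for i, (xr, dr) in enumerate(zip(x_ref, d_ref)):
        dist2 = ((x_eval - xr) / dta) ** 2          # distance term
        dose2 = ((d_eval - dr) / (dd * d_max)) ** 2  # dose term
        gammas[i] = np.sqrt(np.min(dist2 + dose2))
    return gammas

def pass_rate(gammas):
    """Fraction of reference points with gamma <= 1."""
    return float(np.mean(gammas <= 1.0))
```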

  9. Recovering 3D Particle Size Distributions from 2D Sections

    NASA Technical Reports Server (NTRS)

    Cuzzi, Jeffrey N.; Olson, Daniel A.

    2017-01-01

    We discuss different ways to convert observed, apparent particle size distributions from 2D sections (thin sections, SEM maps on planar surfaces, etc.) into true 3D particle size distributions. We give a simple, flexible and practical method to do this, show which of these techniques gives the most faithful conversions, and provide (online) short computer codes to calculate both 2D-to-3D recoveries and simulations of 2D observations by random sectioning. The most important systematic bias of 2D sectioning, from the standpoint of most chondrite studies, is an overestimate of the abundance of the larger particles. We show that fairly good recoveries can be achieved from observed size distributions containing 100-300 individual measurements of apparent particle diameter. Proper determination of particle size distributions in chondrites, for chondrules, CAIs, and metal grains, is of basic importance for assessing the processes of formation and/or of accretion of these particles into their parent bodies. To date, most information of this sort is gathered from 2D samples cut from a rock, such as in microscopic analysis of thin sections or SEM maps of planar surfaces (Dodd 1976; Hughes 1978a,b; Rubin and Keil 1984; Rubin and Grossman 1987; Grossman et al. 1988; Rubin 1989; Metzler et al. 1992; Kuebler et al. 1999; Nelson and Rubin 2002; Schneider et al. 2003; Hezel et al. 2008; Fisher et al. 2014; for an exhaustive review with numerous references see Friedrich et al. 2014). While qualitative discrimination between chondrite types can readily be done using data of this sort, any deeper exploration of the processes by which chondrite constituents were created or emplaced into their parent bodies requires a more quantitative approach.
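    The 2D-to-3D conversion problem can be illustrated with a Saltykov-type back-substitution scheme for spheres: a sphere of true diameter D shows an apparent section diameter in [d_i, d_i+1] with probability (sqrt(D^2 - d_i^2) - sqrt(D^2 - d_i+1^2))/D, and the per-area counts relate to per-volume counts through an extra factor of D. This is a textbook sketch, not the specific method the authors recommend; `saltykov_unfold` is a hypothetical name.

```python
import numpy as np

def saltykov_unfold(n_area, edges):
    """Recover 3D number densities per size class (per unit volume)
    from apparent 2D section counts (per unit area), assuming spheres.
    `edges` are the shared class edges for apparent and true diameters;
    each true class is represented by its upper edge. Solved by
    back-substitution starting from the largest class."""
    n_area = np.asarray(n_area, float)
    edges = np.asarray(edges, float)
    k = len(n_area)
    # P[i, j] = D_j * p_ij: contribution of true class j to apparent class i
    P = np.zeros((k, k))
    for j in range(k):
        D = edges[j + 1]
        for i in range(j + 1):
            lo = np.sqrt(max(D ** 2 - edges[i] ** 2, 0.0))
            hi = np.sqrt(max(D ** 2 - edges[i + 1] ** 2, 0.0))
            P[i, j] = lo - hi
    n_vol = np.zeros(k)
    for j in range(k - 1, -1, -1):           # largest class first
        n_vol[j] = (n_area[j] - P[j, j + 1:] @ n_vol[j + 1:]) / P[j, j]
        n_vol[j] = max(n_vol[j], 0.0)        # clip negatives from noise
    return n_vol
```

    The clipping step matters in practice: counting noise in the large-size classes otherwise propagates negative densities down the back-substitution.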

  10. Aspects of droplet and particle size control in miniemulsions

    NASA Astrophysics Data System (ADS)

    Saygi-Arslan, Oznur

    Miniemulsion polymerization has become increasingly popular among researchers since it can provide significant advantages over conventional emulsion polymerization in certain cases, such as production of high-solids, low-viscosity latexes with better stability and polymerization of highly water-insoluble monomers. Miniemulsions are relatively stable oil (e.g., monomer) droplets, which can range in size from 50 to 500 nm, and are normally dispersed in an aqueous phase with the aid of a surfactant and a costabilizer. These droplets are the primary locus of the initiation of the polymerization reaction. Since particle formation takes place in the monomer droplets, theoretically, in miniemulsion systems the final particle size can be controlled by the initial droplet size. The miniemulsion preparation process typically generates broad droplet size distributions and there is no complete treatment in the literature regarding the control of the mean droplet size or size distribution. This research aims to control the miniemulsion droplet size and its distribution. In situ emulsification, where the surfactant is synthesized spontaneously at the oil/water interface, has been put forth as a simpler method for the preparation of miniemulsion-like systems. Using the in situ method of preparation, emulsion stability and droplet and particle sizes were monitored and compared with conventional emulsions and miniemulsions. Styrene emulsions prepared by the in situ method do not demonstrate the stability of a comparable miniemulsion. Upon polymerization, the final particle size generated from the in situ emulsion did not differ significantly from the comparable conventional emulsion polymerization; the reaction mechanism for in situ emulsions is more like conventional emulsion polymerization rather than miniemulsion polymerization.
Similar results were found when the in situ method was applied to controlled free radical polymerizations (CFRP), which have been advanced as a potential application of the method. Molecular weight control was found to be achieved via diffusion of the CFRP agents through the aqueous phase owing to limited water solubilities. The effects of adsorption rate and energy on the droplet size and size distribution of miniemulsions using different surfactants (sodium lauryl sulfate (SLS), sodium dodecylbenzene sulfonate (SDBS), Dowfax 2A1, Aerosol OT-75PG, sodium n-octyl sulfate (SOS), and sodium n-hexadecyl sulfate (SHS)) were analyzed. For this purpose, first, the dynamics of surfactant adsorption at an oil/water interface were examined over a range of surfactant concentrations by the drop volume method, and then adsorption rates of the different surfactants were determined for the early stages of adsorption. The results do not show a direct relationship between adsorption rate and miniemulsion droplet size and size distribution. Adsorption energies of these surfactants were also calculated by the Langmuir adsorption isotherm equation, and no correlation between adsorption energy and miniemulsion droplet size was found. In order to understand the mechanism of the miniemulsification process, the effects of breakage and coalescence processes on droplet size distributions were observed at different surfactant concentrations, monomer ratios, and homogenization conditions. A coalescence and breakup mechanism for miniemulsification is proposed to explain the size distribution of droplets. The multimodal droplet size distribution of ODMA miniemulsions was controlled by the breakage mechanism. The results also showed that, at a surfactant concentration at which 100% surface coverage was obtained, the droplet size distribution became unimodal.
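    The Langmuir-isotherm step can be sketched: fit the isotherm constants from its linearized form, then convert the equilibrium constant to an adsorption free energy. The reference-state convention used below (Delta G = -RT ln(55.5 K), with 55.5 mol/L the molarity of water) is an assumption; the abstract does not state the authors' exact convention, and all numbers are illustrative.

```python
import numpy as np

R_GAS = 8.314            # J/(mol K)
T = 298.15               # K

def langmuir(c, gamma_max, K):
    """Langmuir isotherm: surface excess vs bulk concentration c."""
    return gamma_max * K * c / (1.0 + K * c)

def fit_langmuir(c, gamma):
    """Fit gamma_max and K from the linearized form
    c/gamma = c/gamma_max + 1/(K * gamma_max)."""
    slope, intercept = np.polyfit(c, c / gamma, 1)
    return 1.0 / slope, slope / intercept

def adsorption_energy(K):
    """Standard free energy of adsorption from the Langmuir constant
    (mole-fraction reference state; convention is an assumption)."""
    return -R_GAS * T * np.log(55.5 * K)
```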

  11. Coagulation algorithms with size binning

    NASA Technical Reports Server (NTRS)

    Statton, David M.; Gans, Jason; Williams, Eric

    1994-01-01

    The Smoluchowski equation describes the time evolution of an aerosol particle size distribution due to aggregation or coagulation. Any algorithm for computerized solution of this equation requires a scheme for describing the continuum of aerosol particle sizes as a discrete set. One standard form of the Smoluchowski equation accomplishes this by restricting the particle sizes to integer multiples of a basic unit particle size (the monomer size). This can be inefficient when particle concentrations over a large range of particle sizes must be calculated. Two algorithms employing a geometric size binning convention are examined: the first assumes that the aerosol particle concentration as a function of size can be considered constant within each size bin; the second approximates the concentration as a linear function of particle size within each size bin. The output of each algorithm is compared to an analytical solution in a special case of the Smoluchowski equation for which an exact solution is known. The range of parameters more appropriate for each algorithm is examined.
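    The "standard form" mentioned above, with particle sizes restricted to integer multiples of a monomer, can be sketched for a constant kernel; in that special case the total number concentration follows N(t) = N0/(1 + K*N0*t/2), which gives an analytic check on the discretization. A minimal explicit-Euler sketch, not either of the binned algorithms examined in the paper:

```python
import numpy as np

def smoluchowski_step(n, K, dt):
    """One explicit Euler step of the discrete (integer-multiple)
    Smoluchowski equation with a constant kernel K.
    n[k] holds the concentration of (k+1)-mers.
    Gain: pairs (i+1) + (j+1) = k+1, i.e. a discrete self-convolution.
    Loss: (k+1)-mers coagulating with anything."""
    conv = np.convolve(n, n)                 # conv[m] = sum_i n[i] * n[m-i]
    gain = 0.5 * K * np.concatenate(([0.0], conv[:len(n) - 1]))
    loss = K * n * n.sum()
    return n + dt * (gain - loss)
```

    Truncating the size axis at a maximum k is what makes this inefficient over wide size ranges, which is the motivation for the geometric binning the paper examines.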

  12. Improved-resolution real-time skin-dose mapping for interventional fluoroscopic procedures

    NASA Astrophysics Data System (ADS)

    Rana, Vijay K.; Rudin, Stephen; Bednarek, Daniel R.

    2014-03-01

    We have developed a dose-tracking system (DTS) that provides a real-time display of the skin-dose distribution on a 3D patient graphic during fluoroscopic procedures. Radiation dose to individual points on the skin is calculated using exposure and geometry parameters from the digital bus on a Toshiba C-arm unit. To accurately define the distribution of dose, it is necessary to use a high-resolution patient graphic consisting of a large number of elements. In the original DTS version, the patient graphics were obtained from a library of population body scans which consisted of larger-sized triangular elements, resulting in poor congruence between the graphic points and the x-ray beam boundary. To improve the resolution without impacting real-time performance, the number of calculations must be reduced, so we created software-designed human models and modified the DTS to read the graphic as a list of vertices of the triangular elements such that common vertices of adjacent triangles are listed once. Dose is then calculated once per vertex instead of once for each triangle in which the vertex appears. By reformatting the graphic file, we were able to subdivide the triangular elements by a factor of 64 while increasing the file size by a factor of only 1.3. This allows a much greater number of smaller triangular elements and improves the resolution of the patient graphic without compromising the real-time performance of the DTS, and also gives a smoother graphic display for better visualization of the dose distribution.
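
    The shared-vertex bookkeeping can be sketched with an indexed triangle mesh; the inverse-square dose model and all geometry below are hypothetical placeholders for the DTS's actual exposure-based calculation:

```python
import numpy as np

# Hypothetical inverse-square dose model; the real DTS derives dose from
# exposure and geometry parameters read off the C-arm's digital bus.
SOURCE = np.array([0.0, 0.0, -50.0])

def dose_at(point, k=1.0e4):
    return k / np.sum((point - SOURCE) ** 2)

# Indexed mesh: shared vertices are stored once and triangles reference
# them by index, so adjacent triangles do not duplicate their common edge.
vertices = np.array([[0, 0, 0], [1, 0, 0], [0, 1, 0], [1, 1, 0]], float)
triangles = np.array([[0, 1, 2], [1, 3, 2]])  # two triangles sharing an edge

# Per-corner evaluation (original scheme): 6 dose calculations.
per_corner = [dose_at(vertices[i]) for tri in triangles for i in tri]

# Per-unique-vertex evaluation (reformatted graphic): 4 dose calculations,
# reused by every triangle that lists the vertex.
vertex_dose = np.array([dose_at(v) for v in vertices])

print(len(per_corner), len(vertex_dose))
```

    The saving grows with mesh density, since in a fine triangulation each interior vertex is shared by roughly six triangles.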

  13. Geochemistry of sediments in the Northern and Central Adriatic Sea

    NASA Astrophysics Data System (ADS)

    De Lazzari, A.; Rampazzo, G.; Pavoni, B.

    2004-03-01

    Major, minor and trace elements, loss on ignition, specific surface area, quantities of calcite and dolomite, qualitative mineralogical composition, grain-size distribution and organic micropollutants (PAH, PCB, DDT) were determined on surficial marine sediments sampled during the 1990 ASCOP (Adriatic Scientific Cooperative Program) cruise. Mineralogical composition and carbonate content of the samples were found to be comparable with data previously reported in the literature, whereas the geochemical composition and distribution of major, minor and trace elements for samples in international waters and in the central basin have never been reported before. The large amount of information contained in the variables of different origin has been processed by means of a comprehensive approach which establishes the relations among the components through the mathematical-statistical calculation of principal components (factors). These account for the major part of the data variance, losing only marginal parts of the information, and are independent of the units of measure. The sample descriptors concerning natural components and contamination load are discussed by means of a statistical model based on an R-mode factor analysis calculating four significant factors which explain 86.8% of the total variance and represent important relationships between grain size, mineralogy, geochemistry and organic micropollutants. A description and an interpretation of the factor composition is discussed on the basis of pollution inputs, basin geology and hydrodynamics. The areal distribution of the factors showed that the fine grain-size fraction, with oxides and hydroxides of colloidal origin, is the main means of transport and thus the principal link between the chemical, physical and granulometric variables in the Adriatic.
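
    The variance-explained calculation behind such a factor analysis can be sketched with an eigendecomposition of the correlation matrix; the data below are synthetic (two underlying factors plus noise), standing in for the real sediment descriptors:

```python
import numpy as np

rng = np.random.default_rng(0)
# Synthetic stand-in for the sediment descriptors (grain size, metals,
# carbonate content, micropollutants): 50 samples of 6 variables driven
# by 2 underlying factors plus measurement noise. Purely illustrative.
latent = rng.normal(size=(50, 2))
loadings = np.array([[1.0, 0.8, 0.6, 0.0, 0.2, 0.9],
                     [0.0, 0.3, 0.7, 1.0, 0.9, 0.1]])
data = latent @ loadings + 0.1 * rng.normal(size=(50, 6))

# R-mode analysis works on correlations among variables: standardize,
# form the correlation matrix, and take its eigenvalues. Standardizing
# is what makes the result independent of the units of measure.
z = (data - data.mean(axis=0)) / data.std(axis=0)
corr = (z.T @ z) / len(z)
eigvals = np.sort(np.linalg.eigvalsh(corr))[::-1]
explained = eigvals / eigvals.sum()
print(np.round(np.cumsum(explained)[:3], 3))
```

    By construction two factors dominate here; in the study, four factors were needed to reach 86.8% of the variance.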

  14. The Role of Aerosols on Precipitation Processes: Cloud Resolving Model Simulations

    NASA Technical Reports Server (NTRS)

    Tao, Wei-Kuo; Li, X.; Matsui, T.

    2012-01-01

    Cloud microphysics is inevitably affected by the smoke particle (CCN, cloud condensation nuclei) size distributions below the clouds. Therefore, size distributions parameterized as spectral bin microphysics are needed to explicitly study the effects of atmospheric aerosol concentration on cloud development, rainfall production, and rainfall rates for convective clouds. Recently, a detailed spectral-bin microphysical scheme was implemented into the Goddard Cumulus Ensemble (GCE) model. The formulation for the explicit spectral bin microphysical processes is based on solving stochastic kinetic equations for the size distribution functions of water droplets (i.e., cloud droplets and raindrops) and several types of ice particles [i.e., pristine ice crystals (columnar and plate-like), snow (dendrites and aggregates), graupel and frozen drops/hail]. Each type is described by a special size distribution function containing many categories (i.e., 33 bins). Atmospheric aerosols are also described using number density size-distribution functions. The model is tested by studying the evolution of deep cloud systems in the west Pacific warm pool region, the sub-tropics (Florida) and midlatitudes using identical thermodynamic conditions but with different concentrations of CCN: a low "clean" concentration and a high "dirty" concentration. Results indicate that the low CCN concentration case produces rainfall at the surface sooner than the high CCN case but has less cloud water mass aloft. Because the spectral-bin model explicitly calculates and allows for the examination of both the mass and number concentration of species in each size category, a detailed analysis of the instantaneous size spectrum can be obtained for these cases. It is shown that since the low CCN case produces fewer droplets, larger sizes develop due to greater condensational and collection growth, leading to a broader size spectrum in comparison to the high CCN case.
    Sensitivity tests were performed to identify the impact of ice processes, radiation and large-scale influence on cloud-aerosol interactive processes, especially regarding surface rainfall amounts and characteristics (i.e., heavy or convective versus light or stratiform types). In addition, an inert tracer was included to follow the vertical redistribution of aerosols by cloud processes. We will also give a brief review of observational evidence on the role of aerosols in precipitation processes.

  15. Enhanced centrifuge-based approach to powder characterization

    NASA Astrophysics Data System (ADS)

    Thomas, Myles Calvin

    Many types of manufacturing processes involve powders and are affected by powder behavior. It is highly desirable to implement tools that allow the behavior of bulk powder to be predicted based on the behavior of only small quantities of powder. Such descriptions can enable engineers to significantly improve the performance of powder processing and formulation steps. In this work, an enhancement of the centrifuge technique is proposed as a means of powder characterization. This enhanced method uses specially designed substrates with hemispherical indentations within the centrifuge. The method was tested using simulations of the momentum balance at the substrate surface. Initial simulations were performed with an ideal powder containing smooth, spherical particles distributed on substrates designed with indentations. The van der Waals adhesion between the powder, whose size distribution was based on an experimentally determined distribution from a commercial silica powder, and the indentations was calculated and compared to the removal force created in the centrifuge. This provided a way to relate the powder size distribution to the rotational speed required for particle removal for various indentation sizes. Due to the distinct form of the data from these simulations, the cumulative size distribution of the powder and the Hamaker constant for the system could be extracted. After establishing adhesion force characterization for an ideal powder, the same proof-of-concept procedure was followed for a more realistic system with a simulated rough powder modeled as spheres with sinusoidal protrusions and intrusions around the surface. From these simulations, it was discovered that an equivalent powder of smooth spherical particles could be used to describe the adhesion behavior of the rough spherical powder by establishing a size-dependent 'effective' Hamaker constant distribution.
    This development made it possible to describe the surface roughness effects of the entire powder through one adjustable parameter that was linked to the size distribution. It is important to note that when the engineered substrates (hemispherical indentations) were applied, it was possible to extract both powder size distribution and effective Hamaker constant information from the simulated centrifuge adhesion experiments. Experimental validation of the simulated technique was performed with a silica powder dispersed onto a stainless steel substrate with no engineered surface features. Though the proof-of-concept work was accomplished for indented substrates, non-ideal, relatively flat (non-indented) substrates were used experimentally to demonstrate that the technique can be extended to this case. The experimental data were then used within the newly developed simulation procedure to show its application to real systems. In the absence of engineered features on the substrates, it was necessary to specify the size distribution of the powder as an input to the simulator. With this information, it was possible to extract an effective Hamaker constant distribution, and when this distribution was applied in conjunction with the size distribution, the observed adhesion force distribution was described precisely. An equation relating the normalized effective Hamaker constants (normalized by the particle diameter) to the particle diameter was formulated from the effective Hamaker constant distribution. It was shown, by application of the equation, that the adhesion behavior of an ideal (smooth, spherical) powder with an experimentally validated, effective Hamaker constant distribution could be used to effectively represent that of a realistic powder.
Thus, the roughness effects and size variations of a real powder are captured in this one distributed parameter (effective Hamaker constant distribution) which provides a substantial improvement to the existing technique. This can lead to better optimization of powder processing by enhancing powder behavior models.
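
    The adhesion-versus-removal comparison at the heart of the centrifuge technique can be sketched as a simple force balance; the Hamaker constant, contact separation, density, and rotor geometry below are illustrative assumptions for a silica-like powder, not parameters from the experiments:

```python
import math

# A particle detaches when the centrifugal force exceeds its van der Waals
# adhesion to the substrate. All numerical values are assumed.
A = 6.5e-20     # Hamaker constant, J
z0 = 4.0e-10    # minimum contact separation, m
rho = 2200.0    # particle density, kg/m^3
L = 0.05        # distance of the substrate from the rotation axis, m

def critical_speed(R):
    """Angular speed (rad/s) at which the centrifugal force on a sphere of
    radius R equals the sphere-plate van der Waals force A*R / (12*z0**2)."""
    F_adh = A * R / (12.0 * z0 ** 2)
    m = (4.0 / 3.0) * math.pi * R ** 3 * rho
    return math.sqrt(F_adh / (m * L))

for R in (0.5e-6, 2.0e-6, 10.0e-6):
    print(f"R = {R * 1e6:4.1f} um -> {critical_speed(R) / (2 * math.pi):8.0f} rev/s")
```

    Since adhesion scales as R while mass scales as R cubed, larger particles detach at lower speeds; sweeping the rotation speed therefore maps out the size and adhesion distribution of the powder.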

  16. Exact Solution of Population Redistributions in a Migration Model

    NASA Astrophysics Data System (ADS)

    Wang, Xue-Wen; Zhang, Li-Jie; Yang, Guo-Hong; Xu, Xin-Jian

    2013-10-01

    We study a migration model in which individuals migrate from one community to another. The source community i and the destination community j are chosen with probabilities proportional to powers of their populations, k_i^α and k_j^β, respectively. Both analytical calculation and numerical simulation show that the population distribution of communities in stationary states is determined by the parameters α and β. The distribution is broadly homogeneous, with a characteristic size, if α > β, whereas for α < β the distribution is highly heterogeneous, with the emergence of a condensation phenomenon. On the boundary between the two regimes, α = β, the distribution gradually shifts from nonmonotonic (α < 0) to scale-free (α > 0).
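
    The choice rule described above can be sketched as a Monte Carlo simulation; the community count, initial population, and step count are arbitrary, and this discrete sketch only illustrates the contrast between the α > β and α < β regimes:

```python
import numpy as np

def migrate(alpha, beta, n_comm=50, pop0=20, steps=40000, seed=1):
    """One migration event per step: a source community i is chosen with
    probability proportional to k_i**alpha and a destination j with
    probability proportional to k_j**beta (k = current population)."""
    rng = np.random.default_rng(seed)
    k = np.full(n_comm, float(pop0))
    for _ in range(steps):
        ps = k ** alpha
        src = rng.choice(n_comm, p=ps / ps.sum())
        pd = k ** beta
        dst = rng.choice(n_comm, p=pd / pd.sum())
        if k[src] > 0:          # an empty community cannot emit a migrant
            k[src] -= 1
            k[dst] += 1
    return k

homogeneous = migrate(alpha=1.0, beta=0.0)   # alpha > beta: narrow distribution
condensed = migrate(alpha=0.0, beta=1.0)     # alpha < beta: condensation
print(homogeneous.max(), condensed.max())
```

    With α < β, arrivals favor large communities while departures do not, so one community eventually absorbs most of the population, which is the condensation phenomenon the abstract describes.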

  17. A high-performance Fortran code to calculate spin- and parity-dependent nuclear level densities

    NASA Astrophysics Data System (ADS)

    Sen'kov, R. A.; Horoi, M.; Zelevinsky, V. G.

    2013-01-01

    A high-performance Fortran code is developed to calculate the spin- and parity-dependent shell model nuclear level densities. The algorithm is based on the extension of methods of statistical spectroscopy and implies exact calculation of the first and second Hamiltonian moments for different configurations at fixed spin and parity. The proton-neutron formalism is used. We have applied the method to calculating the level densities for a set of nuclei in the sd, pf, and pf+g model spaces. Examples of the calculations for 28Si (in the sd model space) and 64Ge (in the pf+g model space) are presented. To illustrate the power of the method, we estimate the ground state energy of 64Ge in the larger model space pf+g, which is not accessible to direct shell model diagonalization due to the prohibitively large dimension, by comparing with the nuclear level densities at low excitation energy calculated in the smaller model space pf. Program summary. Program title: MM. Catalogue identifier: AENM_v1_0. Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AENM_v1_0.html. Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland. Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html. No. of lines in distributed program, including test data, etc.: 193181. No. of bytes in distributed program, including test data, etc.: 1298585. Distribution format: tar.gz. Programming language: Fortran 90, MPI. Computer: any architecture with a Fortran 90 compiler and MPI. Operating system: Linux. RAM: proportional to the system size; in our examples, up to 75 MB. Classification: 17.15. External routines: MPICH2 (http://www.mcs.anl.gov/research/projects/mpich2/). Nature of problem: calculation of the spin- and parity-dependent nuclear level density. Solution method: the algorithm implies exact calculation of the first and second Hamiltonian moments for different configurations at fixed spin and parity.
The code is parallelized using the Message Passing Interface and a master-slaves dynamical load-balancing approach. Restrictions: The program uses two-body interaction in a restricted single-level basis. For example, GXPF1A in the pf-valence space. Running time: Depends on the system size and the number of processors used (from 1 min to several hours).

  18. Physically based method for measuring suspended-sediment concentration and grain size using multi-frequency arrays of acoustic-doppler profilers

    USGS Publications Warehouse

    Topping, David J.; Wright, Scott A.; Griffiths, Ronald; Dean, David

    2014-01-01

    As the result of a 12-year program of sediment-transport research and field testing on the Colorado River (6 stations in UT and AZ), Yampa River (2 stations in CO), Little Snake River (1 station in CO), Green River (1 station in CO and 2 stations in UT), and Rio Grande (2 stations in TX), we have developed a physically based method for measuring suspended-sediment concentration and grain size at 15-minute intervals using multifrequency arrays of acoustic-Doppler profilers. This multi-frequency method is able to achieve much higher accuracies than single-frequency acoustic methods because it allows removal of the influence of changes in grain size on acoustic backscatter. The method proceeds as follows. (1) Acoustic attenuation at each frequency is related to the concentration of silt and clay with a known grain-size distribution in a river cross section using physical samples and theory. (2) The combination of acoustic backscatter and attenuation at each frequency is uniquely related to the concentration of sand (with a known reference grain-size distribution) and the concentration of silt and clay (with a known reference grain-size distribution) in a river cross section using physical samples and theory. (3) Comparison of the suspended-sand concentrations measured at each frequency using this approach then allows theory-based calculation of the median grain size of the suspended sand and final correction of the suspended-sand concentration to compensate for the influence of changing grain size on backscatter. Although this method of measuring suspended-sediment concentration is somewhat less accurate than using conventional samplers in either the EDI or EWI methods, it is much more accurate than estimating suspended-sediment concentrations using calibrated pump measurements or single-frequency acoustics. 
    Though the EDI and EWI methods provide the most accurate measurements of suspended-sediment concentration, these measurements are labor-intensive, expensive, and may be impossible to collect at time intervals shorter than those over which discharge-independent changes in suspended-sediment concentration occur (< hours). Therefore, our physically based multi-frequency acoustic method shows promise as a cost-effective, valid approach for calculating suspended-sediment loads in rivers at a level of accuracy sufficient for many scientific and management purposes.

  19. Thermodynamics of Macromolecular Association in Heterogeneous Crowding Environments: Theoretical and Simulation Studies with a Simplified Model.

    PubMed

    Ando, Tadashi; Yu, Isseki; Feig, Michael; Sugita, Yuji

    2016-11-23

    The cytoplasm of a cell is crowded with many different kinds of macromolecules. The macromolecular crowding affects the thermodynamics and kinetics of biological reactions in a living cell, such as protein folding, association, and diffusion. Theoretical and simulation studies using simplified models focus on the essential features of the crowding effects and provide a basis for analyzing experimental data. In most of the previous studies on the crowding effects, a uniform crowder size is assumed, which is in contrast to the inhomogeneous size distribution of macromolecules in a living cell. Here, we evaluate the free energy changes upon macromolecular association in a cell-like inhomogeneous crowding system via a theory of hard-sphere fluids and free energy calculations using Brownian dynamics trajectories. The inhomogeneous crowding model based on 41 different types of macromolecules represented by spheres with different radii mimics the physiological concentrations of macromolecules in the cytoplasm of Mycoplasma genitalium. The free energy changes of macromolecular association evaluated by the theory and simulations were in good agreement with each other. The crowder size distribution affects both specific and nonspecific molecular associations, suggesting that not only the volume fraction but also the size distribution of macromolecules are important factors for evaluating in vivo crowding effects. This study relates in vitro experiments on macromolecular crowding to in vivo crowding effects by using the theory of hard-sphere fluids with crowder-size heterogeneity.

  20. [The boundary ranges of the free flight of particles of gunpowder and metals in shots from a hand firearm].

    PubMed

    Popov, V L; Isakov, V D; Krivozheĭko, A G

    1990-01-01

    On the basis of the equations of external ballistics and probability theory, the largest possible distances of free (independent) flight of gunshot powder and metal particles of different forms and sizes were calculated. Experimental verification of the calculated data was carried out for different types of military and sporting handguns. For blank shots, the calculated data were found to correspond to the maximal free (independent) particle flight. In experiments with cartridges carrying bullets, the distances of free particle flight were significantly shorter (by 53-65%), which may be connected with the effect of the projectile on the process of particle distribution. Adapted formulas and calculation variants are presented.

  1. Organic matter on the early surface of Mars: An assessment of the contribution by interplanetary dust

    NASA Technical Reports Server (NTRS)

    Flynn, G. J.

    1993-01-01

    Calculations by Anders and by Chyba et al. have recently revived interest in the suggestion that organic compounds important to the development of life were delivered to the primitive surface of the Earth by comets, asteroids, or the interplanetary dust derived from these two sources. Anders has shown that the major post-accretion contribution of extraterrestrial organic matter to the surface of the Earth is from interplanetary dust. Since Mars is a much more favorable site for the gentle deceleration of interplanetary dust particles than is Earth, model calculations show that biologically important organic compounds are likely to have been delivered to the early surface of Mars by interplanetary dust at an order-of-magnitude higher surface density than onto the early Earth. Using the method described by Flynn and McKay, the size-frequency distribution and the atmospheric entry velocity distribution of IDPs at Mars were calculated. The entry velocity distribution, coupled with the atmospheric entry heating model developed by Whipple and extended by Fraundorf, was used to calculate the fraction of the particles in each mass decade which survives atmospheric entry without melting (i.e., those not heated above 1600 K). The incident mass and surviving mass in each mass decade are shown for both Earth and Mars.

  2. DETACHMENT OF BACTERIOPHAGE FROM ITS CARRIER PARTICLES.

    PubMed

    Hetler, D M; Bronfenbrenner, J

    1931-05-20

    The active substance (phage) present in the lytic broth filtrate is distributed through the medium in the form of particles. These particles vary in size within broad limits. The average size of these particles, as calculated on the basis of the rate of diffusion, approximates 4.4 mμ in radius. Fractionation by means of ultrafiltration permits partial separation of particles of different sizes. Under the conditions of the experiments reported here, the particles varied in radius from 0.6 mμ to 11.4 mμ. The active agent apparently is not intimately identified with these particles. It is merely carried by them by adsorption, and under suitable experimental conditions it can be detached from the larger particles and redistributed on smaller particles of the medium.
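
    The diffusion-based size estimate can be reproduced with the Stokes-Einstein relation; the temperature and viscosity are assumed modern values for water, not conditions from the 1931 experiments:

```python
import math

# Stokes-Einstein: D = k_B * T / (6 * pi * eta * r), so a measured
# diffusion coefficient yields the hydrodynamic radius r.
kB = 1.380649e-23   # J/K
T = 293.15          # K, assumed room temperature
eta = 1.0e-3        # Pa*s, approximate viscosity of water

def hydrodynamic_radius(D):
    return kB * T / (6.0 * math.pi * eta * D)

# Round trip for a 4.4 nm (i.e., 4.4 millimicron) particle:
D = kB * T / (6.0 * math.pi * eta * 4.4e-9)
print(f"D = {D:.2e} m^2/s  ->  r = {hydrodynamic_radius(D) * 1e9:.1f} nm")
```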

  3. Spot size measurement of a flash-radiography source using the pinhole imaging method

    NASA Astrophysics Data System (ADS)

    Wang, Yi; Li, Qin; Chen, Nan; Cheng, Jin-Ming; Xie, Yu-Tong; Liu, Yun-Long; Long, Quan-Hong

    2016-07-01

    The spot size of the X-ray source is a key parameter of a flash-radiography facility and is usually quoted as an evaluation of its resolving power. The pinhole imaging technique is applied to measure the spot size of the Dragon-I linear induction accelerator, by which a two-dimensional spatial distribution of the source spot is obtained. Measurements of the spot image are performed while the transport and focusing of the electron beam are tuned by adjusting the currents of the solenoids in the downstream section. The spot size at full-width at half maximum, and that defined from the spatial frequency at half peak value of the modulation transfer function, are calculated and discussed.
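
    The two spot-size definitions (FWHM, and the spatial frequency at which the MTF falls to half its peak) can be sketched on a synthetic Gaussian line-spread profile; the 1.2 mm width is an arbitrary illustration, not a Dragon-I measurement:

```python
import numpy as np

# Synthetic line-spread profile standing in for a pinhole-image profile.
x = np.linspace(-10.0, 10.0, 4001)     # mm
sigma = 1.2                            # mm, illustrative spot width
lsf = np.exp(-x ** 2 / (2.0 * sigma ** 2))

# (1) FWHM by direct threshold crossing.
above = x[lsf >= 0.5]
fwhm = above[-1] - above[0]            # expect 2.355 * sigma for a Gaussian

# (2) MTF = normalized magnitude of the Fourier transform of the LSF;
# find the first frequency sample where it drops below half its peak.
mtf = np.abs(np.fft.rfft(lsf))
mtf /= mtf[0]
freqs = np.fft.rfftfreq(len(x), d=x[1] - x[0])   # cycles/mm
f_half = freqs[np.argmax(mtf < 0.5)]

print(f"FWHM = {fwhm:.3f} mm, half-peak MTF frequency = {f_half:.3f} cycles/mm")
```

    The half-peak frequency here is limited by the coarse frequency sampling of the 20 mm window; a real analysis would interpolate the MTF.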

  4. Computational and Experimental Studies of Microstructure-Scale Porosity in Metallic Fuels for Improved Gas Swelling Behavior

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Millett, Paul; McDeavitt, Sean; Deo, Chaitanya

    This proposal will investigate the stability of bimodal pore size distributions in metallic uranium and uranium-zirconium alloys during sintering and re-sintering annealing treatments. The project will utilize both computational and experimental approaches. The computational approach includes Molecular Dynamics simulations to determine the self-diffusion coefficients in pure U and U-Zr alloys in single crystals, grain boundaries, and free surfaces, as well as calculations of grain boundary and free surface interfacial energies. Phase-field simulations using MOOSE will be conducted to study pore and grain structure evolution in microstructures with bimodal pore size distributions. Experiments will also be performed to validate the simulations and measure the time-dependent densification of bimodal porous compacts.

  5. Far Field Modeling Methods For Characterizing Surface Detonations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Garrett, A.

    2015-10-08

    Savannah River National Laboratory (SRNL) analyzed particle samples collected during experiments that were designed to replicate tests of nuclear weapons components that involve detonation of high explosives (HE). SRNL collected the particle samples in the HE debris cloud using innovative rocket-propelled samplers. SRNL used scanning electron microscopy to determine the elemental constituents of the particles and their size distributions. Depleted uranium composed about 7% of the particle contents. SRNL used the particle size distributions and elemental composition to perform transport calculations indicating that, in many terrains and atmospheric conditions, the uranium-bearing particles will be transported long distances downwind. This research established that HE tests specific to nuclear proliferation should be detectable at long downwind distances by sampling airborne particles created by the test detonations.

  6. Inferring epidemiological parameters from phylogenetic information for the HIV-1 epidemic among MSM

    NASA Astrophysics Data System (ADS)

    Quax, Rick; van de Vijver, David A. M. C.; Frentz, Dineke; Sloot, Peter M. A.

    2013-09-01

    The HIV-1 epidemic in Europe is primarily sustained by a dynamic topology of sexual interactions among MSM who have individual immune systems and behavior. This epidemiological process shapes the phylogeny of the virus population. Both epidemic modeling and phylogenetics have long histories; however, it remains difficult to use phylogenetic data to infer epidemiological parameters such as the structure of the sexual network and the per-act infectiousness, because phylogenetic data are necessarily incomplete and ambiguous. Here we show, using detailed numerical experiments, that the cluster-size distribution indeed contains information about epidemiological parameters. We simulate the HIV epidemic among MSM many times using the Monte Carlo method with all parameter values and their ranges taken from the literature. For each simulation and the corresponding set of parameter values we calculate the likelihood of reproducing an observed cluster-size distribution. The result is an estimated likelihood distribution of all parameters from the phylogenetic data, in particular the structure of the sexual network, the per-act infectiousness, and the risk behavior reduction upon diagnosis. These likelihood distributions encode the knowledge provided by the observed cluster-size distribution, which we quantify using information theory. Our work suggests that the growing body of genetic data of patients can be exploited to understand the underlying epidemiological process.

  7. Hydrocarbon pyrolysis reactor experimentation and modeling for the production of solar absorbing carbon nanoparticles

    NASA Astrophysics Data System (ADS)

    Frederickson, Lee Thomas

    Much of combustion research focuses on reducing soot particulates in emissions. However, current research at the San Diego State University (SDSU) Combustion and Solar Energy Laboratory (CSEL) is underway to develop a high-temperature solar receiver which will utilize carbon nanoparticles as a solar absorption medium. To produce carbon nanoparticles for the small particle heat exchange receiver (SPHER), a lab-scale carbon particle generator (CPG) has been built and tested. The CPG is a heated ceramic tube reactor with a set-point wall temperature of 1100-1300°C operating at 5-6 bar pressure. Natural gas and nitrogen are fed to the CPG, where the natural gas undergoes pyrolysis, producing carbon particles. The gas-particle mixture is mixed with dilution air downstream and sent to the lab-scale solar receiver. To predict soot yield and general trends in CPG performance, a model has been set up in Reaction Design CHEMKIN-PRO software. One of the primary goals of this research is to accurately measure particle properties. Mean particle diameter, size distribution, and index of refraction are calculated using Scanning Electron Microscopy (SEM) and a Diesel Particulate Scatterometer (DPS). Filter samples taken during experimentation are analyzed to obtain a particle size distribution, with SEM images processed in ImageJ software. These results are compared with the DPS, which calculates the particle size distribution and the index of refraction from light scattering using Mie theory. For testing with the lab-scale receiver, a particle diameter range of 200-500 nm is desired. Test conditions are varied to understand the effects of operating parameters on particle size and the ability to reach this size range. Analysis of particle loading is the other important metric for this research. Particle loading is measured downstream of the CPG outlet and dilution air mixing point.
    The air-particle mixture flows through an extinction tube where the opacity of the mixture is measured with a 532 nm laser and detector. Beer's law is then used to calculate particle loading. The CPG needs to produce a certain particle loading for a corresponding receiver test. By obtaining the particle loading in the system, the reaction conversion to solid carbon in the CPG can be calculated to measure the efficiency of the CPG. To predict trends in reaction conversion and particle size from experimentation, the CHEMKIN-PRO computer model for the CPG is run for various flow rates and wall temperature profiles. These predictions motivated testing at higher wall set-point temperatures. Based on these research goals, it was shown that the CPG consistently produces a mean particle diameter of 200-400 nm at the conditions tested, well within the desired range. This led to successful lab-scale SPHER testing, which produced a 10-point efficiency increase and a 150°C temperature difference with particles present. Also, at a 3 g/s dilution air flow rate, an efficiency of 80% at an outlet temperature above 800°C was obtained. Higher CPG experimental temperatures showed promise for higher reaction conversion, both experimentally and in the model. However, based on wall temperature data taken during experimentation, it is apparent that the CPG needs multiple heating zones with separate temperature controllers in order to have an isothermal zone rather than a parabolic temperature profile. As for the computer model, it predicted much higher reaction conversion at higher temperature. The mass fraction of fuel in the inlet stream was shown not to affect conversion, while increasing residence time increased conversion. The particle size distribution predicted by the model was far off and was bimodal for one of the statistical methods.
Using the results from experimentation and modeling, a preliminary CPG design is presented that will operate in a 5MWth receiver system.
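
    The Beer's-law step can be sketched as a direct inversion of the measured transmission; every number here (diameter, density, extinction efficiency, path length, transmission) is an illustrative assumption, not measured CPG data:

```python
import math

# Beer-Lambert inversion: given the transmission measured through the
# extinction tube, back out the particle number density and mass loading.
d = 300e-9            # mean particle diameter, m (assumed)
rho = 1800.0          # soot density, kg/m^3 (assumed)
Q_ext = 2.5           # extinction efficiency at 532 nm (assumed, Mie-like)
L = 0.5               # optical path length, m (assumed)
transmission = 0.62   # I / I0 at the detector (assumed)

# ln(I0/I) = Q_ext * (pi d^2 / 4) * N * L  ->  number density N
N = math.log(1.0 / transmission) / (Q_ext * math.pi * d ** 2 / 4.0 * L)
mass_loading = N * rho * math.pi * d ** 3 / 6.0    # kg/m^3

print(f"N = {N:.3e} 1/m^3, loading = {mass_loading * 1e3:.3f} g/m^3")
```

    With the mass loading and the known fuel feed rate, the conversion of natural gas to solid carbon follows directly.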

  8. Formulation of Nanoliposomal Vitamin D3 for Potential Application in Beverage Fortification

    PubMed Central

    Mohammadi, Maryam; Ghanbarzadeh, Babak; Hamishehkar, Hamed

    2014-01-01

    Purpose: Vitamin D, a liposoluble vitamin, has many health benefits. Encapsulation of bioactives in lipid-based carrier systems such as nanoliposomes preserves their native properties against oxidation over time and provides a stable aqueous dispersion. Methods: In the current study, vitamin D3 nanoliposomes were prepared using the thin-film hydration-sonication method and fully characterized by different instrumental techniques. Results: According to FTIR and DSC results, no interaction was observed between the encapsulated nutraceutical and the liposome constituents. The particle size and size distribution (Span value) were 82–90 nm and 0.70–0.85, respectively. TEM analysis showed nano-sized globular bilayer vesicles. In all formulations, the encapsulation efficiency of vitamin D3 was more than 93%. Addition of cholesterol to the lecithin bilayer increased the negative zeta potential from −29 to −43 mV. Conclusion: The results of this study suggest that the liposomal nanoparticles may be introduced as a suitable carrier for fortification of beverages with vitamin D3. PMID:25671191
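
    The Span value quoted above is computed from percentile diameters, Span = (D90 - D10) / D50; a minimal sketch on synthetic lognormal sizes (the 86 nm median and width are illustrative, and laser-diffraction Span is normally defined on volume-weighted rather than number-weighted percentiles):

```python
import numpy as np

# Synthetic size measurements standing in for instrument output.
rng = np.random.default_rng(3)
diameters = rng.lognormal(mean=np.log(86.0), sigma=0.25, size=5000)  # nm

d10, d50, d90 = np.percentile(diameters, [10, 50, 90])
span = (d90 - d10) / d50   # dimensionless width of the distribution
print(f"D50 = {d50:.0f} nm, Span = {span:.2f}")
```

    A smaller Span means a narrower, more uniform vesicle population; values below about 1 are generally considered narrow.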

  9. Theoretical and experimental studies on ionic currents in nanopore-based biosensors.

    PubMed

    Liu, Lei; Li, Chu; Ma, Jian; Wu, Yingdong; Ni, Zhonghua; Chen, Yunfei

    2014-12-01

    A novel generation of nanopore-based analytical technology has opened possibilities for nanofluidic devices for low-cost DNA sequencing and rapid biosensing. In this paper, a simplified model is suggested to describe DNA translocation through a nanopore, and the internal potential, ion concentration, ionic flow speed and ionic current in nanopores of different sizes were theoretically calculated and discussed on the basis of the Poisson-Boltzmann, Navier-Stokes and Nernst-Planck equations, considering several important parameters such as the applied voltage, the thickness and the electric potential distributions in the nanopores. In this way, the baseline ionic currents, the modulated ionic currents and the current drops induced by translocation were obtained, and the size effects of the nanopores were compared and discussed based on the calculated results and experimental data, which indicated that nanopores with a size of about 10 nm are more advantageous for achieving high-quality ionic current signals in DNA sensing.

  10. SUB 1-Millimeter Size Fresnel Micro Spectrometer

    NASA Technical Reports Server (NTRS)

    Park, Yeonjoon; Koch, Laura; Song, Kyo D.; Park, Sangloon; King, Glen; Choi, Sang

    2010-01-01

    An ultra-small micro spectrometer with a diameter of less than 1 mm was constructed using Fresnel diffraction. The fabricated spectrometer has a diameter of 750 micrometers and a focal length of 2.4 mm at a wavelength of 533 nm. The micro spectrometer was built with a simple negative zone plate that has an opaque center with an elliptic shadow to remove the zero-order direct beam from the aperture slit. Unlike conventional approaches, the detailed optical calculation indicates that the ideal spectral resolution and resolving power do not depend on the miniaturized size but only on the total number of rings. We calculated the 2D and 3D photon distribution around the aperture slit and confirmed that improved micro-spectrometers below 1 mm in size can be built with Fresnel diffraction. The comparison between mathematical simulation and measured data demonstrates the theoretical resolution, measured performance, misalignment effects, and possible improvements for the sub-1 mm Fresnel micro-spectrometer. We suggest the use of an array of micro spectrometers for tunable multi-spectral imaging in the ultraviolet range.
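
    The scaling stated above (resolving power set by the number of zones, not the overall size) follows from the basic zone-plate relations. A small sketch; the zone count of 110 is an assumed value, chosen only to be consistent with the quoted 750 µm diameter, 2.4 mm focal length and 533 nm wavelength:

```python
def zone_plate(outer_radius_m, n_zones, wavelength_m):
    """First-order zone-plate relations: r_N^2 ~ N * lambda * f gives
    f = r_N^2 / (N * lambda), and the ideal spectral resolving power
    lambda/dlambda ~ N, the total number of zones (size-independent)."""
    focal_length = outer_radius_m ** 2 / (n_zones * wavelength_m)
    return focal_length, n_zones

# Assumed zone count (110) chosen to match the quoted geometry
f, resolving_power = zone_plate(outer_radius_m=375e-6, n_zones=110,
                                wavelength_m=533e-9)   # f ~ 2.4e-3 m
```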

  11. Regulation of adhesion behavior of murine macrophage using supported lipid membranes displaying tunable mannose domains

    NASA Astrophysics Data System (ADS)

    Kaindl, T.; Oelke, J.; Pasc, A.; Kaufmann, S.; Konovalov, O. V.; Funari, S. S.; Engel, U.; Wixforth, A.; Tanaka, M.

    2010-07-01

    Highly uniform, strongly correlated domains of synthetically designed lipids can be incorporated into supported lipid membranes. The systematic characterization of membranes displaying a variety of domains revealed that the equilibrium size of domains depends significantly on the length of the fluorocarbon chains, which can be quantitatively interpreted within the framework of an equivalent dipole model. A monodisperse, narrow size distribution of the domains enables us to treat the inter-domain correlations as two-dimensional colloidal crystallization and calculate the potentials of mean force. The obtained results demonstrated that both size and inter-domain correlation can be precisely controlled by the molecular structures. By coupling α-D-mannose to lipid head groups, we studied the adhesion behavior of the murine macrophage (J774A.1) on supported membranes. Specific adhesion and spreading of macrophages showed a clear dependence on the density of functional lipids. The obtained results suggest that such synthetic lipid domains can be used as a defined platform to study how cells sense the size and distribution of functional molecules during adhesion and spreading.

  12. Generating Color from Polydisperse, Near Micron-Sized TiO2 Particles.

    PubMed

    Alam, Al-Mahmnur; Baek, Kyungnae; Son, Jieun; Pei, Yi-Rong; Kim, Dong Ha; Choy, Jin-Ho; Hyun, Jerome K

    2017-07-19

    Single-particle Mie calculations of near micron-sized TiO2 particles predict strong light scattering dominating the visible range that would give rise to a white appearance. We demonstrate that a polydisperse collection of these "white" particles can generate visible colors through ensemble scattering. Weighted averaging of the scattering over the particle size distribution smooths the sharp, multiple, high-order scattering modes of individual particles into broad variations in the collective extinction. These extinction variations appear as visible colors for particles suspended in organic solvent at low concentration, or for a monolayer of particles supported on a transparent substrate viewed in front of a white light source. We further exploit the sensitivity of these color variations to the surrounding environment to promote micron-sized TiO2 particles as stable and robust agents for detecting the optical index of homogeneous media with high contrast sensitivity. Such distribution-modulated scattering properties give TiO2 particles an intriguing opportunity to impart color and optical sensitivity to their widespread electronic and chemical platforms such as antibacterial windows, catalysis, photocatalysis, optical sensors, and photovoltaics.

  13. Exact special twist method for quantum Monte Carlo simulations

    NASA Astrophysics Data System (ADS)

    Dagrada, Mario; Karakuzu, Seher; Vildosola, Verónica Laura; Casula, Michele; Sorella, Sandro

    2016-12-01

    We present a systematic investigation of the special twist method introduced by Rajagopal et al. [Phys. Rev. B 51, 10591 (1995), 10.1103/PhysRevB.51.10591] for reducing finite-size effects in correlated calculations of periodic extended systems with Coulomb interactions and Fermi statistics. We propose a procedure for finding special twist values which, at variance with previous applications of this method, reproduce the energy of the mean-field infinite-size limit solution within an adjustable (arbitrarily small) numerical error. This choice of the special twist is shown to be the most accurate single-twist solution for curing one-body finite-size effects in correlated calculations. For these reasons we dubbed our procedure "exact special twist" (EST). EST only needs a fully converged independent-particles or mean-field calculation within the primitive cell and a simple fit to find the special twist along a specific direction in the Brillouin zone. We first assess the performance of EST in a simple correlated model such as the three-dimensional electron gas. Afterwards, we test its efficiency within ab initio quantum Monte Carlo simulations of metallic elements of increasing complexity. We show that EST displays an overall good performance in reducing finite-size errors comparable to the widely used twist average technique but at a much lower computational cost since it involves the evaluation of just one wave function. We also demonstrate that the EST method shows similar performance in the calculation of correlation functions, such as the ionic forces for structural relaxation and the pair radial distribution function in liquid hydrogen. Our conclusions point to the usefulness of EST for correlated supercell calculations; our method will be particularly relevant when the physical problem under consideration requires large periodic cells.

  14. Computing Gravitational Fields of Finite-Sized Bodies

    NASA Technical Reports Server (NTRS)

    Quadrelli, Marco

    2005-01-01

    A computer program utilizes the classical theory of gravitation, implemented by means of the finite-element method, to calculate the near gravitational fields of bodies of arbitrary size, shape, and mass distribution. The program was developed for application to a spacecraft and to floating proof masses and associated equipment carried by the spacecraft for detecting gravitational waves. The program can calculate steady or time-dependent gravitational forces, moments, and gradients thereof. Bodies external to a proof mass can be moving around the proof mass and/or deformed under thermoelastic loads. An arbitrarily shaped proof mass is represented by a collection of parallelepiped elements. The gravitational force and moment acting on each parallelepiped element of a proof mass, including those attributable to the self-gravitational field of the proof mass, are computed exactly from the closed-form equation for the gravitational potential of a parallelepiped. The gravitational field of an arbitrary distribution of mass external to a proof mass can be calculated either by summing the fields of suitably many point masses or by higher-order Gauss-Legendre integration over all elements surrounding the proof mass that are part of a finite-element mesh. This computer program is compatible with more general finite-element codes, such as NASTRAN, because it is configured to read a generic input data file containing the detailed description of the finite-element mesh.
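
    Of the two options described for external masses, the point-mass summation is the simpler; a minimal sketch of that idea (not the program's actual implementation):

```python
import numpy as np

G = 6.674e-11  # gravitational constant, m^3 kg^-1 s^-2

def gravity_at(point, masses, positions):
    """Newtonian gravitational acceleration at `point` from a set of
    point masses: g = sum_i G * m_i * (r_i - p) / |r_i - p|^3."""
    point = np.asarray(point, dtype=float)
    g = np.zeros(3)
    for m, pos in zip(masses, positions):
        r = np.asarray(pos, dtype=float) - point
        g += G * m * r / np.linalg.norm(r) ** 3
    return g

# A single 1000 kg mass 1 m away along x gives |g| = G * 1000
g_vec = gravity_at([0.0, 0.0, 0.0], [1000.0], [[1.0, 0.0, 0.0]])
```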

  15. Quantifying Uncertainties in Mass-Dimensional Relationships Through a Comparison Between CloudSat and SPartICus Reflectivity Factors

    NASA Astrophysics Data System (ADS)

    Mascio, J.; Mace, G. G.

    2015-12-01

    CloudSat and CALIPSO, two of the satellites in the A-Train constellation, use algorithms, such as the T-matrix method, to calculate the scattering properties of small cloud particles. Ice clouds (i.e., cirrus) pose problems for these cloud-property retrieval algorithms because of the variability of ice mass as a function of particle size. Assumptions regarding the microphysical properties, such as mass-dimensional (m-D) relationships, are often necessary in retrieval algorithms for simplification, but these assumptions create uncertainties of their own. Therefore, ice cloud property retrieval uncertainties can be substantial and are often not well known. To investigate these uncertainties, reflectivity factors measured by CloudSat are compared to those calculated from particle size distributions (PSDs) to which different m-D relationships are applied. These PSDs are from data collected in situ during three flights of the Small Particles in Cirrus (SPartICus) campaign. We find that no specific habit emerges as preferred; instead, we conclude that the microphysical characteristics of ice crystal populations tend to be distributed over a continuum and therefore cannot be categorized easily. To quantify the uncertainties in the mass-dimensional relationships, an optimal estimation inversion was run to retrieve the m-D relationship per SPartICus flight, as well as to calculate the uncertainties of the m-D power law.
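
    A hedged sketch of the forward step used in such comparisons: computing a Rayleigh-regime reflectivity factor from a binned PSD under an assumed m-D power law. The coefficients and the melted-equivalent conversion below are common conventions for illustration, not values from the SPartICus analysis.

```python
import numpy as np

def reflectivity_from_psd(D_m, N_per_m3, a, b):
    """Rayleigh-regime equivalent reflectivity factor Ze (mm^6 m^-3)
    from a binned PSD and an assumed m-D power law m = a * D**b
    (SI units: D in m, m in kg). Each ice particle is mapped to its
    melted-equivalent diameter, and the conventional ice/water
    dielectric ratio 0.174/0.93 is applied."""
    mass = a * np.asarray(D_m) ** b                          # kg per particle
    d_melt_mm = (6.0 * mass / (np.pi * 1000.0)) ** (1 / 3) * 1e3
    return (0.174 / 0.93) * float(np.sum(N_per_m3 * d_melt_mm ** 6))

# Toy PSD with Brown-Francis-like coefficients (a = 0.0185, b = 1.9)
D = np.array([0.5e-3, 1.0e-3, 2.0e-3])   # particle maximum dimension (m)
N = np.array([2000.0, 1000.0, 100.0])    # concentration per bin (m^-3)
Ze = reflectivity_from_psd(D, N, a=0.0185, b=1.9)
dBZ = 10.0 * np.log10(Ze)
```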

  16. A 3D particle Monte Carlo approach to studying nucleation

    NASA Astrophysics Data System (ADS)

    Köhn, Christoph; Enghoff, Martin Bødker; Svensmark, Henrik

    2018-06-01

    The nucleation of sulphuric acid molecules plays a key role in the formation of aerosols. We here present a three-dimensional particle Monte Carlo model to study the growth of sulphuric acid clusters as well as its dependence on the ambient temperature and the initial particle density. We initiate a swarm of sulphuric acid-water clusters with a size of 0.329 nm at densities between 10^7 and 10^8 cm^-3, temperatures between 200 and 300 K and a relative humidity of 50%. After every time step, we update the positions of the particles using size-dependent diffusion coefficients. If two particles encounter each other, we merge them, adding their volumes and masses. Conversely, after every time step we check whether a cluster evaporates, liberating a molecule. We present the spatial distribution as well as the size distribution calculated from individual clusters. We also calculate the nucleation rate of clusters with a radius of 0.85 nm as a function of time, initial particle density and temperature. The nucleation rates obtained from the presented model agree well with experimentally obtained values and with those of a numerical model that serves as a benchmark of our code. In contrast to previous nucleation models, we present for the first time a code capable of tracing individual particles and thus of capturing the physics related to the discrete nature of particles.

  17. New general pore size distribution model by classical thermodynamics application: Activated carbon

    USGS Publications Warehouse

    Lordgooei, M.; Rood, M.J.; Rostam-Abadi, M.

    2001-01-01

    A model is developed using classical thermodynamics to characterize pore size distributions (PSDs) of materials containing micropores and mesopores. The thermal equation of equilibrium adsorption (TEEA) is used to provide thermodynamic properties and to relate the relative pore-filling pressure of vapors to the characteristic pore energies of the adsorbent/adsorbate system for micropore sizes. Pore characteristic energies are calculated by averaging the interaction energies between adsorbate molecules and adsorbent pore walls and by considering adsorbate-adsorbate interactions. A modified Kelvin equation is used to characterize mesopore sizes by considering the variation of the adsorbate surface tension and by excluding the adsorbed film layer from the pore size. The modified Kelvin equation predicts pore-filling pressures similar to those from density functional theory. Combining these models provides a complete PSD of the adsorbent over the micropores and mesopores. The resulting PSD is compared with the PSDs from the Jaroniec and Choma and Horvath and Kawazoe models as well as a first-order approximation model using Polanyi theory. The major strengths of this model are its basis in classical thermodynamic properties, fewer simplifying assumptions in its derivation compared with other methods, and its ease of use.
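
    The mesopore branch described above rests on the Kelvin equation; a minimal sketch of the uncorrected Kelvin radius, with illustrative nitrogen-at-77-K defaults assumed. The paper's modified form additionally accounts for surface-tension variation and the adsorbed film layer.

```python
import math

R_GAS = 8.314  # J mol^-1 K^-1

def kelvin_radius_nm(p_rel, gamma=8.85e-3, v_molar=34.7e-6, temp=77.0):
    """Kelvin (meniscus) radius for capillary condensation, in nm:
    r_K = -2 * gamma * V_m / (R * T * ln(p/p0)).
    Defaults are illustrative nitrogen-at-77-K values; gamma is the
    surface tension (N/m), v_molar the liquid molar volume (m^3/mol)."""
    return -2.0 * gamma * v_molar / (R_GAS * temp * math.log(p_rel)) * 1e9

r_k = kelvin_radius_nm(0.5)   # ~1.4 nm at p/p0 = 0.5
```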

  18. Benefits of polidocanol endovenous microfoam (Varithena®) compared with physician-compounded foams

    PubMed Central

    Carugo, Dario; Ankrett, Dyan N; Zhao, Xuefeng; Zhang, Xunli; Hill, Martyn; O’Byrne, Vincent; Hoad, James; Arif, Mehreen; Wright, David DI

    2015-01-01

    Objective To compare foam bubble size and bubble size distribution, stability, and degradation rate of commercially available polidocanol endovenous microfoam (Varithena®) and physician-compounded foams using a number of laboratory tests. Methods Foam properties of polidocanol endovenous microfoam and physician-compounded foams were measured and compared using a glass-plate method and a Sympatec QICPIC image analysis method to measure bubble size and bubble size distribution, Turbiscan™ LAB for foam half time and drainage and a novel biomimetic vein model to measure foam stability. Physician-compounded foams composed of polidocanol and room air, CO2, or mixtures of oxygen and carbon dioxide (O2:CO2) were generated by different methods. Results Polidocanol endovenous microfoam was found to have a narrow bubble size distribution with no large (>500 µm) bubbles. Physician-compounded foams made with the Tessari method had broader bubble size distribution and large bubbles, which have an impact on foam stability. Polidocanol endovenous microfoam had a lower degradation rate than any physician-compounded foams, including foams made using room air (p < 0.035). The same result was obtained at different liquid to gas ratios (1:4 and 1:7) for physician-compounded foams. In all tests performed, CO2 foams were the least stable and different O2:CO2 mixtures had intermediate performance. In the biomimetic vein model, polidocanol endovenous microfoam had the slowest degradation rate and longest calculated dwell time, which represents the length of time the foam is in contact with the vein, almost twice that of physician-compounded foams using room air and eight times better than physician-compounded foams prepared using equivalent gas mixes. Conclusion Bubble size, bubble size distribution and stability of various sclerosing foam formulations show that polidocanol endovenous microfoam results in better overall performance compared with physician-compounded foams. 
Polidocanol endovenous microfoam offers better stability and cohesive properties in a biomimetic vein model compared to physician-compounded foams. Polidocanol endovenous microfoam, which is indicated in the United States for treatment of great saphenous vein system incompetence, provides clinicians with a consistent product with enhanced handling properties. PMID:26036246

  19. Benefits of polidocanol endovenous microfoam (Varithena®) compared with physician-compounded foams.

    PubMed

    Carugo, Dario; Ankrett, Dyan N; Zhao, Xuefeng; Zhang, Xunli; Hill, Martyn; O'Byrne, Vincent; Hoad, James; Arif, Mehreen; Wright, David D I; Lewis, Andrew L

    2016-05-01

    To compare foam bubble size and bubble size distribution, stability, and degradation rate of commercially available polidocanol endovenous microfoam (Varithena®) and physician-compounded foams using a number of laboratory tests. Foam properties of polidocanol endovenous microfoam and physician-compounded foams were measured and compared using a glass-plate method and a Sympatec QICPIC image analysis method to measure bubble size and bubble size distribution, Turbiscan™ LAB for foam half time and drainage and a novel biomimetic vein model to measure foam stability. Physician-compounded foams composed of polidocanol and room air, CO2, or mixtures of oxygen and carbon dioxide (O2:CO2) were generated by different methods. Polidocanol endovenous microfoam was found to have a narrow bubble size distribution with no large (>500 µm) bubbles. Physician-compounded foams made with the Tessari method had broader bubble size distribution and large bubbles, which have an impact on foam stability. Polidocanol endovenous microfoam had a lower degradation rate than any physician-compounded foams, including foams made using room air (p < 0.035). The same result was obtained at different liquid to gas ratios (1:4 and 1:7) for physician-compounded foams. In all tests performed, CO2 foams were the least stable and different O2:CO2 mixtures had intermediate performance. In the biomimetic vein model, polidocanol endovenous microfoam had the slowest degradation rate and longest calculated dwell time, which represents the length of time the foam is in contact with the vein, almost twice that of physician-compounded foams using room air and eight times better than physician-compounded foams prepared using equivalent gas mixes. Bubble size, bubble size distribution and stability of various sclerosing foam formulations show that polidocanol endovenous microfoam results in better overall performance compared with physician-compounded foams. 
Polidocanol endovenous microfoam offers better stability and cohesive properties in a biomimetic vein model compared to physician-compounded foams. Polidocanol endovenous microfoam, which is indicated in the United States for treatment of great saphenous vein system incompetence, provides clinicians with a consistent product with enhanced handling properties. © The Author(s) 2015.

  20. Kinetic energy distribution of multiply charged ions in Coulomb explosion of Xe clusters.

    PubMed

    Heidenreich, Andreas; Jortner, Joshua

    2011-02-21

    We report on calculations of kinetic energy distribution (KED) functions of multiply charged, high-energy ions in Coulomb explosion (CE) of an assembly of elemental Xe_n clusters (average size <n> = 200-2171) driven by ultra-intense, near-infrared, Gaussian laser fields (peak intensities 10^15-4 × 10^16 W cm^-2, pulse lengths 65-230 fs). In this cluster size and pulse parameter domain, outer ionization is incomplete/vertical, incomplete/nonvertical, or complete/nonvertical, with CE occurring in the presence of nanoplasma electrons. The KEDs were obtained from double averaging of single-trajectory molecular dynamics simulation ion kinetic energies: over a log-normal cluster size distribution and over the laser intensity distribution of a spatial Gaussian beam, which constitutes either a two-dimensional (2D) or a three-dimensional (3D) profile, with the 3D profile (when the cluster beam radius is larger than the Rayleigh length) usually being experimentally realized. The general features of the doubly averaged KEDs manifest the smearing out of the structure corresponding to the distribution of ion charges, a marked increase of the KEDs at very low energies due to the contribution from the persistent nanoplasma, a distortion of the KEDs and of the average energies toward lower energy values, and the appearance of long low-intensity high-energy tails caused by the admixture of contributions from large clusters by size averaging. The doubly averaged simulation results account reasonably well (within 30%) for the experimental data for the cluster-size dependence of the CE energetics and for its dependence on the laser pulse parameters, as well as for the anisotropy in the angular distribution of the energies of the Xe^q+ ions. Possible applications of this computational study include control of the ion kinetic energies by the choice of the laser intensity profile (2D/3D) in the laser-cluster interaction volume.
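
    The first of the two averages, over a log-normal cluster size distribution, can be sketched as follows; the observable and parameters are illustrative only, not the simulation's values.

```python
import numpy as np

def lognormal_weights(n, n_mean, sigma_g):
    """Normalized log-normal cluster-size weights:
    w(n) proportional to (1/n) * exp(-ln^2(n/n_mean) / (2 ln^2 sigma_g))."""
    w = np.exp(-np.log(n / n_mean) ** 2 / (2.0 * np.log(sigma_g) ** 2)) / n
    return w / w.sum()

def size_average(obs, n, n_mean, sigma_g):
    """Average a per-size observable over the log-normal distribution."""
    return float(np.sum(lognormal_weights(n, n_mean, sigma_g) * obs))

# Illustrative observable: an energy scale growing like n^(2/3)
n = np.arange(50.0, 3000.0)
E_avg = size_average(n ** (2 / 3), n, n_mean=200.0, sigma_g=1.5)
```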

  1. Discovery of the linear region of Near Infrared Diffuse Reflectance spectra using the Kubelka-Munk theory

    NASA Astrophysics Data System (ADS)

    Dai, Shengyun; Pan, Xiaoning; Ma, Lijuan; Huang, Xingguo; Du, Chenzhao; Qiao, Yanjiang; Wu, Zhisheng

    2018-05-01

    Particle size is of great importance for quantitative modelling of NIR diffuse reflectance. In this paper, the effect of sample particle size on the measurement of harpagoside in Radix Scrophulariae powder by near-infrared (NIR) diffuse reflectance spectroscopy was explored. High-performance liquid chromatography (HPLC) was employed as a reference method to construct the quantitative particle size model. Several spectral preprocessing methods were compared, and particle size models were obtained with the different preprocessing methods for establishing partial least-squares (PLS) models of harpagoside. Data showed that the 125-150 μm particle size fraction of Radix Scrophulariae exhibited the best prediction ability, with R^2pre = 0.9513, RMSEP = 0.1029 mg·g^-1, and RPD = 4.78. For the hybrid granularity calibration model, the 90-180 μm particle size fraction exhibited the best prediction ability, with R^2pre = 0.8919, RMSEP = 0.1632 mg·g^-1, and RPD = 3.09. Furthermore, the Kubelka-Munk theory was used to relate the absorption coefficient k (concentration-dependent) and the scattering coefficient s (particle size-dependent). The scattering coefficient s was calculated based on the Kubelka-Munk theory to study its changes after mathematical preprocessing. A linear relationship was observed between k/s and absorbance A within a certain range, where the value of k/s was greater than 4. According to this relationship, the model was more accurately constructed for the 90-180 μm particle size fraction when s was kept constant or within a small linear region. This region provides a good reference for linear modelling of diffuse reflectance spectroscopy. To establish a diffuse reflectance NIR model, an accurate assessment of this region should be obtained in advance for a precise linear model.
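
    The remission function connecting diffuse reflectance to k/s in Kubelka-Munk theory is simple to state; a minimal sketch illustrating the k/s > 4 region mentioned above:

```python
def kubelka_munk(r_inf):
    """Kubelka-Munk remission function for an optically thick sample:
    k/s = (1 - R)^2 / (2 * R), with R the diffuse reflectance (0 < R <= 1)."""
    return (1.0 - r_inf) ** 2 / (2.0 * r_inf)

# Low reflectance corresponds to the k/s > 4 region discussed above:
ks = kubelka_munk(0.1)   # 4.05
```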

  2. Determining the infrared radiative effects of Saharan dust: a radiative transfer modelling study based on vertically resolved measurements at Lampedusa

    NASA Astrophysics Data System (ADS)

    Meloni, Daniela; di Sarra, Alcide; Brogniez, Gérard; Denjean, Cyrielle; De Silvestri, Lorenzo; Di Iorio, Tatiana; Formenti, Paola; Gómez-Amo, José L.; Gröbner, Julian; Kouremeti, Natalia; Liuzzi, Giuliano; Mallet, Marc; Pace, Giandomenico; Sferlazzo, Damiano M.

    2018-03-01

    Detailed measurements of radiation, atmospheric and aerosol properties were carried out in summer 2013 during the Aerosol Direct Radiative Impact on the regional climate in the MEDiterranean region (ADRIMED) campaign in the framework of the Chemistry-Aerosol Mediterranean Experiment (ChArMEx) experiment. This study focusses on the characterization of infrared (IR) optical properties and direct radiative effects of mineral dust, based on three vertical profiles of atmospheric and aerosol properties and IR broadband and narrowband radiation from airborne measurements, made in conjunction with radiosonde and ground-based observations at Lampedusa, in the central Mediterranean. Satellite IR spectra from the Infrared Atmospheric Sounder Interferometer (IASI) are also included in the analysis. The atmospheric and aerosol properties are used as input to a radiative transfer model, and various IR radiation parameters (upward and downward irradiance, nadir and zenith brightness temperature at different altitudes) are calculated and compared with observations. The model calculations are made for different sets of dust particle size distribution (PSD) and refractive index (RI), derived from observations and from the literature. The main results of the analysis are that the IR dust radiative forcing is non-negligible and strongly depends on PSD and RI. When calculations are made using the in situ measured size distribution, it is possible to identify the refractive index that produces the best match with observed IR irradiances and brightness temperatures (BTs). The most appropriate refractive indices correspond to those determined from independent measurements of mineral dust aerosols from the source regions (Tunisia, Algeria, Morocco) of dust transported over Lampedusa, suggesting that differences in the source properties should be taken into account. 
    With the in situ size distribution and the most appropriate refractive index, the estimated dust IR radiative forcing efficiency is +23.7 W m^-2 at the surface, -7.9 W m^-2 within the atmosphere, and +15.8 W m^-2 at the top of the atmosphere. The use of column-integrated dust PSDs from AERONET may also produce good agreement with measured irradiances and BTs, but with significantly different values of the RI. This implies large differences, up to a factor of 2.5 at the surface, in the estimated dust radiative forcing and in the IR heating rate. This study shows that spectrally resolved measurements of BTs are important to better constrain the dust IR optical properties and to obtain a reliable estimate of its radiative effects. Efforts should be directed at obtaining an improved description of the dust size distribution and its vertical distribution, as well as at including regionally resolved optical properties.

  3. Making the decoy-state measurement-device-independent quantum key distribution practically useful

    NASA Astrophysics Data System (ADS)

    Zhou, Yi-Heng; Yu, Zong-Wen; Wang, Xiang-Bin

    2016-04-01

    The relatively low key rate seems to be the major barrier to the practical use of decoy-state measurement-device-independent quantum key distribution (MDI-QKD). We present a four-intensity protocol for decoy-state MDI-QKD that greatly raises the key rate, especially when the total data size is not large. Calculations also show that our method makes secure private communication possible with fresh keys generated from MDI-QKD with a delay time of only a few seconds.

  4. A novel method for correcting scanline-observational bias of discontinuity orientation

    PubMed Central

    Huang, Lei; Tang, Huiming; Tan, Qinwen; Wang, Dingjian; Wang, Liangqing; Ez Eldin, Mutasim A. M.; Li, Changdong; Wu, Qiong

    2016-01-01

    Scanline observation is known to introduce an angular bias into the probability distribution of orientation in three-dimensional space. In this paper, numerical solutions expressing the functional relationship between the scanline-observational distribution (in one-dimensional space) and the inherent distribution (in three-dimensional space) are derived using probability theory and calculus under the independence hypothesis of dip direction and dip angle. Based on these solutions, a novel method for obtaining the inherent distribution (also for correcting the bias) is proposed, an approach which includes two procedures: 1) Correcting the cumulative probabilities of orientation according to the solutions, and 2) Determining the distribution of the corrected orientations using approximation methods such as the one-sample Kolmogorov-Smirnov test. The inherent distribution corrected by the proposed method can be used for discrete fracture network (DFN) modelling, which is applied to such areas as rockmass stability evaluation, rockmass permeability analysis, rockmass quality calculation and other related fields. To maximize the correction capacity of the proposed method, the observed sample size is suggested through effectiveness tests for different distribution types, dispersions and sample sizes. The performance of the proposed method and the comparison of its correction capacity with existing methods are illustrated with two case studies. PMID:26961249

  5. Two years experience with quality assurance protocol for patient related Rapid Arc treatment plan verification using a two dimensional ionization chamber array

    PubMed Central

    2011-01-01

    Purpose To verify the dose distribution and number of monitor units (MU) for dynamic treatment techniques like volumetric modulated single arc radiation therapy - Rapid Arc - each patient treatment plan has to be verified prior to the first treatment. The purpose of this study was to develop a patient related treatment plan verification protocol using a two dimensional ionization chamber array (MatriXX, IBA, Schwarzenbruck, Germany). Method Measurements were done to determine the dependence between response of 2D ionization chamber array, beam direction, and field size. Also the reproducibility of the measurements was checked. For the patient related verifications the original patient Rapid Arc treatment plan was projected on CT dataset of the MatriXX and the dose distribution was calculated. After irradiation of the Rapid Arc verification plans measured and calculated 2D dose distributions were compared using the gamma evaluation method implemented in the measuring software OmniPro (version 1.5, IBA, Schwarzenbruck, Germany). Results The dependence between response of 2D ionization chamber array, field size and beam direction has shown a passing rate of 99% for field sizes between 7 cm × 7 cm and 24 cm × 24 cm for measurements of single arc. For smaller and larger field sizes than 7 cm × 7 cm and 24 cm × 24 cm the passing rate was less than 99%. The reproducibility was within a passing rate of 99% and 100%. The accuracy of the whole process including the uncertainty of the measuring system, treatment planning system, linear accelerator and isocentric laser system in the treatment room was acceptable for treatment plan verification using gamma criteria of 3% and 3 mm, 2D global gamma index. Conclusion It was possible to verify the 2D dose distribution and MU of Rapid Arc treatment plans using the MatriXX. The use of the MatriXX for Rapid Arc treatment plan verification in clinical routine is reasonable. 
    With a passing-rate threshold of 99%, the verification protocol is able to detect clinically significant errors. PMID:21342509
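
    The gamma evaluation applied above (3%, 3 mm, global) can be sketched in brute-force form for small 2D grids. This is a generic implementation of the Low et al. criterion for illustration, not the OmniPro software's algorithm.

```python
import numpy as np

def gamma_index(dose_ref, dose_eval, spacing_mm, dd=0.03, dta_mm=3.0):
    """Global 2D gamma (Low et al.): for each reference point, take the
    minimum over all evaluated points of
    sqrt(|dr|^2 / DTA^2 + dD^2 / (dd * Dmax)^2); a point passes if
    gamma <= 1. Brute force, adequate for small grids."""
    ny, nx = dose_ref.shape
    yy, xx = np.mgrid[0:ny, 0:nx].astype(float)
    d_crit = dd * dose_ref.max()              # global (3% of max) criterion
    gamma = np.empty((ny, nx))
    for iy in range(ny):
        for ix in range(nx):
            dr2 = ((yy - iy) ** 2 + (xx - ix) ** 2) * spacing_mm ** 2
            dd2 = (dose_eval - dose_ref[iy, ix]) ** 2
            gamma[iy, ix] = np.sqrt(dr2 / dta_mm ** 2 + dd2 / d_crit ** 2).min()
    return gamma

# Identical measured and calculated distributions pass everywhere
dose = np.random.default_rng(1).random((8, 8))
g = gamma_index(dose, dose, spacing_mm=1.0)
passing_rate = float((g <= 1.0).mean())       # 1.0 here
```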

  6. Synchronization of cyclic power grids: Equilibria and stability of the synchronous state

    NASA Astrophysics Data System (ADS)

    Xi, Kaihua; Dubbeldam, Johan L. A.; Lin, Hai Xiang

    2017-01-01

    Synchronization is essential for the proper functioning of power grids; we investigate the synchronous states and their stability for cyclic power grids. We calculate the number of stable equilibria and investigate both the linear and nonlinear stabilities of the synchronous state. The linear stability analysis shows that the stability of the state, determined by the smallest nonzero eigenvalue, is inversely proportional to the size of the network. We use the energy barrier to measure the nonlinear stability and calculate it by comparing the potential energy of the type-1 saddles with that of the stable synchronous state. We find that the energy barrier depends on the network size (N) in a more complicated fashion compared to the linear stability. In particular, when the generators and consumers are evenly distributed in an alternating way, the energy barrier decreases to a constant when N approaches infinity. For a heterogeneous distribution of generators and consumers, the energy barrier decreases with N. The more heterogeneous the distribution is, the stronger the energy barrier depends on N. Finally, we find that by comparing situations with equal line loads in cyclic and tree networks, tree networks exhibit reduced stability. This difference disappears in the limit of N →∞ . This finding corroborates previous results reported in the literature and suggests that cyclic (sub)networks may be applied to enhance power transfer while maintaining stable synchronous operation.

  7. Comparison of FDTD-calculated specific absorption rate in adults and children when using a mobile phone at 900 and 1800 MHz

    NASA Astrophysics Data System (ADS)

    Martínez-Búrdalo, M.; Martín, A.; Anguiano, M.; Villar, R.

    2004-01-01

    In this paper, the specific absorption rate (SAR) in scaled human head models is analysed to study possible differences between SAR in the heads of adults and children and to assess compliance with the international safety guidelines while using a mobile phone. The finite-difference time-domain (FDTD) method has been used for calculating SAR values in models of both children and adults, at 900 and 1800 MHz. The maximum 1 g averaged SAR (SAR_1g) and maximum 10 g averaged SAR (SAR_10g) have been calculated in adult and scaled head models for comparison and for assessment of compliance with the ANSI/IEEE and European guidelines. Results show that peak SAR_1g and peak SAR_10g both trend downwards with decreasing head size, but as head size decreases, the percentage of energy absorbed in the brain increases. Thus, higher SAR in children's brains can be expected, depending on whether the thickness of their skulls and surrounding tissues actually depends on age. The SAR in eyes of different sizes, as a critical organ, has also been studied, and very similar distributions for the full-size and the scaled models have been obtained. Standard limits can only be exceeded in the impractical situation where the antenna is located at a very short distance in front of the eye.

  8. Linking physics with physiology in TMS: a sphere field model to determine the cortical stimulation site in TMS.

    PubMed

    Thielscher, Axel; Kammer, Thomas

    2002-11-01

    A fundamental problem of transcranial magnetic stimulation (TMS) is determining the site and size of the stimulated cortical area. In the motor system, the most common procedure for this is motor mapping. The obtained two-dimensional distribution of coil positions with associated muscle responses is used to calculate a center of gravity on the skull. However, even in motor mapping the exact stimulation site on the cortex is not known and only rough estimates of its size are possible. We report a new method which combines physiological measurements with a physical model used to predict the electric field induced by the TMS coil. In four subjects motor responses in a small hand muscle were mapped with 9-13 stimulation sites at the head perpendicular to the central sulcus in order to keep the induced current direction constant in a given cortical region of interest. Input-output functions from these head locations were used to determine stimulator intensities that elicit half-maximal muscle responses. Based on these stimulator intensities the field distribution on the individual cortical surface was calculated as rendered from anatomical MR data. The region on the cortical surface in which the different stimulation sites produced the same electric field strength (minimal variance, 4.2 +/- 0.8%) was determined as the most likely stimulation site on the cortex. In all subjects, it was located at the lateral part of the hand knob in the motor cortex. Comparisons of model calculations with the solutions obtained in this manner reveal that the stimulated cortex area innervating the target muscle is substantially smaller than the size of the electric field induced by the coil. Our results help to resolve fundamental questions raised by motor mapping studies as well as motor threshold measurements.

  9. Full Parallel Implementation of an All-Electron Four-Component Dirac-Kohn-Sham Program.

    PubMed

    Rampino, Sergio; Belpassi, Leonardo; Tarantelli, Francesco; Storchi, Loriano

    2014-09-09

    A full distributed-memory implementation of the Dirac-Kohn-Sham (DKS) module of the program BERTHA (Belpassi et al., Phys. Chem. Chem. Phys. 2011, 13, 12368-12394) is presented, where the self-consistent field (SCF) procedure is replicated on all the parallel processes, each process working on subsets of the global matrices. The key feature of the implementation is an efficient procedure for switching between two matrix distribution schemes, one (integral-driven) optimal for the parallel computation of the matrix elements and another (block-cyclic) optimal for the parallel linear algebra operations. This approach, making both CPU-time and memory scalable with the number of processors used, virtually overcomes at once both time and memory barriers associated with DKS calculations. Performance, portability, and numerical stability of the code are illustrated on the basis of test calculations on three gold clusters of increasing size, an organometallic compound, and a perovskite model. The calculations are performed on a Beowulf and a BlueGene/Q system.
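
The block-cyclic scheme mentioned above assigns each matrix block to a process in round-robin fashion over a 2D process grid. A minimal sketch of the owner computation (ScaLAPACK-style, zero-based indices; the grid and block sizes here are arbitrary examples, not BERTHA's actual configuration):

```python
def block_cyclic_owner(i, j, nprow, npcol, mb, nb):
    """Process grid coordinates owning global matrix element (i, j) in a 2D
    block-cyclic distribution with mb x nb blocks (ScaLAPACK convention)."""
    return ((i // mb) % nprow, (j // nb) % npcol)

# with a 2x2 process grid and 2x2 blocks, elements (0,0) and (4,4) share an owner
print(block_cyclic_owner(0, 0, 2, 2, 2, 2))  # (0, 0)
print(block_cyclic_owner(4, 4, 2, 2, 2, 2))  # (0, 0)
print(block_cyclic_owner(2, 0, 2, 2, 2, 2))  # (1, 0)
```

Switching between the integral-driven and block-cyclic schemes then amounts to redistributing each element to the owner computed by a mapping of this kind.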

  10. Model of the multipolar engine with decreased cogging torque by asymmetrical distribution of the magnets

    NASA Astrophysics Data System (ADS)

    Goryca, Zbigniew; Paduszyński, Kamil; Pakosz, Artur

    2018-03-01

    This paper presents the results of field calculations of cogging torque for a 12-pole torque motor with an 18-slot stator. A constant angular velocity of the magnets and the same size of gap between n-1 of the magnets were assumed. Under these conditions, the effect of changing the n-th gap between magnets on the cogging torque was tested. Due to the considerable length of the machine, the calculations were performed using a 2D model. The n-th gap for which the cogging torque assumed its lowest value was evaluated. The cogging torque of the machine with a symmetrical magnetic circuit (the same size of gap between magnets) was compared to that of the asymmetrical machine. With a proper choice of asymmetry, the cogging torque of the machine decreased fourfold.

  11. Type of Material in the Pipes Overhead Power Lines Impact on the Distribution on the Size of the Overhang and the Tension

    NASA Astrophysics Data System (ADS)

    Pawlak, Urszula; Pawlak, Marcin

    2017-10-01

    The article presents how the type of material from which the conductors of overhead power lines are produced influences the size of the overhang (sag) and the tension. The aim of the calculations was to present the mechanical benefits for the cable resulting from the type of cable used. The analysis was performed for two types of cables, aluminium with a steel core and aluminium with a composite core, on a two-span power line section. Ten different combinations of wind, icing, and temperature variations were included in the calculations. The cable was described by means of a catenary (chain) curve, while the horizontal component H of the tension force was determined using the bisection method. The loads were collected in accordance with the applicable Eurocode.
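
The catenary description and the bisection solution for H can be sketched as follows; the span, sag, and conductor weight per unit length are hypothetical values, not those of the analysed line.

```python
import math

def catenary_param(span, sag, lo=1.0, hi=1e6, tol=1e-9):
    """Bisection for the catenary parameter c (= H/w) of a level span,
    from the span length and mid-span sag: sag = c*(cosh(span/(2c)) - 1).
    The residual is decreasing in c: larger H -> flatter cable -> smaller sag."""
    f = lambda c: c*(math.cosh(span/(2*c)) - 1) - sag
    while hi - lo > tol*hi:
        mid = 0.5*(lo + hi)
        if f(mid) > 0:
            lo = mid
        else:
            hi = mid
    return 0.5*(lo + hi)

w = 10.8                                  # assumed conductor weight, N/m
c = catenary_param(span=300.0, sag=7.5)   # assumed span and sag, m
H = w*c                                   # horizontal component of the tension
print(f"c = {c:.1f} m, H = {H/1000:.2f} kN")
```

The parabolic approximation c = span^2/(8*sag) = 1500 m gives a good starting guess; the exact catenary value differs only slightly for shallow sags.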

  12. Radiation breakage of DNA: a model based on random-walk chromatin structure

    NASA Technical Reports Server (NTRS)

    Ponomarev, A. L.; Sachs, R. K.

    2001-01-01

    Monte Carlo computer software, called DNAbreak, has recently been developed to analyze observed non-random clustering of DNA double strand breaks in chromatin after exposure to densely ionizing radiation. The software models coarse-grained configurations of chromatin and radiation tracks, small-scale details being suppressed in order to obtain statistical results for larger scales, up to the size of a whole chromosome. We here give an analytic counterpart of the numerical model, useful for benchmarks, for elucidating the numerical results, for analyzing the assumptions of a more general but less mechanistic "randomly-located-clusters" formalism, and, potentially, for speeding up the calculations. The equations characterize multi-track DNA fragment-size distributions in terms of one-track action, an important step in extrapolating high-dose laboratory results to the much lower doses of main interest in environmental or occupational risk estimation. The approach can utilize experimental information on DNA fragment-size distributions to draw inferences about large-scale chromatin geometry during cell-cycle interphase.
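
As a baseline against which the clustering described above is judged, the random-breakage limit (breaks placed uniformly, no track structure) can be simulated directly; the chromosome length and break density below are arbitrary illustrative values, not DNAbreak parameters.

```python
import numpy as np

rng = np.random.default_rng(0)

def fragment_sizes(genome_mbp, breaks_per_mbp, rng):
    """Break a chromosome at uniformly random positions (random-breakage
    limit, i.e. no clustering) and return the fragment lengths in Mbp."""
    n_breaks = rng.poisson(breaks_per_mbp * genome_mbp)
    cuts = np.sort(rng.uniform(0.0, genome_mbp, n_breaks))
    edges = np.concatenate(([0.0], cuts, [genome_mbp]))
    return np.diff(edges)

sizes = np.concatenate([fragment_sizes(100.0, 0.2, rng) for _ in range(200)])
# for random breakage the fragment-size distribution is near-exponential,
# with mean close to the reciprocal of the break density
print(f"mean fragment: {sizes.mean():.2f} Mbp")
```

Deviations of measured fragment-size spectra from this exponential baseline are what signal break clustering along the track.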

  13. Grain size analysis and depositional environment of shallow marine to basin floor, Kelantan River Delta

    NASA Astrophysics Data System (ADS)

    Afifah, M. R. Nurul; Aziz, A. Che; Roslan, M. Kamal

    2015-09-01

    Sediment samples were collected from the shallow marine zone off Kuala Besar, Kelantan, outwards to the basin floor of the South China Sea, and consisted of Quaternary bottom sediments. Sixty-five samples were analysed for their grain size distribution and statistical relationships. Basic statistical parameters such as mean, standard deviation, skewness and kurtosis were calculated and used to differentiate the depositional environment of the sediments and to assess the uniformity of the depositional environment, whether beach or river. The sediments of all areas varied in their sorting, ranging from very well sorted to poorly sorted, strongly negatively skewed to strongly positively skewed, and extremely leptokurtic to very platykurtic in nature. Bivariate plots between the grain-size parameters were then interpreted, and the Coarsest-Median (CM) pattern showed a trend suggesting that the sediments were influenced by three ongoing hydrodynamic factors, namely turbidity currents, littoral drift and wave dynamics, which controlled the sediment distribution pattern in various ways.
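
The statistical parameters above can be computed by the method of moments in phi units; the sieve classes and weight percentages below are made-up illustrative data, not the Kelantan samples.

```python
import numpy as np

def moment_statistics(phi_midpoints, weight_percent):
    """Method-of-moments grain-size statistics in phi units from a sieve
    weight distribution: mean, sorting (std), skewness, kurtosis."""
    f = np.asarray(weight_percent, float)
    f = f / f.sum()
    phi = np.asarray(phi_midpoints, float)
    mean = (f * phi).sum()
    dev = phi - mean
    std = np.sqrt((f * dev**2).sum())          # sorting
    skew = (f * dev**3).sum() / std**3
    kurt = (f * dev**4).sum() / std**4
    return mean, std, skew, kurt

# symmetric toy distribution: zero skewness by construction
stats = moment_statistics([-1, 0, 1, 2, 3], [5, 20, 50, 20, 5])
print([round(s, 3) for s in stats])
```

Positive skewness indicates a fine tail (often river-fed deposits), negative skewness a coarse tail (often winnowed beach sands), which is the basis of the environmental discrimination used above.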

  14. Hydrothermal carbonization for the preparation of hydrochars from glucose, cellulose, chitin, chitosan and wood chips via low-temperature and their characterization.

    PubMed

    Simsir, Hamza; Eltugral, Nurettin; Karagoz, Selhan

    2017-12-01

    In this work, the hydrothermal carbonization of glucose, cellulose, chitin, chitosan and wood chips at 200°C, with processing times between 6 and 48 h, was studied. The carbonization degree of wood chips, cellulose and chitosan clearly increases as a function of time. The heating value of glucose increases by 88% upon carbonization for 48 h, while the increase is only 5% for chitin; it is calculated to be between 44 and 73% for wood chips, chitosan and cellulose. Glucose yielded complete formation of spherical hydrochar structures at a processing time as short as 12 h; however, carbon spheres with a narrow size distribution (∼560 nm) were obtained only after 48 h of residence time. Cellulose and wood chips yielded a similar morphology with an irregular size distribution. Chitin appeared not to undergo hydrothermal carbonization, whereas densely aggregated spheres of a uniform size around 42 nm were obtained from chitosan after 18 h. Copyright © 2017 Elsevier Ltd. All rights reserved.

  15. Therapeutic analysis of high-dose-rate {sup 192}Ir vaginal cuff brachytherapy for endometrial cancer using a cylindrical target volume model and varied cancer cell distributions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhang, Hualin, E-mail: hualin.zhang@northwestern.edu; Donnelly, Eric D.; Strauss, Jonathan B.

    Purpose: To evaluate high-dose-rate (HDR) vaginal cuff brachytherapy (VCBT) in the treatment of endometrial cancer in a cylindrical target volume with either a varied or a constant cancer cell distribution using the linear quadratic (LQ) model. Methods: A Monte Carlo (MC) technique was used to calculate the 3D dose distribution of HDR VCBT over a variety of cylinder diameters and treatment lengths. A treatment planning system (TPS) was used to make plans for the various cylinder diameters, treatment lengths, and prescriptions using the clinical protocol. The dwell times obtained from the TPS were fed into MC. The LQ model was used to evaluate the therapeutic outcome of two brachytherapy regimens prescribed either at 0.5 cm depth (5.5 Gy × 4 fractions) or at the vaginal mucosal surface (8.8 Gy × 4 fractions) for the treatment of endometrial cancer. An experimentally determined endometrial cancer cell distribution, which varied with depth and resembled a half-Gaussian distribution, was used in the radiobiology modeling. The equivalent uniform dose (EUD) to cancer cells was calculated for each treatment scenario. The therapeutic ratio (TR) was defined by comparing VCBT with a uniform-dose radiotherapy plan in terms of normal cell survival at the same level of cancer cell killing. Calculations of clinical impact were run twice, assuming two different types of cancer cell density distributions in the cylindrical target volume: (1) a half-Gaussian or (2) a uniform distribution. Results: EUDs were weakly dependent on cylinder size, treatment length, and prescription depth, but strongly dependent on the cancer cell distribution. TRs were strongly dependent on the cylinder size, treatment length, type of cancer cell distribution, and the sensitivity of normal tissue.
With a half-Gaussian distribution of cancer cells, which is most populated at the vaginal mucosa, the EUDs were between 6.9 Gy × 4 and 7.8 Gy × 4, and the TRs ranged from (5.0)^4 to (13.4)^4 for radiosensitive normal tissue, depending on the cylinder size, treatment length, prescription depth, and dose. However, for a uniform cancer cell distribution, the EUDs were between 6.3 Gy × 4 and 7.1 Gy × 4, and the TRs were found to be between (1.4)^4 and (1.7)^4. For uniformly interspersed cancer and radio-resistant normal cells, the TRs were less than 1. The two VCBT prescription regimens were found to be equivalent in terms of EUDs and TRs. Conclusions: HDR VCBT strongly favors a cylindrical target volume with a cancer cell distribution that follows its dosimetric trend. Assuming a half-Gaussian distribution of cancer cells, HDR VCBT provides a considerable radiobiological advantage over external beam radiotherapy (EBRT) in terms of sparing more normal tissue while maintaining the same level of cancer cell killing. However, for a uniform cancer cell distribution and radio-resistant normal tissue, the radiobiological outcome of HDR VCBT does not show an advantage over EBRT. This study strongly suggests that radiation therapy design should consider the cancer cell distribution inside the target volume in addition to the shape of the target.
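
The EUD concept used above can be sketched with the LQ model: find the uniform fractional dose giving the same total cancer-cell survival as the nonuniform dose weighted by the cell density. The alpha and beta values, the depth-dose shape, and the density width below are assumptions for illustration, not the paper's data.

```python
import numpy as np

def eud_per_fraction(dose_fx, cell_density, alpha=0.35, beta=0.035, n_fx=4):
    """LQ-based equivalent uniform dose per fraction for a nonuniform dose
    distribution weighted by a cancer-cell density distribution."""
    d = np.asarray(dose_fx, float)
    w = np.asarray(cell_density, float)
    w = w / w.sum()
    surv = (w * np.exp(-n_fx * (alpha*d + beta*d**2))).sum()
    # invert n*(alpha*d_eu + beta*d_eu^2) = -ln(surv) via the quadratic formula
    le = -np.log(surv) / n_fx
    return (-alpha + np.sqrt(alpha**2 + 4*beta*le)) / (2*beta)

# dose falling off from the mucosal surface, half-Gaussian cell density
depth = np.linspace(0.0, 1.0, 50)           # cm from the applicator surface
dose = 8.8 / (1.0 + depth)**2               # assumed inverse-square-like falloff
density = np.exp(-0.5 * (depth/0.3)**2)     # half-Gaussian, peaks at the mucosa
print(f"EUD = {eud_per_fraction(dose, density):.2f} Gy x 4")
```

Because the half-Gaussian density concentrates cells where the dose is highest, the EUD lands near the mucosal prescription rather than the depth prescription, which is the qualitative behavior reported above.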

  16. Measurements of Gas-phase H2so4, Oh, So2 and Aerosol Size Distribution On Mount Zugspitze At The Schneefernerhaus: Estimation of Sources and Sinks of Sulfuric Acid

    NASA Astrophysics Data System (ADS)

    Uecker, J.; Hanke, M.; Kamm, S.; Umann, B.; Arnold, F.; Poeschl, U.; Niessner, R.

    Gas-phase sulfuric acid and OH have been measured by the novel MPI-K ULTRA-CIMS (ultra-trace gas detection by the CIMS technique) at the Schneefernerhaus (2750 m asl, below the summit of Mount Zugspitze, Germany) in October 2001. These measurements were accompanied by measurements of SO2 with another MPI-K CIMS instrument and by aerosol size distribution measurements with a DMPS (differential mobility particle sizer) operated by the Institut fuer Wasserchemie (Technische Universitaet Muenchen). In this way a data set was obtained which allows the major sources and sinks of sulfuric acid to be investigated under relatively clean conditions. H2SO4 and especially OH concentrations are relatively well correlated with solar flux. Noon maximum concentrations of OH and H2SO4 of 6.5·10^6 and 2·10^6 cm^-3, respectively, were observed. The average SO2 concentrations were below 20 ppt. The aerosol size distribution was obtained in 39 size ranges from 10 to 1056 nm. Typical aerosol concentrations were in the range of 400 to 1800 cm^-3 during the discussed period. An estimation of the production rate of H2SO4 was inferred based on the reaction of SO2 and OH, while the loss rate was calculated by considering the condensation of H2SO4 on aerosol particles (Fuchs and Sutugin approach). Results of the measurements and calculations will be discussed.

  17. On the calculation of the energies of dissociation, cohesion, vacancy formation, electron attachment, and the ionization potential of small metallic clusters containing a monovacancy

    NASA Astrophysics Data System (ADS)

    Pogosov, V. V.; Reva, V. I.

    2017-09-01

    In terms of the model of stable jellium, self-consistent calculations of spatial distributions of electrons and potentials, as well as of energies of dissociation, cohesion, vacancy formation, electron attachment, and ionization potentials of solid clusters of Mg_N and Li_N (with N ≤ 254) and of clusters containing a vacancy (N ≥ 12), have been performed. The contribution of a monovacancy to the energy of the cluster and the size dependences of its characteristics and asymptotics have been discussed. Calculations were performed using a SKIT-3 cluster at the Glushkov Institute of Cybernetics, National Academy of Sciences, Ukraine (Rpeak = 7.4 Tflops).

  18. Determination of the conversion gain and the accuracy of its measurement for detector elements and arrays

    NASA Astrophysics Data System (ADS)

    Beecken, B. P.; Fossum, E. R.

    1996-07-01

    Standard statistical theory is used to calculate how the accuracy of a conversion-gain measurement depends on the number of samples. During the development of a theoretical basis for this calculation, a model is developed that predicts how the noise levels from different elements of an ideal detector array are distributed. The model can also be used to determine what dependence the accuracy of measured noise has on the size of the sample. These features have been confirmed by experiment, thus enhancing the credibility of the method for calculating the uncertainty of a measured conversion gain. Keywords: detector-array uniformity, charge-coupled device, active pixel sensor.
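
The mean-variance (photon transfer) estimate of conversion gain, and the sample-size dependence of its accuracy, can be sketched as follows. The signal level, the true gain of 2 e-/DN, and the Poisson-only noise model are simplifying assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

def conversion_gain(signal_dn):
    """Shot-noise (mean-variance) estimate of conversion gain in e-/DN:
    for Poisson photoelectrons, var[e-] = mean[e-], so mean[DN]/var[DN] = K."""
    m = signal_dn.mean()
    v = signal_dn.var(ddof=1)
    return m / v

# simulate one pixel: 1000 e- mean signal, digitised with K = 2 e-/DN
K_true = 2.0
for n in (100, 10_000):
    dn = rng.poisson(1000.0, n) / K_true
    rel_err = np.sqrt(2.0 / (n - 1))   # ~ relative error of a variance estimate
    print(n, round(conversion_gain(dn), 2), f"expected rel. error ~{rel_err:.1%}")
```

The sqrt(2/(n-1)) factor is the standard-statistics result alluded to in the abstract: the gain estimate inherits the uncertainty of the sample variance, so quadrupling the accuracy requires roughly sixteen times as many samples.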

  19. Island size distribution with hindered aggregation

    NASA Astrophysics Data System (ADS)

    González, Diego Luis; Camargo, Manuel; Sánchez, Julián A.

    2018-05-01

    We study the effect of hindered aggregation on island formation processes for a one-dimensional model of epitaxial growth with arbitrary nucleus size i. In the proposed model, the attachment of monomers to islands is hindered by an aggregation barrier, ε_a, which decreases the hopping rate of monomers onto the islands. As ε_a increases, the system exhibits a crossover between two different regimes, namely, from diffusion-limited aggregation to attachment-limited aggregation. The island size distribution, P(s), is calculated for different values of ε_a by a self-consistent approach involving the nucleation and aggregation capture kernels. The results given by the analytical model are compared with those from kinetic Monte Carlo simulations, finding close agreement between both sets of data for all considered values of i and ε_a. As the aggregation barrier increases, the spatial effect of fluctuations on the density of monomers can be neglected and P(s) smoothly approaches the limit distribution P(s) = δ_{s,i+1}. In the crossover regime the system features complex and rich behavior, which can be explained in terms of the characteristic timescales of the different microscopic processes.
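
A stripped-down kinetic Monte Carlo version of such a model (nucleus size i = 1, with the aggregation barrier implemented as a hop-rejection probability exp(-ε_a)) can be sketched as below; the lattice size, rates, and coverage are arbitrary choices, not the paper's simulation parameters.

```python
import math, random

def kmc_islands(L=400, F=0.01, D=50.0, eps_a=1.0, coverage=0.1, seed=7):
    """Minimal 1D point-island KMC: monomers deposit (rate F per site) and
    hop (rate D); a hop onto an island is accepted with prob. exp(-eps_a)
    (hindered aggregation); two monomers meeting nucleate an island (i = 1)."""
    rng = random.Random(seed)
    site = [0]*L                 # 0 empty, 1 monomer, 2 island
    size = {}                    # island position -> island size
    monomers = []                # positions of free monomers
    deposited = 0
    p_stick = math.exp(-eps_a)
    while deposited < coverage*L:
        r_dep, r_hop = F*L, D*len(monomers)
        if rng.random()*(r_dep + r_hop) < r_dep:      # deposit an atom
            x = rng.randrange(L)
            deposited += 1
            if site[x] == 0:
                site[x] = 1; monomers.append(x)
            elif site[x] == 1:                        # direct impingement: nucleate
                site[x] = 2; size[x] = 2; monomers.remove(x)
            else:
                size[x] += 1
        else:                                         # hop a random monomer
            k = rng.randrange(len(monomers))
            x = monomers[k]
            y = (x + rng.choice((-1, 1))) % L
            if site[y] == 0:
                site[x] = 0; site[y] = 1; monomers[k] = y
            elif site[y] == 1:                        # two monomers meet: nucleate
                site[x] = 0; site[y] = 2; size[y] = 2
                monomers.remove(x); monomers.remove(y)
            elif rng.random() < p_stick:              # hindered aggregation
                site[x] = 0; size[y] += 1
                monomers.remove(x)
    return size, monomers, deposited

size, monomers, deposited = kmc_islands()
print(f"{len(size)} islands, {len(monomers)} free monomers")
```

A histogram of `size.values()` gives P(s); raising `eps_a` pushes it toward the attachment-limited regime described above.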

  20. Determination of Atmospheric Aerosol Characteristics from the Polarization of Scattered Radiation

    NASA Technical Reports Server (NTRS)

    Harris, F. S., Jr.; McCormick, M. P.

    1973-01-01

    Aerosols affect the polarization of radiation in scattering; hence measured polarization can be used to infer the nature of the particles. Size distribution, particle shape, and the real and absorptive parts of the complex refractive index affect the scattering. From Lorenz-Mie calculations of the four Stokes parameters as a function of scattering angle for various wavelengths, the following polarization parameters were plotted: total intensity, intensity of polarization in the plane of observation, intensity perpendicular to the plane of observation, polarization ratio, polarization (using all four Stokes parameters), and the plane of the polarization ellipse and its ellipticity. A six-component log-Gaussian size distribution model was used to study the effects on the nature of the polarization due to variations in the size distribution and complex refractive index. Though a rigorous inversion from measurements of scattering to a detailed specification of aerosol characteristics is not possible, considerable information about the nature of the aerosols can be obtained. Only single scattering from aerosols was considered in this paper. The background due to Rayleigh gas scattering, the reduction of effects as a result of multiple scattering, and the polarization effects of possible ground background (airborne platforms) were also not included.
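
For reference, the polarization quantities derived from the four Stokes parameters can be written compactly; the sample vectors below are textbook limiting cases, not the paper's Lorenz-Mie results.

```python
import numpy as np

def polarization(I, Q, U, V):
    """Degrees of total and linear polarization from the Stokes parameters
    (I, Q, U, V) of the scattered light."""
    p_tot = float(np.sqrt(Q**2 + U**2 + V**2) / I)
    p_lin = float(np.sqrt(Q**2 + U**2) / I)
    return p_tot, p_lin

# fully linearly polarized light: (I, Q, U, V) = (1, 1, 0, 0)
print(polarization(1.0, 1.0, 0.0, 0.0))
# unpolarized light: (1, 0, 0, 0)
print(polarization(1.0, 0.0, 0.0, 0.0))
```

The ellipticity and plane of the polarization ellipse mentioned above follow from the ratios V/I and U/Q, respectively.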

  1. Influence of fundamental mode fill factor on disk laser output power and laser beam quality

    NASA Astrophysics Data System (ADS)

    Cheng, Zhiyong; Yang, Zhuo; Shao, Xichun; Li, Wei; Zhu, Mengzhen

    2017-11-01

    A three-dimensional numerical model based on the finite element method and the Fox-Li method with angular spectrum diffraction theory is developed to calculate the output power and power density distribution of a Yb:YAG disk laser. We investigate the influence of the fundamental mode fill factor (the ratio of fundamental mode size to pump spot size) on the output power and laser beam quality. Due to aspherical aberration and the soft aperture effect in the laser disk, high beam quality can be achieved, albeit with relatively lower efficiency. The highest output power of the fundamental laser mode is influenced by the fundamental mode fill factor. Besides, we find that the optimal mode fill factor increases with pump spot size.

  2. Determining the Effective Density and Stabilizer Layer Thickness of Sterically Stabilized Nanoparticles

    PubMed Central

    2016-01-01

    A series of model sterically stabilized diblock copolymer nanoparticles has been designed to aid the development of analytical protocols in order to determine two key parameters: the effective particle density and the steric stabilizer layer thickness. The former parameter is essential for high resolution particle size analysis based on analytical (ultra)centrifugation techniques (e.g., disk centrifuge photosedimentometry, DCP), whereas the latter parameter is of fundamental importance in determining the effectiveness of steric stabilization as a colloid stability mechanism. The diblock copolymer nanoparticles were prepared via polymerization-induced self-assembly (PISA) using RAFT aqueous emulsion polymerization: this approach affords relatively narrow particle size distributions and enables the mean particle diameter and the stabilizer layer thickness to be adjusted independently via systematic variation of the mean degree of polymerization of the hydrophobic and hydrophilic blocks, respectively. The hydrophobic core-forming block was poly(2,2,2-trifluoroethyl methacrylate) [PTFEMA], which was selected for its relatively high density. The hydrophilic stabilizer block was poly(glycerol monomethacrylate) [PGMA], which is a well-known non-ionic polymer that remains water-soluble over a wide range of temperatures. Four series of PGMAx–PTFEMAy nanoparticles were prepared (x = 28, 43, 63, and 98, y = 100–1400) and characterized via transmission electron microscopy (TEM), dynamic light scattering (DLS), and small-angle X-ray scattering (SAXS). It was found that the degree of polymerization of both the PGMA stabilizer and core-forming PTFEMA had a strong influence on the mean particle diameter, which ranged from 20 to 250 nm. Furthermore, SAXS was used to determine radii of gyration of 1.46 to 2.69 nm for the solvated PGMA stabilizer blocks. 
Thus, the mean effective density of these sterically stabilized particles was calculated and determined to lie between 1.19 g cm–3 for the smaller particles and 1.41 g cm–3 for the larger particles; these values are significantly lower than the solid-state density of PTFEMA (1.47 g cm–3). Since analytical centrifugation depends on the density difference between the particles and the aqueous phase, determining the effective particle density is clearly vital for obtaining reliable particle size distributions. Furthermore, selected DCP data were recalculated by taking into account the inherent density distribution superimposed on the particle size distribution. Consequently, the true particle size distributions were found to be somewhat narrower than those calculated using an erroneous single density value, with smaller particles being particularly sensitive to this artifact. PMID:27478250
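
The density sensitivity of disc centrifuge sizing follows from Stokes' law, where the diameter scales as 1/sqrt(rho_p - rho_f). The sketch below uses invented centrifuge geometry, spin speed, fluid properties, and arrival time purely to show how assuming the solid-state PTFEMA density instead of the effective density biases the reported diameter.

```python
import math

def stokes_diameter(t_s, rho_p, rho_f=1.00e3, eta=1.0e-3,
                    omega=2*math.pi*18000/60, r0=0.045, rf=0.050):
    """Particle diameter (m) from sedimentation time in a disc centrifuge via
    Stokes' law: t = 18*eta*ln(rf/r0) / ((rho_p - rho_f)*omega^2*d^2).
    All geometry/fluid values here are hypothetical (SI units)."""
    return math.sqrt(18*eta*math.log(rf/r0) /
                     ((rho_p - rho_f)*omega**2*t_s))

t = 600.0                                   # s, hypothetical arrival time
d_true = stokes_diameter(t, rho_p=1.19e3)   # effective particle density
d_err = stokes_diameter(t, rho_p=1.47e3)    # solid-state PTFEMA density
print(f"{d_true*1e9:.0f} nm vs {d_err*1e9:.0f} nm "
      f"({100*(1 - d_err/d_true):.0f}% undersized)")
```

With these numbers the erroneous density undersizes the particle by sqrt(0.19/0.47), roughly 36%, illustrating why the effective density matters most for the smallest, most solvated particles.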

  3. A hydrodynamic mechanism of meteor ablation. The melt-spraying model

    NASA Astrophysics Data System (ADS)

    Girin, Oleksandr G.

    2017-10-01

    Context. Hydrodynamic conditions are similar in a molten meteoroid and a liquid drop in a high-speed airflow. Despite the fact that the latter is well-studied, both experimentally and theoretically, hydrodynamic instability theory has not been applied to study the fragmentation of molten meteoroids. Aims: We aim to treat quasi-continuous spraying of meteoroid melt due to hydrodynamic instability as a possible mechanism of ablation. Our objectives are to calculate the time development of particle release, the released particle sizes and their distribution by sizes, as well as the meteoroid mass loss law. Methods: We have applied gradient instability theory to model the behaviour of the meteoroid melt layer and its interaction with the atmosphere. We have assumed a spherical meteoroid and that the meteoroid has a shallow entry angle, such that the density of the air stream interacting with the meteoroid is nearly constant. Results: High-frequency spraying of the molten meteoroid is numerically simulated. The intermediate and final size distributions of released particles are calculated, as well as the meteoroid mass loss law. Fast and slow meteoroids of iron and stone compositions are modelled, resulting in significant differences in the size distribution of melt particles sprayed from each meteoroid. Less viscous iron melt produces finer particles and a denser aerosol wake than a stony one does. Conclusions: Analysis of the critical conditions for the gradient instability mechanism shows that the dynamic pressure of the air-stream at heights up to 100 km is sufficient to overcome surface tension forces and pull out liquid particles from the meteoroid melt by means of unstable disturbances. Hence, the proposed melt-spraying model is able to explain quasi-continuous mode of meteoroid fragmentation at large heights and low dynamic pressures. 
Using the melt-spraying model, a closed-form solution of the meteoroid ablation problem is obtained for a given meteoroid composition, initial radius, and velocity. The movies associated with Figs. 6 and 7 are available at http://www.aanda.org

  4. Quantification of errors in ordinal outcome scales using shannon entropy: effect on sample size calculations.

    PubMed

    Mandava, Pitchaiah; Krumpelman, Chase S; Shah, Jharna N; White, Donna L; Kent, Thomas A

    2013-01-01

    Clinical trial outcomes often involve an ordinal scale of subjective functional assessments, but the optimal way to quantify results is not clear. In stroke, for the most commonly used scale, the modified Rankin Score (mRS), a range of scores ("Shift") is proposed as superior to dichotomization because of greater information transfer. The influence of known uncertainties in mRS assessment has not been quantified. We hypothesized that errors caused by uncertainties could be quantified by applying information theory. Using Shannon's model, we quantified errors of the "Shift" compared to dichotomized outcomes using published distributions of mRS uncertainties and applied this model to clinical trials. We identified 35 randomized stroke trials that met inclusion criteria. Each trial's mRS distribution was multiplied with the noise distribution from published mRS inter-rater variability to generate an error percentage for "Shift" and dichotomized cut-points. For the SAINT I neuroprotectant trial, considered positive by "Shift" mRS while the larger follow-up SAINT II trial was negative, we recalculated the sample size required if classification uncertainty was taken into account. Considering the full mRS range, the error rate was 26.1%±5.31 (mean±SD). Error rates were lower for all dichotomizations tested using cut-points (e.g. mRS 1; 6.8%±2.89; overall p<0.001). Taking errors into account, SAINT I would have required 24% more subjects than were randomized. We show that when uncertainty in assessments is considered, the lowest error rates are obtained with dichotomization. While using the full range of the mRS is conceptually appealing, the gain of information is counterbalanced by a decrease in reliability. The resultant errors need to be considered, since sample size may otherwise be underestimated. In principle, we have outlined an approach to error estimation for any condition in which there are uncertainties in outcome assessment.
We provide the user with programs to calculate and incorporate errors into sample size estimation.
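
The core calculation described above, a trial's score distribution multiplied by an inter-rater noise distribution, can be sketched with a toy confusion matrix. The 3-level scale and 85% diagonal agreement are invented numbers, not the published mRS reliability data.

```python
import numpy as np

def misclassification_rate(p_true, confusion):
    """Probability that a single assessment is misclassified, given the
    trial's score distribution and an inter-rater confusion matrix whose
    rows are P(observed score | true score)."""
    p = np.asarray(p_true, float)
    C = np.asarray(confusion, float)
    return 1.0 - (p * np.diag(C)).sum()

# toy 3-level scale: 85% of assessments keep their true category
C = np.array([[0.85, 0.10, 0.05],
              [0.10, 0.85, 0.05],
              [0.05, 0.10, 0.85]])
p = [0.3, 0.4, 0.3]
err = misclassification_rate(p, C)
print(f"error rate: {err:.1%}")
```

Dichotomization collapses the matrix to 2x2, where only crossings of the cut-point count as errors, which is why its error rate is lower than that of the full-range "Shift" analysis.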

  5. Optical photon transport in powdered-phosphor scintillators. Part II. Calculation of single-scattering transport parameters

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Poludniowski, Gavin G.; Evans, Philip M.

    2013-04-15

    Purpose: Monte Carlo methods based on the Boltzmann transport equation (BTE) have previously been used to model light transport in powdered-phosphor scintillator screens. Physically motivated guesses or, alternatively, the complexities of Mie theory have been used by some authors to provide the necessary inputs of transport parameters. The purpose of Part II of this work is to: (i) validate predictions of modulation transfer function (MTF) using the BTE and calculated values of transport parameters, against experimental data published for two Gd2O2S:Tb screens; (ii) investigate the impact of size distribution and emission spectrum on Mie predictions of transport parameters; (iii) suggest simpler and novel geometrical optics-based models for these parameters and compare them to the predictions of Mie theory. A computer code package called phsphr is made available that allows the MTF predictions for the screens modeled to be reproduced and novel screens to be simulated. Methods: The transport parameters of interest are the scattering efficiency (Q_sct), absorption efficiency (Q_abs), and the scatter anisotropy (g). Calculations of these parameters are made using the analytic method of Mie theory, for spherical grains of radii 0.1-5.0 µm. The sensitivity of the transport parameters to emission wavelength is investigated using an emission spectrum representative of that of Gd2O2S:Tb. The impact of a grain-size distribution in the screen on the parameters is investigated using a Gaussian size distribution (σ = 1%, 5%, or 10% of mean radius). Two simple and novel alternative models to Mie theory are suggested: a geometrical optics and diffraction model (GODM) and an extension of this (GODM+). Comparisons to measured MTF are made for two commercial screens: Lanex Fast Back and Lanex Fast Front (Eastman Kodak Company, Inc.).
Results: The Mie theory predictions of transport parameters were shown to be highly sensitive to both grain size and emission wavelength. For a phosphor screen structure with a distribution in grain sizes and a spectrum of emission, only the average trend of Mie theory is likely to be important. This average behavior is well predicted by the more sophisticated of the geometrical optics models (GODM+) and in approximate agreement for the simplest (GODM). The root-mean-square differences obtained between predicted MTF and experimental measurements, using all three models (GODM, GODM+, Mie), were within 0.03 for both Lanex screens in all cases. This is excellent agreement in view of the uncertainties in screen composition and optical properties. Conclusions: If Mie theory is used for calculating transport parameters for light scattering and absorption in powdered-phosphor screens, care should be taken to average out the fine-structure in the parameter predictions. However, for visible emission wavelengths (λ < 1.0 µm) and grain radii (a > 0.5 µm), geometrical optics models for transport parameters are an alternative to Mie theory. These geometrical optics models are simpler and lead to no substantial loss in accuracy.
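
The recommendation to average out the fine structure can be illustrated by averaging a toy oscillatory efficiency over a Gaussian grain-size distribution; the ripple frequency and amplitude are arbitrary stand-ins for Mie fine structure, not actual Mie calculations.

```python
import numpy as np

def size_average(q_of_a, a_mean, sigma_frac, n=2001):
    """Average a size-dependent efficiency over a Gaussian grain-radius
    distribution with sigma = sigma_frac * a_mean (truncated at +/-4 sigma)."""
    sig = sigma_frac * a_mean
    a = np.linspace(a_mean - 4*sig, a_mean + 4*sig, n)
    w = np.exp(-0.5*((a - a_mean)/sig)**2)
    return float((w * q_of_a(a)).sum() / w.sum())

# toy efficiency: a Mie-like ripple superimposed on a smooth trend of 2.0
q = lambda a: 2.0 + 0.3*np.sin(80.0*a)
for sf in (0.01, 0.05, 0.10):
    print(sf, round(size_average(q, a_mean=1.0, sigma_frac=sf), 4))
```

Once the distribution width exceeds the ripple period (here sigma_frac around 5-10%), the ripple averages away and only the smooth trend survives, which is the "average behavior" that the GODM/GODM+ models reproduce.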

  6. TU-H-BRC-05: Stereotactic Radiosurgery Optimized with Orthovoltage Beams

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fagerstrom, J; Culberson, W; Bender, E

    2016-06-15

    Purpose: To achieve improved stereotactic radiosurgery (SRS) dose distributions using orthovoltage energy fluence modulation with inverse planning optimization techniques. Methods: A pencil beam model was used to calculate dose distributions from the institution’s orthovoltage unit at 250 kVp. Kernels for the model were derived using Monte Carlo methods as well as measurements with radiochromic film. The orthovoltage photon spectra, modulated by varying thicknesses of attenuating material, were approximated using open-source software. A genetic algorithm search heuristic routine was used to optimize added tungsten filtration thicknesses to approach rectangular function dose distributions at depth. Optimizations were performed for depths of 2.5, 5.0, and 7.5 cm, with cone sizes of 8, 10, and 12 mm. Results: Circularly-symmetric tungsten filters were designed based on the results of the optimization, to modulate the orthovoltage beam across the aperture of an SRS cone collimator. For each depth and cone size combination examined, the beam flatness and 80-20% and 90-10% penumbrae were calculated for both standard, open cone-collimated beams as well as for the optimized, filtered beams. For all configurations tested, the modulated beams were able to achieve improved penumbra widths and flatness statistics at depth, with flatness improving between 33 and 52%, and penumbrae improving between 18 and 25% for the modulated beams compared to the unmodulated beams. Conclusion: A methodology has been described that may be used to optimize the spatial distribution of added filtration material in an orthovoltage SRS beam to result in dose distributions at depth with improved flatness and penumbrae compared to standard open cones. This work provides the mathematical foundation for a novel, orthovoltage energy fluence-modulated SRS system.
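    As a rough illustration of fluence modulation (not the authors' genetic-algorithm optimization), the sketch below flattens an assumed Gaussian open-beam profile analytically: at each radius it picks the tungsten thickness whose exponential attenuation brings the dose down to the aperture-edge level. The attenuation coefficient and profile width are hypothetical values chosen only for the demonstration.

```python
import math

# Illustrative (hypothetical) effective broad-beam attenuation coefficient
# for tungsten at ~250 kVp, in 1/mm.
MU_W_PER_MM = 1.5

def open_beam_profile(r_mm, cone_radius_mm=5.0):
    """Assumed relative open-beam dose profile across the cone aperture."""
    return math.exp(-0.5 * (r_mm / cone_radius_mm) ** 2)

def filter_thickness(r_mm, cone_radius_mm=5.0):
    """Tungsten thickness (mm) that attenuates the beam at radius r down to
    the open-beam level at the aperture edge, flattening the profile
    (inverse of the exponential attenuation law exp(-mu * t))."""
    target = open_beam_profile(cone_radius_mm, cone_radius_mm)  # edge level
    ratio = open_beam_profile(r_mm, cone_radius_mm) / target
    return math.log(ratio) / MU_W_PER_MM

def filtered_profile(r_mm, cone_radius_mm=5.0):
    """Profile after the spatially varying filter: flat by construction."""
    t = filter_thickness(r_mm, cone_radius_mm)
    return open_beam_profile(r_mm, cone_radius_mm) * math.exp(-MU_W_PER_MM * t)
```

    The filter is thickest on the central axis, where the open beam is hottest, and tapers to zero at the aperture edge; a real design would additionally model spectral hardening through the filter, which this sketch ignores.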

  7. Determination of Residual Stress Distributions in Polycrystalline Alumina using Fluorescence Microscopy

    PubMed Central

    Michaels, Chris A.; Cook, Robert F.

    2016-01-01

    Maps of residual stress distributions arising from anisotropic thermal expansion effects in a polycrystalline alumina are generated using fluorescence microscopy. The shifts of both the R1 and R2 ruby fluorescence lines of Cr in alumina are used to create maps with sub-µm resolution of either the local mean and shear stresses or local crystallographic a- and c-stresses in the material, with approximately ± 1 MPa stress resolution. The use of single crystal control materials and explicit correction for temperature and composition effects on line shifts enabled determination of the absolute values and distributions of values of stresses. Temperature correction is shown to be critical in absolute stress determination. Experimental determinations of average stress parameters in the mapped structure are consistent with assumed equilibrium conditions and with integrated large-area measurements. Average crystallographic stresses of order hundreds of MPa are determined with characteristic distribution widths of tens of MPa. The stress distributions reflect contributions from individual clusters of stress in the structure; the cluster size is somewhat larger than the grain size. An example application of the use of stress maps is shown in the calculation of stress-intensity factors for fracture in the residual stress field. PMID:27563163
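    The mapping from fluorescence line shift to stress rests on the piezospectroscopic relation Δν = Π_h σ_mean. A minimal sketch, using a commonly quoted literature value of the hydrostatic coefficient for the ruby R lines (an assumption for illustration, not a number taken from this paper):

```python
# Assumed hydrostatic piezospectroscopic coefficient for the ruby R1 line,
# in cm^-1 per GPa (commonly quoted literature value, illustrative only).
PI_H_CM1_PER_GPA = 7.59

def mean_stress_gpa(shift_cm1):
    """Mean (hydrostatic) stress in GPa from a measured R-line shift in
    cm^-1, via delta_nu = Pi_h * sigma_mean."""
    return shift_cm1 / PI_H_CM1_PER_GPA

def line_shift_cm1(stress_gpa):
    """Forward relation: expected line shift for a given mean stress."""
    return PI_H_CM1_PER_GPA * stress_gpa
```

    At this coefficient, the ~±1 MPa stress resolution quoted above corresponds to resolving line shifts of order 10⁻² cm⁻¹ or better, which is why the temperature correction emphasized in the abstract matters.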

  8. Retrieve Optically Thick Ice Cloud Microphysical Properties by Using Airborne Dual-Wavelength Radar Measurements

    NASA Technical Reports Server (NTRS)

    Wang, Zhien; Heymsfield, Gerald M.; Li, Lihua; Heymsfield, Andrew J.

    2005-01-01

    An algorithm to retrieve optically thick ice cloud microphysical property profiles is developed using the GSFC 9.6 GHz ER-2 Doppler Radar (EDOP) and 94 GHz Cloud Radar System (CRS) measurements aboard the high-altitude ER-2 aircraft. In situ size distribution and total water content data from the CRYSTAL-FACE field campaign are used for the algorithm development. To reduce uncertainty in calculated radar reflectivity factors (Ze) at these wavelengths, coincident radar measurements and size distribution data are used to guide the selection of mass-length relationships and to deal with the density and non-spherical effects of ice crystals on the Ze calculations. The algorithm is able to retrieve microphysical property profiles of optically thick ice clouds, such as deep convective and anvil clouds, which are very challenging for single-frequency radar and lidar. Examples of retrieved microphysical properties for a deep convective cloud are presented, which show that EDOP and CRS measurements provide rich information for studying cloud structure and evolution. Good agreement between IWPs derived from an independent submillimeter-wave radiometer, CoSSIR, and the dual-wavelength radar measurements indicates the accuracy of the IWC retrieved by the two-frequency radar algorithm.
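    At the heart of such retrievals is the forward calculation of the reflectivity factor Ze from a size distribution. A minimal sketch of the Rayleigh-regime version for an assumed exponential distribution N(D) = N0 exp(-ΛD); the actual algorithm must additionally handle ice density, non-sphericity, and non-Rayleigh scattering at 94 GHz, none of which is modeled here.

```python
import math

def rayleigh_ze(n0, lam):
    """Rayleigh-regime reflectivity factor Ze (mm^6 m^-3) of an exponential
    size distribution N(D) = n0 * exp(-lam * D): each particle contributes
    D^6, so the moment integral gives Ze = n0 * 6! / lam^7 (D in mm)."""
    return n0 * math.factorial(6) / lam ** 7

def rayleigh_ze_numeric(n0, lam, d_max_mm=40.0, steps=40000):
    """The same sixth-moment integral evaluated numerically (midpoint
    rule), as a cross-check on the closed form."""
    dd = d_max_mm / steps
    total = 0.0
    for i in range(steps):
        d = (i + 0.5) * dd
        total += n0 * math.exp(-lam * d) * d ** 6 * dd
    return total
```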

  9. Scattered radiation from dental metallic crowns in head and neck radiotherapy.

    PubMed

    Shimozato, T; Igarashi, Y; Itoh, Y; Yamamoto, N; Okudaira, K; Tabushi, K; Obata, Y; Komori, M; Naganawa, S; Ueda, M

    2011-09-07

    We aimed to estimate the scattered radiation from dental metallic crowns during head and neck radiotherapy by irradiating a jaw phantom with external photon beams. The phantom was composed of a dental metallic plate and hydroxyapatite embedded in polymethyl methacrylate. We used radiochromic film measurement and Monte Carlo simulation to calculate the radiation dose and dose distribution inside the phantom. To estimate dose variations in scattered radiation under different clinical situations, we altered the incident energy, field size, plate thickness, plate depth and plate material. The simulation results indicated that the dose at the incident side of the metallic dental plate was approximately 140% of that without the plate. The differences between dose distributions calculated with the radiation treatment-planning system (TPS) algorithms and the data simulation, except around the dental metallic plate, were 3% for a 4 MV photon beam. Therefore, we should carefully consider the dose distribution around dental metallic crowns determined by a TPS.

  11. Reconstructing Tsunami Flow Speed from Sedimentary Deposits

    NASA Astrophysics Data System (ADS)

    Jaffe, B. E.; Gelfenbaum, G. R.

    2014-12-01

    Paleotsunami deposits contain information about the flow that created them that can be used to reconstruct tsunami flow speed and thereby improve assessment of tsunami hazard. We applied an inverse tsunami sediment transport model to sandy deposits near Sendai Airport, Japan, that formed during the 11 March 2011 Tohoku-oki tsunami to test model performance and explore the spatial variations in tsunami flow speed. The inverse model assumes that the amount of suspended sediment in the water column is in equilibrium with the local flow speed and that sediment transport convergences, primarily from bedload transport, do not contribute significantly to formation of the portion of the deposit we identify as formed by sediment settling out of suspension. We interpret massive or inversely graded intervals as forming from sediment transport convergences and do not model them. Sediment falling out of suspension forms a specific type of normal grading, termed 'suspension' grading, where the entire grain size distribution shifts to finer sizes higher up in a deposit. Suspension grading is often observed in deposits of high-energy flows, including turbidity currents and tsunamis. The inverse model calculates tsunami flow speed from the thickness and bulk grain size of a suspension-graded interval. We identified 24 suspension-graded intervals in 7 trenches located ~250-1350 m inland from the shoreline near the Sendai Airport. Flow speeds were highest ~500 m from the shoreline, landward of the forested sand dunes, where the tsunami encountered lower roughness in a low-lying area as it traveled downslope. Modeled tsunami flow speeds range from 2.2 to 9.0 m/s. Tsunami flow speeds are sensitive to roughness, which is unfortunately poorly constrained. Flow speeds calculated by the inverse model were similar to those calculated from video taken from a helicopter about 1-2 km inland. 
Deposit reconstructions of suspension-graded intervals reproduced observed upward shifts in grain size distributions reasonably well. As approaches to estimating paleo-roughness improve, the flow speed and size of paleotsunamis will be better understood and the ability to assess tsunami hazard from paleotsunami deposits will improve.
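    Inverse models of this kind hinge on grain settling velocity as a function of size. One widely used explicit formula (a stand-in here, not necessarily the one inside this particular model) is that of Ferguson and Church (2004), which spans the Stokes and turbulent-drag limits:

```python
import math

def settling_velocity(d_m, c1=18.0, c2=1.0,
                      rho_s=2650.0, rho_w=1000.0,
                      g=9.81, nu=1.0e-6):
    """Ferguson & Church (2004) explicit settling velocity (m/s) for a
    grain of diameter d_m (m): w = R*g*d^2 / (c1*nu + sqrt(0.75*c2*R*g*d^3)).
    Defaults assume quartz grains (c1=18, c2=1 for smooth spheres) settling
    in 20 C water; R is the submerged specific gravity."""
    r = (rho_s - rho_w) / rho_w
    return (r * g * d_m ** 2) / (c1 * nu + math.sqrt(0.75 * c2 * r * g * d_m ** 3))
```

    For fine grains the first denominator term dominates and the formula reduces to Stokes' law (w ∝ d²); for coarse grains the second term dominates and w grows like √d.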

  12. Modeling mobile source emissions during traffic jams in a micro urban environment.

    PubMed

    Kondrashov, Valery V; Reshetin, Vladimir P; Regens, James L; Gunter, James T

    2002-01-01

    Urbanization typically involves a continuous increase in motor vehicle use, resulting in congestion known as traffic jams. Idling emissions due to traffic jams combine with the complex terrain created by buildings to concentrate atmospheric pollutants in localized areas. This research simulates emission concentrations and distributions for a congested street in Minsk, Belarus. Ground-level (up to 50 m above the street's surface) pollutant concentrations were calculated using STAR (version 3.10) with emission factors obtained from the U.S. Environmental Protection Agency, wind speed and direction, and building location and size. Relative emission concentrations and distributions were simulated at 1 m and 10 m above street level. The findings demonstrate the importance of wind speed and direction, and of building size and location, on emission concentrations and distributions, with the leeward sides of buildings retaining up to 99 percent of the emitted pollutants within 1 m of street level and up to 77 percent at 10 m above the street.

  13. Polarized Optical Scattering Measurements of Metallic Nanoparticles on a Thin Film Silicon Wafer

    NASA Astrophysics Data System (ADS)

    Liu, Cheng-Yang; Liu, Tze-An; Fu, Wei-En

    2009-09-01

    Light scattering has shown its powerful diagnostic capability to characterize optical quality surfaces. In this study, the theory of the bidirectional reflectance distribution function (BRDF) was used to analyze the sizes of metallic nanoparticles on wafer surfaces. The BRDF of a surface is defined as the angular distribution of radiance scattered by the surface, normalized by the irradiance incident on the surface. A goniometric optical scatter instrument has been developed to perform BRDF measurements of polarized light scattering on wafer surfaces for the diameter and distribution measurements of metallic nanoparticles. The designed optical scatter instrument is capable of distinguishing various types of optical scattering characteristics near surfaces, corresponding to the diameters of the metallic nanoparticles, by using the Mueller matrix calculation. The measured metallic nanoparticle diameter is 60 nm on 2-inch thin-film wafers. These measurement results demonstrate that the polarization of light scattered by metallic particles can be used to determine the size of metallic nanoparticles on silicon wafers.
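    The BRDF definition given above translates directly into a measurement equation: scattered power into the detector's solid angle, normalized by incident power and the projected solid angle. A minimal sketch following this standard convention (the numbers in the usage test are illustrative, not instrument values):

```python
import math

def brdf(scattered_power_w, incident_power_w, solid_angle_sr, theta_s_deg):
    """BRDF (1/sr) from a goniometric measurement: scattered power into the
    detector solid angle, divided by incident power and the projected solid
    angle Omega * cos(theta_s), where theta_s is the scatter polar angle."""
    return scattered_power_w / (
        incident_power_w * solid_angle_sr
        * math.cos(math.radians(theta_s_deg)))
```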

  14. Oceanic Gas Bubble Measurements Using an Acoustic Bubble Spectrometer

    NASA Astrophysics Data System (ADS)

    Wilson, S. J.; Baschek, B.; Deane, G.

    2008-12-01

    Gas bubble injection by breaking waves contributes significantly to the exchange of gases between the atmosphere and ocean at high wind speeds. In this respect, CO2 is primarily important for the global ocean and climate, while O2 is especially relevant for ecosystems in the coastal ocean. For measuring oceanic gas bubble size distributions, a commercially available Dynaflow Acoustic Bubble Spectrometer (ABS) has been modified. Two hydrophones transmit and receive selected frequencies, measuring attenuation and absorption. Algorithms are then used to derive bubble size distributions. Tank tests were carried out to evaluate the instrument performance. The software algorithms were compared with Commander and Prosperetti's method (1989) of calculating the sound speed ratio and attenuation for a known bubble distribution. Additional comparisons with micro-photography were carried out in the lab and will be continued during the SPACE '08 experiment in October 2008 at Martha's Vineyard Coastal Observatory. The measurements of gas bubbles will be compared to additional parameters, such as wind speed, wave height, whitecap coverage, and dissolved gases.
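    Acoustic bubble sizing exploits the sharp resonance of a bubble at a frequency tied to its radius. A minimal sketch of the classical Minnaert resonance, neglecting surface tension and damping (a reasonable approximation for radii well above a few microns, and a simplification relative to the full Commander and Prosperetti treatment cited above):

```python
import math

def minnaert_frequency_hz(radius_m, p0_pa=101325.0, rho=1000.0, gamma=1.4):
    """Minnaert resonance frequency of an air bubble in water:
    f0 = sqrt(3*gamma*p0/rho) / (2*pi*a). Defaults: 1 atm, fresh water."""
    return math.sqrt(3.0 * gamma * p0_pa / rho) / (2.0 * math.pi * radius_m)

def radius_from_frequency_m(f0_hz, p0_pa=101325.0, rho=1000.0, gamma=1.4):
    """Invert the resonance relation to size a bubble from the acoustic
    frequency it responds to, as an ABS-style retrieval would."""
    return math.sqrt(3.0 * gamma * p0_pa / rho) / (2.0 * math.pi * f0_hz)
```

    Near the surface a 1 mm radius bubble resonates near 3.3 kHz, which is why the instrument's selected transmit frequencies map one-to-one onto bubble radii.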

  15. Estimation of the diagnostic threshold accounting for decision costs and sampling uncertainty.

    PubMed

    Skaltsa, Konstantina; Jover, Lluís; Carrasco, Josep Lluís

    2010-10-01

    Medical diagnostic tests are used to classify subjects as non-diseased or diseased. The classification rule usually consists of classifying subjects using the values of a continuous marker that is dichotomised by means of a threshold. Here, the optimum threshold estimate is found by minimising a cost function that accounts for both decision costs and sampling uncertainty. The cost function is optimised either analytically in a normal-distribution setting or empirically in a distribution-free setting when the underlying probability distributions of diseased and non-diseased subjects are unknown. Inference on the threshold estimates is based on approximate analytical standard errors and bootstrap-based approaches. The performance of the proposed methodology is assessed by means of a simulation study, and the sample size required for a given confidence interval precision and sample size ratio is also calculated. Finally, a case example based on previously published data concerning the diagnosis of Alzheimer's patients is provided in order to illustrate the procedure.
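    In the equal-variance normal setting, the cost-minimizing threshold has a closed form obtained by setting the derivative of the expected decision cost to zero. A minimal sketch of that textbook result (the paper's method additionally accounts for sampling uncertainty, which is not modeled here):

```python
import math

def _phi(z):
    """Standard normal CDF."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def expected_cost(c, mu0, mu1, sigma, p_disease, cost_fp=1.0, cost_fn=1.0):
    """Expected decision cost of threshold c: false positives are
    non-diseased subjects above c, false negatives diseased below c."""
    fp = (1.0 - p_disease) * (1.0 - _phi((c - mu0) / sigma))
    fn = p_disease * _phi((c - mu1) / sigma)
    return cost_fp * fp + cost_fn * fn

def optimal_threshold(mu0, mu1, sigma, p_disease, cost_fp=1.0, cost_fn=1.0):
    """Closed-form cost-minimizing threshold for equal-variance normal
    marker distributions (classify as diseased above the threshold)."""
    ratio = (cost_fp * (1.0 - p_disease)) / (cost_fn * p_disease)
    return (mu0 + mu1) / 2.0 + (sigma ** 2 / (mu1 - mu0)) * math.log(ratio)
```

    With equal costs and a 50% prevalence the threshold sits midway between the two means; costlier false negatives or higher prevalence pull it toward the non-diseased mean.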

  16. Supersaturation, droplet spectra, and turbulent mixing in clouds

    NASA Technical Reports Server (NTRS)

    Gerber, H.

    1990-01-01

    Much effort has recently gone into explaining the observed broad precoalescence size distribution of droplets in clouds and fogs, because it differs from the results of condensational growth calculations, which lead to much narrower distributions. A good example of droplet size-distribution broadening was observed on flight 17 (25 July) of the NRL tethered balloon during the 1987 FIRE San Nicolas Island IFO. These observations prompted a re-examination of the interactions between cloud microphysics and turbulent mixing. The findings of Broadwell and Breidenthal (1982), who conducted laboratory and theoretical studies of mixing in shear flow, and those of Baker et al. (1984), who applied the earlier work to mixing in clouds, were used. Rather than the 25 July case at SNI, earlier fog observations made at SUNY (6 Oct. 1982) were chosen instead, because they also indicated that shear-induced mixing was taking place and offered a better collection of microphysical measurements, including more precise supersaturation measurements and detailed vertical profiles of meteorological parameters.
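    The narrowing predicted by pure condensational growth follows from dr/dt ∝ S/r, so r²(t) = r0² + 2GSt and the radius spread shrinks as all drops grow. A minimal numerical illustration, treating the product 2GSt as a single free parameter rather than modeling G and S separately:

```python
import math

def grow(radii_um, growth_um2):
    """Condensational growth r(t)^2 = r0^2 + 2*G*S*t, with the whole
    product 2*G*S*t passed in as growth_um2 (in um^2). Small drops gain
    radius faster than large ones, so the distribution narrows."""
    return [math.sqrt(r * r + growth_um2) for r in radii_um]

def spread(radii_um):
    """Width of the radius distribution (max - min)."""
    return max(radii_um) - min(radii_um)
```

    Observed broad distributions therefore require some other mechanism, such as the turbulent entrainment and mixing examined in this work.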

  17. Application of asymmetric flow-field flow fractionation to the characterization of colloidal dispersions undergoing aggregation.

    PubMed

    Lattuada, Marco; Olivo, Carlos; Gauer, Cornelius; Storti, Giuseppe; Morbidelli, Massimo

    2010-05-18

    The characterization of complex colloidal dispersions is a relevant and challenging problem in colloidal science. In this work, we show how asymmetric flow-field flow fractionation (AF4) coupled to static light scattering can be used for this purpose. As examples of complex colloidal dispersions, we have chosen two systems undergoing aggregation. The first is a conventional polystyrene latex undergoing reaction-limited aggregation, which leads to the formation of fractal clusters with well-known structure. The second is a dispersion of elastomeric colloidal particles made of a polymer with a low glass transition temperature, which undergoes coalescence upon aggregation. Samples are withdrawn during aggregation at fixed times and fractionated with AF4, using a two-angle static light scattering unit as a detector. We have shown that the cluster size distribution can be recovered from the analysis of the ratio between the scattered-light intensities at the two angles, without any need for calibration based on standard elution times, provided that the geometry and scattering properties of particles and clusters are known. The nonfractionated samples have also been characterized by conventional static and dynamic light scattering to determine their average radius of gyration and hydrodynamic radius. The size distribution of coalescing particles has also been investigated through image analysis of cryo-scanning electron microscopy (SEM) pictures. The average radius of gyration and the average hydrodynamic radius of the nonfractionated samples have been calculated and successfully compared to the values obtained from the size distributions measured by AF4. In addition, the data obtained are in good agreement with calculations made with population balance equations.
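    Recovering a size from a two-angle intensity ratio can be illustrated in the Guinier regime, where I(q) ≈ I0 exp(-q²Rg²/3); the paper's actual analysis uses full form factors for fractal clusters, so this is only a simplified stand-in with assumed wavelength and detection angles.

```python
import math

def q_value(theta_deg, wavelength_nm=632.8, n_medium=1.33):
    """Scattering vector magnitude q (1/nm) for scattering angle theta."""
    return (4.0 * math.pi * n_medium / wavelength_nm) * math.sin(
        math.radians(theta_deg) / 2.0)

def rg_from_two_angles(i1, i2, theta1_deg, theta2_deg,
                       wavelength_nm=632.8, n_medium=1.33):
    """Radius of gyration (nm) from the intensity ratio at two angles,
    assuming the Guinier law I(q) ~ exp(-q^2 Rg^2 / 3), which is only
    valid while q*Rg stays well below ~1."""
    q1 = q_value(theta1_deg, wavelength_nm, n_medium)
    q2 = q_value(theta2_deg, wavelength_nm, n_medium)
    return math.sqrt(3.0 * math.log(i1 / i2) / (q2 ** 2 - q1 ** 2))
```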

  18. Measuring and Modeling Root Distribution and Root Reinforcement in Forested Slopes for Slope Stability Calculations

    NASA Astrophysics Data System (ADS)

    Cohen, D.; Giadrossich, F.; Schwarz, M.; Vergani, C.

    2016-12-01

    Roots provide mechanical anchorage and reinforcement of soils on slopes. Roots also modify soil hydrological properties (soil moisture content, pore-water pressure, preferential flow paths) via subsurface flow paths associated with root architecture, root density, and root-size distribution. Interactions of root-soil mechanical and hydrological processes are an important control on shallow landslide initiation during rainfall events and on slope stability. Knowledge of root distribution and root strength is a key component of slope stability estimates in vegetated slopes and of the management of protection forests in steep mountainous areas. We present data that show the importance of measuring root strength directly in the field and present methods for these measurements. These data indicate that the tensile force mobilized in roots depends on root elongation (a function of soil displacement), root size, and whether roots break in tension or slip out of the soil. Measurements indicate that large lateral roots that cross tension cracks at the scarp are important for slope stability calculations owing to their large tensile resistance. These roots are often overlooked, and when they are included, their strength is overestimated because it is extrapolated from measurements on small roots. We present planned field experiments that will directly measure the force held by roots of different sizes during the triggering of a shallow landslide by rainfall. These field data are then used in a model of root reinforcement based on fiber-bundle concepts that spans different spatial scales, from a single root to the stand scale, and different time scales, from timber harvest to root decay. This model computes the strength of root bundles in tension and in compression and their effect on soil strength. 
Up-scaled to the stand scale, the model yields the distribution of root reinforcement as a function of tree density, distance from the tree, tree species, and age, with the objective of providing quantitative estimates of tree root reinforcement for best-management practice of protection forests.
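    The fiber-bundle concept invoked above reduces to a few lines in its simplest form: under global load sharing, fibers fail weakest-first and survivors share the load equally, so bundle strength is the largest load supportable at any failure stage. A minimal sketch (the authors' multi-scale root-reinforcement model is far more elaborate, with displacement-dependent mobilization that this ignores):

```python
def bundle_strength(fiber_strengths):
    """Global-load-sharing fiber-bundle model: with strengths sorted
    ascending, just before the (k+1)-th weakest fiber fails there are
    (n - k) intact fibers each loadable up to that fiber's strength, so
    the bundle holds max over k of s[k] * (n - k)."""
    s = sorted(fiber_strengths)
    n = len(s)
    return max(s[k] * (n - k) for k in range(n))
```

    Note that the bundle strength is generally well below the sum of the individual fiber strengths, which is the basic reason summing single-root pullout forces overestimates reinforcement.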

  19. DETACHMENT OF BACTERIOPHAGE FROM ITS CARRIER PARTICLES

    PubMed Central

    Hetler, D. M.; Bronfenbrenner, J.

    1931-01-01

    The active substance (phage) present in the lytic broth filtrate is distributed through the medium in the form of particles. These particles vary in size within broad limits. The average size of these particles, as calculated on the basis of the rate of diffusion, approximates 4.4 mµ in radius. Fractionation by means of ultrafiltration permits partial separation of particles of different sizes. Under the conditions of the experiments reported here, the particles varied in radius from 0.6 mµ to 11.4 mµ. The active agent apparently is not intimately identified with these particles. It is merely carried by them by adsorption, and under suitable experimental conditions it can be detached from the larger particles and redistributed on smaller particles of the medium. PMID:19872604
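    The 4.4 mµ (millimicron, i.e. nanometer) radius quoted above comes from relating a measured diffusion rate to a particle size; in modern terms this is the Stokes-Einstein relation. A minimal sketch (the temperature and water viscosity are assumed values for illustration, not conditions taken from the 1931 paper):

```python
import math

K_B = 1.380649e-23  # Boltzmann constant, J/K

def stokes_einstein_radius_m(diff_m2_per_s, temp_k=293.15, eta_pa_s=1.0e-3):
    """Hydrodynamic radius from a measured diffusion coefficient via
    Stokes-Einstein: a = k_B * T / (6 * pi * eta * D)."""
    return K_B * temp_k / (6.0 * math.pi * eta_pa_s * diff_m2_per_s)
```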

  20. The charge imbalance in ultracold plasmas

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chen, Tianxing; Lu, Ronghua, E-mail: lurh@siom.ac.cn; Guo, Li

    2016-09-15

    Ultracold plasmas are regarded as quasineutral but not strictly neutral. Results on charge imbalance in the expansion of ultracold plasmas are reported. The calculations are performed by a full molecular-dynamics simulation. The details of the electron velocity distributions are calculated without the assumption of electron global thermal equilibrium and a Boltzmann distribution. Spontaneous evolution of the charge imbalance from initial states with perfect neutrality is given in the simulations. The expansion of the outer plasma slows down with the charge imbalance. The influences of plasma size and parameters on the charge imbalance are discussed. The radial profiles of electron temperature are given for the first time, and the self-similar expansion can still occur even if there is no global thermal equilibrium. Electron disorder-induced heating is also found in the simulation.

  1. The study on the interdependence of spray characteristics and evaporation history of fuel spray in high temperature air crossflow

    NASA Astrophysics Data System (ADS)

    Zhu, J. Y.; Chin, J. S.

    1986-06-01

    A numerical calculation method is used to predict the variation of the characteristics of a fuel spray moving in a high-temperature air crossflow: mainly the Sauter mean diameter (SMD), the droplet size distribution index N of the Rosin-Rammler distribution, and the evaporation percentage, as functions of the downstream distance X from the nozzle. The effects of the droplet heat-up period, the evaporation process, and forced convection are taken fully into account; thus, the calculation model is a good approximation to the process of spray evaporation in a practical combustor, such as a ramjet, aero gas turbine, liquid-propellant rocket, diesel, or other liquid-fueled combustion device. The changes of the spray characteristics N and SMD and the spray evaporation percentage with air velocity, pressure, temperature, fuel injection velocity, and the initial spray parameters are presented.
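    The Rosin-Rammler distribution named above is fully defined by a characteristic size X and spread index N, and the SMD follows from it by integration. A minimal sketch with a numerical D32 cross-checked against the closed form X/Γ(1 - 1/N) (valid for N > 1):

```python
import math

def rr_cdf(d, x_char, n):
    """Rosin-Rammler cumulative volume fraction below diameter d:
    F(d) = 1 - exp(-(d/X)^N)."""
    return 1.0 - math.exp(-((d / x_char) ** n))

def rr_smd_analytic(x_char, n):
    """Closed-form Sauter mean diameter of a Rosin-Rammler spray:
    D32 = X / Gamma(1 - 1/N)."""
    return x_char / math.gamma(1.0 - 1.0 / n)

def rr_smd(x_char, n, steps=60000, d_max_factor=6.0):
    """D32 computed numerically as 1 / integral((1/d) dF), integrating the
    volume-fraction density by the midpoint rule, as a cross-check."""
    d_max = d_max_factor * x_char
    dd = d_max / steps
    inv_d_mean = 0.0
    for i in range(steps):
        d = (i + 0.5) * dd
        pdf = (n / x_char) * (d / x_char) ** (n - 1) * math.exp(-((d / x_char) ** n))
        inv_d_mean += pdf / d * dd
    return 1.0 / inv_d_mean
```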

  2. Basic biostatistics for post-graduate students

    PubMed Central

    Dakhale, Ganesh N.; Hiware, Sachin K.; Shinde, Abhijit T.; Mahatme, Mohini S.

    2012-01-01

    Statistical methods are important to draw valid conclusions from the obtained data. This article provides background information related to fundamental methods and techniques in biostatistics for the use of postgraduate students. The main focus is on types of data, measures of central tendency and variation, and basic tests useful for the analysis of different types of observations. Parameters such as the normal distribution, sample size calculation, level of significance, the null hypothesis, indices of variability, and different tests are explained in detail with suitable examples. Using these guidelines, we are confident that postgraduate students will be able to classify the distribution of data and apply the proper test. Information is also given regarding various free software programs and websites useful for statistical calculations. Thus, postgraduate students will benefit whether they opt for academics or for industry. PMID:23087501
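    One of the staple calculations mentioned above, sample size for comparing two means, reduces to a one-line formula. A minimal sketch using the standard normal-approximation result, rounding up to whole subjects per group:

```python
import math
from statistics import NormalDist

def n_per_group(delta, sigma, alpha=0.05, power=0.80):
    """Sample size per group for a two-sided two-sample comparison of
    means: n = 2 * sigma^2 * (z_{1-alpha/2} + z_{power})^2 / delta^2,
    where delta is the detectable mean difference."""
    z = NormalDist().inv_cdf
    n = 2.0 * sigma ** 2 * (z(1.0 - alpha / 2.0) + z(power)) ** 2 / delta ** 2
    return math.ceil(n)
```

    For example, detecting a difference of one standard deviation at the usual alpha = 0.05 and 80% power gives the familiar textbook answer of 16 subjects per group.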

  3. Are grain packing and flow turbulence the keys to predicting bedload transport in steep streams? (Invited)

    NASA Astrophysics Data System (ADS)

    Yager, E.; Monsalve Sepulveda, A.; Smith, H. J.; Badoux, A.

    2013-12-01

    Bedload transport rates in steep mountain channels are often over-predicted by orders of magnitude, which has been attributed to a range of processes, from grain jamming, roughness drag, and changes in fluid turbulence to a limited upstream sediment supply. We hypothesize that such poor predictions occur in part because the grain-scale mechanics (turbulence, particle arrangements) of sediment transport are not well understood or incorporated into simplified reach-averaged calculations. To better quantify how turbulence impacts sediment movement, we measured detailed flow velocities and forces at the onset of motion of a single test grain with a fixed pocket geometry in laboratory flume experiments. Of all measured parameters (e.g. flow velocity, shear stress), the local fluid drag force had the highest statistical correlation with grain motion. Use of flow velocity or shear stress to estimate sediment transport may therefore result in erroneous predictions, given their relatively low correlation with the onset of sediment motion. To further understand the role of grain arrangement in bedload transport, we measured in situ grain resisting forces (using a force sensor) for a range of grain sizes and patch classes in the Erlenbach torrent, Switzerland (10% gradient). Such forces varied by over two orders of magnitude for a given grain weight and were statistically greater than those calculated using empirical equations for the friction angle. In addition, when normalized by the grain weight, the resisting forces declined with higher grain protrusion above the surrounding bed sediment. Therefore, resisting forces from grain packing and interlocking are substantial and depend on the amount of grain burial. The onset of motion may be considerably underestimated when calculated solely from measured grain sizes and friction angles. These packing forces may partly explain why critical Shields stresses are higher in steep channels. 
Such flow and grain parameters also spatially vary in steep streams because of boulder steps and patches of different grain size distributions. To determine if this spatial variation is important for bedload transport, we incorporated probability density functions of flow turbulence and patch grain size distributions into a simple bedload transport equation. Predicted bedload fluxes were significantly improved when distributions of these parameters, rather than single reach-averaged values, were used.
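    Because transport laws are strongly nonlinear in shear stress, averaging the law over a stress distribution generally gives a larger flux than applying it to the mean stress (Jensen's inequality for a convex relation). A minimal sketch with a generic excess-shear relation and a Gaussian stress distribution, both illustrative stand-ins for the paper's measured distributions:

```python
import random

def transport_rate(tau, tau_c=1.0, coeff=1.0):
    """Generic excess-shear-stress bedload relation, q ~ (tau - tau_c)^1.5
    (Meyer-Peter/Mueller-like form); zero below the critical stress."""
    return coeff * max(tau - tau_c, 0.0) ** 1.5

def mean_transport_over_distribution(mean_tau, sigma_tau,
                                     n=100000, seed=1):
    """Average the nonlinear transport relation over a Gaussian stress
    distribution (Monte Carlo), instead of evaluating it at the mean."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n):
        total += transport_rate(rng.gauss(mean_tau, sigma_tau))
    return total / n
```

    The effect is most dramatic near threshold: at a mean stress just below critical, the at-mean prediction is zero while the distribution-averaged flux is not, because turbulent excursions above critical still move grains.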

  4. Design of the value of imaging in enhancing the wellness of your heart (VIEW) trial and the impact of uncertainty on power.

    PubMed

    Ambrosius, Walter T; Polonsky, Tamar S; Greenland, Philip; Goff, David C; Perdue, Letitia H; Fortmann, Stephen P; Margolis, Karen L; Pajewski, Nicholas M

    2012-04-01

    Although observational evidence has suggested that the measurement of coronary artery calcium (CAC) may improve risk stratification for cardiovascular events and thus help guide the use of lipid-lowering therapy, this contention has not been evaluated within the context of a randomized trial. The Value of Imaging in Enhancing the Wellness of Your Heart (VIEW) trial is proposed as a randomized study in participants at low intermediate risk of future coronary heart disease (CHD) events to evaluate whether CAC testing leads to improved patient outcomes. To describe the challenges encountered in designing a prototypical screening trial and to examine the impact of uncertainty on power. The VIEW trial was designed as an effectiveness clinical trial to examine the benefit of CAC testing to guide therapy on a primary outcome consisting of a composite of nonfatal myocardial infarction, probable or definite angina with revascularization, resuscitated cardiac arrest, nonfatal stroke (not transient ischemic attack (TIA)), CHD death, stroke death, other atherosclerotic death, or other cardiovascular disease (CVD) death. Many critical choices were faced in designing the trial, including (1) the choice of primary outcome, (2) the choice of therapy, (3) the target population with corresponding ethical issues, (4) specifications of assumptions for sample size calculations, and (5) impact of uncertainty in these assumptions on power/sample size determination. We have proposed a sample size of 30,000 (800 events), which provides 92.7% power. Alternatively, sample sizes of 20,228 (539 events), 23,138 (617 events), and 27,078 (722 events) provide 80%, 85%, and 90% power. We have also allowed for uncertainty in our assumptions by computing average power integrated over specified prior distributions. This relaxation of specificity indicates a reduction in power, dropping to 89.9% (95% confidence interval (CI): 89.8-89.9) for a sample size of 30,000. 
Sample sizes of 20,228, 23,138, and 27,078 provide power of 78.0% (77.9-78.0), 82.5% (82.5-82.6), and 87.2% (87.2-87.3), respectively. These power estimates depend on the form and parameters of the prior distributions. Despite the pressing need for a randomized trial to evaluate the utility of CAC testing, conduct of such a trial requires recruiting a large patient population, making efficiency of critical importance. The large sample size is primarily due to targeting a study population at relatively low risk of a CVD event. Our calculations also illustrate the importance of formally considering uncertainty in power calculations of large trials, as standard power calculations may tend to overestimate power.
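    Averaging power over a prior, as described above, can be sketched with a simple stand-in design: a two-sided z-test whose power is evaluated at several prior-weighted effect sizes. Because power is concave near its upper range, the average falls below the plug-in value, reproducing the qualitative drop reported (the trial's actual event-driven power model is more involved than this one-sample approximation):

```python
from statistics import NormalDist

def power_two_sided(effect, n, sigma=1.0, alpha=0.05):
    """Approximate power of a two-sided one-sample z-test:
    Phi(effect*sqrt(n)/sigma - z_{1-alpha/2}), neglecting the far tail."""
    nd = NormalDist()
    z_crit = nd.inv_cdf(1.0 - alpha / 2.0)
    return nd.cdf(effect * (n ** 0.5) / sigma - z_crit)

def average_power(effects_with_weights, n, sigma=1.0, alpha=0.05):
    """Power averaged over a discrete prior on the effect size (the VIEW
    design integrates over continuous priors instead)."""
    total_w = sum(w for _, w in effects_with_weights)
    return sum(w * power_two_sided(e, n, sigma, alpha)
               for e, w in effects_with_weights) / total_w
```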

  5. Design of the Value of Imaging in Enhancing the Wellness of Your Heart (VIEW) Trial and the Impact of Uncertainty on Power

    PubMed Central

    Ambrosius, Walter T.; Polonsky, Tamar S.; Greenland, Philip; Goff, David C.; Perdue, Letitia H.; Fortmann, Stephen P.; Margolis, Karen L.; Pajewski, Nicholas M.

    2014-01-01

    Background Although observational evidence has suggested that the measurement of CAC may improve risk stratification for cardiovascular events and thus help guide the use of lipid-lowering therapy, this contention has not been evaluated within the context of a randomized trial. The Value of Imaging in Enhancing the Wellness of Your Heart (VIEW) trial is proposed as a randomized study in participants at low intermediate risk of future coronary heart disease (CHD) events to evaluate whether coronary artery calcium (CAC) testing leads to improved patient outcomes. Purpose To describe the challenges encountered in designing a prototypical screening trial and to examine the impact of uncertainty on power. Methods The VIEW trial was designed as an effectiveness clinical trial to examine the benefit of CAC testing to guide therapy on a primary outcome consisting of a composite of non-fatal myocardial infarction, probable or definite angina with revascularization, resuscitated cardiac arrest, non-fatal stroke (not transient ischemic attack (TIA)), CHD death, stroke death, other atherosclerotic death, or other cardiovascular disease (CVD) death. Many critical choices were faced in designing the trial, including: (1) the choice of primary outcome, (2) the choice of therapy, (3) the target population with corresponding ethical issues, (4) specifications of assumptions for sample size calculations, and (5) impact of uncertainty in these assumptions on power/sample size determination. Results We have proposed a sample size of 30,000 (800 events) which provides 92.7% power. Alternatively, sample sizes of 20,228 (539 events), 23,138 (617 events) and 27,078 (722 events) provide 80, 85, and 90% power. We have also allowed for uncertainty in our assumptions by computing average power integrated over specified prior distributions. 
    This relaxation of specificity indicates a reduction in power, dropping to 89.9% (95% confidence interval (CI): 89.8 to 89.9) for a sample size of 30,000. Sample sizes of 20,228, 23,138, and 27,078 provide power of 78.0% (77.9 to 78.0), 82.5% (82.5 to 82.6), and 87.2% (87.2 to 87.3), respectively. Limitations These power estimates depend on the form and parameters of the prior distributions. Conclusions Despite the pressing need for a randomized trial to evaluate the utility of CAC testing, conduct of such a trial requires recruiting a large patient population, making efficiency of critical importance. The large sample size is primarily due to targeting a study population at relatively low risk of a CVD event. Our calculations also illustrate the importance of formally considering uncertainty in power calculations for large trials, as standard power calculations may tend to overestimate power. PMID:22333998
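
    The event-driven power figures above can be reproduced in spirit with Schoenfeld's approximation for a two-arm log-rank test. The hazard ratio below is a hypothetical illustration; the abstract does not state the trial's assumed effect size, so these numbers are not the trial's own.

```python
from math import erf, log, sqrt

def norm_cdf(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def logrank_power(events, hazard_ratio, alloc=0.5):
    """Approximate power of a two-arm log-rank test at two-sided alpha = 0.05
    (Schoenfeld): power = Phi( sqrt(D * p * (1 - p)) * |ln HR| - z_0.975 )."""
    z_alpha = 1.959963984540054  # Phi^{-1}(0.975)
    ncp = sqrt(events * alloc * (1.0 - alloc)) * abs(log(hazard_ratio))
    return norm_cdf(ncp - z_alpha)

# Power grows with the number of accrued events (HR = 0.80 is illustrative).
powers = [logrank_power(d, 0.80) for d in (539, 617, 722, 800)]
```

    Averaging such a power function over a prior on the effect size, as the authors do, generally pulls the estimate below the value obtained from a single point assumption.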

  6. Coma dust scattering concepts applied to the Rosetta mission

    NASA Astrophysics Data System (ADS)

    Fink, Uwe; Rinaldi, Giovanna

    2015-09-01

    This paper describes basic concepts, and provides a framework, for the interpretation of the light scattered by the dust in a cometary coma as observed by instruments on a spacecraft such as Rosetta. It is shown that the expected optical depths are small enough that single scattering can be applied. Each of the quantities that contribute to the scattered intensity is discussed in detail. Using optical constants of the likely coma dust constituents, olivine, pyroxene and carbon, the scattering properties of the dust are calculated. For the resulting observable scattering intensities, several particle size distributions are considered: a simple power law, power laws with a small-particle cutoff, and log-normal distributions with various parameters. Within the context of a simple outflow model, the standard definition of Afρ for a circular observing aperture is expanded to an equivalent Afρ for an annulus and for a specific line-of-sight observation. The resulting equivalence between the observed intensity and Afρ is used to predict observable intensities for 67P/Churyumov-Gerasimenko at the spacecraft encounter near 3.3 AU and near perihelion at 1.3 AU. This is done by normalizing the particle production rates of the various size distributions to agree with observed ground-based Afρ values. Various geometries for the column densities in a cometary coma are considered. The calculations for a simple outflow model are compared with more elaborate Direct Simulation Monte Carlo (DSMC) models to define the limits of applicability of the simpler analytical approach. Our analytical approach can thus be applied to the majority of the Rosetta coma observations, particularly beyond several nuclear radii where the dust is no longer in a collisional environment, without recourse to computationally intensive DSMC calculations for specific cases.
    In addition to a spherically symmetric 1-dimensional approach, we investigate column densities for the 2-dimensional DSMC model on the day and night sides of the comet. Our calculations are also applied to estimates of the dust particle densities and flux, which are useful for the in-situ experiments on Rosetta.
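
    One of the size-distribution families mentioned above, a power law with a small-particle cutoff, can be sketched in a Hanner-like form n(a) ∝ (1 − a0/a)^M (a/a0)^(−N). All parameter values here are illustrative placeholders, not the paper's fitted values.

```python
import numpy as np

def trapezoid(y, x):
    """Trapezoidal integration (written out to avoid version-specific numpy helpers)."""
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x)))

def hanner(a, a0=0.1e-6, N=3.7, M=20):
    """Power-law grain-size distribution with a small-particle cutoff at a0 [m]."""
    return np.where(a > a0,
                    (1.0 - a0 / np.maximum(a, a0))**M * (a / a0)**(-N),
                    0.0)

a = np.logspace(-7, -3, 2000)               # grain radius [m]: 0.1 um to 1 mm
n = hanner(a)
n /= trapezoid(n, a)                        # normalize to a probability density
mean_geo = trapezoid(n * np.pi * a**2, a)   # mean geometric cross-section [m^2]
```

    The cutoff suppresses the smallest grains, which otherwise dominate a pure power law; the mean geometric cross-section is the first step toward the scattered-intensity integrals discussed in the abstract.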

  7. Particle size distribution and perchlorate levels in settled dust from urban roads, parks, and roofs in Chengdu, China.

    PubMed

    Li, Yiwen; Shen, Yang; Pi, Lu; Hu, Wenli; Chen, Mengqin; Luo, Yan; Li, Zhi; Su, Shijun; Ding, Sanglan; Gan, Zhiwei

    2016-01-01

    A total of 27 settled dust samples were collected from urban roads, parks, and roofs in Chengdu, China to investigate the particle size distribution and perchlorate levels in different size fractions. Fine particle size fractions (<250 μm) were the dominant composition of the settled dust samples, with mean percentages of 80.2%, 69.5%, and 77.2% for the urban roads, roofs, and parks, respectively. Perchlorate was detected in all of the size-fractionated dust samples, with concentrations ranging from 73.0 to 6160 ng g⁻¹, and the median perchlorate levels increased with decreasing particle size. The perchlorate level in the finest fraction (<63 μm) was significantly higher than those in the coarser fractions. To our knowledge, this is the first report on perchlorate concentrations in different particle size fractions. The calculated perchlorate loadings revealed that perchlorate was mainly associated with finer particles (<125 μm). An exposure assessment indicated that exposure to perchlorate via settled road dust intake is safe for both children and adults in Chengdu, China. However, because perchlorate exists mainly in fine particles, it can transfer into surface water and the atmosphere through runoff, wind erosion, or traffic emission, and could act as an important perchlorate pollution source for the indoor environment; this merits further study.
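
    The perchlorate-loading idea, weighting each size fraction's concentration by its mass share of the bulk dust, can be sketched as follows. The per-fraction numbers are hypothetical, since the abstract reports only overall ranges.

```python
# (concentration [ng/g], mass fraction of bulk dust) per size bin - hypothetical values
fractions = {
    "<63 um":     (6160.0, 0.30),
    "63-125 um":  (2500.0, 0.25),
    "125-250 um": (1200.0, 0.20),
    ">250 um":    ( 300.0, 0.25),
}

# loading: ng of perchlorate carried by each fraction per gram of bulk dust
loading = {k: conc * mf for k, (conc, mf) in fractions.items()}
total_loading = sum(loading.values())
fine_share = (loading["<63 um"] + loading["63-125 um"]) / total_loading
```

    Even when fine fractions are not the largest mass share, their higher concentrations can dominate the total loading, which is the mechanism behind the paper's conclusion that perchlorate is mainly associated with the finer particles.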

  8. Using a Novel Optical Sensor to Characterize Methane Ebullition Processes

    NASA Astrophysics Data System (ADS)

    Delwiche, K.; Hemond, H.; Senft-Grupp, S.

    2015-12-01

    We have built a novel bubble size sensor that is rugged, economical to build, and capable of accurately measuring methane bubble sizes in aquatic environments over long deployment periods. Accurate knowledge of methane bubble size is important for calculating atmospheric methane emissions from inland waters. By routing bubbles past pairs of optical detectors, the sensor accurately measures bubble sizes between 0.01 mL and 1 mL, with slightly reduced accuracy for bubbles from 1 mL to 1.5 mL. The sensor can handle flow rates up to approximately 3 bubbles per second. Optional sensor attachments include a gas collection chamber for methane sampling and volume verification, and a detachable extension funnel to customize the quantity of intercepted bubbles. Additional features include a data cable running from the deployed sensor to a custom surface buoy, allowing us to download data without disturbing ongoing bubble measurements. We have successfully deployed numerous sensors in Upper Mystic Lake at depths down to 18 m, 1 m above the sediment. The resulting data give us bubble size distributions and the precise timing of bubbling events over a period of several months. In addition to allowing us to characterize typical bubble size distributions, these data allow us to draw important conclusions about temporal variations in bubble sizes, as well as bubble dissolution rates within the water column.
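
    The optical-detector-pair principle can be sketched as follows: the delay between two detectors a known distance apart gives the bubble's rise speed, and the occlusion time at one detector gives its length. The slug-in-a-tube geometry and all dimensions below are assumptions for illustration, not the sensor's actual design values.

```python
import math

def bubble_volume(pair_delay_s, occlusion_s, pair_spacing_m, tube_radius_m):
    """Bubble volume [m^3] from detector-pair timing, treating the bubble as a
    cylindrical slug that fills the tube cross-section."""
    speed = pair_spacing_m / pair_delay_s   # rise speed between the two detectors
    length = speed * occlusion_s            # slug length from the occlusion time
    return math.pi * tube_radius_m**2 * length

v = bubble_volume(0.05, 0.10, 0.01, 0.005)  # ~1.57e-6 m^3, i.e. about 1.57 mL
```

    In practice a calibration against collected gas volumes (as with the sensor's gas collection chamber) would correct for bubbles that do not fully occlude the tube.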

  9. Compaction of Chromite Cumulates applying a Centrifuging Piston-Cylinder

    NASA Astrophysics Data System (ADS)

    Manoochehri, S.; Schmidt, M. W.

    2012-12-01

    Stratiform accumulations of chromite cumulates, such as the UG2 chromitite layer in the Bushveld Complex, are a common feature of most large layered mafic intrusions. The time scales and mechanics of gravitationally driven crystal settling and compaction, and the feasibility of these processes for the formation of such cumulate layers, are investigated through a series of high temperature (1280-1300 °C) centrifuge-assisted experiments at 100-2000 g and 0.4-0.6 GPa. A mixture of natural chromite, with defined grain sizes (means of 5 μm, 13 μm, and 52 μm), and a melt with a composition thought to represent the parental magma of the Bushveld Complex was first chemically and texturally equilibrated at static conditions and then centrifuged. Centrifugation leads to a single cumulate layer formed at the gravitational bottom of the capsule. This layer was analysed for porosity, mean grain size, size distribution and travelling distance of chromite crystals. The experimentally observed mechanical settling velocity of chromite grains in a suspension with ~24 vol% crystals is calculated to be about half (~0.53) of the Stokes settling velocity, consistent with a sedimentation exponent n of 2.35±0.3. The settling leads to a porosity of about 52% in the chromite layer. Accumulation rates of chromite orthocumulates with an initial crystal content in the melt of 1% and grain sizes of 2 mm are thus around 0.6 m/day. To achieve more compacted chromite piles, centrifugation times and accelerations were increased. Within each experiment the crystal content of the cumulate layer increases downward almost linearly, at least in the lower 2/3 of the cumulate pile. Although porosity in the lowermost segment of the chromite layer decreases with increasing effective stress integrated over time, the absolute decrease is smaller than for experiments with olivine (from a previous study).
    The formation time of a single ½-meter chromite layer with 70 vol% chromite is calculated to be around 20 years, whereas this value is around 0.4 years for olivine cumulates. When considering a natural outcrop of a layered intrusion with multiple layers of about 50 meters height, adcumulate formation time decreases to a few months. With increasing effective stress integrated over time during centrifugation, crystal size distribution histograms shift slightly toward larger grain sizes, although mean grain sizes change only within a narrow range. Classic crystal size distribution profiles corrected for true 3D sizes (CSDCorrections program) of the chromite grains in the different experiments show a collection of parallel log-linear trends at larger grain sizes with a very slight overturn at small grain sizes. This is in close agreement with idealized CSD plots of adcumulus growth.
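
    The reported hindered settling can be checked against the Richardson-Zaki form v = v_Stokes (1 − φ)^n with the fitted exponent n = 2.35: at φ ≈ 0.24 the hindrance factor is ≈0.53, matching the abstract. The melt properties passed to the Stokes helper below are placeholders, not the study's values.

```python
def stokes_velocity(radius_m, delta_rho, viscosity, g=9.81):
    """Stokes settling speed of a sphere: v = 2 r^2 (rho_s - rho_f) g / (9 mu)."""
    return 2.0 * radius_m**2 * delta_rho * g / (9.0 * viscosity)

def hindrance_factor(phi, n=2.35):
    """Richardson-Zaki correction (1 - phi)^n for a suspension of crystal fraction phi."""
    return (1.0 - phi)**n

factor = hindrance_factor(0.24)   # ~0.53 at ~24 vol% crystals, as reported
```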

  10. Visualizing excipient composition and homogeneity of Compound Liquorice Tablets by near-infrared chemical imaging

    NASA Astrophysics Data System (ADS)

    Wu, Zhisheng; Tao, Ou; Cheng, Wei; Yu, Lu; Shi, Xinyuan; Qiao, Yanjiang

    2012-02-01

    This study demonstrated that near-infrared chemical imaging (NIR-CI) is a promising technology for visualizing the spatial distribution and homogeneity of Compound Liquorice Tablets. The starch distribution (indirectly, the plant extract) could be spatially determined using the basic analysis of correlation between analytes (BACRA) method; the correlation coefficients between the starch spectrum and the spectrum of each sample were greater than 0.95. Based on the accurate determination of the starch distribution, a method to assess homogeneity was proposed using histogram graphs. The result demonstrated that the starch distribution in sample 3 was relatively heterogeneous according to four statistical parameters. Furthermore, the agglomerate domains in each tablet were detected using score image layers from principal component analysis (PCA). Finally, a novel method named Standard Deviation of Macropixel Texture (SDMT) was introduced to detect agglomerates and heterogeneity from binary images. Each binary image was divided into macropixels of different side lengths, and the number of zero values in each macropixel was counted to calculate a standard deviation; a curve was then fitted to the relationship between the standard deviation and the macropixel side length. The results demonstrated inter-tablet heterogeneity of both the starch and total-compound distributions; at the same time, the similarity of the starch distribution and the inconsistency of the total-compound distribution within tablets were indicated by the slope and intercept parameters of the fitted curves.
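
    The SDMT statistic described above — tile a binary image into macropixels, count the zeros per macropixel, and take the standard deviation of the counts — can be sketched as:

```python
import numpy as np

def sdmt(binary_img, size):
    """Standard deviation of zero-counts over non-overlapping size x size
    macropixels of a binary image (edges that do not fit are cropped)."""
    h, w = binary_img.shape
    h2, w2 = h - h % size, w - w % size
    tiles = (binary_img[:h2, :w2]
             .reshape(h2 // size, size, w2 // size, size)
             .swapaxes(1, 2)
             .reshape(-1, size, size))
    zero_counts = (tiles == 0).sum(axis=(1, 2))
    return float(zero_counts.std())
```

    A homogeneous image gives zero deviation at every macropixel size, while agglomerates produce a deviation that tends to grow with the macropixel side length — the behaviour the fitted slope and intercept summarize.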

  11. Time Dependence of Aerosol Light Scattering Downwind of Forest Fires

    NASA Astrophysics Data System (ADS)

    Kleinman, L. I.; Sedlacek, A. J., III; Wang, J.; Lewis, E. R.; Springston, S. R.; Chand, D.; Shilling, J.; Arnott, W. P.; Freedman, A.; Onasch, T. B.; Fortner, E.; Zhang, Q.; Yokelson, R. J.; Adachi, K.; Buseck, P. R.

    2017-12-01

    In the first phase of BBOP (Biomass Burn Observation Project), a Department of Energy (DOE) sponsored study, wildland fires in the Pacific Northwest were sampled from the G-1 aircraft via sequences of transects that encountered emissions whose age (time since emission) ranged from approximately 15 minutes to four hours. Comparisons between transects allowed us to determine the near-field time evolution of trace gases, aerosol particles, and optical properties. The fractional increase in aerosol concentration with plume age was typically less than a third of the fractional increase in light scattering; in some fires the increase in light scattering exceeded a factor of two. Two possible causes for the discrepancy between scattering and aerosol mass are (i) the downwind formation of refractory tar balls that are not detected by the AMS and therefore contribute to scattering but not to aerosol mass, and (ii) changes to the aerosol size distribution. Both possibilities are considered. Our information on tar balls comes from an analysis of TEM grids. A direct determination of size changes is complicated by extremely high aerosol number concentrations that caused coincidence problems for the PCASP and UHSAS probes. We instead construct a set of plausible log-normal size distributions and, for each member of the set, do Mie calculations to determine mass scattering efficiency (MSE), Ångström exponents, and backscatter ratios. Best-fit size distributions are selected by comparison with observed data derived from multi-wavelength scattering measurements, an extrapolated FIMS size distribution, and mass measurements from an SP-AMS. MSE at 550 nm varies from a typical near-source value of 2-3 to about 4 in aged air.
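
    The link between an assumed log-normal size distribution and mass scattering efficiency can be sketched in the large-particle limit, where for spheres MSE = 3 Q_sca / (4 ρ r_eff) with r_eff the area-weighted effective radius. The geometric-median radius, geometric standard deviation, particle density, and Q_sca below are illustrative stand-ins; the study's values came from full Mie calculations.

```python
import math

def lognormal_r_eff(r_g, sigma_g):
    """Effective (area-weighted) radius of a lognormal number distribution:
    r_eff = <r^3>/<r^2> = r_g * exp(2.5 * ln(sigma_g)^2)."""
    return r_g * math.exp(2.5 * math.log(sigma_g)**2)

def mass_scattering_efficiency(q_sca, r_eff, rho):
    """MSE = scattering cross-section / particle mass = 3 Q_sca / (4 rho r_eff) [m^2/kg]."""
    return 3.0 * q_sca / (4.0 * rho * r_eff)

r_eff = lognormal_r_eff(0.12e-6, 1.5)                               # [m]
mse_m2_per_g = mass_scattering_efficiency(1.5, r_eff, 1.3e3) / 1000.0
```

    Growing r_eff (aging, coagulation) at fixed Q_sca lowers this simple estimate, so the observed rise of MSE toward ~4 m²/g in aged air reflects Q_sca increasing faster than r_eff, which is why the wavelength-dependent Mie treatment is needed.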

  12. Partitioning of pyroclasts between ballistic transport and a convective plume: Kīlauea volcano, 19 March 2008

    NASA Astrophysics Data System (ADS)

    Houghton, B. F.; Swanson, D. A.; Biass, S.; Fagents, S. A.; Orr, T. R.

    2017-05-01

    We describe the discrete ballistic and wind-advected products of a small, but exceptionally well-characterized, explosive eruption of wall-rock-derived pyroclasts from Kīlauea volcano on 19 March 2008 and, for the first time, integrate the size distributions of the two subpopulations to reconstruct the true size distribution of a population of pyroclasts as it exited the vent. Based on thinning and fining relationships, the wind-advected fraction had a mass of 6.1 × 10⁵ kg and a thickness half-distance of 110 m, placing it at the bottom end of the magnitude and intensity spectra of pyroclastic falls. The ballistic population was mapped, in the field and by using structure-from-motion techniques, down to diameters of 10-20 cm over an area of 0.1 km², with an estimated mass of 1 × 10⁵ kg. Initial ejection velocities of 50-80 m/s were estimated from inversion of isopleths. The total grain size distribution was estimated using a mass partitioning of 98% wind-advected material and 2% ballistics, resulting in median and sorting values of -1.7ϕ and 3.1ϕ. It is markedly broader than those calculated for the products of magmatic explosive eruptions, because the grain size of the 19 March 2008 clast population is unrelated to a volcanic fragmentation event and instead was "inherited" from a population of talus clasts that temporarily blocked the vent prior to the eruption. Despite a conspicuous near-field presence, the ballistic subpopulation has only a minor influence on the grain size distribution because of its rapid thinning and fining away from source.
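
    Merging the two subpopulations into a total grain-size distribution follows the mass partition quoted above (98% wind-advected, 2% ballistic). The per-bin weights here are synthetic Gaussians in ϕ used only to illustrate the bookkeeping, with the median and Inman sorting read off the combined cumulative curve.

```python
import numpy as np

phi = np.arange(-6.0, 6.5, 0.5)                     # grain size, phi units
fall = np.exp(-0.5 * ((phi + 1.0) / 3.0)**2)        # wind-advected (synthetic)
ballistic = np.exp(-0.5 * ((phi + 5.0) / 0.8)**2)   # ballistic, coarse (synthetic)
fall /= fall.sum()
ballistic /= ballistic.sum()

total = 0.98 * fall + 0.02 * ballistic              # mass partitioning from the study
cdf = np.cumsum(total)

def phi_percentile(p):
    """Grain size (phi) at cumulative mass fraction p."""
    return float(np.interp(p, cdf, phi))

median_phi = phi_percentile(0.50)
sorting_phi = (phi_percentile(0.84) - phi_percentile(0.16)) / 2.0  # Inman sorting
```

    With only 2% of the mass, the coarse ballistic mode barely moves the median — the same reason the paper finds the ballistic subpopulation has a minor influence on the total distribution.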

  13. Fast, accurate photon beam accelerator modeling using BEAMnrc: A systematic investigation of efficiency enhancing methods and cross-section data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fragoso, Margarida; Kawrakow, Iwan; Faddegon, Bruce A.

    In this work, an investigation of efficiency enhancing methods and cross-section data in the BEAMnrc Monte Carlo (MC) code system is presented. Additionally, BEAMnrc was compared with VMC++, another special-purpose MC code system that has recently been enhanced for the simulation of the entire treatment head. BEAMnrc and VMC++ were used to simulate a 6 MV photon beam from a Siemens Primus linear accelerator (linac) and phase space (PHSP) files were generated at 100 cm source-to-surface distance for the 10×10 and 40×40 cm² field sizes. The BEAMnrc parameters/techniques under investigation were grouped by (i) photon and bremsstrahlung cross sections, (ii) approximate efficiency improving techniques (AEITs), (iii) variance reduction techniques (VRTs), and (iv) a VRT (bremsstrahlung photon splitting) in combination with an AEIT (charged particle range rejection). The BEAMnrc PHSP file obtained without the efficiency enhancing techniques under study or, when not possible, with their default values (e.g., EXACT algorithm for the boundary crossing algorithm) and with the default cross-section data (PEGS4 and Bethe-Heitler) was used as the "baseline" for accuracy verification of the PHSP files generated from the different groups described previously. Subsequently, a selection of the PHSP files was used as input for DOSXYZnrc-based water phantom dose calculations, which were verified against measurements. The performance of the different VRTs and AEITs available in BEAMnrc and of VMC++ was specified by the relative efficiency, i.e., by the efficiency of the MC simulation relative to that of the BEAMnrc baseline calculation.
    The highest relative efficiencies were ≈935 (≈111 min on a single 2.6 GHz processor) and ≈200 (≈45 min on a single processor) for the 10×10 cm² field size with 50 million histories and the 40×40 cm² field size with 100 million histories, respectively, using the VRT directional bremsstrahlung splitting (DBS) with no electron splitting. When DBS was used with electron splitting and combined with augmented charged particle range rejection, a technique recently introduced in BEAMnrc, relative efficiencies were ≈420 (≈253 min on a single processor) and ≈175 (≈58 min on a single processor) for the 10×10 and 40×40 cm² field sizes, respectively. Calculations of the Siemens Primus treatment head with VMC++ produced relative efficiencies of ≈1400 (≈6 min on a single processor) and ≈60 (≈4 min on a single processor) for the 10×10 and 40×40 cm² field sizes, respectively. BEAMnrc PHSP calculations with DBS alone or DBS in combination with charged particle range rejection were more efficient than the other efficiency enhancing techniques used. Using VMC++, accurate simulations of the entire linac treatment head were performed within minutes on a single processor. Noteworthy differences (±1%-3%) in the mean energy, planar fluence, and angular and spectral distributions were observed with the NIST bremsstrahlung cross sections compared with those of Bethe-Heitler (the BEAMnrc default bremsstrahlung cross section). However, MC calculated dose distributions in water phantoms (using combinations of VRTs/AEITs and cross-section data) agreed within 2% of measurements. Furthermore, MC calculated dose distributions in a simulated water/air/water phantom, using NIST cross sections, were within 2% agreement with the BEAMnrc Bethe-Heitler default case.
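
    The relative-efficiency figure of merit used above follows the standard Monte Carlo definition ε = 1/(s²T), where s² is the variance of the scored quantity and T the CPU time; a minimal sketch:

```python
def mc_efficiency(variance, cpu_time_s):
    """Monte Carlo efficiency: eps = 1 / (s^2 * T)."""
    return 1.0 / (variance * cpu_time_s)

def relative_efficiency(variance, cpu_time_s, ref_variance, ref_cpu_time_s):
    """Efficiency of a run relative to a baseline run: eps / eps_ref."""
    return mc_efficiency(variance, cpu_time_s) / mc_efficiency(ref_variance, ref_cpu_time_s)
```

    Halving the variance at the same run time doubles the relative efficiency, which is why variance reduction techniques such as DBS can yield the two-to-three-order-of-magnitude gains quoted above.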

  14. Chemical characterization of size-resolved aerosols in four seasons and hazy days in the megacity Beijing of China.

    PubMed

    Sun, Kang; Liu, Xingang; Gu, Jianwei; Li, Yunpeng; Qu, Yu; An, Junling; Wang, Jingli; Zhang, Yuanhang; Hu, Min; Zhang, Fang

    2015-06-01

    Size-resolved aerosol samples were collected by MOUDI in four seasons in 2007 in Beijing. The PM10 and PM1.8 mass concentrations were 166.0±120.5 and 91.6±69.7 μg/m³, respectively, throughout the measurement, with seasonal variation: nearly two times higher in autumn than in summer and spring. Serious fine particle pollution occurred in winter, with a PM1.8/PM10 ratio of 0.63, higher than in the other seasons. The size distribution of PM showed obvious seasonal and diurnal variation, with a smaller fine-mode peak in spring and in the daytime. OM (organic matter = 1.6 × OC (organic carbon)) and SIA (secondary inorganic aerosol) were major components of fine particles, while OM, SIA and Ca2+ were major components of coarse particles. Moreover, secondary components, mainly SOA (secondary organic aerosol) and SIA, accounted for 46%-96% of each size bin in fine particles, which means that secondary pollution existed all year. Sulfates and nitrates, primarily in the form of (NH4)2SO4, NH4NO3, CaSO4, Na2SO4 and K2SO4 as calculated by the model ISORROPIA II, were major components of the solid phase in fine particles. The PM concentration and size distribution were similar across the four seasons on non-haze days, while large differences occurred on haze days, indicating that the seasonal variations of PM concentration and size distribution were dominated by haze days. The SIA concentrations and fractions of nearly all size bins were higher on haze days than on non-haze days, which was attributed to heterogeneous aqueous reactions on haze days in the four seasons. Copyright © 2015. Published by Elsevier B.V.

  15. SU-E-T-800: Verification of Acuros XB Dose Calculation Algorithm at Air Cavity-Tissue Interface Using Film Measurement for Small Fields of 6-MV Flattening Filter-Free Beams

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kang, S; Suh, T; Chung, J

    2015-06-15

    Purpose: To verify the dose accuracy of the Acuros XB (AXB) dose calculation algorithm at the air-tissue interface, using an inhomogeneous phantom, for 6-MV flattening filter-free (FFF) beams. Methods: An inhomogeneous phantom containing an air cavity was manufactured to verify dose accuracy at the air-tissue interface; the phantom incorporated air cavities of 1 and 3 cm thickness. To evaluate the central axis doses (CAD) and dose profiles at the interface, dose calculations were performed for 3 × 3 and 4 × 4 cm² fields of 6 MV FFF beams with AAA and AXB in the Eclipse treatment planning system. Measurements in this region were performed with Gafchromic film. Root mean square errors (RMSE) were analyzed for the calculated and measured dose profiles, which were divided into an inner-profile region (>80%) and a penumbra region (20% to 80%). To quantify differences between the distributions, gamma evaluation with 3%/3 mm agreement criteria was used. Results: For the percentage differences (%Diffs) between measured and calculated CAD at the interface, AXB showed better agreement than AAA. The %Diffs increased with increasing air cavity thickness, similarly for both algorithms. In the RMSEs of the inner profile, AXB was more accurate than AAA; the difference was up to a factor of 6 due to overestimation by AAA. RMSEs in the penumbra showed larger differences with increasing measurement depth, and gamma evaluation likewise showed that passing rates decreased in the penumbra. Conclusion: This study demonstrated that dose calculation with AXB is more accurate than with AAA at the air-tissue interface. The 2D dose distributions calculated with AXB, for both the inner profile and the penumbra, agreed better with measurement than those calculated with AAA across the measurement depths and air cavity sizes studied.
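
    The gamma evaluation used above compares each reference point against evaluated points under combined dose-difference and distance-to-agreement criteria (after Low et al.); a brute-force 1D sketch with 3%/3 mm criteria and a synthetic profile:

```python
import numpy as np

def gamma_1d(x_ref, d_ref, x_eval, d_eval, dose_frac=0.03, dta_mm=3.0):
    """1D gamma index: for each reference point, the minimum over evaluated
    points of sqrt((dx/dta)^2 + (dD / (dose_frac * Dmax))^2)."""
    d_norm = dose_frac * d_ref.max()
    gamma = np.empty(len(x_ref), dtype=float)
    for i in range(len(x_ref)):
        dist = (x_eval - x_ref[i]) / dta_mm
        diff = (d_eval - d_ref[i]) / d_norm
        gamma[i] = np.sqrt(dist**2 + diff**2).min()
    return gamma

x = np.linspace(0.0, 50.0, 101)                  # position [mm]
profile = np.exp(-0.5 * ((x - 25.0) / 8.0)**2)   # synthetic dose profile
pass_rate = (gamma_1d(x, profile, x, profile) <= 1.0).mean()
```

    A point passes when gamma ≤ 1; penumbra regions fail more easily because steep gradients turn small spatial misalignments into large local dose differences, consistent with the reduced penumbra passing rates reported above.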

  16. SU-E-T-37: A GPU-Based Pencil Beam Algorithm for Dose Calculations in Proton Radiation Therapy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kalantzis, G; Leventouri, T; Tachibana, H

    Purpose: Recent developments in radiation therapy have been focused on applications of charged particles, especially protons. Over the years several dose calculation methods have been proposed in proton therapy; a common characteristic of all these methods is their extensive computational burden. In the current study we present, for the first time to the best of our knowledge, a GPU-based PBA for proton dose calculations in Matlab. Methods: We employed an analytical expression for the proton depth-dose distribution. The central-axis term is taken from the broad-beam central-axis depth dose in water modified by an inverse square correction, while the distribution of the off-axis term was considered Gaussian. The serial code was implemented in MATLAB and was launched on a desktop with a quad core Intel Xeon X5550 at 2.67 GHz with 8 GB of RAM. For the parallelization on the GPU, the parallel computing toolbox was employed and the code was launched on a GTX 770 with Kepler architecture. The performance comparison was based on the speedup factors. Results: The performance of the GPU code was evaluated for three different energies: low (50 MeV), medium (100 MeV) and high (150 MeV). Four square fields were selected for each energy, and the dose calculations were performed with both the serial and parallel codes for a homogeneous water phantom with size 300×300×300 mm³. The resolution of the PBs was set to 1.0 mm. The maximum speedup of ∼127 was achieved for the highest energy and the largest field size. Conclusion: A GPU-based PB algorithm for proton dose calculations in Matlab was presented, achieving a maximum speedup of ∼127. Future directions of the current work include extension of our method to dose calculation in heterogeneous phantoms.
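
    The kernel described above — a central-axis depth-dose term with an inverse-square correction times a Gaussian off-axis term — can be sketched as follows (in Python rather than the study's Matlab; the SSD and σ values are placeholders):

```python
import math

def pencil_beam_dose(r_mm, z_mm, central_axis_dose, sigma_mm, ssd_mm=2000.0):
    """Illustrative pencil-beam kernel: D(r, z) = DD(z) * inverse-square * 2D Gaussian.
    central_axis_dose stands in for the measured broad-beam depth dose DD(z)."""
    inv_square = (ssd_mm / (ssd_mm + z_mm))**2
    lateral = math.exp(-r_mm**2 / (2.0 * sigma_mm**2)) / (2.0 * math.pi * sigma_mm**2)
    return central_axis_dose * inv_square * lateral
```

    Because each pencil beam's contribution at each voxel is independent, the sum over beams and voxels maps naturally onto GPU threads, which is the source of the reported speedup.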

  17. Distribution of Different Sized Ocular Surface Vessels in Diabetics and Normal Individuals.

    PubMed

    Banaee, Touka; Pourreza, Hamidreza; Doosti, Hassan; Abrishami, Mojtaba; Ehsaei, Asieh; Basiry, Mohsen; Pourreza, Reza

    2017-01-01

    To compare the distribution of different sized vessels using digital photographs of the ocular surface of diabetic and normal individuals. In this cross-sectional study, red-free conjunctival photographs of diabetic and normal individuals, aged 30-60 years, were taken under defined conditions and analyzed using a Radon transform-based algorithm for vascular segmentation. The image areas occupied by vessels (AOV) of different diameters were calculated. The main outcome measure was the distribution curve of mean AOV of different sized vessels. Secondary outcome measures included total AOV and the standard deviation (SD) of AOV of different sized vessels. Two hundred and sixty-eight diabetic patients and 297 normal (control) individuals were included, differing in age (45.50 ± 5.19 vs. 40.38 ± 6.19 years, P < 0.001), systolic (126.37 ± 20.25 vs. 119.21 ± 15.81 mmHg, P < 0.001) and diastolic (78.14 ± 14.21 vs. 67.54 ± 11.46 mmHg, P < 0.001) blood pressures. The distribution curves of mean AOV differed between patients and controls (smaller AOV for larger vessels in patients; P < 0.001), as well as between patients without retinopathy and those with non-proliferative diabetic retinopathy (NPDR), with larger AOV for smaller vessels in NPDR (P < 0.001). Controlling for the effect of confounders, patients had a smaller total AOV, larger total SD of AOV, and a more skewed distribution curve of vessels compared to controls. Presence of diabetes mellitus is associated with contraction of larger vessels in the conjunctiva. Smaller vessels dilate with diabetic retinopathy. These findings may be useful in the photographic screening of diabetes mellitus and retinopathy.

  18. SU-E-T-120: Analytic Dose Verification for Patient-Specific Proton Pencil Beam Scanning Plans

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chang, C; Mah, D

    2015-06-15

    Purpose: To independently verify the QA dose of proton pencil beam scanning (PBS) plans using an analytic dose calculation model. Methods: An independent proton dose calculation engine was created using the same commissioning measurements as those employed to build our commercially available treatment planning system (TPS). Each proton PBS plan is exported from the TPS in DICOM format and calculated by this independent dose engine in a standard 40 × 40 × 40 cm water tank. This three-dimensional dose grid is then compared with the QA dose calculated by the commercial TPS, using a standard gamma criterion. A total of 18 measured pristine Bragg peaks, ranging from 100 to 226 MeV, are used in the model; intermediate proton energies are interpolated. Similarly, optical properties of the spots are measured in air over 15 cm upstream and downstream, and fitted to a second-order polynomial. Multiple Coulomb scattering in water is approximated analytically using the Preston and Kohler formula for faster calculation, and the effect of range shifters on spot size is modeled with the generalized Highland formula. Note that the above formulation approximates multiple Coulomb scattering in water; we therefore chose not to use the full Moliere/Hanson form. Results: Initial examination of 3 patient-specific prostate PBS plans shows agreement between the 3D dose distributions calculated by the TPS and by the independent proton PBS dose calculation engine. Both calculated dose distributions were compared with actual measurements at three different depths per beam, and good agreement is again observed. Conclusion: The results show that 3D dose distributions calculated by this independent proton PBS dose engine are in good agreement with both TPS calculations and actual measurements. This tool can potentially be used to reduce the number of measurement depths required for patient-specific proton PBS QA.
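
    The Highland formula referenced above has the familiar form θ0 = (14.1 MeV / pv) √(L/L_R) [1 + (1/9) log10(L/L_R)]; a direct transcription, with the radiation length of water (≈36.1 cm) assumed for the example:

```python
import math

def highland_theta0(pv_mev, thickness_cm, rad_length_cm):
    """Highland multiple-Coulomb-scattering angle [rad]:
    theta0 = (14.1 / pv) * sqrt(L / Lr) * (1 + log10(L / Lr) / 9),
    with pv in MeV and thickness L in the same units as Lr."""
    t = thickness_cm / rad_length_cm
    return (14.1 / pv_mev) * math.sqrt(t) * (1.0 + math.log10(t) / 9.0)

theta_10cm = highland_theta0(200.0, 10.0, 36.1)   # ~0.035 rad for pv = 200 MeV
```

    The Gaussian width this angle implies is what such engines typically add in quadrature to the in-air spot size when a range shifter is present.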

  19. Performance of dose calculation algorithms from three generations in lung SBRT: comparison with full Monte Carlo‐based dose distributions

    PubMed Central

    Kapanen, Mika K.; Hyödynmaa, Simo J.; Wigren, Tuija K.; Pitkänen, Maunu A.

    2014-01-01

    The accuracy of dose calculation is a key challenge in stereotactic body radiotherapy (SBRT) of the lung. We have benchmarked three photon beam dose calculation algorithms — pencil beam convolution (PBC), anisotropic analytical algorithm (AAA), and Acuros XB (AXB) — implemented in a commercial treatment planning system (TPS), Varian Eclipse. Dose distributions from full Monte Carlo (MC) simulations were regarded as a reference. In the first stage, for four patients with central lung tumors, treatment plans using 3D conformal radiotherapy (CRT) technique applying 6 MV photon beams were made using the AXB algorithm, with planning criteria according to the Nordic SBRT study group. The plans were recalculated (with same number of monitor units (MUs) and identical field settings) using BEAMnrc and DOSXYZnrc MC codes. The MC‐calculated dose distributions were compared to corresponding AXB‐calculated dose distributions to assess the accuracy of the AXB algorithm, to which then other TPS algorithms were compared. In the second stage, treatment plans were made for ten patients with 3D CRT technique using both the PBC algorithm and the AAA. The plans were recalculated (with same number of MUs and identical field settings) with the AXB algorithm, then compared to original plans. Throughout the study, the comparisons were made as a function of the size of the planning target volume (PTV), using various dose‐volume histogram (DVH) and other parameters to quantitatively assess the plan quality. In the first stage also, 3D gamma analyses with threshold criteria 3%/3 mm and 2%/2 mm were applied. The AXB‐calculated dose distributions showed relatively high level of agreement in the light of 3D gamma analysis and DVH comparison against the full MC simulation, especially with large PTVs, but, with smaller PTVs, larger discrepancies were found. 
    Gamma agreement index (GAI) values between 95.5% and 99.6% were achieved for all the plans with the 3%/3 mm threshold criteria, but the 2%/2 mm criteria showed larger discrepancies. The TPS algorithm comparison showed large dose discrepancies in the PTV mean dose (D50%): nearly 60% for the PBC algorithm and nearly 20% for the AAA, occurring in the small PTV size range. This work suggests the application of independent plan verification when the AAA or the AXB algorithm is utilized in lung SBRT with PTVs smaller than 20‐25 cc. The calculated data from this study can be used in converting SBRT protocols based on type 'a' and/or type 'b' algorithms to the most recent generation of type 'c' algorithms, such as the AXB algorithm. PACS numbers: 87.55.‐x, 87.55.D‐, 87.55.K‐, 87.55.kd, 87.55.Qr PMID:24710454

  20. Encapsidation of Linear Polyelectrolyte in a Viral Nanocontainer

    NASA Astrophysics Data System (ADS)

    Hu, Yufang

    2005-03-01

    We present results from a combined experimental and theoretical study of the self-assembly of a model icosahedral virus, Cowpea Chlorotic Mottle Virus (CCMV). The formation of native CCMV capsids is believed to be driven primarily by the electrostatic interactions between the viral RNA and the positively charged capsid interior, as well as by the hydrophobic interactions between capsid protein subunits. To probe these molecular interactions, in vitro self-assembly reactions are carried out using the CCMV capsid protein and a synthetic linear polyelectrolyte, sodium polystyrene sulfonate (NaPSS), which functions as an analog of viral RNA. Under appropriate solution conditions, NaPSS is encapsidated by the viral capsid. The molecular weight of NaPSS is systematically varied, and the resulting average capsid size, size distribution, and particle morphology are measured by transmission electron microscopy. The correlation between capsid size and packaged cargo size, as well as the upper limit of capsid packaging capacity, is characterized. To elucidate the physical role played by the encapsidated polyelectrolyte in determining the preferred size of spherical viruses, we have used a mean-field approach to calculate the free energy of the virus-like particle as a function of chain length (and of the strength of the chain/capsid attractive interaction). We find good agreement between our analytical calculations and experimental results.

  1. Analyses of scattering characteristics of chosen anthropogenic aerosols

    NASA Astrophysics Data System (ADS)

    Kaszczuk, Miroslawa; Mierczyk, Zygmunt; Muzal, Michal

    2008-10-01

    In this work, the scattering profiles of selected anthropogenic aerosols were analyzed at two wavelengths (λ1 = 1064 nm and λ2 = 532 nm). Three pyrotechnic mixtures (DM11, M2, M16) were taken as examples of anthropogenic aerosol. The main parameters of the smoke particles were first analyzed and described, with particular attention to particle shape and size. Particle shape was analyzed on the basis of SEM images, and particle size was measured. The fraction of particles in each size class was analyzed, and the parameters of smoke particles of characteristic sizes, together with the function describing the aerosol size distribution (ASD), were determined. The scattering profiles were analyzed using models of scattering on both spherical and nonspherical particles. For spherical particles the Rayleigh-Mie model was used; for nonspherical particles a spheroid model was applied first, followed by the Rayleigh-Mie model. For each characteristic particle, four parameters were calculated (effective scattering cross section σSCA, effective backscattering cross section σBSCA, scattering efficiency QSCA, backscattering efficiency QBSCA), together with the backscattering coefficient β for the whole particle population. The obtained results were compared with the same parameters calculated for a natural aerosol (cirrus cloud).
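For particles much smaller than the wavelength, the Rayleigh limit of the scattering models mentioned above has a closed-form scattering cross section, σSCA = (2π⁵/3)(d⁶/λ⁴)((m²−1)/(m²+2))². A minimal sketch follows; it is illustrative only (the record's analyses used full Rayleigh-Mie and spheroid models), and the real refractive index is an assumed input:

```python
import math

def rayleigh_sigma_sca(d, wavelength, m):
    """Scattering cross section of a small dielectric sphere in the
    Rayleigh limit (valid only for d << wavelength). d and wavelength
    share one length unit; m is the (real) refractive index."""
    lorentz = (m ** 2 - 1.0) / (m ** 2 + 2.0)
    return (2.0 * math.pi ** 5 / 3.0) * d ** 6 / wavelength ** 4 * lorentz ** 2

def size_parameter(d, wavelength):
    """x = pi * d / wavelength; the Rayleigh regime requires x << 1."""
    return math.pi * d / wavelength
```

Doubling the wavelength from 532 nm to 1064 nm reduces σSCA by a factor of 16, the familiar λ⁻⁴ dependence that makes the two-wavelength comparison above informative.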

  2. Remote sensing of PM2.5 from ground-based optical measurements

    NASA Astrophysics Data System (ADS)

    Li, S.; Joseph, E.; Min, Q.

    2014-12-01

    Remote sensing of particulate matter concentration with aerodynamic diameter smaller than 2.5 um(PM2.5) by using ground-based optical measurements of aerosols is investigated based on 6 years of hourly average measurements of aerosol optical properties, PM2.5, ceilometer backscatter coefficients and meteorological factors from Howard University Beltsville Campus facility (HUBC). The accuracy of quantitative retrieval of PM2.5 using aerosol optical depth (AOD) is limited due to changes in aerosol size distribution and vertical distribution. In this study, ceilometer backscatter coefficients are used to provide vertical information of aerosol. It is found that the PM2.5-AOD ratio can vary largely for different aerosol vertical distributions. The ratio is also sensitive to mode parameters of bimodal lognormal aerosol size distribution when the geometric mean radius for the fine mode is small. Using two Angstrom exponents calculated at three wavelengths of 415, 500, 860nm are found better representing aerosol size distributions than only using one Angstrom exponent. A regression model is proposed to assess the impacts of different factors on the retrieval of PM2.5. Compared to a simple linear regression model, the new model combining AOD and ceilometer backscatter can prominently improve the fitting of PM2.5. The contribution of further introducing Angstrom coefficients is apparent. Using combined measurements of AOD, ceilometer backscatter, Angstrom coefficients and meteorological parameters in the regression model can get a correlation coefficient of 0.79 between fitted and expected PM2.5.

  3. Energy Characteristics of Small Metal Clusters Containing Vacancies

    NASA Astrophysics Data System (ADS)

    Reva, V. I.; Pogosov, V. V.

    2018-02-01

    Self-consistent calculations of the spatial distributions of electrons, potentials, and the energies of dissociation, cohesion, vacancy formation, and electron attachment, as well as the ionization potential, of solid Al_N and Na_N clusters (N ≥ 254) and clusters containing a vacancy (N ≥ 12) have been performed using the stable jellium model. The contribution of a monovacancy to the energy of the cluster, the size dependences of the characteristics, and their asymptotic forms have been considered. The calculations were performed on the SKIT-3 cluster at the Glushkov Institute of Cybernetics, National Academy of Sciences of Ukraine (Rpeak = 7.4 Tflops).

  4. Massive parallel 3D PIC simulation of negative ion extraction

    NASA Astrophysics Data System (ADS)

    Revel, Adrien; Mochalskyy, Serhiy; Montellano, Ivar Mauricio; Wünderlich, Dirk; Fantz, Ursel; Minea, Tiberiu

    2017-09-01

    The 3D PIC-MCC code ONIX is dedicated to modeling negative hydrogen/deuterium ion (NI) extraction and the co-extraction of electrons from radio-frequency driven, low pressure plasma sources. It provides valuable insight into the complex phenomena involved in the extraction process. In previous calculations, a mesh size larger than the Debye length was used, implying numerical electron heating. Important steps have been achieved in terms of computation performance and parallelization efficiency, allowing successful massively parallel calculations (4096 cores), imperative for resolving the Debye length. In addition, the numerical algorithms have been improved in terms of grid treatment, i.e., the electric field near the complex geometry boundaries (plasma grid) is calculated more accurately. The revised model preserves the full 3D treatment but can take advantage of a highly refined mesh. ONIX was used to investigate the role of the mesh size, the re-injection scheme for lost particles (extracted or wall-absorbed), and the electron thermalization process on the calculated extracted current and plasma characteristics. It is demonstrated that all numerical schemes give the same NI current distribution for extracted ions. Concerning the electrons, the pair-injection technique is found to be well adapted to simulate the sheath in front of the plasma grid.
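The mesh-size constraint above comes from the electron Debye length, λ_D = √(ε0·k_B·T_e/(n_e·e²)). A quick estimate, with illustrative plasma parameters rather than those of the ONIX source model:

```python
import math

EPS0 = 8.8541878128e-12     # vacuum permittivity, F/m
E_CHARGE = 1.602176634e-19  # elementary charge, C

def debye_length(te_ev, ne_m3):
    """Electron Debye length in metres. te_ev: electron temperature in eV
    (so k_B*T = te_ev * e joules), ne_m3: electron density in m^-3."""
    return math.sqrt(EPS0 * te_ev / (ne_m3 * E_CHARGE))

def cells_needed(box_m, te_ev, ne_m3):
    """Minimum grid cells per side so the cell size resolves lambda_D."""
    return math.ceil(box_m / debye_length(te_ev, ne_m3))
```

For example, a 2 eV, 10^17 m^-3 plasma gives λ_D ≈ 33 μm, so resolving a centimetre-scale region already requires on the order of 300 cells per side — in 3D, the scaling that motivates the massively parallel runs described above.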

  5. The effect of clustering on lot quality assurance sampling: a probabilistic model to calculate sample sizes for quality assessments

    PubMed Central

    2013-01-01

    Background Traditional Lot Quality Assurance Sampling (LQAS) designs assume observations are collected using simple random sampling. Alternatively, randomly sampling clusters of observations and then individuals within clusters reduces costs but decreases the precision of the classifications. In this paper, we develop a general framework for designing the cluster(C)-LQAS system and illustrate the method with the design of data quality assessments for the community health worker program in Rwanda. Results To determine sample size and decision rules for C-LQAS, we use the beta-binomial distribution to account for inflated risk of errors introduced by sampling clusters at the first stage. We present general theory and code for sample size calculations. The C-LQAS sample sizes provided in this paper constrain misclassification risks below user-specified limits. Multiple C-LQAS systems meet the specified risk requirements, but numerous considerations, including per-cluster versus per-individual sampling costs, help identify optimal systems for distinct applications. Conclusions We show the utility of C-LQAS for data quality assessments, but the method generalizes to numerous applications. This paper provides the necessary technical detail and supplemental code to support the design of C-LQAS for specific programs. PMID:24160725
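The beta-binomial model used above to capture cluster-induced overdispersion can be evaluated with standard-library functions alone. A minimal sketch (the function names and the acceptance-rule form are illustrative, not the paper's supplemental code):

```python
import math

def log_beta(a, b):
    """log of the Beta function via lgamma (numerically stable)."""
    return math.lgamma(a) + math.lgamma(b) - math.lgamma(a + b)

def betabinom_pmf(k, n, a, b):
    """Beta-binomial pmf: binomial sampling with cluster-level variation
    in the underlying proportion (overdispersion relative to binomial)."""
    return math.exp(math.log(math.comb(n, k))
                    + log_beta(k + a, n - k + b) - log_beta(a, b))

def prob_accept(n, d, a, b):
    """P(accept the lot) = P(X <= d) under the beta-binomial model."""
    return sum(betabinom_pmf(k, n, a, b) for k in range(d + 1))
```

Scanning `prob_accept` over candidate (n, d) pairs at the upper and lower quality thresholds is the basic mechanism for constraining both misclassification risks, as the C-LQAS design procedure does.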

  6. The effect of clustering on lot quality assurance sampling: a probabilistic model to calculate sample sizes for quality assessments.

    PubMed

    Hedt-Gauthier, Bethany L; Mitsunaga, Tisha; Hund, Lauren; Olives, Casey; Pagano, Marcello

    2013-10-26

    Traditional Lot Quality Assurance Sampling (LQAS) designs assume observations are collected using simple random sampling. Alternatively, randomly sampling clusters of observations and then individuals within clusters reduces costs but decreases the precision of the classifications. In this paper, we develop a general framework for designing the cluster(C)-LQAS system and illustrate the method with the design of data quality assessments for the community health worker program in Rwanda. To determine sample size and decision rules for C-LQAS, we use the beta-binomial distribution to account for inflated risk of errors introduced by sampling clusters at the first stage. We present general theory and code for sample size calculations. The C-LQAS sample sizes provided in this paper constrain misclassification risks below user-specified limits. Multiple C-LQAS systems meet the specified risk requirements, but numerous considerations, including per-cluster versus per-individual sampling costs, help identify optimal systems for distinct applications. We show the utility of C-LQAS for data quality assessments, but the method generalizes to numerous applications. This paper provides the necessary technical detail and supplemental code to support the design of C-LQAS for specific programs.

  7. Grain size indicators of sedimentary coupling between hillslopes and channels in a dryland basin

    NASA Astrophysics Data System (ADS)

    Hollings, Rory; Michealides, Katerina; Bliss Singer, Michael

    2017-04-01

    In dryland landscapes, heterogeneous and short-lived rainstorms generate runoff on slopes and streamflow in channels, which drive sediment movement from hillslope surfaces to channels and the transport of bed material sediment within channels. Long-term topographic evolution of drainage basins is partly determined by the relative balance of hillslope sediment supply to channels and the evacuation of channel sediment. However, it is not clear whether supply or evacuation is dominant over longer timescales (>>100 y) within dryland basins. One important indicator of local cumulative sediment transport is grain size (GS). On dryland hillslopes, grain size is governed over long timescales by weathering, but on short time scales (events to decades), is controlled by event-driven transport of the debris mantle. In the channel, GS reflects the input of hillslope sediment and the selective transport of particles along the bed. It is currently unknown how these two processes are expressed systematically within GS distributions on slopes and in channels within drylands, but this information could be useful to explain the history of the relative balance between hillslope sediment supply to channels and net sediment transport in the channel. We investigate this problem by combining field measurements of surface sediment grain size distributions in channels and on hillslopes with 1m LiDAR topography, >60 years of rainfall and channel discharge data from the Walnut Gulch Experimental Watershed (WGEW) in Arizona, and simple calculations of grain-sized based local stress distributions for various rainfall and discharge events. Hydrological scenarios of overland flow on hillslopes and channel flow conditions were derived from distributions of historic data at WGEW and were selected to reflect the wide range of storm intensities and durations, and channel discharges. 
1) We used three quartiles of the entire distribution of measured discharge values for 80 locations throughout the channel network to represent low, medium and high flows. 2) For rainfall we used three quartiles of the entire distribution of measured rainfall intensity and duration from 85 rain gauges spanning the basin, to derive low, medium and high rainfall durations. We then calculated the corresponding rainfall intensities based on four intensity-duration curves that were characteristic of different parts of the phase space of the measured data-points. 3) The derived rainfall intensities and durations were converted into hillslope overland flow using Coup2D (a hillslope rainfall-runoff model) for 44 hillslopes within WGEW for which we have GS and topographic data. We employ the median grain size (D50) to compare stress metrics on hillslopes and in channel for each location. Typically, low-order streams experience greater influxes of hillslope derived sediment than is evacuated by the channel. However, the main channel stem is characterised by sediment removal in most scenarios including low discharge, long duration rainfall, suggesting most hillslope supplied sediment is balanced by channel evacuation. Near tributary junctions, and close to the mouth of the basin there are fluctuations in net balance of sediment transport from evacuation- to supply-dominance for different scenarios. These fluctuations could influence channel bed GS distribution and longitudinal profile development.
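The median grain size (D50) used above as the stress metric can be read off a cumulative grain-size curve by interpolation. A minimal sketch, assuming linear interpolation in size (interpolation in log-size is also common and would be a one-line change):

```python
def d50(sizes_mm, cum_finer_pct):
    """Median grain size by linear interpolation on the cumulative
    grain-size curve. sizes_mm: ascending sieve/class sizes;
    cum_finer_pct: corresponding percent finer by mass."""
    for i in range(1, len(sizes_mm)):
        lo, hi = cum_finer_pct[i - 1], cum_finer_pct[i]
        if lo <= 50.0 <= hi:
            frac = (50.0 - lo) / (hi - lo)
            return sizes_mm[i - 1] + frac * (sizes_mm[i] - sizes_mm[i - 1])
    raise ValueError("50% not bracketed by the cumulative curve")
```

The same routine generalizes to any percentile (e.g. D84, often used in stress and roughness calculations) by replacing the 50% target.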

  8. Food photography II: use of food photographs for estimating portion size and the nutrient content of meals.

    PubMed

    Nelson, M; Atkinson, M; Darbyshire, S

    1996-07-01

    The aim of the present study was to determine the errors in the conceptualization of portion size using photographs. Male and female volunteers aged 18-90 years (n = 136) from a wide variety of social and occupational backgrounds completed 602 assessments of portion size in relation to food photographs. Subjects served themselves between four and six foods at one meal (breakfast, lunch or dinner). Portion sizes were weighed by the investigators at the time of serving, and any waste was weighed at the end of the meal. Within 5 min of the end of the meal, subjects were shown photographs depicting each of the foods just consumed. For each food there were eight photographs showing portion sizes in equal increments from the 5th to the 95th centile of the distribution of portion weights observed in The Dietary and Nutritional Survey of British Adults (Gregory et al. 1990). Subjects were asked to indicate on a visual analogue scale the size of the portion consumed in relation to the eight photographs. The nutrient contents of meals were estimated from food composition tables. There were large variations in the estimation of portion sizes from photographs. Butter and margarine portion sizes tended to be substantially overestimated. In general, small portion sizes tended to be overestimated, and large portion sizes underestimated. Older subjects overestimated portion size more often than younger subjects. Excluding butter and margarine, the nutrient content of meals based on estimated portion sizes was on average within +/- 7% of the nutrient content based on the amounts consumed, except for vitamin C (21% overestimate), and for subjects over 65 years (15-20% overestimate for energy and fat).
In subjects whose BMI was less than 25 kg/m2, the energy and fat contents of meals calculated from food composition tables and based on estimated portion size (excluding butter and margarine) were 5-10% greater than the nutrient content calculated using actual portion size, but for those with BMI 30 kg/m2 or over, the calculated energy and fat contents were underestimated by 2-5%. The correlation of the nutrient content of meals based on actual or estimated portion sizes ranged from 0.84 to 0.96. For energy and eight nutrients, between 69 and 89% of subjects were correctly classified into thirds of the distribution of intake using estimated portion size compared with intakes based on actual portion sizes. When 'average' portion sizes (the average weight of each of the foods which the subjects had served themselves) were used in place of the estimates based on photographs, the number of subjects correctly classified fell to between 60 and 79%. We report for the first time the error associated with conceptualization and the nutrient content of meals when using photographs to estimate food portion size. We conclude that photographs depicting a range of portion sizes are a useful aid to the estimation of portion size. Misclassification of subjects according to their nutrient intake from one meal is reduced when photographs are used to estimate portion size, compared with the use of average portions. Age, sex, BMI and portion size are all potentially important confounders when estimating food consumption or nutrient intake using photographs.
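The "correctly classified into thirds" measure above can be sketched as follows. This is an illustrative reconstruction, not the authors' procedure; the tertile cut-point convention and function names are this example's assumptions:

```python
def tertile(value, sorted_values):
    """Assign a value to thirds (0, 1, 2) of the distribution
    sorted_values (assumed sorted ascending)."""
    n = len(sorted_values)
    cut1 = sorted_values[n // 3]
    cut2 = sorted_values[2 * n // 3]
    return 0 if value < cut1 else (1 if value < cut2 else 2)

def pct_correctly_classified(actual, estimated):
    """Percent of subjects placed in the same third of the intake
    distribution by estimated intakes as by actual intakes."""
    sa, se = sorted(actual), sorted(estimated)
    agree = sum(tertile(a, sa) == tertile(e, se)
                for a, e in zip(actual, estimated))
    return 100.0 * agree / len(actual)
```

Under this metric, random estimates would agree about a third of the time, so the 69-89% figures reported above indicate substantial ranking information in the photograph-based estimates.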

  9. Burst nucleation by hot injection for size controlled synthesis of ε-cobalt nanoparticles.

    PubMed

    Zacharaki, Eirini; Kalyva, Maria; Fjellvåg, Helmer; Sjåstad, Anja Olafsen

    2016-01-01

    Reproducible growth of narrowly size-distributed ε-Co nanoparticles with a specific size requires full understanding of the role of the essential synthesis parameters of the applied synthesis method. For the hot injection methodology, significant discrepancies between obtained sizes and applied reaction conditions are reported. A systematic investigation of key synthesis parameters such as injection temperature and time, metal to surfactant ratio, and reaction holding time, in terms of their impact on the mean ([Formula: see text]mean) and median ([Formula: see text]median) particle diameter, using dichlorobenzene (DCB), Co2(CO)8 and oleic acid (OA) as the reactant matrix, has been lacking. A series of solution-based ε-Co nanoparticles was synthesized using the hot injection method. Suspensions and obtained particles were analyzed by DLS, ICP-OES, (synchrotron) XRD and TEM. Rietveld refinements were used for structural analysis. Mean and median particle diameters were calculated based on measurements of 250-500 particles for each synthesis. Bias-corrected 95% confidence intervals were calculated by bootstrapping for syntheses with three or four replicas. ε-Co NPs in the size range ~4-10 nm with a narrow size distribution are obtained via the hot injection method, using OA as the sole surfactant. The synthesis yield is typically ~75%, and the particles form stable colloidal solutions when redispersed in hexane. Reproducibility of the adopted synthesis procedure was confirmed on replicate syntheses. We describe in detail the effects of the essential synthesis parameters on the mean and median particle diameters.
The described synthesis procedure towards ε-Co nanoparticles (NPs) is concluded to be robust when controlling key synthesis parameters, giving targeted particle diameters with a narrow size distribution. We have identified two major synthesis parameters which control particle size, i.e., the metal to surfactant molar ratio and the injection temperature of the hot OA-DCB solution into which the cobalt precursor is injected. By increasing the metal to surfactant molar ratio, the mean particle diameter of the ε-Co NPs has been found to increase. Furthermore, an increase in the injection temperature of the hot OA-DCB solution into which the cobalt precursor is injected, results in a decrease in the mean particle diameter of the ε-Co NPs, when the metal to surfactant molar ratio [Formula: see text] is fixed at ~12.9.
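The bootstrap confidence intervals for particle diameter mentioned above (bias-corrected in the record) can be illustrated with the simpler percentile variant; the diameters, function name, and resample count here are illustrative:

```python
import random
import statistics

def bootstrap_ci_mean(diams, n_boot=5000, alpha=0.05, seed=1):
    """Percentile-bootstrap CI for the mean particle diameter. The record
    used a bias-corrected variant; the plain percentile form is shown for
    brevity. diams: measured diameters (e.g. from TEM)."""
    rng = random.Random(seed)
    n = len(diams)
    means = sorted(
        statistics.fmean(rng.choices(diams, k=n)) for _ in range(n_boot))
    lo = means[int(alpha / 2 * n_boot)]
    hi = means[int((1 - alpha / 2) * n_boot) - 1]
    return lo, hi
```

With 250-500 measured particles per synthesis, as above, the resulting interval on the mean diameter is typically much narrower than the particle size distribution itself.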

  10. Power calculation for overall hypothesis testing with high-dimensional commensurate outcomes.

    PubMed

    Chi, Yueh-Yun; Gribbin, Matthew J; Johnson, Jacqueline L; Muller, Keith E

    2014-02-28

    The complexity of system biology means that any metabolic, genetic, or proteomic pathway typically includes so many components (e.g., molecules) that statistical methods specialized for overall testing of high-dimensional and commensurate outcomes are required. While many overall tests have been proposed, very few have power and sample size methods. We develop accurate power and sample size methods and software to facilitate study planning for high-dimensional pathway analysis. With an account of any complex correlation structure between high-dimensional outcomes, the new methods allow power calculation even when the sample size is less than the number of variables. We derive the exact (finite-sample) and approximate non-null distributions of the 'univariate' approach to repeated measures test statistic, as well as power-equivalent scenarios useful to generalize our numerical evaluations. Extensive simulations of group comparisons support the accuracy of the approximations even when the ratio of number of variables to sample size is large. We derive a minimum set of constants and parameters sufficient and practical for power calculation. Using the new methods and specifying the minimum set to determine power for a study of metabolic consequences of vitamin B6 deficiency helps illustrate the practical value of the new results. Free software implementing the power and sample size methods applies to a wide range of designs, including one group pre-intervention and post-intervention comparisons, multiple parallel group comparisons with one-way or factorial designs, and the adjustment and evaluation of covariate effects. Copyright © 2013 John Wiley & Sons, Ltd.
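As a much-simplified univariate counterpart of the high-dimensional power methods above, the familiar two-sample z-test sample-size formula n = 2((z_{1-α/2} + z_{1-β})σ/δ)² per group can be sketched as follows (a textbook normal-approximation formula, not the paper's exact finite-sample method):

```python
import math
from statistics import NormalDist

def n_per_group(delta, sigma, alpha=0.05, power=0.80):
    """Per-group sample size for a two-sample z-test with common known SD.
    delta: difference in means to detect; sigma: common SD."""
    za = NormalDist().inv_cdf(1.0 - alpha / 2.0)
    zb = NormalDist().inv_cdf(power)
    return math.ceil(2.0 * ((za + zb) * sigma / delta) ** 2)
```

For a standardized effect of 1 SD this gives the classic 16 per group at 80% power; halving the effect size quadruples the requirement, to 63.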

  11. Classification of biological cells using a sound wave based flow cytometer

    NASA Astrophysics Data System (ADS)

    Strohm, Eric M.; Gnyawali, Vaskar; Van De Vondervoort, Mia; Daghighi, Yasaman; Tsai, Scott S. H.; Kolios, Michael C.

    2016-03-01

    A flow cytometer that uses sound waves to determine the size of biological cells is presented. In this system, a microfluidic device made of polydimethylsiloxane (PDMS) was developed to hydrodynamically flow-focus cells in single file through a target area. Integrated into the microfluidic device was an ultrasound transducer with a 375 MHz center frequency; aligned opposite the transducer was a pulsed 532 nm laser focused into the device by a 10x objective. Each passing cell was insonified with a high frequency ultrasound pulse and irradiated with the laser. The resulting ultrasound and photoacoustic waves from each cell were analyzed using signal processing methods, where features in the power spectra were compared to theoretical models to calculate the cell size. Two cell lines with different size distributions were used to test the system: acute myeloid leukemia (AML) cells and melanoma cells. Over 200 cells were measured using this system. The average calculated diameter of the AML cells was 10.4 +/- 2.5 μm using ultrasound and 11.4 +/- 2.3 μm using photoacoustics. The average diameter of the melanoma cells was 16.2 +/- 2.9 μm using ultrasound and 18.9 +/- 3.5 μm using photoacoustics. The cell sizes calculated using ultrasound and photoacoustic methods agreed with measurements using a Coulter Counter, where the AML cells were 9.8 +/- 1.8 μm and the melanoma cells were 16.0 +/- 2.5 μm. These results demonstrate a high-speed method of assessing cell size using sound waves, an alternative to traditional flow cytometry techniques.

  12. Maximum inflation of the type 1 error rate when sample size and allocation rate are adapted in a pre-planned interim look.

    PubMed

    Graf, Alexandra C; Bauer, Peter

    2011-06-30

    We calculate the maximum type 1 error rate of the pre-planned conventional fixed sample size test for comparing the means of independent normal distributions (with common known variance) that can result when sample size and allocation rate to the treatment arms are modified in an interim analysis. It is assumed that the experimenter fully exploits knowledge of the unblinded interim estimates of the treatment effects in order to maximize the conditional type 1 error rate. The 'worst-case' strategies require knowledge of the unknown common treatment effect under the null hypothesis. Although this is a rather hypothetical scenario, it may be approached in practice when using a standard control treatment for which precise estimates are available from historical data. The maximum inflation of the type 1 error rate is substantially larger than that derived by Proschan and Hunsberger (Biometrics 1995; 51:1315-1324) for design modifications applying balanced samples before and after the interim analysis. Corresponding upper limits for the maximum type 1 error rate are calculated for a number of situations arising from practical considerations (e.g. restricting the maximum sample size, not allowing the sample size to decrease, allowing only an increase in the sample size of the experimental treatment). The application is discussed for a motivating example. Copyright © 2011 John Wiley & Sons, Ltd.
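The inflation of the type 1 error rate under interim adaptation can be illustrated by Monte Carlo with a far simpler rule than the worst-case strategy analysed in the record: test at an interim look and again at the end, each at the nominal two-sided 5% level. All parameters below are illustrative:

```python
import math
import random

def two_look_type1(n_per_stage=50, crit_z=1.96, n_sim=20000, seed=7):
    """Monte Carlo type 1 error under H0 (mean 0, SD 1) when H0 is
    rejected if |z| > crit_z at EITHER the interim or the final look.
    This naive optional stopping is much weaker than the worst-case
    adaptation in the record, yet already inflates the error rate."""
    rng = random.Random(seed)
    rejections = 0
    for _ in range(n_sim):
        stage1 = [rng.gauss(0, 1) for _ in range(n_per_stage)]
        stage2 = [rng.gauss(0, 1) for _ in range(n_per_stage)]
        z1 = sum(stage1) / math.sqrt(n_per_stage)            # interim z
        z2 = sum(stage1 + stage2) / math.sqrt(2 * n_per_stage)  # final z
        if abs(z1) > crit_z or abs(z2) > crit_z:
            rejections += 1
    return rejections / n_sim
```

The simulated rate lands near the classical two-look value of about 0.083 rather than the nominal 0.05; data-driven sample-size and allocation changes, as analysed above, can push the inflation considerably higher.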

  13. Evaluation of suspended sediment concentrations in a hydropower reservoir by using a Laser In-Situ Scattering and Transmissometry instrument

    NASA Astrophysics Data System (ADS)

    Lizano, Laura; Haun, Stefan

    2014-05-01

    Sediment transported by rivers starts to settle when it enters a reservoir, owing to reduced flow velocities and turbulence. Reservoir sedimentation is a common problem today and eliminates about 1% of the existing worldwide storage capacity annually; depending on the climate conditions and the geology of the catchment area, this value can increase to 5% or higher. Consequences of reservoir deposition include the loss of storage capacity, the loss of flood-control benefits, and even blockage of intakes due to sediment accumulation in front of the structure. As a consequence, management tasks have to be planned and conducted to guarantee safe and economical reservoir operation. A major part of the sediment entering the reservoir is transported as suspended load, so accurate knowledge of the transport processes of these particles in the reservoir is advantageous for planning and predicting sustainable reservoir operation. Of special interest is the spatial distribution of grain sizes in the reservoir — for example, which grain sizes can be expected to enter the waterway and contribute most to turbine abrasion. The suspended sediment concentrations and the grain size distribution along the Sandillal reservoir in Costa Rica were measured in this study using a Laser In-Situ Scattering and Transmissometry instrument (LISST-SL). The instrument measures sediment concentrations as well as grain size distributions instantaneously (32 grain size classes in the range between 2.1 and 350 μm) at a frequency of 0.5 Hz. The measurements were performed at pre-specified transects along the reservoir, in order to assess the spatial distribution of the suspended sediment concentrations. The measurements were performed in vertical lines, at different depths, for a period of 60 seconds each. Additionally, the mean grain size distribution was calculated from the data for each measured point.
The measurements showed that the suspended sediment concentrations were low during the field campaign. However, they gave insight into the spatial distribution of the suspended sediments along the reservoir and at different depths. The measurements in front of the intake were especially interesting, since the concentration and sizes of the particles that will eventually enter the intake could be evaluated.
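For the fine fractions measured by the LISST-SL, the settling that drives the deposition described above is often approximated with Stokes' law, w_s = (ρ_p − ρ_f)·g·d²/(18μ). A sketch with illustrative quartz-in-water parameters (not values from the study):

```python
def stokes_settling_velocity(d_m, rho_p=2650.0, rho_f=1000.0,
                             g=9.81, mu=1.0e-3):
    """Stokes settling velocity (m/s) of a small sphere of diameter d_m
    (metres). Defaults: quartz density, water density and viscosity.
    Valid only for particle Reynolds numbers << 1 (fine silt and clay)."""
    return (rho_p - rho_f) * g * d_m ** 2 / (18.0 * mu)
```

A 10 μm quartz grain settles at roughly 0.09 mm/s under these assumptions; the quadratic dependence on diameter explains why the coarser end of the 2.1-350 μm range drops out quickly while the finest fractions can reach the intake.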

  14. Uncertainty Analysis via Failure Domain Characterization: Polynomial Requirement Functions

    NASA Technical Reports Server (NTRS)

    Crespo, Luis G.; Munoz, Cesar A.; Narkawicz, Anthony J.; Kenny, Sean P.; Giesy, Daniel P.

    2011-01-01

    This paper proposes an uncertainty analysis framework based on the characterization of the uncertain parameter space. This characterization enables the identification of worst-case uncertainty combinations and the approximation of the failure and safe domains with a high level of accuracy. Because these approximations are composed of subsets of readily computable probability, they enable the calculation of arbitrarily tight upper and lower bounds on the failure probability. A Bernstein expansion approach is used to size hyper-rectangular subsets, while a sum-of-squares programming approach is used to size quasi-ellipsoidal subsets. These methods are applicable to requirement functions whose functional dependency on the uncertainty is a known polynomial. Among the most prominent features of the methodology are the substantial desensitization of the calculations to the assumed uncertainty model (i.e., the probability distribution describing the uncertainty), as well as the accommodation of changes in such a model with a practically insignificant amount of computational effort.

  15. Probing Sizes and Shapes of Nobelium Isotopes by Laser Spectroscopy

    NASA Astrophysics Data System (ADS)

    Raeder, S.; Ackermann, D.; Backe, H.; Beerwerth, R.; Berengut, J. C.; Block, M.; Borschevsky, A.; Cheal, B.; Chhetri, P.; Düllmann, Ch. E.; Dzuba, V. A.; Eliav, E.; Even, J.; Ferrer, R.; Flambaum, V. V.; Fritzsche, S.; Giacoppo, F.; Götz, S.; Heßberger, F. P.; Huyse, M.; Kaldor, U.; Kaleja, O.; Khuyagbaatar, J.; Kunz, P.; Laatiaoui, M.; Lautenschläger, F.; Lauth, W.; Mistry, A. K.; Minaya Ramirez, E.; Nazarewicz, W.; Porsev, S. G.; Safronova, M. S.; Safronova, U. I.; Schuetrumpf, B.; Van Duppen, P.; Walther, T.; Wraith, C.; Yakushev, A.

    2018-06-01

    Until recently, ground-state nuclear moments of the heaviest nuclei could only be inferred from nuclear spectroscopy, where model assumptions are required. Laser spectroscopy in combination with modern atomic structure calculations is now able to probe these moments directly, in a comprehensive and nuclear-model-independent way. Here we report on unique access to the differential mean-square charge radii of 252,253,254No, and therefore to changes in nuclear size and shape. State-of-the-art nuclear density functional calculations describe well the changes in nuclear charge radii in the region of the heavy actinides, indicating an appreciable central depression in the deformed proton density distribution in 252,254No isotopes. Finally, the hyperfine splitting of 253No was evaluated, enabling a complementary measure of its (quadrupole) deformation, as well as an insight into the neutron single-particle wave function via the nuclear spin and magnetic moment.

  16. IR-Improved DGLAP-CS Theory

    DOE PAGES

    Ward, B. F. L.

    2008-01-01

    We show that it is possible to improve the infrared aspects of the standard treatment of the DGLAP-CS evolution theory to take into account a large class of higher-order corrections that significantly improve the precision of the theory for any given level of fixed-order calculation of its respective kernels. We illustrate the size of the effects we resum using the moments of the parton distributions.

  17. Body size and the division of niche space: food and predation differentially shape the distribution of Serengeti grazers.

    PubMed

    Hopcraft, J Grant C; Anderson, T Michael; Pérez-Vila, Saleta; Mayemba, Emilian; Olff, Han

    2012-01-01

    1. Theory predicts that small grazers are regulated by the digestive quality of grass, while large grazers extract sufficient nutrients from low-quality forage and are regulated by its abundance instead. In addition, predation potentially affects populations of small grazers more than large grazers, because predators have difficulty capturing and handling large prey. 2. We analyse the spatial distribution of five grazer species of different body size in relation to gradients of food availability and predation risk. Specifically, we investigate how the quality of grass, the abundance of grass biomass and the associated risks of predation affect the habitat use of small, intermediate and large savanna grazers at a landscape level. 3. Resource selection functions of five mammalian grazer species surveyed over a 21-year period in Serengeti are calculated using logistic regressions. Variables included in the analyses are grass nitrogen, rainfall, topographic wetness index, woody cover, drainage lines, landscape curvature, water and human habitation. Structural equation modelling (SEM) is used to aggregate predictor variables into 'composites' representing food quality, food abundance and predation risk. Subsequently, SEM is used to investigate species' habitat use, defined as their recurrence in 5 × 5 km cells across repeated censuses. 4. The distribution of small grazers is constrained by predation and food quality, whereas the distribution of large grazers is relatively unconstrained. The distribution of the largest grazer (African buffalo) is primarily associated with forage abundance but not predation risk, while the distributions of the smallest grazers (Thomson's gazelle and Grant's gazelle) are associated with high grass quality and negatively with the risk of predation. The distributions of intermediate sized grazers (Coke's hartebeest and topi) suggest they optimize access to grass biomass of sufficient quality in relatively predator-safe areas. 5. 
The results illustrate how top-down (vegetation-mediated predation risk) and bottom-up factors (biomass and nutrient content of vegetation) predictably contribute to the division of niche space for herbivores that vary in body size. Furthermore, diverse grazing assemblages are composed of herbivores of many body sizes (rather than similar body sizes), because these herbivores best exploit the resources of different habitat types. © 2011 The Authors. Journal of Animal Ecology © 2011 British Ecological Society.

  18. Documentation of Measles Elimination in Iran: Evidences from 2012 to 2014.

    PubMed

    Karami, Manoochehr; Zahraei, Seyed Mohsen; Sabouri, Azam; Soltanshahi, Rambod; Biderafsh, Azam; Piri, Naser; Lee, Jong-Koo

    2017-08-05

    Documentation of achieving the goal of measles elimination, to justify it to international organizations including the WHO, is a priority for public health authorities. This descriptive study aimed to address the status of Iran in achieving the measles elimination goal from 2012 to 2014. Data on measles outbreaks were extracted from the national notifiable measles surveillance system in Iran from 2012 to 2014. The required documents regarding the achievement of measles elimination, including the effective reproduction number (R) and the distribution of outbreak sizes, were addressed. R was calculated as 1 - P, where P is the proportion of cases that were imported. The distribution of measles outbreak sizes was described using descriptive statistics to show their magnitudes. The proportion of large outbreaks (more than 10 cases) was considered a proxy for the R value. The total number of measles cases was 232 (including 186 outbreak-related cases) in 2012 and 142 (including 108 outbreak-related cases) in 2014. The size distribution of the outbreaks that occurred during that period indicated that there were 37 outbreaks with three or more cases. The R value was 0.87 in 2012 and 0.76 in 2014. According to the magnitude of the effective reproduction number and the distribution of outbreak sizes, measles has been eliminated in Iran. However, it is necessary to consider the potential for renewed endemic activity of measles because of unauthorized immigration.
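The estimator described in the abstract (R = 1 - P, with P the proportion of imported cases) is simple enough to state in code. A minimal sketch; the function name and the counts in the test are illustrative, not taken from the study's case data:

```python
def effective_reproduction_number(imported_cases, total_cases):
    """Estimate the effective reproduction number R as 1 - P,
    where P is the proportion of cases that were imported."""
    if total_cases <= 0:
        raise ValueError("total_cases must be positive")
    return 1.0 - imported_cases / total_cases
```

For example, 13 imported cases among 100 total would give R = 1 - 0.13 = 0.87, the same magnitude as the 2012 value reported above; R below 1 indicates that transmission chains are not self-sustaining.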

  19. Generation of a focused hollow beam by a 2π-phase plate and its application in atom or molecule optics

    NASA Astrophysics Data System (ADS)

    Xia, Yong; Yin, Jianping

    2005-03-01

    We propose a new scheme to generate a focused hollow beam (FHB) by use of an azimuthally distributed 2π-phase plate and a convergent thin lens. From Fresnel diffraction theory, we calculate the intensity distributions of the FHB in free propagation space and study the relationships between the waist w0 of the incident Gaussian beam (or the focal length f of the lens) and the dark spot size (or the beam radius) at the focal point, and between the maximum radial intensity of the FHB and the dark spot size (or the beam radius) at the focal point. Our study shows that the FHB can be used to cool and trap neutral atoms by intensity-gradient-induced Sisyphus cooling, owing to the extremely high intensity gradient of the FHB near the focal point, or to guide and focus a cold molecular beam. We also calculate the optical potential of the blue-detuned FHB for 85Rb atoms and find that, in the focal plane, the smaller the dark spot size of the FHB, the higher the optical potential and the greater the corresponding optimal detuning δ. These qualities benefit an atomic lens: they make it possible to achieve higher resolution and they help reduce the spontaneous photon-scattering effect on atoms in the FHB.

  20. Granule fraction inhomogeneity of calcium carbonate/sorbitol in roller compacted granules.

    PubMed

    Bacher, C; Olsen, P M; Bertelsen, P; Sonnergaard, J M

    2008-02-12

    The granule fraction inhomogeneity of roller compacted granules was examined on mixtures of three different morphologic forms of calcium carbonate and three particle sizes of sorbitol. The granule fraction inhomogeneity was determined from the distribution of the calcium carbonate in each of the 10 size fractions between 0 and 2000 µm and by calculating the demixing potential. Significant inhomogeneous occurrence of calcium carbonate in the size fractions was demonstrated, depending mostly on the particle sizes of sorbitol but also on the morphological form of calcium carbonate. The heterogeneous distribution of calcium carbonate was related to the decrease in compactibility of roller compacted granules in comparison to the ungranulated materials. This phenomenon was explained by a mechanism in which fracturing of the ribbon during granulation occurred at the weakest interparticulate bonds (the calcium carbonate-calcium carbonate bonds) and consequently exposed the weakest areas of bond formation on the surface of the granules. Accordingly, the non-uniform allocation of the interparticulate attractive forces in a tablet would lower its compactibility. Furthermore, the ability of the powder to agglomerate in the roller compactor was demonstrated to be related to its ability to be compacted into a tablet; thus the most compactable calcium carbonate and the smallest-sized sorbitol improved the homogeneity by decreasing the demixing potential.

  1. Temporal and spatial stability of red-tailed hawk territories in the Luquillo Experimental Forest, Puerto Rico

    USGS Publications Warehouse

    Boal, C.W.; Snyder, H.A.; Bibles, Brent D.; Estabrook, T.S.

    2003-01-01

    We mapped Red-tailed Hawk (Buteo jamaicensis) territories in the Luquillo Experimental Forest (LEF) of Puerto Rico in 1998. We combined our 1998 data with that collected during previous studies of Red-tailed Hawks in the LEF to examine population numbers and spatial stability of territorial boundaries over a 26-yr period. We also investigated potential relationships between Red-tailed Hawk territory sizes and topographic and climatic factors. Mean size of 16 defended territories during 1998 was 124.3 ± 12.0 ha, which was not significantly different from our calculations of mean territory sizes derived from data collected in 1974 and 1984. Aspect and slope influenced territory size with the smallest territories having high slope and easterly aspects. Territory size was small compared to that reported for other parts of the species' range. In addition, there was remarkably little temporal change in the spatial distribution, area, and boundaries of Red-tailed Hawk territories among the study periods. Further, there was substantial boundary overlap (21-27%) between defended territories among the different study periods. The temporal stability of the spatial distribution of Red-tailed Hawk territories in the study area leads us to believe the area might be at or near saturation.

  2. Theoretical investigation on nanoparticle concentrations in optoelectrofluidic chip based on diffusion, convection, and migration

    NASA Astrophysics Data System (ADS)

    Hu, Sheng; Lv, Jiangtao; Si, Guangyuan

    2016-10-01

    A numerical model and simulation of an optoelectrofluidic chip are presented in this article. Both the dielectrophoretic and electroosmotic forces attracting nano-sized particles are studied through the diffusion, convection, and migration equations. For the nano-sized particles, a protein with a radius of 3.6 nm is taken as the target particle. The frequency dependence of electroosmosis is calculated over the range 10² Hz to 10⁸ Hz; electroosmosis provides a much stronger force for enriching proteins than dielectrophoresis (DEP). Meanwhile, the effect of the induced light pattern size, which significantly affects the concentration distribution, is simulated. In the end, the concentration curves verify that the optoelectrofluidic chip is capable of manipulating and assembling suspended submicron particles.

  3. Combustion of PTFE: The Effects of Gravity and Pigmentation on Ultrafine Particle Generation

    NASA Technical Reports Server (NTRS)

    McKinnon, J. Thomas; Srivastava, Rajiv; Todd, Paul

    1997-01-01

    Ultrafine particles generated during polymer thermodegradation are a major health hazard, owing to their unique pathway of processing in the lung. This hazard in manned spacecraft is poorly understood, because the particulate products of polymer thermodegradation are generated under low-gravity conditions. Particulates generated from the degradation of polytetrafluoroethylene (PTFE) insulation on 20 AWG copper wire (representative of spacecraft applications) under intense ohmic heating were studied in terrestrial gravity and microgravity. Microgravity tests were done in a 1.2-second drop tower at the Colorado School of Mines (CSM). Thermophoretic sampling was used for particulate collection. Transmission electron microscopy (TEM) and scanning transmission electron microscopy (STEM) were used to examine the smoke particulates. Image software was used to calculate the particle size distribution. In addition to gravity, the color of the PTFE insulation has an overwhelming effect on the size, shape, and morphology of the particulate. Nanometer-sized primary particles were found in all cases, and aggregation and size distribution were dependent on both color and gravity; higher aggregation occurred in low gravity. Particulates from white, black, red, and yellow PTFE insulations were studied. Elemental analysis of the particulates shows the presence of inorganic pigments.

  4. Ensemble modeling of very small ZnO nanoparticles.

    PubMed

    Niederdraenk, Franziska; Seufert, Knud; Stahl, Andreas; Bhalerao-Panajkar, Rohini S; Marathe, Sonali; Kulkarni, Sulabha K; Neder, Reinhard B; Kumpf, Christian

    2011-01-14

    The detailed structural characterization of nanoparticles is a very important issue since it enables a precise understanding of their electronic, optical and magnetic properties. Here we introduce a new method for modeling the structure of very small particles by means of powder X-ray diffraction. Using thioglycerol-capped ZnO nanoparticles with a diameter of less than 3 nm as an example we demonstrate that our ensemble modeling method is superior to standard XRD methods such as Rietveld refinement. Besides fundamental properties (size, anisotropic shape and atomic structure), more sophisticated properties like imperfections in the lattice, a size distribution as well as strain and relaxation effects in the particles and, in particular, at their surface (surface relaxation effects) can be obtained. Ensemble properties, i.e., distributions of the particle size and other properties, can also be investigated, which makes this method superior to imaging techniques like (high resolution) transmission electron microscopy or atomic force microscopy, in particular for very small nanoparticles. For the particles under study an excellent agreement of calculated and experimental X-ray diffraction patterns could be obtained with an ensemble of anisotropic polyhedral particles of three dominant sizes, wurtzite structure and a significant relaxation of Zn atoms close to the surface.

  5. Stereotactic, Single-Dose Irradiation of Lung Tumors: A Comparison of Absolute Dose and Dose Distribution Between Pencil Beam and Monte Carlo Algorithms Based on Actual Patient CT Scans

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chen Huixiao; Lohr, Frank; Fritz, Peter

    2010-11-01

    Purpose: Dose calculation based on pencil beam (PB) algorithms has shortcomings in predicting dose in tissue heterogeneities. The aim of this study was to compare dose distributions of clinically applied non-intensity-modulated 15-MV stereotactic body radiotherapy plans for lung lesions between voxel Monte Carlo (XVMC) calculation and PB calculation. Methods and Materials: To validate XVMC, one treatment plan was verified in an inhomogeneous thorax phantom with EDR2 film (Eastman Kodak, Rochester, NY). Both measured and calculated (PB and XVMC) dose distributions were compared regarding profiles and isodoses. Then, 35 lung plans originally created for clinical treatment by PB calculation with the Eclipse planning system (Varian Medical Systems, Palo Alto, CA) were recalculated by XVMC (investigational implementation in PrecisePLAN [Elekta AB, Stockholm, Sweden]). Clinically relevant dose-volume parameters for target and lung tissue were compared and analyzed statistically. Results: The XVMC calculation agreed well with film measurements (<1% difference in lateral profile), whereas the deviation between PB calculation and film measurements was up to +15%. On analysis of 35 clinical cases, the mean dose, minimum dose, and coverage dose value for 95% volume of the gross tumor volume were 1.14 ± 1.72 Gy, 1.68 ± 1.47 Gy, and 1.24 ± 1.04 Gy lower by XVMC compared with PB, respectively (prescription dose, 30 Gy). The lung volume covered by the 9 Gy isodose was 2.73% ± 3.12% higher when calculated by XVMC compared with PB. The largest differences were observed for small lesions circumferentially encompassed by lung tissue. Conclusions: Pencil beam dose calculation overestimates dose to the tumor and underestimates lung volumes exposed to a given dose consistently for 15-MV photons. The degree of difference between XVMC and PB is tumor size and location dependent. Therefore, XVMC calculation is helpful to further optimize treatment planning.

  6. Thermoluminescent dosimetry in electron beams: energy dependence.

    PubMed

    Robar, V; Zankowski, C; Olivares Pla, M; Podgorsak, E B

    1996-05-01

    The response of thermoluminescent dosimeters to electron irradiations depends on the radiation dose, mean electron energy at the position of the dosimeter in phantom, and the size of the dosimeter. In this paper the semi-empirical expression proposed by Holt et al. [Phys. Med. Biol. 20, 559-570 (1975)] is combined with the calculated electron dose fraction to determine the thermoluminescent dosimetry (TLD) response as a function of the mean electron energy and the dosimeter size. The electron and photon dose fractions, defined as the relative contributions of electrons and bremsstrahlung photons to the total dose for a clinical electron beam, are calculated with Monte Carlo techniques using EGS4. Agreement between the calculated and measured TLD response is very good. We show that the considerable reduction in TLD response per unit dose at low electron energies, i.e., at large depths in phantom, is offset by an ever-increasing relative contribution of bremsstrahlung photons to the total dose of clinical electron beams. This renders the TLD sufficiently reliable for dose measurements over the entire electron depth dose distribution despite the dependence of the TLD response on electron beam energy.

  7. Evaluation of hydraulic conductivities calculated from multi-port permeameter measurements

    USGS Publications Warehouse

    Wolf, Steven H.; Celia, Michael A.; Hess, Kathryn M.

    1991-01-01

    A multiport permeameter was developed for use in estimating hydraulic conductivity over intact sections of aquifer core using the core liner as the permeameter body. Six cores obtained from one borehole through the upper 9 m of a stratified glacial-outwash aquifer were used to evaluate the reliability of the permeameter. Radiographs of the cores were used to assess core integrity and to locate 5- to 10-cm sections of similar grain size for estimation of hydraulic conductivity. After extensive testing of the permeameter, hydraulic conductivities were determined for 83 sections of the six cores. Other measurement techniques included permeameter measurements on repacked sections of core, estimates based on grain-size analyses, and estimates based on borehole flowmeter measurements. Permeameter measurements of 33 sections of core that had been extruded, homogenized, and repacked did not differ significantly from the original measurements. Hydraulic conductivities estimated from grain-size distributions were slightly higher than those calculated from permeameter measurements; the significance of the difference depended on the estimating equation used. Hydraulic conductivities calculated from field measurements, using a borehole flowmeter in the borehole from which the cores were extracted, were significantly higher than those calculated from laboratory measurements and more closely agreed with independent estimates of hydraulic conductivity based on tracer movement near the borehole. This indicates that hydraulic conductivities based on laboratory measurements of core samples may underestimate actual field hydraulic conductivities in this type of stratified glacial-outwash aquifer.
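The abstract notes that the significance of the grain-size-based estimates "depended on the estimating equation used" without naming the equations. One widely used example is the Hazen approximation, sketched here purely as an illustration (the coefficient, units, and function name follow the common textbook form and are not taken from this study):

```python
def hazen_conductivity_cm_per_s(d10_mm, c=100.0):
    """Hazen approximation: K [cm/s] ~= C * (d10 [cm])**2, where d10 is
    the grain diameter at which 10% of the sample is finer by weight.
    C is an empirical coefficient, roughly 40-150 depending on sorting;
    100 is a common default for clean, well-sorted sand."""
    if d10_mm <= 0:
        raise ValueError("d10 must be positive")
    d10_cm = d10_mm / 10.0  # convert mm to cm
    return c * d10_cm ** 2
```

Because K scales with the square of d10 and C spans a wide empirical range, different estimating equations (or coefficients) can easily shift the estimate by the factor-level differences the abstract describes.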

  8. Assessing the role of detrital zircon sorting on provenance interpretations in an ancient fluvial system using paleohydraulics - Permian Cutler Group, Paradox Basin, Utah and Colorado

    NASA Astrophysics Data System (ADS)

    Findlay, C. P., III; Ewing, R. C.; Perez, N. D.

    2017-12-01

    Detrital zircon age signatures used in provenance studies are assumed to be representative of the entire catchments from which the sediment was derived, but the extent to which hydraulic sorting can bias provenance interpretations is poorly constrained. Sediment and mineral sorting occurs with changes in hydraulic conditions driven by both allogenic and autogenic processes. Zircon is sorted from less dense minerals because of the difference in density, and any age dependence on zircon size could therefore bias provenance interpretations. In this study, a coupled paleohydraulic and geochemical provenance approach is used to identify changes in paleohydraulic conditions and relate them to spatial variations in provenance signatures from samples collected along an approximately time-correlative source-to-sink pathway in the Permian Cutler Group of the Paradox Basin. Samples proximal to the uplift record a paleoflow direction to the southwest. In the medial basin, paleocurrent directions indicate that salt movement caused fluvial pathways to divert to the north and northwest on the flanks of anticlines. Channel depth, flow velocity, and discharge, calculated from field measurements of grain size and of dune and bar cross-stratification, indicate that the competency of the fluvial system decreased from the proximal to the medial basin by up to a factor of 12. Based upon the paleohydraulic calculations, zircon size fractionation would occur along the transect such that the larger zircons are removed from the system before reaching the medial basin. Analysis of the size and age distributions of zircons from the proximal and distal fluvial system of the Cutler Group tests whether this hydraulic sorting affects the expected Uncompahgre Uplift age distribution.

  9. Ablation experiment and threshold calculation of titanium alloy irradiated by ultra-fast pulse laser

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zheng, Buxiang; Jiang, Gedong; Wang, Wenjun, E-mail: wenjunwang@mail.xjtu.edu.cn

    The interaction between an ultra-fast pulse laser and a material's surface has become a research hotspot in recent years. Micromachining of titanium alloy with an ultra-fast pulse laser is a very important research direction, and investigating the ablation threshold of titanium alloy irradiated by ultra-fast pulse lasers has significant theoretical and practical value. For titanium alloy irradiated by a picosecond pulse laser at wavelengths of 1064 nm and 532 nm, the surface morphology and feature sizes, including ablation crater width (i.e. diameter), ablation depth, ablation area, ablation volume, and single-pulse ablation rate, were studied, and their ablation distributions were obtained. The experimental results show that titanium alloy irradiated by the infrared picosecond pulse laser at 1064 nm exhibits better ablation morphology than with the green picosecond pulse laser at 532 nm. At low energy density, the feature sizes depend approximately linearly on the laser pulse energy density and increase monotonically with it. The rate of increase in the feature sizes slows down gradually once the energy density reaches a certain value, with the trends gradually saturating at relatively high energy density. Based on the linear relation between the laser pulse energy density and the crater area of the titanium alloy surface, and the Gaussian distribution of the laser intensity over the beam cross section, the ablation threshold of titanium alloy irradiated by an ultra-fast pulse laser was calculated to be about 0.109 J/cm².
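A threshold extracted from crater size versus pulse energy density under a Gaussian intensity profile is conventionally computed with a Liu plot, in which the squared crater diameter is regressed against the logarithm of peak fluence: D² = 2·w0²·ln(F/Fth), so the fit's zero-crossing gives the threshold Fth and its slope gives the beam waist w0. The sketch below shows that standard analysis under those assumptions; it is not necessarily the authors' exact procedure, and the variable names are illustrative:

```python
import math

def liu_threshold(fluences_j_cm2, crater_diameters_um):
    """Liu-plot analysis for a Gaussian beam: least-squares fit of
    D^2 = 2*w0^2 * ln(F/Fth) using D^2 against ln(F).
    Returns (Fth, w0) in the units of the inputs."""
    xs = [math.log(f) for f in fluences_j_cm2]
    ys = [d * d for d in crater_diameters_um]
    n = len(xs)
    xbar, ybar = sum(xs) / n, sum(ys) / n
    slope = (sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys))
             / sum((x - xbar) ** 2 for x in xs))
    intercept = ybar - slope * xbar
    w0 = math.sqrt(slope / 2.0)           # slope = 2 * w0^2
    fth = math.exp(-intercept / slope)    # D^2 reaches 0 at F = Fth
    return fth, w0
```

One advantage of this construction is that Fth is obtained by extrapolation, so it can be measured even when the lowest fluence actually applied is well above threshold.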

  10. A simple approach to power and sample size calculations in logistic regression and Cox regression models.

    PubMed

    Vaeth, Michael; Skovlund, Eva

    2004-06-15

    For a given regression problem it is possible to identify a suitably defined equivalent two-sample problem such that the power or sample size obtained for the two-sample problem also applies to the regression problem. For a standard linear regression model the equivalent two-sample problem is easily identified, but for generalized linear models and for Cox regression models the situation is more complicated. An approximately equivalent two-sample problem may, however, also be identified here. In particular, we show that for logistic regression and Cox regression models the equivalent two-sample problem is obtained by selecting two equally sized samples for which the parameters differ by a value equal to the slope times twice the standard deviation of the independent variable and further requiring that the overall expected number of events is unchanged. In a simulation study we examine the validity of this approach to power calculations in logistic regression and Cox regression models. Several different covariate distributions are considered for selected values of the overall response probability and a range of alternatives. For the Cox regression model we consider both constant and non-constant hazard rates. The results show that in general the approach is remarkably accurate even in relatively small samples. Some discrepancies are, however, found in small samples with few events and a highly skewed covariate distribution. Comparison with results based on alternative methods for logistic regression models with a single continuous covariate indicates that the proposed method is at least as good as its competitors. The method is easy to implement and therefore provides a simple way to extend the range of problems that can be covered by the usual formulas for power and sample size determination. Copyright 2004 John Wiley & Sons, Ltd.
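The reduction described above can be sketched numerically: the regression slope and the covariate's standard deviation define the group difference of the equivalent two-sample problem, whose power follows from a standard normal approximation. This is a hedged illustration of the idea only; the paper's full method additionally holds the overall expected number of events fixed, which is omitted here, and all names are illustrative:

```python
import math
from statistics import NormalDist

def equivalent_delta(slope, sd_covariate):
    """Group difference in the equivalent two-sample problem:
    the slope times twice the SD of the independent variable,
    as stated in the abstract."""
    return slope * 2.0 * sd_covariate

def power_two_sample(delta, sigma, n_per_group, alpha=0.05):
    """Approximate power of a two-sided z-test comparing two equally
    sized samples with common SD sigma and mean difference delta."""
    z = NormalDist().inv_cdf(1.0 - alpha / 2.0)
    ncp = abs(delta) / (sigma * math.sqrt(2.0 / n_per_group))
    return NormalDist().cdf(ncp - z)
```

For example, a slope of 0.3 with a covariate SD of 2.0 gives an equivalent group difference of 1.2 on the linear-predictor scale, which can then be fed into the two-sample power formula.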

  11. Snow grain size and shape distributions in northern Canada

    NASA Astrophysics Data System (ADS)

    Langlois, A.; Royer, A.; Montpetit, B.; Roy, A.

    2016-12-01

    Pioneer snow work in the 1970s and 1980s proposed new approaches to retrieve snow depth and water equivalent from space using passive microwave brightness temperatures. Numerous studies have since led to the realization that microwave approaches depend strongly on snow grain morphology (size and shape), which until recently was poorly parameterized, leading to strong biases in the retrieval calculations. The resulting uncertainties in space-based retrievals, together with the development of complex thermodynamic multilayer snow and emission models, have motivated work on new approaches to quantify snow grain metrics, given the lack of field measurements arising from the sampling constraints on such variables. This presentation focuses on the largely unknown distribution of snow grain sizes. Our group developed a new approach to the 'traditional' measurement of snow grain metrics in which micro-photographs of snow grains are taken under angular directional LED lighting. The projected shadows are digitized so that a 3D reconstruction of the snow grains is possible. This device has been used in several field campaigns, and over the years a very large dataset has been collected; it is presented in this paper. A total of 588 snow photographs from 107 snowpits were collected during the European Space Agency (ESA) Cold Regions Hydrology high-resolution Observatory (CoReH2O) mission concept field campaign in Churchill, Manitoba, Canada (January-April 2010). Each of the 588 photographs was classified as depth hoar, rounded grains, facets, or precipitation particles. A total of 162,516 snow grains were digitized across the 588 photographs, averaging 263 grains per photo. Results include distribution histograms for 5 'size' metrics (projected area, perimeter, equivalent optical diameter, minimum axis, and maximum axis) and 2 'shape' metrics (eccentricity, major/minor axis ratio). Different cumulative histograms are found among the grain types, and fits based on the kernel distribution function are proposed. Finally, a comparison with the specific surface area (SSA) derived from reflectance values using the InfraRed Integrating Sphere (IRIS) highlights different statistical power-law fits for the 5 'size' metrics.
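The 'size' and 'shape' metrics listed above are standard outputs of 2D image analysis. As a generic sketch of how such metrics are commonly computed from a digitized grain projection (the function, inputs, and metric definitions follow common image-analysis conventions, not the authors' software):

```python
import math

def grain_metrics(area, perimeter, major_axis, minor_axis):
    """Common 2D size/shape metrics for one digitized grain projection.
    All lengths must be in consistent units (e.g. mm, mm^2)."""
    # diameter of the circle with the same projected area
    d_eq = 2.0 * math.sqrt(area / math.pi)
    # 4*pi*A / P^2: equals 1.0 for a perfect circle, smaller otherwise
    circularity = 4.0 * math.pi * area / perimeter ** 2
    axis_ratio = major_axis / minor_axis
    # eccentricity of the ellipse with the same axes (0 for a circle)
    eccentricity = math.sqrt(1.0 - (minor_axis / major_axis) ** 2)
    return {"equivalent_diameter": d_eq, "circularity": circularity,
            "axis_ratio": axis_ratio, "eccentricity": eccentricity}
```

Applied per grain over the 162,516 digitized grains, per-metric histograms like those described above follow directly from collecting these values by grain type.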

  12. Investigation of photon beam models in heterogeneous media of modern radiotherapy.

    PubMed

    Ding, W; Johnston, P N; Wong, T P Y; Bubb, I F

    2004-06-01

    This study investigates the performance of photon beam models in dose calculations involving heterogeneous media in modern radiotherapy. Three dose calculation algorithms implemented in the CMS FOCUS treatment planning system have been assessed and validated using ionization chambers, thermoluminescent dosimeters (TLDs) and film. The algorithms include the multigrid superposition (MGS) algorithm, the fast Fourier transform convolution (FFTC) algorithm and the Clarkson algorithm. The heterogeneous phantoms used in the study consist of air cavities, lung analogue and an anthropomorphic phantom. Depth dose distributions along the central beam axis for 6 MV and 10 MV photon beams with field sizes of 5 cm x 5 cm and 10 cm x 10 cm were measured in the air cavity phantoms and lung analogue phantom. Point dose measurements were performed in the anthropomorphic phantom. Calculated results with the three dose calculation algorithms were compared with measured results. In the air cavity phantoms, the maximum dose differences between the algorithms and the measurements were found at the distal surface of the air cavity with a 10 MV photon beam and a 5 cm x 5 cm field size. The differences were 3.8%, 24.9% and 27.7% for the MGS, FFTC and Clarkson algorithms, respectively. Experimental measurements of the secondary electron build-up range beyond the air cavity showed an increase with decreasing field size, increasing energy and increasing air cavity thickness. The maximum dose differences in the lung analogue with a 5 cm x 5 cm field size were found to be 0.3%, 4.9% and 6.9% for the MGS, FFTC and Clarkson algorithms with a 6 MV photon beam and 0.4%, 6.3% and 9.1% with a 10 MV photon beam, respectively. In the anthropomorphic phantom, the dose differences between calculations using the MGS algorithm and measurements with TLD rods were less than ±4.5% for 6 MV and 10 MV photon beams with a 10 cm x 10 cm field size and for a 6 MV photon beam with a 5 cm x 5 cm field size, and within ±7.5% for 10 MV with a 5 cm x 5 cm field size. The FFTC and Clarkson algorithms overestimate doses at all dose points in the lung of the anthropomorphic phantom. In conclusion, the MGS is the most accurate dose calculation algorithm of the investigated photon beam models. It is strongly recommended for modern radiotherapy with multiple small fields when heterogeneous media are in the treatment fields.

  13. Viscosity scaling in concentrated dispersions and its impact on colloidal aggregation.

    PubMed

    Nicoud, Lucrèce; Lattuada, Marco; Lazzari, Stefano; Morbidelli, Massimo

    2015-10-07

    Gaining fundamental knowledge about diffusion in crowded environments is of great relevance in a variety of research fields, including reaction engineering, biology, pharmacy and colloid science. In this work, we determine the effective viscosity experienced by a spherical tracer particle immersed in a concentrated colloidal dispersion by means of Brownian dynamics simulations. We characterize how the effective viscosity increases from the solvent viscosity for small tracer particles to the macroscopic viscosity of the dispersion when large tracer particles are employed. Our results show that the crossover between these two regimes occurs at a tracer particle size comparable to the host particle size. In addition, it is found that data points obtained in various host dispersions collapse on one master curve when the normalized effective viscosity is plotted as a function of the ratio between the tracer particle size and the mean host particle size. In particular, this master curve was obtained by varying the volume fraction, the average size and the polydispersity of the host particle distribution. Finally, we extend these results to determine the size dependent effective viscosity experienced by a fractal cluster in a concentrated colloidal system undergoing aggregation. We include this scaling of the effective viscosity in classical aggregation kernels, and we quantify its impact on the kinetics of aggregate growth as well as on the shape of the aggregate distribution by means of population balance equation calculations.

  14. Particle size distributions of lead measured in battery manufacturing and secondary smelter facilities and implications in setting workplace lead exposure limits.

    PubMed

    Petito Boyce, Catherine; Sax, Sonja N; Cohen, Joel M

    2017-08-01

    Inhalation plays an important role in exposures to lead in airborne particulate matter in occupational settings, and particle size determines where and how much of airborne lead is deposited in the respiratory tract and how much is subsequently absorbed into the body. Although some occupational airborne lead particle size data have been published, limited information is available reflecting current workplace conditions in the U.S. To address this data gap, the Battery Council International (BCI) conducted workplace monitoring studies at nine lead acid battery manufacturing facilities (BMFs) and five secondary smelter facilities (SSFs) across the U.S. This article presents the results of the BCI studies focusing on the particle size distributions calculated from Personal Marple Impactor sampling data and particle deposition estimates in each of the three major respiratory tract regions derived using the Multiple-Path Particle Dosimetry model. The BCI data showed the presence of predominantly larger-sized particles in the work environments evaluated, with average mass median aerodynamic diameters (MMADs) ranging from 21-32 µm for the three BMF job categories and from 15-25 µm for the five SSF job categories tested. The BCI data also indicated that the percentage of lead mass measured at the sampled facilities in the submicron range (i.e., <1 µm, a particle size range associated with enhanced absorption of associated lead) was generally small. The estimated average percentages of lead mass in the submicron range for the tested job categories ranged from 0.8-3.3% at the BMFs and from 0.44-6.1% at the SSFs. Variability was observed in the particle size distributions across job categories and facilities, and sensitivity analyses were conducted to explore this variability. The BCI results were compared with results reported in the scientific literature. Screening-level analyses were also conducted to explore the overall degree of lead absorption potentially associated with the observed particle size distributions and to identify key issues associated with applying such data to set occupational exposure limits for lead.
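
    The MMAD and geometric standard deviation (GSD) quoted above are conventionally recovered from cascade-impactor stage masses by a log-probit fit: probit of the cumulative mass fraction below each stage cut diameter is regressed against the log of that diameter. A minimal sketch of that standard reduction (not the BCI analysis itself; the stage layout and cut diameters below are illustrative):

```python
import math
from statistics import NormalDist

def mmad_gsd(cuts_um, masses):
    """Estimate MMAD and GSD from cascade-impactor stage masses.

    cuts_um: stage cut diameters in micrometres, largest to smallest.
    masses:  len(cuts_um) + 1 values; masses[0] is the mass above the top
             cut, masses[i] the mass between cuts_um[i-1] and cuts_um[i],
             and masses[-1] the backup-filter mass below the last cut.
    """
    nd = NormalDist()
    total = sum(masses)
    xs, ys = [], []
    for i, d in enumerate(cuts_um):
        frac_below = sum(masses[i + 1:]) / total  # cumulative mass below cut d
        if 0.0 < frac_below < 1.0:
            xs.append(math.log(d))
            ys.append(nd.inv_cdf(frac_below))     # probit transform
    # ordinary least squares fit y = a*x + b
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    a = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) \
        / sum((x - mx) ** 2 for x in xs)
    b = my - a * mx
    mmad = math.exp(-b / a)   # diameter at the 50% cumulative point
    gsd = math.exp(1.0 / a)   # geometric standard deviation
    return mmad, gsd
```

For an exactly lognormal aerosol the probit-vs-log plot is a straight line, so the fit recovers the underlying MMAD and GSD to within rounding.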

  15. Impact Cratering Calculations

    NASA Technical Reports Server (NTRS)

    Ahrens, Thomas J.

    2001-01-01

    We examined the von Mises and Mohr-Coulomb strength models with and without damage effects and developed a model for dilatancy. The models and results are given in O'Keefe et al. We found that, by incorporating damage into the models, a single integrated impact calculation, starting with the bolide in the atmosphere, could produce final crater profiles having the major features found in field measurements. These features included a central uplift, an inner ring, circular terracing and faulting. This was accomplished with undamaged surface strengths of approximately 0.1 GPa and strengths at depth of approximately 1.0 GPa. We modeled the damage in geologic materials using a phenomenological approach that coupled the Johnson-Cook damage model with the CTH code geologic strength model. The objective here was not to determine the distribution of fragment sizes, but rather to determine the effect of brecciated and comminuted material on the crater evolution, fault production, ejecta distribution, and final crater morphology.

  16. Fresnel Lens Solar Concentrator Design Based on Geometric Optics and Blackbody Radiation Equations

    NASA Technical Reports Server (NTRS)

    Watson, Michael D.; Jayroe, Robert

    1998-01-01

    Fresnel lenses have been used for years as solar concentrators in a variety of applications. Several variables affect the final design of these lenses, including lens diameter, image spot distance from the lens, and the bandwidth focused in the image spot. Defining the image spot as the geometrical-optics circle of least confusion, a set of design equations has been derived to define the groove angle for each groove on the lens. These equations allow the distribution of light by wavelength within the image spot to be calculated. Combining these equations with the blackbody radiation equations, the energy distribution, power, and flux within the image spot can be calculated. In addition, equations have been derived to design a lens that produces maximum flux in a given spot size. Using these equations, a lens may be designed to optimize the spot energy concentration for a given energy source.
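
    For illustration, one textbook groove-angle relation has a closed form for a flat-entry Fresnel lens: a collimated ray passes the flat front face undeviated, refracts at a rear facet tilted by angle a, and must cross the axis at the focal distance, giving sin(a + d) = n sin(a) with d = atan(r/f), i.e. tan(a) = sin(d) / (n - cos(d)). This is a sketch of that simplified geometry, not the paper's full design equations; the index n = 1.49 (typical of acrylic) is an assumption:

```python
import math

def groove_angle(r_mm, focal_mm, n=1.49):
    """Facet tilt angle (radians) for a groove at radius r_mm on a
    flat-entry Fresnel lens with focal length focal_mm and index n.

    Derivation: Snell's law at the tilted rear facet, glass -> air:
    n*sin(a) = sin(a + d), with required deviation d = atan(r/f),
    which solves in closed form to tan(a) = sin(d) / (n - cos(d))."""
    d = math.atan2(r_mm, focal_mm)          # required ray deviation
    return math.atan2(math.sin(d), n - math.cos(d))
```

A quick self-check is that the returned angle satisfies Snell's law exactly at the facet.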

  17. Behaviours and influence factors of radon progeny in three typical dwellings.

    PubMed

    Li, Hongzhao; Zhang, Lei; Guo, Qiuju

    2011-03-01

    To investigate the behaviours and influence factors of radon progeny in rural dwellings in China, site measurements of the radon equilibrium factor, the unattached fraction and some important indoor environmental factors, such as aerosol concentration, aerosol size distribution and ventilation rate, were carried out in three typical types of dwellings, and a theoretical study was performed in parallel. Good consistency was achieved between the site measurements and the theoretical calculations of the equilibrium factor F and the unattached fraction f(p). A lower equilibrium factor and a higher unattached fraction were found in mud and cave houses compared with brick houses, and the theoretical study suggested that the smaller aerosol sizes in mud and cave houses might be the main reason for these observations. The dose conversion factor in mud and cave houses may therefore be higher than that in brick houses.
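
    The equilibrium factor F measured above is defined as the ratio of the equilibrium-equivalent progeny concentration (EEC) to the radon gas concentration, where the EEC weights the short-lived progeny activity concentrations by their relative potential alpha-energy contributions. A sketch using the commonly quoted approximate weights (these standard coefficients are not taken from this paper):

```python
def equilibrium_factor(c_rn, c_po218, c_pb214, c_bi214):
    """Radon equilibrium factor F = EEC / C_Rn.

    All arguments are activity concentrations in Bq/m^3. The weights
    are the commonly used (approximate) potential-alpha-energy
    contributions of Po-218, Pb-214 and Bi-214; they sum to 1, so full
    equilibrium gives F = 1."""
    eec = 0.105 * c_po218 + 0.516 * c_pb214 + 0.379 * c_bi214
    return eec / c_rn
```

Typical indoor values of F cluster around 0.4, i.e. the progeny are well short of equilibrium with the gas.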

  18. Assessing T cell clonal size distribution: a non-parametric approach.

    PubMed

    Bolkhovskaya, Olesya V; Zorin, Daniil Yu; Ivanchenko, Mikhail V

    2014-01-01

    Clonal structure of the human peripheral T-cell repertoire is shaped by a number of homeostatic mechanisms, including antigen presentation and cytokine and cell regulation. Its accurate tuning leads to a remarkable ability to combat pathogens in all their variety, while systemic failures may lead to severe consequences like autoimmune diseases. Here we develop and make use of a non-parametric statistical approach to assess T cell clonal size distributions from recent next generation sequencing data. For 41 healthy individuals and a patient with ankylosing spondylitis who underwent treatment, we invariably find power-law scaling over several decades and, for the first time, calculate quantitatively meaningful values of the decay exponent. The exponent proved to be much the same among healthy donors, significantly different for the autoimmune patient before therapy, and converging towards the typical value afterwards. We discuss implications of the findings for theoretical understanding and mathematical modeling of adaptive immunity.
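
    For context, a standard parametric baseline for estimating a power-law decay exponent from clone-size data is the continuous maximum-likelihood (Clauset-style) estimator; the paper's own approach is non-parametric, so this is a companion sketch rather than their method:

```python
import math
import random

def powerlaw_mle(samples, x_min):
    """Continuous maximum-likelihood estimate of alpha for a power law
    P(x) ~ x^(-alpha), x >= x_min:

        alpha_hat = 1 + n / sum(ln(x_i / x_min))

    Only samples at or above x_min enter the estimate."""
    tail = [x for x in samples if x >= x_min]
    return 1.0 + len(tail) / sum(math.log(x / x_min) for x in tail)
```

Drawing synthetic power-law samples by inverse-transform sampling, x = x_min * (1 - u)^(-1/(alpha-1)), recovers the true exponent to within the expected standard error (alpha - 1)/sqrt(n).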

  19. Phenytoin crystal growth rates in the presence of phosphate and chloride ions

    NASA Astrophysics Data System (ADS)

    Zipp, G. L.; Rodríguez-Hornedo, N.

    1992-09-01

    Phenytoin crystal growth kinetics have been measured as a function of supersaturation in pH 2.2 phosphoric acid and pH 2.2 hydrochloric acid solutions. Two different methods were used for the kinetic analysis. The first involved a zone-sensing device which provided an analysis of the distribution of crystals in a batch crystallizer. Crystal growth rates were calculated from the increase in the size of the distribution with time. In the second method, growth rates were evaluated from the change in size with time of individual crystals observed under an inverted microscope. The results from each method compare favorably. The use of both techniques provides an excellent opportunity to exploit the strengths of each: an average growth rate from a population of crystals from batch crystallization and insight into the effect of growth on the morphology of the crystals from the individual crystal measurements.

  20. An expanded model and application of the combined effect of crystal-size distribution and crystal shape on the relative viscosity of magmas

    NASA Astrophysics Data System (ADS)

    Klein, Johannes; Mueller, Sebastian P.; Helo, Christoph; Schweitzer, Silja; Gurioli, Lucia; Castro, Jonathan M.

    2018-05-01

    This study examines the combined effect of crystal-size distributions (CSDs) and crystal shape on the rheology of vesicle-free magmatic suspensions and provides the first practical application of an empirical model to estimate the relative effect of crystal content and CSDs on the viscosity of magma directly from textural image analysis of natural rock samples, in the form of a user-friendly texture-rheology spreadsheet calculator. We extend and apply established relationships between the maximum packing fraction ϕm of a crystal-bearing suspension and both its rheological properties and the polydispersity γ of a CSD. By using analogue rotational rheometric experiments with glass fibres and glass flakes in silicone oil acting as magma equivalents, this study also provides new insights into the relationship between ϕm and the aspect ratio rp of suspended particles.
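
    A widely used empirical form relating crystal content and maximum packing fraction to relative viscosity, and a common starting point for models of this type, is the Maron-Pierce equation; the specific fit implemented in the paper's spreadsheet calculator is not reproduced here. A sketch:

```python
def relative_viscosity(phi, phi_m):
    """Maron-Pierce estimate of suspension relative viscosity:

        eta_r = (1 - phi/phi_m)^(-2)

    phi is the crystal volume fraction and phi_m the maximum packing
    fraction, which decreases as particle aspect ratio departs from 1
    and increases with the polydispersity of the size distribution."""
    if not 0.0 <= phi < phi_m:
        raise ValueError("require 0 <= phi < phi_m")
    return (1.0 - phi / phi_m) ** -2
```

The divergence as phi approaches phi_m is why elongated crystals (lower phi_m) stiffen a magma at much lower crystallinities than equant ones.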

  1. Determining the Size of Pores in a Partially Transparent Ceramics from Total-Reflection Spectra

    NASA Astrophysics Data System (ADS)

    Mironov, R. A.; Zabezhailov, M. O.; Georgiu, I. F.; Cherepanov, V. V.; Rusin, M. Yu.

    2018-03-01

    A technique is proposed for determining the pore-size distribution based on measuring the dependence of total reflectance in the domain of partial transparency of a material. It relies on an assumed equality between the scattering-coefficient spectra determined by solving the inverse radiation transfer problem and those obtained by theoretical calculation with Mie theory. The technique is applied to study a quartz ceramic. The pore-size distribution is also determined using mercury and gas porosimetry. All three methods are shown to produce close results for pores with diameters of <180 nm, which occupy 90% of the void volume. For pore dimensions of >180 nm, the methods show differences that might be related both to specific procedural features and to the structural properties of the ceramic. The spectral-scattering method has a number of advantages over traditional porosimetry, and it can be viewed as a routine industrial technique.

  2. Biomass burning dominates brown carbon absorption in the rural southeastern United States

    NASA Astrophysics Data System (ADS)

    Washenfelder, R. A.; Attwood, A. R.; Brock, C. A.; Guo, H.; Xu, L.; Weber, R. J.; Ng, N. L.; Allen, H. M.; Ayres, B. R.; Baumann, K.; Cohen, R. C.; Draper, D. C.; Duffey, K. C.; Edgerton, E.; Fry, J. L.; Hu, W. W.; Jimenez, J. L.; Palm, B. B.; Romer, P.; Stone, E. A.; Wooldridge, P. J.; Brown, S. S.

    2015-01-01

    Brown carbon aerosol consists of light-absorbing organic particulate matter with wavelength-dependent absorption. Aerosol optical extinction, absorption, size distributions, and chemical composition were measured in rural Alabama during summer 2013. The field site was well located to examine sources of brown carbon aerosol, with influence by high biogenic organic aerosol concentrations, pollution from two nearby cities, and biomass burning aerosol. We report the optical closure between measured dry aerosol extinction at 365 nm and calculated extinction from composition and size distribution, showing agreement within experimental uncertainties. We find that aerosol optical extinction is dominated by scattering, with single-scattering albedo values of 0.94 ± 0.02. Black carbon aerosol accounts for 91 ± 9% of the total carbonaceous aerosol absorption at 365 nm, while organic aerosol accounts for 9 ± 9%. The majority of brown carbon aerosol mass is associated with biomass burning, with smaller contributions from biogenically derived secondary organic aerosol.

  3. Results of a comprehensive atmospheric aerosol-radiation experiment in the southwestern United States. I - Size distribution, extinction optical depth and vertical profiles of aerosols suspended in the atmosphere. II - Radiation flux measurements and

    NASA Technical Reports Server (NTRS)

    Deluisi, J. J.; Furukawa, F. M.; Gillette, D. A.; Schuster, B. G.; Charlson, R. J.; Porch, W. M.; Fegley, R. W.; Herman, B. M.; Rabinoff, R. A.; Twitty, J. T.

    1976-01-01

    Results are reported for a field test that was aimed at acquiring a sufficient set of measurements of aerosol properties required as input for radiative-transfer calculations relevant to the earth's radiation balance. These measurements include aerosol extinction and size distributions, vertical profiles of aerosols, and radiation fluxes. Physically consistent, vertically inhomogeneous models of the aerosol characteristics of a turbid atmosphere over a desert and an agricultural region are constructed by using direct and indirect sampling techniques. These results are applied for a theoretical interpretation of airborne radiation-flux measurements. The absorption term of the complex refractive index of aerosols is estimated, a regional variation in the refractive index is noted, and the magnitude of solar-radiation absorption by aerosols and atmospheric molecules is determined.

  4. Modification and validation of an analytical source model for external beam radiotherapy Monte Carlo dose calculations.

    PubMed

    Davidson, Scott E; Cui, Jing; Kry, Stephen; Deasy, Joseph O; Ibbott, Geoffrey S; Vicic, Milos; White, R Allen; Followill, David S

    2016-08-01

    A dose calculation tool that combines the accuracy of the dose planning method (DPM) Monte Carlo code with the versatility of a practical analytical multisource model, reported previously, has been improved and validated for the Varian 6 and 10 MV linear accelerators (linacs). The calculation tool can be used to calculate doses in advanced clinical application studies. One shortcoming of current clinical trials that report dose from patient plans is the lack of a standardized dose calculation methodology. Because commercial treatment planning systems (TPSs) have their own dose calculation algorithms, and the clinical trial participant who uses these systems is responsible for commissioning the beam model, variation exists in the reported calculated dose distributions. Today's modern linacs are manufactured to tight specifications, so variability within a linac model is quite low. The expectation is that a single dose calculation tool for a specific linac model can be used to accurately recalculate dose from patient plans submitted to the clinical trial community from any institution; such a tool would provide for a more meaningful outcome analysis. The analytical source model consists of a primary point source, a secondary extra-focal source, and a contaminant electron source. Off-axis energy softening and fluence effects are also included. Hyperbolic functions have been incorporated into the model to correct for the changes in output and in electron contamination with field size. A multileaf collimator (MLC) model is included to facilitate phantom and patient dose calculations. An offset to the MLC leaf positions was used to correct for the rudimentary assumed primary point source. Calculated depth doses and profiles for field sizes from 4 × 4 to 40 × 40 cm agree with measurement within 2% of the maximum dose or 2 mm distance to agreement (DTA) for 95% of the data points tested. The model was capable of predicting the depth of the maximum dose within 1 mm. Anthropomorphic phantom benchmark testing of modulated and patterned MLC treatment plans showed agreement with measurement within 3% in target regions using thermoluminescent dosimeters (TLDs). Using radiochromic film normalized to TLD, a gamma criterion of 3% of maximum dose and 2 mm DTA was applied, with a pass rate of at least 85% in the high-dose, high-gradient, and low-dose regions. Finally, recalculations of patient plans using DPM showed good agreement with a commercial TPS when comparing dose-volume histograms and 2D dose distributions. A unique analytical source model coupled to the DPM Monte Carlo dose calculation code has been modified and validated using basic beam data and anthropomorphic phantom measurements. While this tool can be applied in general use for a particular linac model, it was developed specifically to provide a single methodology for independently assessing treatment plan dose distributions from clinical institutions participating in National Cancer Institute trials.
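
    The film-versus-calculation comparison above uses a gamma criterion (3% of maximum dose / 2 mm DTA). A brute-force sketch of a global 2D gamma pass-rate calculation on matched grids follows; real QA tools interpolate between grid points and handle low-dose thresholds, both omitted here for brevity:

```python
import math

def gamma_pass_rate(ref, ev, spacing_mm, dose_tol=0.03, dta_mm=2.0):
    """Global 2D gamma analysis (default 3%/2 mm) on two dose grids
    given as equal-shape lists of lists with pixel pitch spacing_mm.
    dose_tol is a fraction of the maximum reference dose (global norm)."""
    dmax = max(max(row) for row in ref)
    dd = dose_tol * dmax
    ny, nx = len(ref), len(ref[0])
    search = int(math.ceil(3 * dta_mm / spacing_mm))  # local search window
    passed = total = 0
    for i in range(ny):
        for j in range(nx):
            best = float("inf")
            for di in range(-search, search + 1):
                for dj in range(-search, search + 1):
                    ii, jj = i + di, j + dj
                    if 0 <= ii < ny and 0 <= jj < nx:
                        dist2 = (di * di + dj * dj) * spacing_mm ** 2
                        diff2 = (ev[ii][jj] - ref[i][j]) ** 2
                        best = min(best, dist2 / dta_mm ** 2 + diff2 / dd ** 2)
            total += 1
            if math.sqrt(best) <= 1.0:
                passed += 1
    return passed / total
```

Identical reference and evaluated distributions give gamma = 0 everywhere and therefore a 100% pass rate.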

  5. SU-F-T-46: The Effect of Inter-Seed Attenuation and Tissue Composition in Prostate 125I Brachytherapy Dose Calculations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tamura, K; Araki, F; Ohno, T

    Purpose: To investigate the differences in dose distributions calculated with and without the effects of inter-seed attenuation and tissue composition in prostate ¹²⁵I brachytherapy, using Monte Carlo simulations with the Particle and Heavy Ion Transport code System (PHITS). Methods: The dose distributions in ¹²⁵I prostate brachytherapy were calculated using PHITS for non-simultaneous and simultaneous alignments of STM1251 sources in a water or prostate phantom for six patients. The PHITS input file was created from the DICOM-RT file, which includes the source coordinates and the structures for the clinical target volume (CTV) and the organs at risk (OARs), the urethra and rectum, using in-house Matlab software. Photon and electron cutoff energies were set to 1 keV and 100 MeV, respectively. The dose distributions were calculated with the kerma approximation and a voxel size of 1 × 1 × 1 mm³. The number of incident photons was set so that the statistical uncertainty (1σ) was less than 1%. The effects of inter-seed attenuation and prostate tissue composition were evaluated from dose-volume histograms (DVHs) for each structure, by comparison with the AAPM TG-43 dose calculation (which neglects inter-seed attenuation and prostate tissue composition). Results: The dose reduction due to inter-seed attenuation by the source capsules was approximately 2% for the CTV and OARs compared with TG-43. In addition, when the prostate tissue composition was taken into account, the D90 and V100 of the CTV were reduced by 6% and 1%, respectively. Conclusion: The dose reductions due to inter-seed attenuation and tissue composition need to be considered in prostate ¹²⁵I brachytherapy dose calculations.
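
    For reference, the TG-43 baseline against which these effects are measured reduces, in its 1D point-source form, to a simple product of factors. A sketch, with the radial dose function g(r) supplied by the caller (the dose-rate constant used in the test below is illustrative, not a value for the STM1251 source):

```python
def tg43_point_dose_rate(r_cm, s_k, dose_rate_constant, radial_fn):
    """1D (point-source) AAPM TG-43 dose rate:

        D(r) = S_K * Lambda * (r0 / r)^2 * g(r),  r0 = 1 cm

    s_k is the air-kerma strength S_K, dose_rate_constant is Lambda,
    and radial_fn is the radial dose function g(r) with g(1 cm) = 1.
    The anisotropy factor of the full formalism is omitted here."""
    r0 = 1.0
    return s_k * dose_rate_constant * (r0 / r_cm) ** 2 * radial_fn(r_cm)
```

With g(r) = 1 the dose rate falls off as the pure inverse square, which is the geometric part of the formalism.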

  6. Evaluation of Kirkwood-Buff integrals via finite size scaling: a large scale molecular dynamics study

    NASA Astrophysics Data System (ADS)

    Dednam, W.; Botha, A. E.

    2015-01-01

    Solvation of bio-molecules in water is severely affected by the presence of co-solvent within the hydration shell of the solute structure. Furthermore, since solute molecules can range from small molecules, such as methane, to very large protein structures, it is imperative to understand the detailed structure-function relationship on the microscopic level. For example, it is useful to know the conformational transitions that occur in protein structures. Although such an understanding can be obtained through large-scale molecular dynamics simulations, such simulations often require excessively long simulation times. In this context, Kirkwood-Buff theory, which connects the microscopic pair-wise molecular distributions to global thermodynamic properties, together with the recently developed technique called finite size scaling, may provide a better method to reduce system sizes, and hence also the computational times. In this paper, we present molecular dynamics trial simulations of biologically relevant low-concentration solutes, solvated by aqueous co-solvent solutions. In particular, we compare two different methods of calculating the relevant Kirkwood-Buff integrals. The first (traditional) method computes running integrals over the radial distribution functions, which must be obtained from large-system-size NVT or NpT simulations. The second, newer method employs finite size scaling to obtain the Kirkwood-Buff integrals directly by counting the particle number fluctuations in small, open sub-volumes embedded within a larger reservoir, which can be well approximated by a much smaller simulation cell. In agreement with previous studies, which made a similar comparison for aqueous co-solvent solutions without the additional solute, we conclude that the finite size scaling method is also applicable to the present case, since it produces computationally more efficient results equivalent to those of the more costly radial distribution function method.
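
    The traditional running-integral route mentioned above evaluates G = 4π ∫ (g(r) − 1) r² dr from a tabulated radial distribution function; the finite-size-scaling route instead counts particle-number fluctuations in open sub-volumes. A minimal sketch of the running integral by the trapezoidal rule:

```python
import math

def kb_integral(r, g, r_cut):
    """Running Kirkwood-Buff integral

        G(r_cut) = 4*pi * integral_0^r_cut (g(r) - 1) * r^2 dr

    from a tabulated RDF given as parallel lists r and g, using the
    trapezoidal rule. Convergence requires g(r) -> 1 well before r_cut."""
    total = 0.0
    for k in range(1, len(r)):
        if r[k] > r_cut:
            break
        f0 = (g[k - 1] - 1.0) * r[k - 1] ** 2
        f1 = (g[k] - 1.0) * r[k] ** 2
        total += 0.5 * (f0 + f1) * (r[k] - r[k - 1])
    return 4.0 * math.pi * total
```

As sanity checks, an ideal gas (g = 1 everywhere) gives G = 0, and a hard-sphere hole (g = 0 for r < sigma, 1 beyond) gives minus the excluded volume, G = -(4/3)π σ³.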

  7. Optimization and evaluation of asymmetric flow field-flow fractionation of silver nanoparticles.

    PubMed

    Loeschner, Katrin; Navratilova, Jana; Legros, Samuel; Wagner, Stephan; Grombe, Ringo; Snell, James; von der Kammer, Frank; Larsen, Erik H

    2013-01-11

    Asymmetric flow field-flow fractionation (AF4) in combination with on-line optical detection and mass spectrometry is one of the most promising methods for separation and quantification of nanoparticles (NPs) in complex matrices including food. However, to obtain meaningful results, especially regarding the NP size distribution, a number of parameters influencing the separation need to be optimized. This paper describes the development of a separation method for polyvinylpyrrolidone-stabilized silver nanoparticles (AgNPs) in aqueous suspension. Carrier liquid composition, membrane material, cross flow rate and spacer height were shown to have a significant influence on the recoveries and retention times of the nanoparticles. Focus time and focus flow rate were optimized with regard to minimum elution of AgNPs in the void volume. The developed method was successfully tested for injected masses of AgNPs from 0.2 to 5.0 μg. The on-line combination of AF4 with detection methods including ICP-MS, light absorbance and light scattering was helpful because each detector provided different types of information about the eluting NP fraction. Differences in the time-resolved appearance of the signals obtained by the three detection methods were explained based on the physical origin of the signal. Two different approaches for converting retention times of AgNPs to their corresponding sizes and size distributions were tested and compared, namely size calibration with polystyrene nanoparticles (PSNPs) and calculation of size based on AF4 theory. Fraction collection followed by transmission electron microscopy was performed to confirm the obtained size distributions and to obtain further information regarding AgNP shape. Characteristics of the absorbance spectra were used to confirm the presence of non-spherical AgNPs.

  8. Protic ammonium carboxylate ionic liquids: insight into structure, dynamics and thermophysical properties by alkyl group functionalization.

    PubMed

    Reddy, Th Dhileep N; Mallik, Bhabani S

    2017-04-19

    This study is aimed at characterising the structure, dynamics and thermophysical properties of five alkylammonium carboxylate ionic liquids (ILs) from classical molecular dynamics simulations. The structural features of these ILs were characterised by calculating the site-site radial distribution functions g(r), spatial distribution functions and structure factors. The structural properties demonstrate that the ILs show greater interaction between cations and anions as the alkyl chain length increases on the cation or anion. In all ILs, spatial distribution functions show that the anion stays close to the acidic hydrogen atoms of the ammonium cation. We determined the role of alkyl group functionalization of the charged entities, cations and anions, in the dynamical behavior and the transport coefficients of this family of ionic liquids. The dynamics of the ILs are described by studying the mean square displacement (MSD) of the centres of mass of the ions, diffusion coefficients, ionic conductivities and hydrogen bond as well as residence dynamics. The diffusion coefficients and ionic conductivity decrease with an increase in the size of the cation or anion. The effect of alkyl chain length on ionic conductivity calculated in this article is consistent with the findings of other experimental studies. Hydrogen bond lifetimes and residence times, along with structure factors, were also calculated and related to alkyl chain length.
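
    The site-site radial distribution functions g(r) mentioned above are computed by binning pair distances under the minimum-image convention and normalizing each shell by its ideal-gas expectation. A minimal single-species sketch for a cubic periodic box (not the authors' simulation code):

```python
import math
import random

def radial_distribution(coords, box, dr, r_max):
    """g(r) of a 3D configuration in a cubic periodic box of side `box`.

    coords: list of (x, y, z) positions; dr: bin width; r_max: largest
    distance binned (must not exceed box/2 for the minimum image to hold).
    Normalization divides the pair count in each shell by the ideal-gas
    value N * rho * V_shell, so an uncorrelated gas gives g(r) ~ 1."""
    n = len(coords)
    nbins = int(r_max / dr)
    hist = [0] * nbins
    for i in range(n):
        for j in range(i + 1, n):
            d2 = 0.0
            for a in range(3):
                dx = coords[i][a] - coords[j][a]
                dx -= box * round(dx / box)   # minimum image convention
                d2 += dx * dx
            dist = math.sqrt(d2)
            if dist < r_max:
                hist[int(dist / dr)] += 2     # pair seen from both particles
    rho = n / box ** 3
    g = []
    for k in range(nbins):
        r_lo, r_hi = k * dr, (k + 1) * dr
        shell = 4.0 / 3.0 * math.pi * (r_hi ** 3 - r_lo ** 3)
        g.append(hist[k] / (n * rho * shell))
    return g
```

For uniformly random ("ideal gas") positions the histogram flattens to g(r) ≈ 1 at all distances, which is the usual correctness check before applying the routine to real trajectories.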

  9. Ash3d: A finite-volume, conservative numerical model for ash transport and tephra deposition

    USGS Publications Warehouse

    Schwaiger, Hans F.; Denlinger, Roger P.; Mastin, Larry G.

    2012-01-01

    We develop a transient, 3-D Eulerian model (Ash3d) to predict airborne volcanic ash concentration and tephra deposition during volcanic eruptions. This model simulates downwind advection, turbulent diffusion, and settling of ash injected into the atmosphere by a volcanic eruption column. Ash advection is calculated using time-varying pre-existing wind data and a robust, high-order, finite-volume method. Our routine is mass-conservative and uses the coordinate system of the wind data, either a Cartesian system local to the volcano or a global spherical system for the Earth. Volcanic ash is specified with an arbitrary number of grain sizes, which affects the fall velocity, distribution and duration of transport. Above the source volcano, the vertical mass distribution with elevation is calculated using a Suzuki distribution for a given plume height, eruptive volume, and eruption duration. Multiple eruptions separated in time may be included in a single simulation. We test the model using analytical solutions for transport. Comparisons of the predicted and observed ash distributions for the 18 August 1992 eruption of Mt. Spurr in Alaska demonstrate the efficacy and efficiency of the routine.
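
    The Suzuki distribution mentioned above concentrates the erupted mass near the plume top. A sketch of one commonly used normalized single-parameter form, q(z) = k² (1 − z/H) e^(−k(1 − z/H)) / (H [1 − (1 + k) e^(−k)]); the exact parameterization inside Ash3d may differ:

```python
import math

def suzuki_pdf(z, plume_height, k):
    """Suzuki vertical mass-distribution density (normalized so that the
    integral from 0 to H equals 1). Larger k pushes more of the mass
    toward the plume top; the peak sits at z = H * (1 - 1/k)."""
    H = plume_height
    u = 1.0 - z / H
    norm = H * (1.0 - (1.0 + k) * math.exp(-k))
    return k * k * u * math.exp(-k * u) / norm
```

Multiplying q(z) by the total erupted mass gives the mass injected per unit height of the source column.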

  10. Virtual modeling of polycrystalline structures of materials using particle packing algorithms and Laguerre cells

    NASA Astrophysics Data System (ADS)

    Morfa, Carlos Recarey; Farias, Márcio Muniz de; Morales, Irvin Pablo Pérez; Navarra, Eugenio Oñate Ibañez de; Valera, Roberto Roselló

    2018-04-01

    The influence of the microstructural heterogeneities is an important topic in the study of materials. In the context of computational mechanics, it is therefore necessary to generate virtual materials that are statistically equivalent to the microstructure under study, and to connect that geometrical description to the different numerical methods. Herein, the authors present a procedure to model continuous solid polycrystalline materials, such as rocks and metals, preserving their representative statistical grain size distribution. The first phase of the procedure consists of segmenting an image of the material into adjacent polyhedral grains representing the individual crystals. This segmentation allows estimating the grain size distribution, which is used as the input for an advancing front sphere packing algorithm. Finally, Laguerre diagrams are calculated from the obtained sphere packings. The centers of the spheres give the centers of the Laguerre cells, and their radii determine the cells' weights. The cell sizes in the obtained Laguerre diagrams have a distribution similar to that of the grains obtained from the image segmentation. That is why those diagrams are a convenient model of the original crystalline structure. The above-outlined procedure has been used to model real polycrystalline metallic materials. The main difference with previously existing methods lies in the use of a better particle packing algorithm.
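
    The step from sphere packing to Laguerre diagram described above uses the power distance |p − c|² − r², so that larger spheres claim proportionally larger cells, preserving the grain size distribution. A minimal sketch of the cell-assignment rule (illustrative helper, not the authors' implementation):

```python
def laguerre_cell_of(point, sites):
    """Index of the Laguerre (power) cell containing `point`.

    sites: list of (center, radius) pairs from the sphere packing; the
    cell of site i is the set of points minimizing the power distance
    |p - c_i|^2 - r_i^2."""
    best_i, best_d = None, float("inf")
    for i, (c, r) in enumerate(sites):
        d = sum((p - q) ** 2 for p, q in zip(point, c)) - r * r
        if d < best_d:
            best_i, best_d = i, d
    return best_i
```

Unlike an ordinary Voronoi assignment, a point can belong to the cell of a farther-away site if that site's sphere is large enough, which is exactly what lets the cells reproduce the grain-size statistics of the packing.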

  11. Grain size of loess and paleosol samples: what are we measuring?

    NASA Astrophysics Data System (ADS)

    Varga, György; Kovács, János; Szalai, Zoltán; Újvári, Gábor

    2017-04-01

    Particle size falling into a particularly narrow range is among the most important properties of windblown mineral dust deposits. Therefore, various aspects of aeolian sedimentation and post-depositional alterations can be reconstructed only from precise grain size data. The present study is aimed at (1) reviewing grain size data obtained from different measurements, (2) discussing the major reasons for disagreements between data obtained by frequently applied particle sizing techniques, and (3) assessing the importance of particle shape in particle sizing. Grain size data of terrestrial aeolian dust deposits (loess and paleosol) were determined by laser scattering instruments (Fritsch Analysette 22 Microtec Plus, Horiba Partica LA-950 v2 and Malvern Mastersizer 3000 with a Hydro LV unit), while particle size and shape distributions were acquired by a Malvern Morphologi G3-ID. The laser scattering results reveal that the optical parameter settings of the measurements have significant effects on the grain size distributions, especially for the fine-grained fractions (<5 µm). Significant differences between the Mie and Fraunhofer approaches were found for the finest grain size fractions, while only slight discrepancies were observed for the medium to coarse silt fractions. It should be noted that the different instruments provided different grain size distributions even with exactly the same optical settings. Image-analysis-based grain size data indicated underestimation of the clay and fine silt fractions compared to the laser measurements. The circle-equivalent diameter measured by image analysis is calculated from the acquired two-dimensional image of the particle. It is assumed that the instantaneous pulse of compressed air disperses the sedimentary particles onto the glass slide in a consistent orientation, with their largest area facing the camera. However, this is only one of infinitely many possible projections of a three-dimensional object, and it cannot be regarded as representative. The third (height) dimension of the particles remains unknown, so the volume-based weightings are fairly dubious in the case of platy particles. Support of the National Research, Development and Innovation Office (Hungary) under contract NKFI 120620 is gratefully acknowledged. It was additionally supported (for G. Varga) by the Bolyai János Research Scholarship of the Hungarian Academy of Sciences.

  12. Simulation of Electromigration Based on Resistor Networks

    NASA Astrophysics Data System (ADS)

    Patrinos, Anthony John

    A two-dimensional computer simulation of electromigration based on resistor networks was designed and implemented. The model utilizes a realistic grain structure generated by the Monte Carlo method and takes specific account of the local effects through which electromigration damage progresses. The dynamic evolution of the simulated thin film is governed by the local current and temperature distributions. The current distribution is calculated by superimposing on the lattice a two-dimensional electrical network whose nodes correspond to the particles in the lattice and whose branches correspond to interparticle bonds. Current is assumed to flow from site to site via nearest-neighbor bonds. The current distribution problem is solved by applying Kirchhoff's rules to the resulting electrical network. The calculation of the temperature distribution in the lattice proceeds by discretizing the partial differential equation for heat conduction, with appropriate material parameters chosen for the lattice and its defects. SEReNe (for Simulation of Electromigration using Resistor Networks) was tested by applying it to common situations arising in experiments with real films, with satisfactory results. Specifically, the model successfully reproduces the expected grain size, line width and bamboo effects, the lognormal failure-time distribution and the relationship between current density exponent and current density. It has also been modified to simulate temperature-ramp experiments, in this case with mixed results.
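
    Applying Kirchhoff's rules to such a network amounts to assembling a conductance (Laplacian) matrix and solving a linear system for the node voltages, with Dirichlet rows pinning the source and ground nodes. A small self-contained sketch of that step (not the SEReNe code itself):

```python
def solve_network(n_nodes, resistors, source, gnd):
    """Node-voltage solution of a resistor network.

    resistors: list of (i, j, R) branches; source: (node, volts) held
    fixed; gnd: node held at 0 V. Builds the conductance Laplacian,
    replaces the fixed-node rows with Dirichlet conditions, and solves
    by Gaussian elimination with partial pivoting."""
    A = [[0.0] * n_nodes for _ in range(n_nodes)]
    b = [0.0] * n_nodes
    for i, j, R in resistors:
        g = 1.0 / R
        A[i][i] += g; A[j][j] += g
        A[i][j] -= g; A[j][i] -= g
    for node, volt in (source, (gnd, 0.0)):
        A[node] = [1.0 if k == node else 0.0 for k in range(n_nodes)]
        b[node] = volt
    for col in range(n_nodes):                    # forward elimination
        piv = max(range(col, n_nodes), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        b[col], b[piv] = b[piv], b[col]
        for r in range(col + 1, n_nodes):
            f = A[r][col] / A[col][col]
            b[r] -= f * b[col]
            for c in range(col, n_nodes):
                A[r][c] -= f * A[col][c]
    v = [0.0] * n_nodes                           # back substitution
    for r in range(n_nodes - 1, -1, -1):
        s = b[r] - sum(A[r][c] * v[c] for c in range(r + 1, n_nodes))
        v[r] = s / A[r][r]
    return v
```

Branch currents then follow from Ohm's law, (v[i] - v[j]) / R, and local Joule heating from I²R, which is what couples the electrical solve to the temperature calculation described above.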

  13. Impact of the neutron detector choice on Bell and Glasstone spatial correction factor for subcriticality measurement

    NASA Astrophysics Data System (ADS)

    Talamo, Alberto; Gohar, Y.; Cao, Y.; Zhong, Z.; Kiyavitskaya, H.; Bournos, V.; Fokov, Y.; Routkovskaya, C.

    2012-03-01

    In subcritical assemblies, the Bell and Glasstone spatial correction factor is used to correct the reactivity measured from different detector positions. In addition to the measuring position, several other parameters affect the correction factor: the detector material, the detector size, and the energy-angle distribution of the source neutrons. The effective multiplication factor calculated by computer codes in criticality mode differs slightly from the average value obtained from measurements in the different experimental channels of the subcritical assembly, corrected by the Bell and Glasstone spatial correction factor. Generally, this difference is due to (1) neutron counting errors, (2) geometrical imperfections that are not simulated in the computational model, and (3) the quantities and distributions of material impurities, which are missing from the material definitions. This work examines these issues, focusing on the detector choice and the calculation methodologies. The work investigated the YALINA Booster subcritical assembly of Belarus, which has been operated with three different fuel configurations in the fast zone: high (90%) and medium (36%), medium (36%) only, or low (21%) enriched uranium fuel.
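
    The way such a correction is applied can be sketched as simple arithmetic: each channel's measured reactivity is divided by its spatial correction factor, and the corrected values are averaged. All numbers below are illustrative assumptions, not YALINA data, and the division convention is one common choice:

```python
import numpy as np

# Hypothetical per-channel data: measured reactivities (dk/k) from three
# experimental channels, and the corresponding Bell-Glasstone spatial
# correction factors f_i obtained from a transport calculation.
rho_measured = np.array([-0.052, -0.047, -0.061])
f = np.array([1.04, 0.95, 1.22])

rho_corrected = rho_measured / f        # remove the detector-position bias
rho_mean = rho_corrected.mean()
k_eff = 1.0 / (1.0 - rho_mean)          # from rho = (k_eff - 1) / k_eff
spread = rho_corrected.std(ddof=1)      # channel-to-channel scatter
```

A good set of correction factors should shrink the channel-to-channel scatter of the corrected reactivities relative to the raw measurements.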

  14. Fractal Structures on Fe3O4 Ferrofluid: A Small-Angle Neutron Scattering Study

    NASA Astrophysics Data System (ADS)

    Giri Rachman Putra, Edy; Seong, Baek Seok; Shin, Eunjoo; Ikram, Abarrul; Ani, Sistin Ari; Darminto

    2010-10-01

    Small-angle neutron scattering (SANS), a powerful technique for revealing large-scale structures, was applied to investigate the fractal structures of a water-based Fe3O4 ferrofluid (magnetic fluid). Natural magnetite Fe3O4 from the iron sand of several rivers in East Java Province, Indonesia, was extracted and purified using a magnetic separator. Four ferrofluid concentrations, i.e. 0.5, 1.0, 2.0, and 3.0 molar (M), were synthesized through a co-precipitation method and then dispersed in tetramethyl ammonium hydroxide (TMAH) as a surfactant. Fractal aggregates in the ferrofluid samples were observed from their SANS scattering distributions, which correlate with concentration. The mass fractal dimension changed from about 3 to 2 as the ferrofluid concentration increased, showing a change of slope in the intermediate scattering-vector q range. The size of the primary magnetic particle, the building block of the aggregates, was determined by fitting the scattering profiles with a log-normal sphere model calculation. The mean size of these magnetic particles is about 60-100 Å in diameter, with a particle size distribution of σ = 0.5.
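
    A minimal sketch of a log-normal sphere model of the kind used to fit the primary-particle size is below. The median radius, distribution width, and q range are illustrative assumptions chosen to be consistent with the reported 60-100 Å diameters and σ = 0.5, not the authors' fit:

```python
import numpy as np

def sphere_amp(q, r):
    """Normalized form-factor amplitude of a homogeneous sphere of radius r."""
    x = q * r
    return 3.0 * (np.sin(x) - x * np.cos(x)) / x**3

def lognormal_sphere_I(q, r_med=40.0, sigma=0.5, n=400):
    """Volume-weighted scattered intensity for spheres whose radii follow a
    log-normal distribution (median r_med in Angstrom, log-width sigma),
    normalized to the lowest-q point."""
    r = np.linspace(r_med * np.exp(-4 * sigma), r_med * np.exp(4 * sigma), n)
    dr = r[1] - r[0]
    pdf = np.exp(-np.log(r / r_med) ** 2 / (2 * sigma ** 2)) \
        / (r * sigma * np.sqrt(2 * np.pi))
    w = pdf * (4.0 / 3.0 * np.pi * r ** 3) ** 2        # volume^2 weighting
    I = np.sum(w * sphere_amp(q[:, None], r[None, :]) ** 2, axis=1) * dr
    return I / I[0]

q = np.logspace(-3, -1, 50)      # scattering vector, 1/Angstrom
I = lognormal_sphere_I(q)        # r_med = 40 A, i.e. ~80 A diameter particles
```

Fitting would adjust r_med and sigma until the model curve matches the measured high-q part of the SANS profile; the low-q fractal regime requires an additional structure factor.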

  15. The Exoplanet Cloud Atlas

    NASA Astrophysics Data System (ADS)

    Gao, Peter; Marley, Mark S.; Morley, Caroline; Fortney, Jonathan J.

    2017-10-01

    Clouds have been readily inferred from observations of exoplanet atmospheres, and there is great variability in cloudiness between planets, such that no clear trend in exoplanet cloudiness has so far been discerned. Equilibrium condensation calculations suggest a myriad of species - salts, sulfides, silicates, and metals - could condense in exoplanet atmospheres, but how they behave as clouds is uncertain. The behavior of clouds - their formation, evolution, and equilibrium size distribution - is controlled by cloud microphysics, which includes processes such as nucleation, condensation, and evaporation. In this work, we explore the cloudy exoplanet phase space by using a cloud microphysics model to simulate a suite of cloud species, ranging from cooler condensates such as KCl/ZnS to hotter condensates like perovskite and corundum. We investigate how the cloudiness and cloud particle sizes of exoplanets change with temperature, metallicity, gravity, and cloud formation mechanism, and how these changes may be reflected in current and future observations. In particular, we will evaluate where in phase space cloud spectral features could be observable with JWST MIRI at long wavelengths, which will depend on the cloud particle size distribution and cloud species.

  16. 3D brain tumor localization and parameter estimation using thermographic approach on GPU.

    PubMed

    Bousselham, Abdelmajid; Bouattane, Omar; Youssfi, Mohamed; Raihani, Abdelhadi

    2018-01-01

    The aim of this paper is to present a GPU parallel algorithm for brain tumor detection that estimates tumor size and location from the surface temperature distribution obtained by thermography. The normal brain tissue is modeled as a rectangular cube containing a spherical tumor. The temperature distribution is calculated using the forward three-dimensional Pennes bioheat transfer equation, which is solved with a massively parallel Finite Difference Method (FDM) implemented on a Graphics Processing Unit (GPU). A Genetic Algorithm (GA) is used to solve the inverse problem and estimate the tumor size and location by minimizing an objective function that compares the measured surface temperatures with those obtained by numerical simulation. The parallel implementation of the Finite Difference Method significantly reduces the bioheat transfer computation time and greatly accelerates the inverse identification of the brain tumor's thermophysical and geometrical properties. Experimental results show significant gains in computational speed on the GPU, with a speedup of around 41 compared to the CPU. The performance of the estimation as a function of tumor size inside the brain tissue is also analyzed.
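
    The forward step can be illustrated with an explicit finite-difference solution of the Pennes equation in one dimension. The tissue parameters, slab geometry, boundary temperatures, and tumor heat source below are typical literature values and assumptions for a sketch, not the paper's 3-D GPU setup:

```python
import numpy as np

# Typical soft-tissue parameters from the bioheat literature (illustrative).
k, rho, c = 0.5, 1050.0, 3700.0     # W/m/K, kg/m^3, J/kg/K (tissue)
wb, rb, cb = 8e-3, 1000.0, 4200.0   # perfusion rate 1/s, blood density and c
Ta, Qm = 37.0, 400.0                # arterial temperature C, metabolic W/m^3
L, n = 0.10, 101                    # 10 cm slab, grid points
dx = L / (n - 1)
dt = 0.2 * rho * c * dx**2 / k      # well under the explicit stability limit

def steady_temperature(q_extra):
    """March the explicit FDM to (near) steady state; q_extra is an extra
    volumetric heat source (W/m^3) per node, used here to model the tumor."""
    T = np.full(n, 37.0)
    for _ in range(int(2000.0 / dt)):
        lap = (T[2:] - 2 * T[1:-1] + T[:-2]) / dx**2           # conduction
        perf = wb * rb * cb * (Ta - T[1:-1])                   # perfusion
        T[1:-1] += dt / (rho * c) * (k * lap + perf + Qm + q_extra[1:-1])
        T[0], T[-1] = 34.0, 37.0    # fixed surface and core temperatures
    return T

q = np.zeros(n)
q[40:61] = 3e4                      # tumor heat source, 4-6 cm depth
dT = steady_temperature(q) - steady_temperature(np.zeros(n))   # tumor signature
```

The inverse problem then amounts to searching tumor size, depth, and source strength (here with a GA) so that the simulated signature matches the measured one.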

  17. Recent progress on RE2O3-Mo/W emission materials.

    PubMed

    Wang, Jinshu; Zhang, Xizhu; Liu, Wei; Cui, Yuntao; Wang, Yiman; Zhou, Meiling

    2012-08-01

    RE2O3-Mo/W cathodes were prepared by a powder metallurgy method. La2O3-Y2O3-Mo cermet cathodes prepared by the traditional sintering method and by spark plasma sintering (SPS) exhibit different secondary emission properties. The La2O3-Y2O3-Mo cermet cathode prepared by the SPS method has a smaller grain size and exhibits better secondary emission performance. Monte Carlo calculation results indicate that the secondary electron emission behavior of the cathode correlates with the grain size: decreasing the grain size reduces the positive charging effect of RE2O3 and thus favors the escape of secondary electrons to vacuum. The scandia-doped tungsten matrix dispenser cathode, whose sub-micrometer matrix microstructure contains uniformly distributed nanometer particles of scandia, has good thermionic emission properties. A fully space-charge-limited current density of over 100 A/cm2 can be obtained at 950 °Cb. The cathode surface is covered by a Ba-Sc-O active layer, with nanoparticles distributed mainly on the growth steps of the W grains, which leads to the conspicuous emission properties of the cathode.

  18. Comparison of wing-span averaging effects on lift, rolling moment, and bending moment for two span load distributions and for two turbulence representations

    NASA Technical Reports Server (NTRS)

    Lichtenstein, J. H.

    1978-01-01

    An analytical method of computing the averaging effect of wing-span size on the loading of a wing induced by random turbulence was adapted for use on a digital electronic computer. The turbulence input was assumed to have a Dryden power spectral density. The computations were made for lift, rolling moment, and bending moment for two span load distributions, rectangular and elliptic. Data are presented to show the wing-span averaging effect for wing-span ratios encompassing current airplane sizes. The rectangular wing-span loading showed a slightly greater averaging effect than the elliptic loading. In the frequency range most bothersome to airplane passengers, the wing-span averaging effect can reduce the normal lift load, and thus the acceleration, by about 7 percent for a typical medium-sized transport. Some calculations were made to evaluate the effect of using a von Kármán turbulence representation; these showed that the von Kármán representation generally resulted in a span-averaging effect about 3 percent larger.
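
    The two turbulence representations can be compared directly from their power spectral densities. The expressions below are the commonly quoted MIL-spec forms for the vertical gust component (normalization conventions vary between references, so treat the prefactors as assumptions); the comparison shows why the von Kármán spectrum retains more energy at the high spatial frequencies over which span averaging acts:

```python
import numpy as np

sigma, L = 1.0, 1.0   # gust intensity and scale length (nondimensional here)

def dryden(Omega):
    """One-sided Dryden vertical-gust PSD versus spatial frequency Omega."""
    x = L * Omega
    return sigma**2 * L / np.pi * (1 + 3 * x**2) / (1 + x**2)**2

def von_karman(Omega):
    """One-sided von Karman vertical-gust PSD versus spatial frequency."""
    x = 1.339 * L * Omega
    return sigma**2 * L / np.pi * (1 + 8.0 / 3.0 * x**2) / (1 + x**2)**(11.0 / 6.0)

# High-frequency log-log slopes: the Dryden form rolls off as Omega^-2,
# while the von Karman form follows the Kolmogorov-like Omega^-5/3, leaving
# relatively more high-frequency energy and a larger span-averaging effect.
a, b = 100.0, 1000.0
slope_d = np.log(dryden(b) / dryden(a)) / np.log(b / a)
slope_vk = np.log(von_karman(b) / von_karman(a)) / np.log(b / a)
```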

  19. A modified approach to estimating sample size for simple logistic regression with one continuous covariate.

    PubMed

    Novikov, I; Fund, N; Freedman, L S

    2010-01-15

    Different methods for the calculation of sample size for simple logistic regression (LR) with one normally distributed continuous covariate give different results. Sometimes the difference can be large. Furthermore, some methods require the user to specify the prevalence of cases when the covariate equals its population mean, rather than the more natural population prevalence. We focus on two commonly used methods and show through simulations that the power for a given sample size may differ substantially from the nominal value for one method, especially when the covariate effect is large, while the other method performs poorly if the user provides the population prevalence instead of the required parameter. We propose a modification of the method of Hsieh et al. that requires specification of the population prevalence and that employs Schouten's sample size formula for a t-test with unequal variances and group sizes. This approach appears to increase the accuracy of the sample size estimates for LR with one continuous covariate.
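
    The simulation approach used to check nominal power can be sketched as follows. The effect size, prevalence, sample size, and replication count are arbitrary, and the intercept is set so the prevalence when the standard-normal covariate equals its mean matches the specified value (the Hsieh-style parameterization mentioned above); this is a power-checking sketch, not the proposed sample-size formula:

```python
import numpy as np

rng = np.random.default_rng(12345)

def wald_z(x, y, iters=25):
    """Newton-Raphson ML fit of logit P(y=1|x) = b0 + b1*x;
    returns the Wald z statistic for b1."""
    X = np.column_stack([np.ones_like(x), x])
    beta = np.zeros(2)
    for _ in range(iters):
        p = 1.0 / (1.0 + np.exp(-np.clip(X @ beta, -30, 30)))
        W = p * (1 - p)
        H = X.T @ (X * W[:, None])                 # observed information
        beta = beta + np.linalg.solve(H, X.T @ (y - p))
    se = np.sqrt(np.linalg.inv(H)[1, 1])
    return beta[1] / se

def power_sim(n, b1, p_at_mean=0.3, reps=300, z_crit=1.959964):
    """Empirical power of the two-sided Wald test of b1 = 0 at sample size n,
    with a standard-normal covariate."""
    b0 = np.log(p_at_mean / (1 - p_at_mean))       # prevalence at x = 0
    hits = 0
    for _ in range(reps):
        x = rng.standard_normal(n)
        p = 1.0 / (1.0 + np.exp(-(b0 + b1 * x)))
        y = (rng.random(n) < p).astype(float)
        hits += abs(wald_z(x, y)) > z_crit
    return hits / reps

power_alt = power_sim(200, 0.5)    # moderate covariate effect
power_null = power_sim(200, 0.0)   # should stay near the 5% type I level
```

Comparing such empirical power against the power implied by a closed-form sample-size formula is exactly how the discrepancies described above can be exposed.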

  20. Global Evolution of Solid Matter in Turbulent Protoplanetary Disks. Part 1; Aerodynamics of Solid Particles

    NASA Technical Reports Server (NTRS)

    Stepinski, T. F.; Valageas, P.

    1996-01-01

    The problem of planetary system formation and its subsequent character can only be addressed by studying the global evolution of solid material entrained in gaseous protoplanetary disks. We begin to investigate this problem by considering the space-time development of the aerodynamic forces that cause solid particles to decouple from the gas. The aim of this work is to demonstrate that only the smallest particles remain attached to the gas, so that the radial distribution of the solid matter bears no instantaneous relation to the radial distribution of the gas. We present an illustrative example wherein a gaseous disk of 0.245 solar mass and angular momentum of 5.6 x 10^52 g cm^2/s is allowed to evolve under turbulent viscosity characterized by either alpha = 10^-2 or alpha = 10^-3. The motion of solid particles suspended in the viscously evolving gaseous disk is calculated numerically for particles of different sizes. In addition, we calculate the global evolution of single-sized, noncoagulating particles. We find that particles smaller than 0.1 cm move with the gas; larger particles have significant radial velocities relative to the gas. Particles larger than 0.1 cm but smaller than 10^3 cm have inward radial velocities much larger than that of the gas, whereas particles larger than 10^4 cm have inward velocities much smaller than that of the gas. A significant difference between the radial distributions of the solids and the gas develops with time. It is the radial distribution of the solids, rather than that of the gas, that determines the character of an emerging planetary system.
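
    The size-dependent decoupling can be illustrated with the standard stopping-time and radial-drift estimate. Epstein drag is assumed for all sizes for simplicity (larger bodies would really be in a Stokes-type regime), and the disk numbers are order-of-magnitude values for roughly 1 AU, not the paper's alpha-disk model:

```python
import numpy as np

# Illustrative disk conditions (order-of-magnitude values at ~1 AU).
rho_g = 1e-9    # gas density, g/cm^3
v_th  = 1e5     # gas thermal speed, cm/s
rho_s = 3.0     # solid material density, g/cm^3
Omega = 2e-7    # Keplerian angular frequency, 1/s
v_K   = 3e6     # Keplerian orbital speed, cm/s
eta   = 2e-3    # fractional pressure support of the gas

def drift_speed(a_cm):
    """Inward radial drift speed (cm/s) of a particle of radius a_cm,
    assuming Epstein drag throughout."""
    ts = rho_s * a_cm / (rho_g * v_th)      # stopping (friction) time, s
    tau = ts * Omega                        # dimensionless Stokes number
    return 2 * eta * v_K * tau / (1 + tau**2)

sizes = np.array([0.01, 0.1, 10.0, 1e3, 1e5])   # radii in cm
v = drift_speed(sizes)
```

Small grains (tau << 1) are carried by the gas, very large bodies (tau >> 1) barely feel it, and intermediate, roughly meter-scale bodies drift inward fastest, which is the behavior summarized in the abstract.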
