Sample records for fraction calculation methodology

  1. Propellant Mass Fraction Calculation Methodology for Launch Vehicles

    NASA Technical Reports Server (NTRS)

    Holt, James B.; Monk, Timothy S.

    2009-01-01

    Propellant Mass Fraction (pmf) calculation methods vary throughout the aerospace industry. While typically used as a means of comparison between competing launch vehicle designs, the actual pmf calculation method varies slightly from one entity to another. It is the purpose of this paper to present various methods used to calculate the pmf of a generic launch vehicle. This includes fundamental methods of pmf calculation which consider only the loaded propellant and the inert mass of the vehicle, more involved methods which consider the residuals and any other unusable propellant remaining in the vehicle, and other calculations which exclude large mass quantities such as the installed engine mass. Finally, a historic comparison is made between launch vehicles on the basis of the differing calculation methodologies.

  2. Propellant Mass Fraction Calculation Methodology for Launch Vehicles and Application to Ares Vehicles

    NASA Technical Reports Server (NTRS)

    Holt, James B.; Monk, Timothy S.

    2009-01-01

    Propellant Mass Fraction (pmf) calculation methods vary throughout the aerospace industry. While typically used as a means of comparison between candidate launch vehicle designs, the actual pmf calculation method varies slightly from one entity to another. It is the purpose of this paper to present various methods used to calculate the pmf of launch vehicles. This includes fundamental methods of pmf calculation that consider only the total propellant mass and the dry mass of the vehicle; more involved methods that consider the residuals, reserves and any other unusable propellant remaining in the vehicle; and calculations excluding large mass quantities such as the installed engine mass. Finally, a historical comparison is made between launch vehicles on the basis of the differing calculation methodologies, while the unique mission and design requirements of the Ares V Earth Departure Stage (EDS) are examined in terms of impact to pmf.
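
    A minimal sketch in Python of the differing conventions discussed above, assuming simple hypothetical stage masses and variable names (none taken from the paper): pmf computed from loaded propellant and inert mass, from usable propellant only, and with the installed engine mass excluded.

      # Minimal sketch of three pmf conventions; masses and names are illustrative only.
      def pmf_basic(loaded_propellant, inert_mass):
          """Loaded propellant over total loaded vehicle mass."""
          return loaded_propellant / (loaded_propellant + inert_mass)

      def pmf_usable(loaded_propellant, residuals, inert_mass):
          """Counts residuals and other unusable propellant against the propellant side."""
          return (loaded_propellant - residuals) / (loaded_propellant + inert_mass)

      def pmf_excluding_engines(loaded_propellant, inert_mass, engine_mass):
          """Removes the installed engine mass from the inert side of the ratio."""
          return loaded_propellant / (loaded_propellant + inert_mass - engine_mass)

      mp, mr, mi, me = 100_000.0, 1_500.0, 12_000.0, 3_000.0  # kg, hypothetical stage
      print(f"basic pmf    : {pmf_basic(mp, mi):.4f}")
      print(f"usable pmf   : {pmf_usable(mp, mr, mi):.4f}")
      print(f"no-engine pmf: {pmf_excluding_engines(mp, mi, me):.4f}")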

  3. A simplified calculation procedure for mass isotopomer distribution analysis (MIDA) based on multiple linear regression.

    PubMed

    Fernández-Fernández, Mario; Rodríguez-González, Pablo; García Alonso, J Ignacio

    2016-10-01

    We have developed a novel, rapid and easy calculation procedure for Mass Isotopomer Distribution Analysis based on multiple linear regression which allows the simultaneous calculation of the precursor pool enrichment and the fraction of newly synthesized labelled proteins (fractional synthesis) using linear algebra. To test this approach, we used the peptide RGGGLK as a model tryptic peptide containing three subunits of glycine. We selected glycine labelled in two 13C atoms (13C2-glycine) as labelled amino acid to demonstrate that spectral overlap is not a problem in the proposed methodology. The developed methodology was tested first in vitro by changing the precursor pool enrichment from 10 to 40% of 13C2-glycine. Secondly, a simulated in vivo synthesis of proteins was designed by combining the natural abundance RGGGLK peptide and 10 or 20% 13C2-glycine at 1:1, 1:3 and 3:1 ratios. Precursor pool enrichments and fractional synthesis values were calculated with satisfactory precision and accuracy using a simple spreadsheet. This novel approach can provide a relatively rapid and easy means to measure protein turnover based on stable isotope tracers. Copyright © 2016 John Wiley & Sons, Ltd.
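
    A hedged sketch of the general linear-algebra idea (not the authors' spreadsheet; the basis patterns and measurement below are made-up normalized abundances): if the measured isotopomer cluster is modelled as a linear combination of an unlabelled pattern and a pattern for an assumed precursor enrichment, the mixing fractions follow from ordinary least squares.

      import numpy as np

      # Illustrative only: made-up normalized isotopomer abundance patterns.
      natural  = np.array([0.90, 0.08, 0.02, 0.00])    # unlabelled peptide pattern
      labelled = np.array([0.10, 0.25, 0.40, 0.25])    # pattern at an assumed precursor enrichment
      measured = 0.7 * natural + 0.3 * labelled        # synthetic "observed" cluster

      A = np.column_stack([natural, labelled])
      fractions, *_ = np.linalg.lstsq(A, measured, rcond=None)
      fractional_synthesis = fractions[1] / fractions.sum()
      print(f"fractional synthesis ~ {fractional_synthesis:.2f}")   # ~ 0.30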

  4. A systematic examination of a random sampling strategy for source apportionment calculations.

    PubMed

    Andersson, August

    2011-12-15

    Estimating the relative contributions from multiple potential sources of a specific component in a mixed environmental matrix is a general challenge in diverse fields such as atmospheric, environmental and earth sciences. Perhaps the most common strategy for tackling such problems is by setting up a system of linear equations for the fractional influence of different sources. Even though an algebraic solution of this approach is possible for the common situation with N+1 sources and N source markers, such methodology introduces a bias, since it is implicitly assumed that the calculated fractions and the corresponding uncertainties are independent of the variability of the source distributions. Here, a random sampling (RS) strategy for accounting for such statistical bias is examined by investigating rationally designed synthetic data sets. This random sampling methodology is found to be robust and accurate with respect to reproducibility and predictability. This method is also compared to a numerical integration solution for a two-source situation where source variability also is included. A general observation from this examination is that the variability of the source profiles not only affects the calculated precision but also the mean/median source contributions. Copyright © 2011 Elsevier B.V. All rights reserved.
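
    The random-sampling idea can be illustrated with a hedged two-source, one-marker sketch (marker values and spreads below are invented): source signatures are drawn from their assumed distributions, the mixing equations are solved for each draw, and the spread of the resulting fractions reflects the source variability discussed above.

      import numpy as np

      rng = np.random.default_rng(0)
      n_draws = 10_000

      # Hypothetical marker signature (e.g. an isotope ratio) for two sources and the mixture.
      m_mix = -27.0
      m1 = rng.normal(-30.0, 1.0, n_draws)   # source 1 signature with assumed variability
      m2 = rng.normal(-24.0, 1.0, n_draws)   # source 2 signature with assumed variability

      # Solve f1*m1 + f2*m2 = m_mix with f1 + f2 = 1 for every draw.
      f1 = (m_mix - m2) / (m1 - m2)
      f1 = f1[(f1 >= 0.0) & (f1 <= 1.0)]     # keep physically meaningful solutions

      print(f"source 1 fraction: mean={f1.mean():.2f}, "
            f"median={np.median(f1):.2f}, std={f1.std():.2f}")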

  5. Uncertainty in the delayed neutron fraction in fuel assembly depletion calculations

    NASA Astrophysics Data System (ADS)

    Aures, Alexander; Bostelmann, Friederike; Kodeli, Ivan A.; Velkov, Kiril; Zwermann, Winfried

    2017-09-01

    This study presents uncertainty and sensitivity analyses of the delayed neutron fraction of light water reactor and sodium-cooled fast reactor fuel assemblies. For these analyses, the sampling-based XSUSA methodology is used to propagate cross section uncertainties in neutron transport and depletion calculations. Cross section data is varied according to the SCALE 6.1 covariance library. Since this library includes nu-bar uncertainties only for the total values, it has been supplemented by delayed nu-bar uncertainties from the covariance data of the JENDL-4.0 nuclear data library. The neutron transport and depletion calculations are performed with the TRITON/NEWT sequence of the SCALE 6.1 package. The evolution of the delayed neutron fraction uncertainty over burn-up is analysed without and with the consideration of delayed nu-bar uncertainties. Moreover, the main contributors to the result uncertainty are determined. In all cases, the delayed nu-bar uncertainties increase the delayed neutron fraction uncertainty. Depending on the fuel composition, the delayed nu-bar values of uranium and plutonium in fact give the main contributions to the delayed neutron fraction uncertainty for the LWR fuel assemblies. For the SFR case, the uncertainty of the scattering cross section of U-238 is the main contributor.

  6. Different approaches to assess the environmental performance of a cow manure biogas plant

    NASA Astrophysics Data System (ADS)

    Torrellas, Marta; Burgos, Laura; Tey, Laura; Noguerol, Joan; Riau, Victor; Palatsi, Jordi; Antón, Assumpció; Flotats, Xavier; Bonmatí, August

    2018-03-01

    In intensive livestock production areas, farmers must apply manure management systems to comply with governmental regulations. Biogas plants, as a source of renewable energy, have the potential to reduce environmental impacts compared with other manure management practices. Nevertheless, manure processing at biogas plants also incurs undesired gas emissions that should be considered. At present, available emission calculation methods only partially cover the emissions produced at a biogas plant, with the subsequent difficulty in the preparation of life cycle inventories. The objective of this study is to characterise gaseous emissions of ammonia (NH3-N), methane (CH4), nitrous oxide (N2O, direct and indirect) and hydrogen sulphide (H2S) from the anaerobic co-digestion of cow manure by using different approaches for preparing gaseous emission inventories, and to compare the different methodologies used. The chosen scenario for the study is a biogas plant located next to a dairy farm in the north of Catalonia, Spain. Emissions were calculated by two methods: field measurements and estimation following international guidelines. Intergovernmental Panel on Climate Change (IPCC) guidelines were adapted to estimate emissions for the specific situation according to Tier 1, Tier 2 and Tier 3 approaches. Total air emissions at the biogas plant were calculated from the emissions produced at the three main manure storage facilities on the plant: influent storage, liquid fraction storage, and solid fraction storage of the digestate. Results showed that most of the emissions were produced in the liquid fraction storage. Comparing measured with estimated emissions, the NH3, CH4, indirect N2O and H2S totals were of the same order of magnitude for both methodologies, whereas the measured total of direct N2O emissions was one order of magnitude higher than the estimate. A Monte Carlo analysis was carried out to examine the uncertainties of emissions determined from experimental data, providing probability distribution functions. Four emission inventories were developed with the different methodologies used. Estimation methods proved to be a useful tool to determine emissions when field sampling is not possible. Nevertheless, it was not possible to establish which methodology is more reliable. Therefore, more measurements at different biogas plants should be evaluated to validate the methodologies more precisely.

  7. Isotope-labelled urea to test colon drug delivery devices in vivo: principles, calculations and interpretations.

    PubMed

    Maurer, Marina J M; Schellekens, Reinout C A; Wutzke, Klaus D; Stellaard, Frans

    2013-01-01

    This paper describes various methodological aspects that were encountered during the development of a system to monitor the in vivo behaviour of a newly developed colon delivery device that enables oral drug treatment of inflammatory bowel diseases. [(13)C]urea was chosen as the marker substance. Release of [(13)C]urea in the ileocolonic region is proven by the exhalation of (13)CO2 in breath due to bacterial fermentation of [(13)C]urea. The (13)CO2 exhalation kinetics allow the calculation of a lag time as a marker for delay of release, a pulse time as a marker for the speed of drug release, and the fraction of the dose that is fermented. To determine the total bioavailability, the fraction of the dose absorbed from the intestine must also be quantified. Initially, this was done by calculating the time-dependent [(13)C]urea appearance in the body urea pool via measurement of the (13)C abundance and concentration of plasma urea. Thereafter, a new methodology was successfully developed to obtain the bioavailability data by measurement of the urinary excretion rate of [(13)C]urea. These techniques required two experimental days: one to test the coated device, and another to test the uncoated device to obtain reference values for the situation in which 100% of [(13)C]urea is absorbed. This is hampered by large day-to-day variations in urea metabolism. Finally, a completely non-invasive, one-day test was worked out based on a dual isotope approach applying simultaneous administration of [(13)C]urea in a coated device and [(15)N2]urea in an uncoated device. All aspects of the isotope-related analytical methodologies and the required calculation and correction systems are described.

  8. Investigating the Energetic Ordering of Stable and Metastable TiO2 Polymorphs Using DFT+U and Hybrid Functionals

    DOE PAGES

    Curnan, Matthew T.; Kitchin, John R.

    2015-08-12

    Prediction of transition metal oxide BO2 (B = Ti, V, etc.) polymorph energetic properties is critical to tunable material design and identifying thermodynamically accessible structures. Determining procedures capable of synthesizing particular polymorphs minimally requires prior knowledge of their relative energetic favorability. Information concerning TiO2 polymorph relative energetic favorability has been ascertained from experimental research. In this study, the consistency of first-principles predictions and experimental results involving the relative energetic ordering of stable (rutile), metastable (anatase and brookite), and unstable (columbite) TiO2 polymorphs is assessed via density functional theory (DFT). Considering the issues involving electron–electron interaction and charge delocalization in TiO2 calculations, relative energetic ordering predictions are evaluated over trends varying Ti Hubbard U3d or exact exchange fraction parameter values. Energetic trends formed from varying U3d predict experimentally consistent energetic ordering over U3d intervals when using GGA-based functionals, regardless of pseudopotential selection. Given pertinent linear response calculated Hubbard U values, these results enable TiO2 polymorph energetic ordering prediction. Here, the hybrid functional calculations involving rutile–anatase relative energetics, though demonstrating experimentally consistent energetic ordering over exact exchange fraction ranges, are not accompanied by predicted fractions, since a first-principles methodology capable of calculating exact exchange fractions that precisely predict TiO2 polymorph energetic ordering is not available.

  9. A Five-Dimensional Mathematical Model for Regional and Global Changes in Cardiac Uptake and Motion

    NASA Astrophysics Data System (ADS)

    Pretorius, P. H.; King, M. A.; Gifford, H. C.

    2004-10-01

    The objective of this work was to simultaneously introduce known regional changes in contraction pattern and perfusion to the existing gated Mathematical Cardiac Torso (MCAT) phantom heart model. We derived a simple integral to calculate the fraction of the ellipsoidal volume that makes up the left ventricle (LV), taking into account the stationary apex and the moving base. After calculating the LV myocardium volume of the existing beating heart model, we employed the property of conservation of mass to manipulate the LV ejection fraction to values ranging between 13.5% and 68.9%. Multiple dynamic heart models that differ in degree of LV wall thickening, base-to-apex motion, and ejection fraction, are thus available for use with the existing MCAT methodology. To introduce more complex regional LV contraction and perfusion patterns, we used composites of dynamic heart models to create a central region with little or no motion or perfusion, surrounded by a region in which the motion and perfusion gradually reverts to normal. To illustrate this methodology, the following gated cardiac acquisitions for different clinical situations were simulated analytically: 1) reduced regional motion and perfusion; 2) same perfusion as in (1) without motion intervention; and 3) washout from the normal and diseased myocardial regions. Both motion and perfusion can change dynamically during a single rotation or multiple rotations of a simulated single-photon emission computed tomography acquisition system.
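
    For orientation, a hedged sketch of the ejection-fraction bookkeeping such a phantom relies on (the cavity volumes below are illustrative, not MCAT values): holding the end-diastolic volume fixed and varying the end-systolic volume sweeps the ejection fraction across a range similar to the one quoted above.

      def ejection_fraction(edv_ml, esv_ml):
          """LV ejection fraction from end-diastolic and end-systolic cavity volumes."""
          return (edv_ml - esv_ml) / edv_ml

      edv = 120.0                      # mL, illustrative end-diastolic volume
      for esv in (104.0, 70.0, 37.0):  # mL, illustrative end-systolic volumes
          print(f"ESV = {esv:5.1f} mL -> EF = {100 * ejection_fraction(edv, esv):.1f}%")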

  10. Design principles for radiation-resistant solid solutions

    NASA Astrophysics Data System (ADS)

    Schuler, Thomas; Trinkle, Dallas R.; Bellon, Pascal; Averback, Robert

    2017-05-01

    We develop a multiscale approach to quantify the increase in the recombined fraction of point defects under irradiation resulting from dilute solute additions to a solid solution. This methodology provides design principles for radiation-resistant materials. Using an existing database of solute diffusivities, we identify Sb as one of the most efficient solutes for this purpose in a Cu matrix. We perform density-functional-theory calculations to obtain binding and migration energies of Sb atoms, vacancies, and self-interstitial atoms in various configurations. The computed data inform the self-consistent mean-field formalism used to calculate transport coefficients, allowing us to make quantitative predictions of the recombined fraction of point defects as a function of temperature and irradiation rate using homogeneous rate equations. We identify two different mechanisms by which solutes lead to an increase in the recombined fraction of point defects: at low temperature, solutes slow down vacancies (kinetic effect), while at high temperature, solutes stabilize vacancies in the solid solution (thermodynamic effect). Extensions to other metallic matrices and solutes are discussed.

  11. Molecular simulation of aqueous electrolyte solubility. 2. Osmotic ensemble Monte Carlo methodology for free energy and solubility calculations and application to NaCl.

    PubMed

    Moučka, Filip; Lísal, Martin; Škvor, Jiří; Jirsák, Jan; Nezbeda, Ivo; Smith, William R

    2011-06-23

    We present a new and computationally efficient methodology using osmotic ensemble Monte Carlo (OEMC) simulation to calculate chemical potential-concentration curves and the solubility of aqueous electrolytes. The method avoids calculations for the solid phase, incorporating readily available data from thermochemical tables that are based on well-defined reference states. It performs simulations of the aqueous solution at a fixed number of water molecules, pressure, temperature, and specified overall electrolyte chemical potential. Insertion/deletion of ions to/from the system is implemented using fractional ions, which are coupled to the system via a coupling parameter λ that varies between 0 (no interaction between the fractional ions and the other particles in the system) and 1 (full interaction between the fractional ions and the other particles of the system). Transitions between λ-states are accepted with a probability following from the osmotic ensemble partition function. Biasing weights associated with the λ-states are used in order to efficiently realize transitions between them; these are determined by means of the Wang-Landau method. We also propose a novel scaling procedure for λ, which can be used for both nonpolarizable and polarizable models of aqueous electrolyte systems. The approach is readily extended to involve other solvents, multiple electrolytes, and species complexation reactions. The method is illustrated for NaCl, using SPC/E water and several force field models for NaCl from the literature, and the results are compared with experiment at ambient conditions. Good agreement is obtained for the chemical potential-concentration curve and the solubility prediction is reasonable. Future improvements to the predictions will require improved force field models.

  12. Risk ranking of LANL nuclear material storage containers for repackaging prioritization.

    PubMed

    Smith, Paul H; Jordan, Hans; Hoffman, Jenifer A; Eller, P Gary; Balkey, Simon

    2007-05-01

    Safe handling and storage of nuclear material at U.S. Department of Energy facilities relies on the use of robust containers to prevent container breaches and subsequent worker contamination and uptake. The U.S. Department of Energy has no uniform requirements for packaging and storage of nuclear materials other than those declared excess and packaged to DOE-STD-3013-2000. This report describes a methodology for prioritizing a large inventory of nuclear material containers so that the highest risk containers are repackaged first. The methodology utilizes expert judgment to assign respirable fractions and reactivity factors to accountable levels of nuclear material at Los Alamos National Laboratory. A relative risk factor is assigned to each nuclear material container based on a calculated dose to a worker due to a failed container barrier and a calculated probability of container failure based on material reactivity and container age. This risk-based methodology is being applied at LANL to repackage the highest risk materials first and, thus, accelerate the reduction of risk to nuclear material handlers.
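
    A hedged sketch of the ranking logic described above (all container data and factor scales below are hypothetical, not LANL values): each container's relative risk factor is the product of a worker-dose term, driven by material quantity and respirable fraction, and a failure-probability term driven by reactivity and container age.

      # Illustrative relative-risk ranking; every value here is hypothetical.
      containers = [
          {"id": "A-001", "grams": 250.0, "respirable_fraction": 0.010,
           "reactivity": 0.8, "age_years": 25},
          {"id": "B-114", "grams": 500.0, "respirable_fraction": 0.001,
           "reactivity": 0.2, "age_years": 10},
      ]

      def relative_risk(c):
          dose_term = c["grams"] * c["respirable_fraction"]          # proxy for worker dose
          failure_term = c["reactivity"] * (c["age_years"] / 50.0)   # proxy for breach probability
          return dose_term * failure_term

      for c in sorted(containers, key=relative_risk, reverse=True):  # repackage highest risk first
          print(f'{c["id"]}: relative risk factor = {relative_risk(c):.3f}')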

  13. Biological effects and equivalent doses in radiotherapy: A software solution

    PubMed Central

    Voyant, Cyril; Julian, Daniel; Roustit, Rudy; Biffi, Katia; Lantieri, Céline

    2013-01-01

    Background The limits of the TDF (time, dose, and fractionation) and linear quadratic models have been known for a long time. Medical physicists and physicians are required to provide fast and reliable interpretations regarding delivered doses or any future prescriptions relating to treatment changes. Aim We therefore propose a calculation interface under the GNU license to be used for equivalent doses, biological doses, and normal tissue complication probability (Lyman model). Materials and methods The methodology used draws from several sources: the linear-quadratic-linear model of Astrahan, the repopulation effects of Dale, and the prediction of multi-fractionated treatments of Thames. Results and conclusions The results are obtained from an algorithm that minimizes an ad hoc cost function, and are then compared to equivalent doses computed using standard calculators in seven French radiotherapy centers. PMID:24936319
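
    For readers unfamiliar with the underlying dose conversion, a hedged sketch of the plain linear-quadratic equivalence (without the linear-quadratic-linear, repopulation or multi-fractionation refinements cited above; the fractionation scheme and alpha/beta value are examples only):

      def bed(n_fractions, dose_per_fraction, alpha_beta):
          """Biologically effective dose from the basic linear-quadratic model."""
          return n_fractions * dose_per_fraction * (1.0 + dose_per_fraction / alpha_beta)

      def eqd2(n_fractions, dose_per_fraction, alpha_beta):
          """Equivalent total dose delivered in 2-Gy fractions."""
          return bed(n_fractions, dose_per_fraction, alpha_beta) / (1.0 + 2.0 / alpha_beta)

      # Example: 20 fractions of 2.75 Gy with an assumed alpha/beta of 10 Gy.
      print(f"BED  = {bed(20, 2.75, 10.0):.1f} Gy")
      print(f"EQD2 = {eqd2(20, 2.75, 10.0):.1f} Gy")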

  14. Validation and evaluation of an HPLC methodology for the quantification of the potent antimitotic compound (+)-discodermolide in the Caribbean marine sponge Discodermia dissoluta.

    PubMed

    Valderrama, Katherine; Castellanos, Leonardo; Zea, Sven

    2010-08-01

    The sponge Discodermia dissoluta is the source of the potent antimitotic compound (+)-discodermolide. The relatively abundant and shallow populations of this sponge in Santa Marta, Colombia, allow for studies to evaluate the natural and biotechnological supply options for (+)-discodermolide. In this work, an RP-HPLC-UV methodology for the quantification of (+)-discodermolide from sponge samples was tested and validated. Our protocol for extracting this compound from the sponge included lyophilization, exhaustive methanol extraction, partitioning using water and dichloromethane, purification of the organic fraction on RP-18 cartridges and finally recovering the (+)-discodermolide in the methanol-water (80:20 v/v) fraction. This fraction was injected into an HPLC system with an Xterra RP-18 column and a detection wavelength of 235 nm. The calibration curve was linear, making it possible to calculate the limits of detection and quantification for these experiments. The intra-day and inter-day precision showed relative standard deviations lower than 5%. The accuracy, determined as the percentage recovery, was 99.4%. Nine samples of the sponge from the Bahamas, Bonaire, Curaçao and Santa Marta had concentrations of (+)-discodermolide ranging from 5.3 to 29.3 µg/g of wet sponge. This methodology is quick and simple, allowing for quantification in sponges from natural environments, in situ cultures or dissociated cells.

  15. In vitro versus in vivo protein digestibility techniques for calculating PDCAAS (protein digestibility-corrected amino acid score) applied to chickpea fractions.

    PubMed

    Tavano, Olga Luisa; Neves, Valdir Augusto; da Silva Júnior, Sinézio Inácio

    2016-11-01

    Seven different in vitro methods for determining the protein digestibility of chickpea proteins were considered, together with the application of these methodologies for calculating PDCAAS (protein digestibility-corrected amino acid score), seeking their correlations with the in vivo methodology. In vitro digestibility of raw and heated samples was determined using pepsin-pancreatin hydrolysis, considering soluble nitrogen via Kjeldahl (ppKJ) and hydrolysed peptide linkages using trinitrobenzenesulfonic acid and o-phthaldialdehyde. In vitro digestibility was also determined using trypsin, chymotrypsin and peptidase (3-Enz) or trypsin, chymotrypsin, peptidase and pronase solution (4-Enz). None of the correlations between in vitro and in vivo digestibilities were significant (at p<0.0500), but strong correlations were observed between PDCAAS values calculated from in vitro and in vivo results. PDCAAS-ppKJ, PDCAAS-3-Enz and PDCAAS-4-Enz presented the highest correlations with the in vivo method, r=0.9316, 0.9442 and 0.9649 (p<0.0500), respectively. The use of in vitro methods for calculating PDCAAS may be promising and deserves further discussion. Copyright © 2016 Elsevier Ltd. All rights reserved.
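
    As a hedged reminder of the score itself (reference pattern, amino-acid contents and digestibility below are placeholders, not the chickpea data): PDCAAS is the limiting amino-acid score, i.e. the lowest ratio of the protein's essential amino-acid content to the reference pattern, multiplied by the protein digestibility and truncated at 1.0.

      # Illustrative PDCAAS calculation; all values are placeholders.
      reference_pattern = {"lysine": 58, "threonine": 34, "tryptophan": 11}   # mg/g protein
      test_protein      = {"lysine": 70, "threonine": 30, "tryptophan": 12}   # mg/g protein

      amino_acid_score = min(test_protein[aa] / reference_pattern[aa] for aa in reference_pattern)
      digestibility = 0.85    # e.g. an in vitro or in vivo estimate
      pdcaas = min(1.0, amino_acid_score * digestibility)
      print(f"limiting amino-acid score = {amino_acid_score:.2f}, PDCAAS = {pdcaas:.2f}")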

  16. Modeling and characterization of as-welded microstructure of solid solution strengthened Ni-Cr-Fe alloys resistant to ductility-dip cracking part I: Numerical modeling

    NASA Astrophysics Data System (ADS)

    Unfried-Silgado, Jimy; Ramirez, Antonio J.

    2014-03-01

    This work addresses the numerical modeling and characterization of the as-welded microstructure of Ni-Cr-Fe alloys with additions of Nb, Mo and Hf as a key to understanding their proven resistance to ductility-dip cracking (DDC). Part I deals with the modeling of the as-welded structure, using experimental alloying ranges and the Calphad methodology. The model calculates phase transformation kinetics and the partitioning of elements during weld solidification using a cooling rate of 100 K/s, considering their consequences for the solidification mode of each alloy. Calculated structures were compared with experimental observations of as-welded structures, exhibiting good agreement. The numerical calculations estimate a threefold increase in the mass fraction of primary carbide precipitates, a substantial reduction in the mass fraction of M23C6 precipitates and topologically close-packed (TCP) phases, a homogeneous intradendritic distribution, and a slight increase in the interdendritic molybdenum distribution in these alloys. The influence of the metallurgical characteristics of the modeled as-welded structures on the desirable characteristics of Ni-based alloys resistant to DDC is discussed here.

  17. Quantitative determination of the clustered silicon concentration in substoichiometric silicon oxide layer

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Spinella, Corrado; Bongiorno, Corrado; Nicotra, Giuseppe

    2005-07-25

    We present an analytical methodology, based on electron energy loss spectroscopy (EELS) and energy-filtered transmission electron microscopy, which allows us to quantify the clustered silicon concentration in annealed substoichiometric silicon oxide layers, deposited by plasma-enhanced chemical vapor deposition. The clustered Si volume fraction was deduced from a fit to the experimental EELS spectrum using a theoretical description proposed to calculate the dielectric function of a system of spherical particles of equal radii, located at random in a host material. The methodology allowed us to demonstrate that the clustered Si concentration is only one half of the excess Si concentration dissolved in the layer.

  18. A model for inventory of ammonia emissions from agriculture in the Netherlands

    NASA Astrophysics Data System (ADS)

    Velthof, G. L.; van Bruggen, C.; Groenestein, C. M.; de Haan, B. J.; Hoogeveen, M. W.; Huijsmans, J. F. M.

    2012-01-01

    Agriculture is the major source of ammonia (NH3). Methodologies are needed to quantify national NH3 emissions and to identify the most effective options to mitigate them. Generally, NH3 emissions from agriculture are quantified using a nitrogen (N) flow approach, in which the NH3 emission is calculated from the N flows and NH3 emission factors. Because of the direct dependency between NH3 volatilization and Total Ammoniacal N (TAN; ammonium-N plus N compounds readily broken down to ammonium), an approach based on TAN is preferred for calculating NH3 emissions over an approach based on total N. A TAN-based NH3 inventory model, called NEMA (National Emission Model for Ammonia), was developed. The total N excretion and the fraction of TAN in the excreted N are calculated from the feed composition and the N digestibility of its components. TAN-based emission factors were derived or updated for housing systems, manure storage outside housing, manure application techniques, N fertilizer types, and grazing. The NEMA results show that the total NH3 emission from agriculture in the Netherlands in 2009 was 88.8 Gg NH3-N, of which 50% came from housing, 37% from manure application, 9% from mineral N fertilizer, 3% from outside manure storage, and 1% from grazing. Cattle farming was the dominant source of NH3 in the Netherlands (about 50% of the total NH3 emission). The NH3 emission expressed as a percentage of the excreted N was 22% for poultry, 20% for pigs, 15% for cattle, and 12% for other livestock, which is mainly related to differences in emissions from housing systems. The calculated ammonia emission was most sensitive to changes in the fraction of TAN in the excreted manure and to the emission factor for manure application. From 2011, NEMA will be used as the official methodology to calculate the national NH3 emission from agriculture in the Netherlands.
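
    A hedged sketch of the TAN-flow bookkeeping behind such an inventory (stage emission factors and the excreted TAN below are placeholders, not NEMA values): each stage emits a fraction of the TAN entering it as NH3-N, and the remainder is passed downstream.

      # Illustrative TAN-flow emission inventory; all numbers are placeholders.
      tan_excreted = 100.0            # Gg TAN excreted nationally (hypothetical)
      emission_factors = {            # fraction of incoming TAN volatilized as NH3-N per stage
          "housing": 0.20,
          "storage": 0.05,
          "application": 0.30,
      }

      emissions, tan_remaining = {}, tan_excreted
      for stage, ef in emission_factors.items():
          emissions[stage] = ef * tan_remaining
          tan_remaining -= emissions[stage]

      total = sum(emissions.values())
      for stage, e in emissions.items():
          print(f"{stage:12s}: {e:5.1f} Gg NH3-N ({100 * e / total:.0f}% of total)")
      print(f"{'total':12s}: {total:5.1f} Gg NH3-N")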

  19. SOM quality and phosphorus fractionation to evaluate degradation organic matter: implications for the restoration of soils after fire

    NASA Astrophysics Data System (ADS)

    Merino, Agustin; Fonturbel, Maria T.; Omil, Beatriz; Chávez-Vergara, Bruno; Fernandez, Cristina; Garcia-Oliva, Felipe; Vega, Jose A.

    2016-04-01

    The design of emergency treatments for the rehabilitation of fire-affected soils requires a quick diagnosis to assess the degree of degradation. Because of its implications for erosion and subsequent evolution, the quality of soil organic matter (OM) plays a particularly important role. This paper presents a methodology that combines the visual recognition of the severity of soil burning with the use of simple analytical techniques to assess the degree of degradation of OM. The content and quality of the OM were evaluated in litter and mineral soils using thermogravimetry-differential scanning calorimetry (DSC-TG), and the results were contrasted with 13C CP-MAS NMR. Two types of methodologies were tested for the thermal analysis: a) direct calculation of the Q areas related to three degrees of thermal stability: Q1 (200-375 °C; labile OM), Q2 (375-475 °C; recalcitrant OM) and Q3 (475-550 °C); b) deconvolution of the DSC curves, with each peak expressed as a fraction of the total DSC curve area. Additionally, a P fractionation was performed following the Hedley sequential extraction method. The severity levels visually showed different degrees of SOM degradation. Although the fire caused important SOM losses at moderate severities, changes in the quality of OM only occurred at higher severities. In addition, the labile organic P fraction decreased and the occluded inorganic P fraction increased in the high-severity soils. These changes affect OM-related processes such as hydrophobicity and erosion, which are largely responsible for post-fire soil degradation. The strong correlations between the thermal parameters, the NMR regions and derived measurements such as hydrophobicity and aromaticity show the usefulness of this technique as a rapid diagnostic to assess soil degradation. The marked loss of polysaccharides and the transition to highly thermally resistant compounds, visible in the deconvoluted thermograms, would explain the changes in microbial activity and soil nutrient availability (basal respiration, microbial biomass, qCO2 and enzymatic activity). It would also have implications for hydrophobicity and the stability of soil aggregates, leading to the extreme erosion rates that are usually found in soils affected by higher severities.

  20. Calculation of the fractional interstitial component of boron diffusion and segregation coefficient of boron in Si0.8Ge0.2

    NASA Astrophysics Data System (ADS)

    Fang, Tilden T.; Fang, Wingra T. C.; Griffin, Peter B.; Plummer, James D.

    1996-02-01

    Investigation of boron diffusion in strained silicon germanium buried layers reveals a fractional interstitial component of boron diffusion (fBI) in Si0.8Ge0.2 approximately equal to the fBI value in silicon. In conjunction with computer-simulated boron profiles, the results yield an absolute lower bound on fBI in Si0.8Ge0.2 of ˜0.8. In addition, the experimental methodology provides a unique vehicle for measuring the segregation coefficient; oxidation-enhanced diffusion is used instead of an extended, inert anneal to rapidly diffuse the dopant to equilibrium levels across the interface, allowing the segregation coefficient to be measured more quickly.

  1. Determination of fetal DNA fraction from the plasma of pregnant women using sequence read counts.

    PubMed

    Kim, Sung K; Hannum, Gregory; Geis, Jennifer; Tynan, John; Hogg, Grant; Zhao, Chen; Jensen, Taylor J; Mazloom, Amin R; Oeth, Paul; Ehrich, Mathias; van den Boom, Dirk; Deciu, Cosmin

    2015-08-01

    This study introduces a novel method, referred to as SeqFF, for estimating the fetal DNA fraction in the plasma of pregnant women and to infer the underlying mechanism that allows for such statistical modeling. Autosomal regional read counts from whole-genome massively parallel single-end sequencing of circulating cell-free DNA (ccfDNA) from the plasma of 25 312 pregnant women were used to train a multivariate model. The pretrained model was then applied to 505 pregnant samples to assess the performance of SeqFF against known methodologies for fetal DNA fraction calculations. Pearson's correlation between chromosome Y and SeqFF for pregnancies with male fetuses from two independent cohorts ranged from 0.932 to 0.938. Comparison between a single-nucleotide polymorphism-based approach and SeqFF yielded a Pearson's correlation of 0.921. Paired-end sequencing suggests that shorter ccfDNA, that is, less than 150 bp in length, is nonuniformly distributed across the genome. Regions exhibiting an increased proportion of short ccfDNA, which are more likely of fetal origin, tend to provide more information in the SeqFF calculations. SeqFF is a robust and direct method to determine fetal DNA fraction. Furthermore, the method is applicable to both male and female pregnancies and can greatly improve the accuracy of noninvasive prenatal testing for fetal copy number variation. © 2015 John Wiley & Sons, Ltd.

  2. SU-G-JeP3-01: A Method to Quantify Lung SBRT Target Localization Accuracy Based On Digitally Reconstructed Fluoroscopy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lafata, K; Ren, L; Cai, J

    2016-06-15

    Purpose: To develop a methodology based on digitally-reconstructed-fluoroscopy (DRF) to quantitatively assess target localization accuracy of lung SBRT, and to evaluate it using both a dynamic digital phantom and a patient dataset. Methods: For each treatment field, a 10-phase DRF is generated based on the planning 4DCT. Each frame is pre-processed with a morphological top-hat filter, and corresponding beam apertures are projected to each detector plane. A template-matching algorithm based on cross-correlation is used to detect the tumor location in each frame. Tumor motion relative to the beam aperture is extracted in the superior-inferior direction based on each frame's impulse response to the template, and the mean tumor position (MTP) is calculated as the average tumor displacement. The DRF template coordinates are then transferred to the corresponding MV-cine dataset, which is retrospectively filtered as above. The treatment MTP is calculated within each field's projection space, relative to the DRF-defined template. The field's localization error is defined as the difference between the DRF-derived MTP (planning) and the MV-cine-derived MTP (delivery). A dynamic digital phantom was used to assess the algorithm's ability to detect intra-fractional changes in patient alignment, by simulating different spatial variations in the MV-cine and calculating the corresponding change in MTP. Inter- and intra-fractional variation, IGRT accuracy, and filtering effects were investigated on a patient dataset. Results: Phantom results demonstrated high accuracy in detecting both translational and rotational variation. The lowest localization error for the patient dataset was achieved at each fraction's first field (mean=0.38mm), with Fx3 demonstrating a particularly strong correlation between intra-fractional motion-caused localization error and treatment progress. Filtering significantly improved tracking visibility in both the DRF and MV-cine images. Conclusion: We have developed and evaluated a methodology to quantify lung SBRT target localization accuracy based on digitally-reconstructed-fluoroscopy. Our approach may be useful in potentially reducing treatment margins to optimize lung SBRT outcomes. R01-184173.

  3. Size-Dependency of the Surface Ligand Density of Liposomes Prepared by Post-insertion.

    PubMed

    Lee, Shang-Hsuan; Sato, Yusuke; Hyodo, Mamoru; Harashima, Hideyoshi

    2017-01-01

    In the active targeting of a drug delivery system (DDS), the density of the ligand on the functionalized liposome determines its affinity for binding to the target. To evaluate these densities on the surface of different sized liposomes, 4 liposomes with various diameters (188, 137, 70, 40 nm) were prepared and their surfaces were modified with fluorescently labeled ligand-lipid conjugates by the post-insertion method. Each liposomal mixture was fractionated into a series of fractions using size exclusion chromatography (SEC), and the resulting liposome fractions were precisely analyzed and the surface ligand densities calculated. The data collected using this methodology indicate that the density of the ligand on a particle is greatly dependent on the size of the liposome. This, in turn, indicates that smaller liposomes (75-40 nm) tend to possess higher densities. For developing active targeting systems, size and the density of the ligands are two important and independent factors that can affect the efficiency of a system as it relates to medical use.

  4. Fission Product Appearance Rate Coefficients in Design Basis Source Term Determinations - Past and Present

    NASA Astrophysics Data System (ADS)

    Perez, Pedro B.; Hamawi, John N.

    2017-09-01

    Nuclear power plant radiation protection design features are based on radionuclide source terms derived from conservative assumptions that envelope expected operating experience. Two parameters that significantly affect the radionuclide concentrations in the source term are the failed fuel fraction and the effective fission product appearance rate coefficients. The failed fuel fraction may be a regulatory-based assumption, such as in the U.S. Appearance rate coefficients are not specified in regulatory requirements, but have been referenced to experimental data that is over 50 years old. No doubt the source terms are conservative, as demonstrated by operating experience that has included failed fuel, but they may be too conservative, leading, for example, to over-designed shielding for normal operations. Design basis source term methodologies for normal operations had not advanced until EPRI published an updated ANSI/ANS 18.1 source term basis document in 2015. Our paper revisits the fission product appearance rate coefficients as applied in the derivation of source terms following the original U.S. NRC NUREG-0017 methodology. New coefficients have been calculated based on recent EPRI results, which demonstrate the conservatism in nuclear power plant shielding design.

  5. SCALE Continuous-Energy Eigenvalue Sensitivity Coefficient Calculations

    DOE PAGES

    Perfetti, Christopher M.; Rearden, Bradley T.; Martin, William R.

    2016-02-25

    Sensitivity coefficients describe the fractional change in a system response that is induced by changes to system parameters and nuclear data. The Tools for Sensitivity and UNcertainty Analysis Methodology Implementation (TSUNAMI) code within the SCALE code system makes use of eigenvalue sensitivity coefficients for an extensive number of criticality safety applications, including quantifying the data-induced uncertainty in the eigenvalue of critical systems, assessing the neutronic similarity between different critical systems, and guiding nuclear data adjustment studies. The need to model geometrically complex systems with improved fidelity and the desire to extend TSUNAMI analysis to advanced applications has motivated the development of a methodology for calculating sensitivity coefficients in continuous-energy (CE) Monte Carlo applications. The Contributon-Linked eigenvalue sensitivity/Uncertainty estimation via Tracklength importance CHaracterization (CLUTCH) and Iterated Fission Probability (IFP) eigenvalue sensitivity methods were recently implemented in the CE-KENO framework of the SCALE code system to enable TSUNAMI-3D to perform eigenvalue sensitivity calculations using continuous-energy Monte Carlo methods. This work provides a detailed description of the theory behind the CLUTCH method and describes in detail its implementation. This work explores the improvements in eigenvalue sensitivity coefficient accuracy that can be gained through the use of continuous-energy sensitivity methods and also compares several sensitivity methods in terms of computational efficiency and memory requirements.
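
    For context, a hedged finite-difference illustration of what a sensitivity coefficient expresses, the fractional change in a response per fractional change in a parameter (the response function below is a toy stand-in, not the CLUTCH or IFP estimators):

      # Toy sensitivity coefficient S = (dR/R) / (dp/p); the response function is invented.
      def response(cross_section):
          return 1.0 / (1.0 + 0.5 * cross_section)   # hypothetical eigenvalue-like response

      p = 2.0
      dp = 1.0e-4 * p
      S = ((response(p + dp) - response(p)) / response(p)) / (dp / p)
      print(f"sensitivity coefficient ~ {S:.3f}")     # analytic value is -0.5 here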

  6. SU-F-BRD-04: Robustness Analysis of Proton Breast Treatments Using An Alpha-Stable Distribution Parameterization

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Van den Heuvel, F; Hackett, S; Fiorini, F

    Purpose: Currently, planning systems allow robustness calculations to be performed, but a generalized assessment methodology is not yet available. We introduce and evaluate a methodology to quantify the robustness of a plan on an individual patient basis. Methods: We introduce the notion of characterizing a treatment instance (i.e. one single fraction delivery) by describing the dose distribution within an organ as an alpha-stable distribution. The parameters of the distribution (shape (α), scale (γ), position (δ), and symmetry (β)) will vary continuously (in a mathematical sense) as the distributions change with the different positions. The rate of change of the parameters provides a measure of the robustness of the treatment. The methodology is tested in a planning study of 25 patients with known residual errors at each fraction. Each patient was planned using Eclipse with an IBA proton beam model. The residual error space for every patient was sampled 30 times, yielding 31 treatment plans for each patient and dose distributions in 5 organs. The rate of change of the parameters as a function of the Euclidean distance from the original plan was analyzed. Results: More than 1,000 dose distributions were analyzed. For 4 of the 25 patients the rate of change of the scale parameter (γ) was considerably higher than the lowest change rate, indicating a lack of robustness. The sign of the rate of change of the shape parameter (α) also seemed indicative, but the experiment lacked the power to prove significance. Conclusion: There are indications that this robustness measure is a valuable tool to allow a more patient-individualized approach to the determination of margins. In a further study we will also evaluate this robustness measure using photon treatments, and evaluate the impact of using breath-hold techniques and of a Monte Carlo based dose deposition calculation. A principal component analysis is also planned.

  7. Relationship between the water-exchangeable fraction of PAH and the organic matter composition of sediments.

    PubMed

    Belles, Angel; Alary, Claire; Mamindy-Pajany, Yannick; Abriak, Nor-Edine

    2016-12-01

    The sorption of PAH on 12 different sediments was investigated and correlated with their corresponding organic matter (OM) content and quality. For this purpose, the OM was precisely characterized using thermal analysis consisting of the successive combustion and quantification of increasingly thermostable fractions of the OM. Simultaneously, the water-exchangeable fraction of the sorbed PAH, defined as the amount of PAH freely exchanged between the water and the sediment (as opposed to the PAH strongly sorbed to the sediment particles), was determined using a recently developed passive sampler methodology. The water concentrations when the sediment-water system is equilibrated were also assessed, which allows the determination of the sediment-water distribution coefficients without artifacts introduced by the non-water-exchangeable fraction of PAH. Hence, the present study provides the distribution coefficients of PAH between the water and 4 different OM fractions combusted at specific temperature ranges. The calculated distribution coefficients demonstrate that the sedimentary OM combusted in the intermediate temperature range (between 300 °C and 450 °C) drives the reversible sorption of PAH, while the inferred sorption to the OM combusted at lower and higher temperature ranges does not dominate the partitioning process. Copyright © 2016 Elsevier Ltd. All rights reserved.

  8. A methodology for the assessment of inhalation exposure to aluminium from antiperspirant sprays.

    PubMed

    Schwarz, Katharina; Pappa, Gerlinde; Miertsch, Heike; Scheel, Julia; Koch, Wolfgang

    2018-04-01

    Inhalation exposure can occur accidentally when using cosmetic spray products. Usually, a tiered approach is applied for exposure assessment, starting with rather conservative, simplistic calculation models that may be improved with measured data and more refined modelling. Here we report on an advanced methodology to mimic in-use conditions for antiperspirant spray products in order to provide a more accurate estimate of the amount of aluminium possibly inhaled and taken up systemically, thus contributing to the overall body burden. Four typical products were sprayed onto a skin surrogate in defined rooms. For aluminium, size-related aerosol release fractions, i.e. inhalable, thoracic and respirable, were determined by a mass balance method taking droplet maturation into account. These data were included in a simple two-box exposure model, allowing calculation of the inhaled aluminium dose over 12 min. Systemic exposure doses were calculated for exposure of the deep lung and the upper respiratory tract using the Multiple-Path Particle Dosimetry (MPPD) model. The total systemically available dose of aluminium was in all cases found to be less than 0.5 µg per application. With this study it could be demonstrated that refining the input data of the two-box exposure model with measured data on released airborne aluminium is a valuable approach for analysing the contribution of antiperspirant spray inhalation to total aluminium exposure as part of the overall risk assessment. We suggest that the methodology can also be applied to other exposure modelling approaches for spray products and adapted to other similar use scenarios.
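
    A hedged sketch of a generic two-box inhalation estimate (every parameter below is hypothetical and not a measured release fraction from the study): airborne mass released into a near-field box exchanges with a ventilated far-field box, and the inhaled dose is the breathing rate times the time-integrated near-field concentration.

      # Generic two-box inhalation model; all parameters are hypothetical.
      near_volume = 1.0                           # m^3, breathing-zone box
      far_volume = 19.0                           # m^3, rest of the room
      exchange = 0.5 / 60.0                       # m^3/s exchanged between the two boxes
      ventilation = 2.0 / 3600.0 * far_volume     # m^3/s, assumed 2 air changes per hour
      breathing = 1.3 / 3600.0                    # m^3/s, light-activity breathing rate
      released = 1.0e-4                           # g of airborne aluminium released at t = 0

      c_near, c_far, inhaled, dt = released / near_volume, 0.0, 0.0, 0.1
      for _ in range(int(12 * 60 / dt)):          # 12 minutes of exposure
          flow = exchange * (c_near - c_far)      # g/s transferred from near to far field
          c_near += -flow / near_volume * dt
          c_far += (flow - ventilation * c_far) / far_volume * dt
          inhaled += breathing * c_near * dt

      print(f"inhaled aluminium over 12 min ~ {inhaled * 1e6:.1f} micrograms")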

  9. Phase equilibria modeling in igneous petrology: use of COMAGMAT model for simulating fractionation of ferro-basaltic magmas and the genesis of high-alumina basalt

    NASA Astrophysics Data System (ADS)

    Ariskin, Alexei A.

    1999-05-01

    A new version of the COMAGMAT-3.5 model designed for computer simulations of equilibrium and fractional crystallization of basaltic magmas at low to high pressures is presented. The most important modifications of COMAGMAT include the ability to calculate more accurately the crystallization of magnetite and ilmenite, allowing the user to study numerically the effect of oxygen fugacity on basalt magma fractionation trends. Methodological principles of the use of COMAGMAT are discussed based on its thermodynamic and empirical basis, including specific details of the model calibration. Using COMAGMAT-3.5, a set of phase equilibria calculations (called Geochemical Thermometry) has been conducted for six cumulative rocks from the Marginal Border Series of the Skaergaard intrusion. As a result, an initial magma temperature (1165±10°C) and a trapped melt composition proposed to be parental magma to the Skaergaard intrusion were determined. Computer simulations of perfect fractionation of this composition, as well as of another proposed parent, produced petrochemical trends opposite to those that follow from natural observations. This is interpreted as evidence for an initial Skaergaard magma containing a large amount of olivine and plagioclase crystals (about 40-45%), so that the proposed and calculated parents are related through the melt trapped in the crystal-liquid mixture. This supports the conclusion that the Skaergaard magma fractionation process was intermediate between equilibrium and fractional crystallization. In this case the classic Wager trend should be considered an exception rather than a rule for the differentiation of ferro-basaltic magmas. A polybaric version of COMAGMAT has been applied to the genetic interpretation of a volcanic suite from the Klyuchevskoi volcano, Kamchatka, Russia. To identify the petrological processes responsible for the observed suite, ranging from high-magnesia to high-alumina basalts, we used the model to simulate the Klyuchevskoi suite assuming isobaric crystallization of a parental HMB magma at a variety of pressures, and a separate set of simulations assuming fractionation during continuous magma ascent from a depth of 60 km. These results indicate that the Klyuchevskoi trend can be produced by ˜40% fractionation of Ol-Aug-Sp±Opx assemblages during ascent of the parental HMB magma over the pressure range 19-7 kbar with a decompression rate of 0.33 kbar/% crystallized (at 1350-1110°C), with ˜2 wt.% H2O in the initial melt and ˜3 wt.% H2O in the resultant high-Al basalt.

  10. Fractional Order Modeling of Atmospheric Turbulence - A More Accurate Modeling Methodology for Aero Vehicles

    NASA Technical Reports Server (NTRS)

    Kopasakis, George

    2014-01-01

    The presentation covers a recently developed methodology to model atmospheric turbulence as disturbances for aero vehicle gust loads and for controls development, such as flutter and inlet shock position. The approach models atmospheric turbulence in its natural fractional order form, which provides more accuracy compared to traditional methods like the Dryden model, especially for high-speed vehicles. The presentation provides a historical background on atmospheric turbulence modeling and the approaches utilized for air vehicles. This is followed by the motivation and the methodology utilized to develop the atmospheric turbulence fractional order modeling approach. Some examples covering the application of this method are also provided, followed by concluding remarks.

  11. H2-norm for mesh optimization with application to electro-thermal modeling of an electric wire in automotive context

    NASA Astrophysics Data System (ADS)

    Chevrié, Mathieu; Farges, Christophe; Sabatier, Jocelyn; Guillemard, Franck; Pradere, Laetitia

    2017-04-01

    In the automotive field, reducing electric conductor dimensions is important for decreasing the embedded mass and the manufacturing costs. It is thus essential to develop tools to optimize the wire diameter according to thermal constraints, and protection algorithms to maintain a high level of safety. In order to develop such tools and algorithms, accurate electro-thermal models of electric wires are required. However, thermal equation solutions lead to implicit fractional transfer functions involving an exponential that cannot be embedded in an automotive on-board computer. This paper thus proposes an integer order transfer function approximation methodology based on a spatial discretization for this class of fractional transfer functions. Moreover, the H2-norm is used to minimize the approximation error. The accuracy of the proposed approach is confirmed with data measured on a 1.5 mm2 wire implemented in a dedicated test bench.

  12. Quantifying uncertainty in soot volume fraction estimates using Bayesian inference of auto-correlated laser-induced incandescence measurements

    NASA Astrophysics Data System (ADS)

    Hadwin, Paul J.; Sipkens, T. A.; Thomson, K. A.; Liu, F.; Daun, K. J.

    2016-01-01

    Auto-correlated laser-induced incandescence (AC-LII) infers the soot volume fraction (SVF) of soot particles by comparing the spectral incandescence from laser-energized particles to the pyrometrically inferred peak soot temperature. This calculation requires detailed knowledge of model parameters such as the absorption function of soot, which may vary with combustion chemistry, soot age, and the internal structure of the soot. This work presents a Bayesian methodology to quantify such uncertainties. This technique treats the additional "nuisance" model parameters, including the soot absorption function, as stochastic variables and incorporates the current state of knowledge of these parameters into the inference process through maximum entropy priors. While standard AC-LII analysis provides a point estimate of the SVF, Bayesian techniques infer the posterior probability density, which will allow scientists and engineers to better assess the reliability of AC-LII inferred SVFs in the context of environmental regulations and competing diagnostics.

  13. Euler-euler anisotropic gaussian mesoscale simulation of homogeneous cluster-induced gas-particle turbulence

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kong, Bo; Fox, Rodney O.; Feng, Heng

    An Euler–Euler anisotropic Gaussian approach (EE-AG) for simulating gas–particle flows, in which particle velocities are assumed to follow a multivariate anisotropic Gaussian distribution, is used to perform mesoscale simulations of homogeneous cluster-induced turbulence (CIT). A three-dimensional Gauss–Hermite quadrature formulation is used to calculate the kinetic flux for 10 velocity moments in a finite-volume framework. The particle-phase volume-fraction and momentum equations are coupled with the Eulerian solver for the gas phase. This approach is implemented in an open-source CFD package, OpenFOAM, and detailed simulation results are compared with previous Euler–Lagrange simulations in a domain size study of CIT. Here, these results demonstrate that the proposed EE-AG methodology is able to produce comparable results to EL simulations, and this moment-based methodology can be used to perform accurate mesoscale simulations of dilute gas–particle flows.

  14. Euler-euler anisotropic gaussian mesoscale simulation of homogeneous cluster-induced gas-particle turbulence

    DOE PAGES

    Kong, Bo; Fox, Rodney O.; Feng, Heng; ...

    2017-02-16

    An Euler–Euler anisotropic Gaussian approach (EE-AG) for simulating gas–particle flows, in which particle velocities are assumed to follow a multivariate anisotropic Gaussian distribution, is used to perform mesoscale simulations of homogeneous cluster-induced turbulence (CIT). A three-dimensional Gauss–Hermite quadrature formulation is used to calculate the kinetic flux for 10 velocity moments in a finite-volume framework. The particle-phase volume-fraction and momentum equations are coupled with the Eulerian solver for the gas phase. This approach is implemented in an open-source CFD package, OpenFOAM, and detailed simulation results are compared with previous Euler–Lagrange simulations in a domain size study of CIT. Here, these results demonstrate that the proposed EE-AG methodology is able to produce comparable results to EL simulations, and this moment-based methodology can be used to perform accurate mesoscale simulations of dilute gas–particle flows.

  15. Expanded uncertainty associated with determination of isotope enrichment factors: Comparison of two point calculation and Rayleigh-plot.

    PubMed

    Julien, Maxime; Gilbert, Alexis; Yamada, Keita; Robins, Richard J; Höhener, Patrick; Yoshida, Naohiro; Remaud, Gérald S

    2018-01-01

    The enrichment factor (ε) is a common way to express Isotope Effects (IEs) associated with a phenomenon. Many studies determine ε using a Rayleigh-plot, which needs multiple data points. More recent articles describe an alternative method using the Rayleigh equation that allows the determination of ε using only one experimental point, but this method is often subject to controversy. However, a calculation method using two points (one experimental point and one at t0) should lead to the same results, because the calculation is derived from the Rayleigh equation. But it is frequently asked what the valid domain of use of this two-point calculation is. The primary aim of the present work is a systematic comparison of results obtained with these two methodologies and the determination of the conditions required for the valid calculation of ε. In order to evaluate the efficiency of the two approaches, the expanded uncertainty (U) associated with determining ε has been calculated using experimental data from three published articles. The second objective of the present work is to describe how to determine the expanded uncertainty (U) associated with determining ε. Comparative methodologies using both the Rayleigh-plot and the two-point calculation are detailed, and it is clearly demonstrated that calculation of ε using a single data point can give the same result as a Rayleigh-plot provided one strict condition is respected: that the experimental value is measured at a small fraction of unreacted substrate (f < 30%). This study will help stable isotope users to present their results in the more rigorous form ε ± U and therefore to better assess the significance of an experimental result prior to interpretation. Capsule: The enrichment factor can be determined by two different methods, and calculation of the associated expanded uncertainty allows its significance to be checked. Copyright © 2017 Elsevier B.V. All rights reserved.
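
    A hedged sketch of the two approaches compared above, written in delta notation for a synthetic residual-substrate data set (the values below obey an assumed Rayleigh law with epsilon = -25 permil): the Rayleigh-plot regresses ln((1000+δ)/(1000+δ0)) on ln f, while the two-point calculation uses a single measurement, here taken at f < 30% as the abstract requires.

      import numpy as np

      # Synthetic residual-substrate data following a Rayleigh law with epsilon = -25 permil.
      eps_true, delta0 = -25.0, -30.0
      f = np.array([0.9, 0.7, 0.5, 0.3, 0.1])            # fraction of unreacted substrate
      delta = (1000.0 + delta0) * f ** (eps_true / 1000.0) - 1000.0

      # Rayleigh-plot: the slope of ln((1000+delta)/(1000+delta0)) versus ln f is epsilon/1000.
      slope, _ = np.polyfit(np.log(f), np.log((1000.0 + delta) / (1000.0 + delta0)), 1)
      print(f"Rayleigh-plot epsilon: {1000.0 * slope:.1f} permil")

      # Two-point calculation from the single measurement at f = 0.1 (< 30% unreacted).
      eps_2pt = 1000.0 * np.log((1000.0 + delta[-1]) / (1000.0 + delta0)) / np.log(f[-1])
      print(f"two-point epsilon    : {eps_2pt:.1f} permil")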

  16. Ablation, Thermal Response, and Chemistry Program for Analysis of Thermal Protection Systems

    NASA Technical Reports Server (NTRS)

    Milos, Frank S.; Chen, Yih-Kanq

    2010-01-01

    In previous work, the authors documented the Multicomponent Ablation Thermochemistry (MAT) and Fully Implicit Ablation and Thermal response (FIAT) programs. In this work, key features from MAT and FIAT were combined to create the new Fully Implicit Ablation, Thermal response, and Chemistry (FIATC) program. FIATC is fully compatible with FIAT (version 2.5) but has expanded capabilities to compute the multispecies surface chemistry and ablation rate as part of the surface energy balance. This new methodology eliminates B' tables, provides blown species fractions as a function of time, and enables calculations that would otherwise be impractical (e.g. 4+ dimensional tables) such as pyrolysis and ablation with kinetic rates or unequal diffusion coefficients. Equations and solution procedures are presented, then representative calculations of equilibrium and finite-rate ablation in flight and ground-test environments are discussed.

  17. Free Energy Perturbation Calculations of the Thermodynamics of Protein Side-Chain Mutations.

    PubMed

    Steinbrecher, Thomas; Abel, Robert; Clark, Anthony; Friesner, Richard

    2017-04-07

    Protein side-chain mutation is fundamental both to natural evolutionary processes and to the engineering of protein therapeutics, which constitute an increasing fraction of important medications. Molecular simulation enables the prediction of the effects of mutation on properties such as binding affinity, secondary and tertiary structure, conformational dynamics, and thermal stability. A number of widely differing approaches have been applied to these predictions, including sequence-based algorithms, knowledge-based potential functions, and all-atom molecular mechanics calculations. Free energy perturbation theory, employing all-atom and explicit-solvent molecular dynamics simulations, is a rigorous physics-based approach for calculating thermodynamic effects of, for example, protein side-chain mutations. Over the past several years, we have initiated an investigation of the ability of our most recent free energy perturbation methodology to model the thermodynamics of protein mutation for two specific problems: protein-protein binding affinities and protein thermal stability. We highlight recent advances in the field and outline current and future challenges. Copyright © 2017 Elsevier Ltd. All rights reserved.
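
    The record above describes production-grade FEP calculations; purely as a conceptual illustration of the underlying identity (the energy differences below are synthetic, not simulation output), the Zwanzig estimator can be sketched as:

        import numpy as np

        K_B = 0.0019872041  # Boltzmann constant in kcal/(mol*K)

        def fep_zwanzig(delta_u, temperature=300.0):
            """Zwanzig free-energy perturbation estimate (kcal/mol).

            delta_u : array of U_B(x) - U_A(x) evaluated on configurations sampled
                      from state A (e.g. the wild-type side chain).
            """
            beta = 1.0 / (K_B * temperature)
            return -np.log(np.mean(np.exp(-beta * np.asarray(delta_u)))) / beta

        rng = np.random.default_rng(0)
        print(fep_zwanzig(rng.normal(1.5, 0.8, size=5000)))  # toy data, kcal/mol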

  18. Primordial helium abundance determination using sulphur as metallicity tracer

    NASA Astrophysics Data System (ADS)

    Fernández, Vital; Terlevich, Elena; Díaz, Angeles I.; Terlevich, Roberto; Rosales-Ortega, F. F.

    2018-05-01

    The primordial helium abundance YP is calculated using sulphur as metallicity tracer in the classical methodology (with YP as an extrapolation of Y to zero metals). The calculated value, YP,S = 0.244 ± 0.006, is in good agreement with the estimate from the Planck experiment, as well as with determinations in the literature using oxygen as the metallicity tracer. The chemical analysis includes the subtraction of the nebular continuum and of the stellar continuum computed from simple stellar population synthesis grids. The S²⁺ content is measured from the near-infrared [SIII] λλ9069, 9532 Å lines, while an ICF(S³⁺) is proposed based on the Ar³⁺/Ar²⁺ fraction. Finally, we apply a multivariable linear regression using simultaneously oxygen, nitrogen and sulphur abundances for the same sample to determine the primordial helium abundance, resulting in YP,O,N,S = 0.245 ± 0.007.
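
    The classical extrapolation step can be sketched as a straight-line fit of the helium mass fraction Y against the sulphur abundance, with YP read off as the zero-metallicity intercept; the numbers below are invented placeholders, not the data of the study:

        import numpy as np

        # hypothetical (S/H, Y) pairs for a sample of H II regions; a full analysis
        # propagates the measurement uncertainties and can regress on O, N and S jointly
        s_h = np.array([5e-6, 8e-6, 1.2e-5, 1.6e-5, 2.1e-5])
        y_he = np.array([0.247, 0.249, 0.252, 0.254, 0.257])

        slope, intercept = np.polyfit(s_h, y_he, 1)
        print(f"Y_P(S) ~ {intercept:.3f}")  # intercept = extrapolation to zero metals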

  19. Estimation of Canopy Sunlit Fraction of Leaf Area from Ground-Based Measurements

    NASA Astrophysics Data System (ADS)

    Yang, B.; Knyazikhin, Y.; Yan, K.; Chen, C.; Park, T.; CHOI, S.; Mottus, M.; Rautiainen, M.; Stenberg, P.; Myneni, R.; Yan, L.

    2015-12-01

    The sunlit fraction of leaf area (SFLA), defined as the fraction of the total hemisurface leaf area illuminated by the direct solar beam, is a key structural variable in many global models of climate, hydrology, biogeochemistry and ecology. SFLA is expected to be a standard product from the Earth Polychromatic Imaging Camera (EPIC) on board the joint NOAA, NASA and US Air Force Deep Space Climate Observatory (DSCOVR) mission, which was successfully launched from Cape Canaveral, Florida on February 11, 2015. The DSCOVR EPIC sensor orbiting the Sun-Earth Lagrange L1 point provides multispectral measurements of the radiation reflected by Earth in retro-illumination directions. This poster discusses a methodology for estimating the SFLA using the LAI-2000 Canopy Analyzer, which is expected to underlie the strategy for validation of the DSCOVR EPIC land surface products. LAI-2000 data collected over 18 coniferous and broadleaf sites in Hyytiälä, Central Finland, were used to estimate the SFLA. Field data on canopy geometry were used to simulate selected sites. Their SFLA was calculated using a Monte Carlo (MC) technique. LAI-2000 estimates of SFLA showed very good agreement with the MC results, suggesting the validity of the proposed approach.

  20. Alternative Fuels Data Center: Vehicle Cost Calculator Assumptions and Methodology

    Science.gov Websites


  1. Accounting for host cell protein behavior in anion-exchange chromatography.

    PubMed

    Swanson, Ryan K; Xu, Ruo; Nettleton, Daniel S; Glatz, Charles E

    2016-11-01

    Host cell proteins (HCP) are a problematic set of impurities in downstream processing (DSP) as they behave most similarly to the target protein during separation. Approaching DSP with the knowledge of HCP separation behavior would be beneficial for the production of high purity recombinant biologics. Therefore, this work was aimed at characterizing the separation behavior of complex mixtures of HCP during a commonly used method: anion-exchange chromatography (AEX). An additional goal was to evaluate the performance of a statistical methodology, based on the characterization data, as a tool for predicting protein separation behavior. Aqueous two-phase partitioning followed by two-dimensional electrophoresis provided data on the three physicochemical properties most commonly exploited during DSP for each HCP: pI (isoelectric point), molecular weight, and surface hydrophobicity. The protein separation behaviors of two alternative expression host extracts (corn germ and E. coli) were characterized. A multivariate random forest (MVRF) statistical methodology was then applied to the database of characterized proteins creating a tool for predicting the AEX behavior of a mixture of proteins. The accuracy of the MVRF method was determined by calculating a root mean squared error value for each database. This measure never exceeded a value of 0.045 (fraction of protein populating each of the multiple separation fractions) for AEX. © 2016 American Institute of Chemical Engineers Biotechnol. Prog., 32:1453-1463, 2016. © 2016 American Institute of Chemical Engineers.
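
    As a rough sketch of the prediction step (a standard multi-output random forest standing in for the study's multivariate random forest, which uses a covariance-aware splitting rule), the three characterized properties can be mapped to the fraction of each protein recovered in the AEX separation fractions; all data below are synthetic placeholders:

        import numpy as np
        from sklearn.ensemble import RandomForestRegressor

        rng = np.random.default_rng(1)
        # one row per HCP: pI, molecular weight (kDa), surface hydrophobicity (arbitrary scale)
        X = rng.uniform([4.0, 10.0, 0.0], [9.0, 150.0, 1.0], size=(200, 3))
        # targets: fraction of each protein eluting in four AEX fractions (rows sum to 1)
        y = rng.dirichlet(np.ones(4), size=200)

        model = RandomForestRegressor(n_estimators=500, random_state=0)
        model.fit(X, y)  # scikit-learn handles the multi-output regression natively

        rmse = np.sqrt(np.mean((model.predict(X) - y) ** 2))
        print(model.predict(X[:1]), rmse)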

  2. Mercury tissue residue approach in Chironomus riparius: Involvement of toxicokinetics and comparison of subcellular fractionation methods.

    PubMed

    Gimbert, Frédéric; Geffard, Alain; Guédron, Stéphane; Dominik, Janusz; Ferrari, Benoit J D

    2016-02-01

    Along with the growing body of evidence that total internal concentration is not a good indicator of toxicity, the Critical Body Residue (CBR) approach recently evolved into the Tissue Residue Approach (TRA) which considers the biologically active portion of metal that is available to contribute to the toxicity at sites of toxic action. For that purpose, we examined total mercury (Hg) bioaccumulation and subcellular fractionation kinetics in fourth stage larvae of the midge Chironomus riparius during a four-day laboratory exposure to Hg-spiked sediments and water. The debris (including exoskeleton, gut contents and cellular debris), granule and organelle fractions accounted only for about 10% of the Hg taken up, whereas Hg concentrations in the entire cytosolic fraction rapidly increased to approach steady-state. Within this fraction, Hg compartmentalization to metallothionein-like proteins (MTLP) and heat-sensitive proteins (HSP), consisting mostly of enzymes, was assessed in a comparative manner by two methodologies based on heat-treatment and centrifugation (HT&C method) or size exclusion chromatography separation (SECS method). The low Hg recoveries obtained with the HT&C method prevented accurate analysis of the cytosolic Hg fractionation by this approach. According to the SECS methodology, the Hg-bound MTLP fraction increased linearly over the exposure duration and sequestered a third of the Hg flux entering the cytosol. In contrast, the HSP fraction progressively saturated leading to Hg excretion and physiological impairments. This work highlights several methodological and biological aspects to improve our understanding of Hg toxicological bioavailability in aquatic invertebrates. Copyright © 2015 Elsevier B.V. All rights reserved.

  3. Alternative Fuels Data Center: Vehicle Cost Calculator Widget Assumptions and Methodology

    Science.gov Websites


  4. A Procedure Using Calculators to Express Answers in Fractional Form.

    ERIC Educational Resources Information Center

    Carlisle, Earnest

    A procedure is described that enables students to perform operations on fractions with a calculator, expressing the answer as a fraction. Patterns using paper-and-pencil procedures for each operation with fractions are presented. A microcomputer software program illustrates how the answer can be found using integer values of the numerators and…

  5. [Optimization of Polysaccharide Extraction from Spirodela polyrrhiza by Plackett-Burman Design Combined with Box-Behnken Response Surface Methodology].

    PubMed

    Jiang, Zheng; Wang, Hong; Wu, Qi-nan

    2015-06-01

    To optimize the processing of polysaccharide extraction from Spirodela polyrrhiza, five factors related to the extraction rate of polysaccharide were screened using the Plackett-Burman design. Based on this screening, three factors, namely alcohol volume fraction, extraction temperature and ratio of material to liquid, were selected as investigation factors for Box-Behnken response surface methodology. The order of the effects of the three factors on the extraction rate of polysaccharide from Spirodela polyrrhiza was as follows: extraction temperature, alcohol volume fraction, ratio of material to liquid. According to the Box-Behnken response surface, the best extraction conditions were: alcohol volume fraction of 81%, ratio of material to liquid of 1:42, extraction temperature of 100 degrees C, and an extraction time of 60 min repeated four times. The Plackett-Burman design combined with Box-Behnken response surface methodology used to optimize the extraction process for the polysaccharide in this study is effective and stable.
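
    A minimal sketch of how a three-factor Box-Behnken design can be generated in coded units and fitted with a second-order response surface; the yield values below are invented placeholders, not the study's data:

        import numpy as np

        def box_behnken_3():
            """Coded Box-Behnken design for three factors: 12 edge midpoints + 3 centre runs."""
            pts = []
            for i, j in [(0, 1), (0, 2), (1, 2)]:
                for a in (-1.0, 1.0):
                    for b in (-1.0, 1.0):
                        p = [0.0, 0.0, 0.0]
                        p[i], p[j] = a, b
                        pts.append(p)
            return np.array(pts + [[0.0, 0.0, 0.0]] * 3)

        def quadratic_terms(X):
            x1, x2, x3 = X.T
            return np.column_stack([np.ones(len(X)), x1, x2, x3,
                                    x1 * x2, x1 * x3, x2 * x3,
                                    x1 ** 2, x2 ** 2, x3 ** 2])

        X = box_behnken_3()
        # hypothetical extraction yields (%), one per design run
        y = np.array([3.1, 3.4, 3.0, 3.6, 2.9, 3.3, 3.2, 3.8, 3.0, 3.5, 3.1, 3.7,
                      4.0, 4.1, 3.9])

        coef, *_ = np.linalg.lstsq(quadratic_terms(X), y, rcond=None)
        print(coef)  # second-order model; the optimum is then located by grid or gradient search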

  6. Fractional-order TV-L2 model for image denoising

    NASA Astrophysics Data System (ADS)

    Chen, Dali; Sun, Shenshen; Zhang, Congrong; Chen, YangQuan; Xue, Dingyu

    2013-10-01

    This paper proposes a new fractional order total variation (TV) denoising method, which provides a much more elegant and effective way of treating problems of the algorithm implementation, ill-posed inverse, regularization parameter selection and blocky effect. Two fractional order TV-L2 models are constructed for image denoising. The majorization-minimization (MM) algorithm is used to decompose these two complex fractional TV optimization problems into a set of linear optimization problems which can be solved by the conjugate gradient algorithm. The final adaptive numerical procedure is given. Finally, we report experimental results which show that the proposed methodology avoids the blocky effect and achieves state-of-the-art performance. In addition, two medical image processing experiments are presented to demonstrate the validity of the proposed methodology.
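
    As generic background on the fractional-order operators used in such models (not the paper's TV-L2 implementation), the Grünwald-Letnikov coefficients give a simple discrete fractional derivative:

        import numpy as np

        def gl_weights(alpha, n):
            """Grunwald-Letnikov coefficients w_k for a fractional derivative of order alpha."""
            w = np.empty(n + 1)
            w[0] = 1.0
            for k in range(1, n + 1):
                w[k] = w[k - 1] * (1.0 - (alpha + 1.0) / k)
            return w

        def fractional_diff(signal, alpha, h=1.0):
            """Discrete fractional-order derivative of a 1-D signal (GL definition)."""
            n = len(signal)
            w = gl_weights(alpha, n - 1)
            out = np.array([np.dot(w[:i + 1], signal[i::-1]) for i in range(n)])
            return out / h ** alpha

        x = np.linspace(0.0, 1.0, 200)
        print(fractional_diff(np.sin(2 * np.pi * x), alpha=1.5)[:5])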

  7. Enhancements in Deriving Smoke Emission Coefficients from Fire Radiative Power Measurements

    NASA Technical Reports Server (NTRS)

    Ellison, Luke; Ichoku, Charles

    2011-01-01

    Smoke emissions have long been quantified after-the-fact by simple multiplication of burned area, biomass density, fraction of above-ground biomass, and burn efficiency. A new algorithm has been suggested, as described in Ichoku & Kaufman (2005), for use in calculating smoke emissions directly from fire radiative power (FRP) measurements such that the latency and uncertainty associated with the previously listed variables are avoided. Application of this new, simpler and more direct algorithm is automatic, based only on a fire's FRP measurement and a predetermined coefficient of smoke emission for a given location. Attaining accurate coefficients of smoke emission is therefore critical to the success of this algorithm. In the aforementioned paper, an initial effort was made to derive coefficients of smoke emission for different large regions of interest using calculations of smoke emission rates from MODIS FRP and aerosol optical depth (AOD) measurements. Further work had resulted in a first draft of a 1° × 1° resolution map of these coefficients. This poster will present the work done to refine this algorithm toward the first production of global smoke emission coefficients. Main updates in the algorithm include: 1) inclusion of wind vectors to help refine several parameters, 2) defining new methods for calculating the fire-emitted AOD fractions, and 3) calculating smoke emission rates on a per-pixel basis and aggregating to grid cells instead of doing so later on in the process. In addition to a presentation of the methodology used to derive this product, maps displaying preliminary results as well as an outline of the future application of such a product into specific research opportunities will be shown.
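
    The core of the FRP-based approach reduces to multiplying a fire pixel's radiative power by a regional smoke emission coefficient; a sketch with placeholder numbers (not values from the poster):

        def smoke_emission_rate(frp_mw, ce_kg_per_mj):
            """Smoke emission rate (kg/s) from fire radiative power.

            frp_mw       : fire radiative power of a pixel (MW, i.e. MJ/s)
            ce_kg_per_mj : regional smoke emission coefficient (kg of smoke per MJ of
                           radiated fire energy), derived beforehand for the grid cell
            """
            return frp_mw * ce_kg_per_mj

        # illustrative values only; real coefficients come from the MODIS FRP/AOD analysis
        print(smoke_emission_rate(frp_mw=120.0, ce_kg_per_mj=0.02))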

  8. 10 CFR 766.102 - Calculation methodology.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... 10 Energy 4 2011-01-01 2011-01-01 false Calculation methodology. 766.102 Section 766.102 Energy DEPARTMENT OF ENERGY URANIUM ENRICHMENT DECONTAMINATION AND DECOMMISSIONING FUND; PROCEDURES FOR SPECIAL ASSESSMENT OF DOMESTIC UTILITIES Procedures for Special Assessment § 766.102 Calculation methodology. (a...

  9. 10 CFR 766.102 - Calculation methodology.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... 10 Energy 4 2014-01-01 2014-01-01 false Calculation methodology. 766.102 Section 766.102 Energy DEPARTMENT OF ENERGY URANIUM ENRICHMENT DECONTAMINATION AND DECOMMISSIONING FUND; PROCEDURES FOR SPECIAL ASSESSMENT OF DOMESTIC UTILITIES Procedures for Special Assessment § 766.102 Calculation methodology. (a...

  10. 10 CFR 766.102 - Calculation methodology.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... 10 Energy 4 2012-01-01 2012-01-01 false Calculation methodology. 766.102 Section 766.102 Energy DEPARTMENT OF ENERGY URANIUM ENRICHMENT DECONTAMINATION AND DECOMMISSIONING FUND; PROCEDURES FOR SPECIAL ASSESSMENT OF DOMESTIC UTILITIES Procedures for Special Assessment § 766.102 Calculation methodology. (a...

  11. 10 CFR 766.102 - Calculation methodology.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 10 Energy 4 2010-01-01 2010-01-01 false Calculation methodology. 766.102 Section 766.102 Energy DEPARTMENT OF ENERGY URANIUM ENRICHMENT DECONTAMINATION AND DECOMMISSIONING FUND; PROCEDURES FOR SPECIAL ASSESSMENT OF DOMESTIC UTILITIES Procedures for Special Assessment § 766.102 Calculation methodology. (a...

  12. Quantifying methane emissions from natural gas production in north-eastern Pennsylvania

    NASA Astrophysics Data System (ADS)

    Barkley, Zachary R.; Lauvaux, Thomas; Davis, Kenneth J.; Deng, Aijun; Miles, Natasha L.; Richardson, Scott J.; Cao, Yanni; Sweeney, Colm; Karion, Anna; Smith, MacKenzie; Kort, Eric A.; Schwietzke, Stefan; Murphy, Thomas; Cervone, Guido; Martins, Douglas; Maasakkers, Joannes D.

    2017-11-01

    Natural gas infrastructure releases methane (CH4), a potent greenhouse gas, into the atmosphere. The estimated emission rate associated with the production and transportation of natural gas is uncertain, hindering our understanding of its greenhouse footprint. This study presents a new application of inverse methodology for estimating regional emission rates from natural gas production and gathering facilities in north-eastern Pennsylvania. An inventory of CH4 emissions was compiled for major sources in Pennsylvania. This inventory served as input emission data for the Weather Research and Forecasting model with chemistry enabled (WRF-Chem), and atmospheric CH4 mole fraction fields were generated at 3 km resolution. Simulated atmospheric CH4 enhancements from WRF-Chem were compared to observations obtained from a 3-week flight campaign in May 2015. Modelled enhancements from sources not associated with upstream natural gas processes were assumed constant and known and therefore removed from the optimization procedure, creating a set of observed enhancements from natural gas only. Simulated emission rates from unconventional production were then adjusted to minimize the mismatch between aircraft observations and model-simulated mole fractions for 10 flights. To evaluate the method, an aircraft mass balance calculation was performed for four flights where conditions permitted its use. Using the model optimization approach, the weighted mean emission rate from unconventional natural gas production and gathering facilities in north-eastern Pennsylvania is found to be 0.36 % of total gas production, with a 2σ confidence interval between 0.27 and 0.45 % of production. Similarly, the mean emission estimates using the aircraft mass balance approach are calculated to be 0.40 % of regional natural gas production, with a 2σ confidence interval between 0.08 and 0.72 % of production. These emission rates as a percent of production are lower than rates found in any other basin using a top-down methodology, and may be indicative of some characteristics of the basin that make sources from the north-eastern Marcellus region unique.

  13. DENSITY FRACTIONATION OF FOREST SOILS: METHODOLOGICAL QUESTIONS AND INTERPRETATION OF INCUBATION RESULTS AND TURNOVER TIME IN AN ECOSYSTEM CONTEXT

    EPA Science Inventory

    Soil organic matter (SOM) is often separated by physical means to simplify a complex matrix into discrete fractions. A frequent approach to isolating two or more fractions is based on differing particle densities and uses a high density liquid such as sodium polytungstate (SPT). ...

  14. 76 FR 34270 - Federal-State Extended Benefits Program-Methodology for Calculating “on” or “off” Total...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-06-13

    ...--Methodology for Calculating ``on'' or ``off'' Total Unemployment Rate Indicators for Purposes of Determining...'' or ``off'' total unemployment rate (TUR) indicators to determine when extended benefit (EB) periods...-State Extended Benefits Program--Methodology for Calculating ``on'' or ``off'' Total Unemployment Rate...

  15. Methodology to Estimate the Longitudinal Average Attributable Fraction of Guideline-recommended Medications for Death in Older Adults With Multiple Chronic Conditions

    PubMed Central

    Zhan, Yilei; Cohen, Andrew B.; Tinetti, Mary E.; Trentalange, Mark; McAvay, Gail

    2016-01-01

    Background: Persons with multiple chronic conditions receive multiple guideline-recommended medications to improve outcomes such as mortality. Our objective was to estimate the longitudinal average attributable fraction for 3-year survival of medications for cardiovascular conditions in persons with multiple chronic conditions and to determine whether heterogeneity occurred by age. Methods: Medicare Current Beneficiary Survey participants (N = 8,578) with two or more chronic conditions, enrolled from 2005 to 2009 with follow-up through 2011, were analyzed. We calculated the longitudinal extension of the average attributable fraction for oral medications (beta blockers, renin–angiotensin system blockers, and thiazide diuretics) indicated for cardiovascular conditions (atrial fibrillation, coronary artery disease, heart failure, and hypertension), on survival adjusted for 18 participant characteristics. Models stratified by age (≤80 and >80 years) were analyzed to determine heterogeneity of both cardiovascular conditions and medications. Results: Heart failure had the greatest average attributable fraction (39%) for mortality. The fractional contributions of beta blockers, renin–angiotensin system blockers, and thiazides to improve survival were 10.4%, 9.3%, and 7.2% respectively. In age-stratified models, of these medications thiazides had a significant contribution to survival only for those aged 80 years or younger. The effects of the remaining medications were similar in both age strata. Conclusions: Most cardiovascular medications were attributed independently to survival. The two cardiovascular conditions contributing independently to death were heart failure and atrial fibrillation. The medication effects were similar by age except for thiazides that had a significant contribution to survival in persons younger than 80 years. PMID:26748093

  16. DETERMINATION OF THE CREEP–FATIGUE INTERACTION DIAGRAM FOR ALLOY 617

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wright, J. K.; Carroll, L. J.; Sham, T. -L.

    Alloy 617 is the leading candidate material for an intermediate heat exchanger for the very high temperature reactor. To evaluate the behavior of this material in the expected service conditions, creep-fatigue testing was performed. Testing has been performed primarily on a single heat of material at 850 and 950°C for total strain ranges of 0.3 to 1% and tensile hold times as long as 240 minutes. At 850°C, increases in the tensile hold duration degraded the creep-fatigue resistance, at least to the investigated strain-controlled hold time of up to 60 minutes at the 0.3% strain range and 240 minutes at the 1.0% strain range. At 950°C, the creep-fatigue cycles to failure becomes constant with increasing hold times, indicating saturation occurs at relatively short hold times. The creep and fatigue damage fractions have been calculated and plotted on a creep-fatigue interaction D-diagram. Results from earlier creep-fatigue tests at 800 and 1000°C on an additional heat of Alloy 617 are also plotted on the D-diagram. The methodology for calculating the damage fractions will be presented, and the effects of strain rate, strain range, temperature, hold time, and strain profile (i.e. holds in tension, compression or both) on the creep-fatigue damage will be explored.
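
    A simplified sketch of the damage-fraction bookkeeping behind a D-diagram, using Miner's rule and the time-fraction rule (the full methodology accounts for stress relaxation during the holds; the numbers below are placeholders):

        def fatigue_damage(cycles_applied, cycles_to_failure):
            """Fatigue damage fraction n / N_f (Miner's rule)."""
            return cycles_applied / cycles_to_failure

        def creep_damage(hold_time_per_cycle_h, cycles_applied, rupture_time_h):
            """Creep damage fraction, summed hold time over rupture time (time-fraction rule)."""
            return cycles_applied * hold_time_per_cycle_h / rupture_time_h

        d_f = fatigue_damage(cycles_applied=300, cycles_to_failure=1200)
        d_c = creep_damage(hold_time_per_cycle_h=0.5, cycles_applied=300, rupture_time_h=2000.0)
        print(d_c, d_f)  # plotted as one point (Dc, Df); the failure envelope is alloy-specific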

  17. Density fractionation of forest soils: methodological questions and interpretation of incubation results and turnover time in an ecosystem context

    Treesearch

    Susan E. Crow; Christopher W. Swanston; Kate Lajtha; J. Renee Brooks; Heath Keirstead

    2007-01-01

    Soil organic matter (SOM) is often separated by physical means to simplify a complex matrix into discrete fractions. A frequent approach to isolating two or more fractions is based on differing particle densities and uses a high density liquid such as sodium polytungstate (SPT). Soil density fractions are often interpreted as organic matter pools with different carbon...

  18. Integrated HPTLC-based Methodology for the Tracing of Bioactive Compounds in Herbal Extracts Employing Multivariate Chemometrics. A Case Study on Morus alba.

    PubMed

    Chaita, Eliza; Gikas, Evagelos; Aligiannis, Nektarios

    2017-03-01

    In drug discovery, bioassay-guided isolation is a well-established procedure, and still the basic approach for the discovery of natural products with desired biological properties. However, in these procedures, the most laborious and time-consuming step is the isolation of the bioactive constituents. A prior identification of the compounds that contribute to the demonstrated activity of the fractions would enable the selection of proper chromatographic techniques and lead to targeted isolation. Objective - The development of an integrated HPTLC-based methodology for the rapid tracing of the bioactive compounds during bioassay-guided processes, using multivariate statistics. Materials and Methods - The methanol extract of Morus alba was fractionated employing CPC. Subsequently, fractions were assayed for tyrosinase inhibition and analyzed with HPTLC. The PLS-R algorithm was applied in order to correlate the analytical data with the biological response of the fractions and identify the compounds with the highest contribution. Two methodologies were developed for the generation of the dataset; one based on manual peak picking and the second based on chromatogram binning. Results and Discussion - Both methodologies afforded comparable results and were able to trace the bioactive constituents (e.g. oxyresveratrol, trans-dihydromorin, 2,4,3'-trihydroxydihydrostilbene). The suggested compounds were compared in terms of Rf values and UV spectra with compounds isolated from M. alba using a typical bioassay-guided process. Conclusion - Chemometric tools supported the development of a novel HPTLC-based methodology for the tracing of tyrosinase inhibitors in M. alba extract. All steps of the experimental procedure implemented techniques that afford essential key elements for application in high-throughput screening procedures for drug discovery purposes. Copyright © 2017 John Wiley & Sons, Ltd.

  19. Development of a BALB/c 3T3 neutral red uptake cytotoxicity test using a mainstream cigarette smoke exposure system

    PubMed Central

    2014-01-01

    Background Tobacco smoke toxicity has traditionally been assessed using the particulate fraction under submerged culture conditions which omits the vapour phase elements from any subsequent analysis. Therefore, methodologies that assess the full interactions and complexities of tobacco smoke are required. Here we describe the adaption of a modified BALB/c 3T3 neutral red uptake (NRU) cytotoxicity test methodology, which is based on the Interagency Coordinating Committee on the Validation of Alternative Methods (ICCVAM) protocol for in vitro acute toxicity testing. The methodology described takes into account the synergies of both the particulate and vapour phase of tobacco smoke. This is of particular importance as both phases have been independently shown to induce in vitro cellular cytotoxicity. Findings The findings from this study indicate that mainstream tobacco smoke and the gas vapour phase (GVP), generated using the Vitrocell® VC 10 smoke exposure system, have distinct and significantly different toxicity profiles. Within the system tested, mainstream tobacco smoke produced a dilution IC50 (dilution (L/min) at which 50% cytotoxicity is observed) of 6.02 L/min, whereas the GVP produced a dilution IC50 of 3.20 L/min. In addition, we also demonstrated significant dose-for-dose differences between mainstream cigarette smoke and the GVP fraction (P < 0.05). This demonstrates the importance of testing the entire tobacco smoke aerosol and not just the particulate fraction, as has been the historical preference. Conclusions We have adapted the NRU methodology based on the ICCVAM protocol to capture the full interactions and complexities of tobacco smoke. This methodology could also be used to assess the performance of traditional cigarettes, blend and filter technologies, tobacco smoke fractions and individual test aerosols. PMID:24935030

  20. Population Attributable and Preventable Fractions: Cancer Risk Factor Surveillance, and Cancer Policy Projection

    PubMed Central

    Shield, Kevin D.; Parkin, D. Maxwell; Whiteman, David C.; Rehm, Jürgen; Viallon, Vivian; Micallef, Claire Marant; Vineis, Paolo; Rushton, Lesley; Bray, Freddie; Soerjomataram, Isabelle

    2016-01-01

    The proportions of new cancer cases and deaths that are caused by exposure to risk factors and that could be prevented are key statistics for public health policy and planning. This paper summarizes the methodologies for estimating, challenges in the analysis of, and utility of, population attributable and preventable fractions for cancers caused by major risk factors such as tobacco smoking, dietary factors, high body fat, physical inactivity, alcohol consumption, infectious agents, occupational exposure, air pollution, sun exposure, and insufficient breastfeeding. For population attributable and preventable fractions, evidence of a causal relationship between a risk factor and cancer, outcome (such as incidence and mortality), exposure distribution, relative risk, theoretical-minimum-risk, and counterfactual scenarios need to be clearly defined and congruent. Despite limitations of the methodology and the data used for estimations, the population attributable and preventable fractions are a useful tool for public health policy and planning. PMID:27547696
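
    For a single dichotomous risk factor, the simplest population attributable fraction estimate is Levin's formula; a sketch with illustrative inputs (the methodologies reviewed above additionally handle exposure distributions, latency and counterfactual scenarios):

        def population_attributable_fraction(prevalence, relative_risk):
            """Levin's formula: PAF = p(RR - 1) / (p(RR - 1) + 1)."""
            excess = prevalence * (relative_risk - 1.0)
            return excess / (excess + 1.0)

        # e.g. 20% exposure prevalence and a relative risk of 2.5 (illustrative numbers)
        print(population_attributable_fraction(0.20, 2.5))  # ~0.23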

  1. Determination of element affinities by density fractionation of bulk coal samples

    USGS Publications Warehouse

    Querol, X.; Klika, Z.; Weiss, Z.; Finkelman, R.B.; Alastuey, A.; Juan, R.; Lopez-Soler, A.; Plana, F.; Kolker, A.; Chenery, S.R.N.

    2001-01-01

    A review has been made of the various methods of determining major and trace element affinities for different phases, both mineral and organic in coals, citing their various strengths and weaknesses. These include mathematical deconvolution of chemical analyses, direct microanalysis, sequential extraction procedures and density fractionation. A new methodology combining density fractionation with mathematical deconvolution of chemical analyses of whole coals and their density fractions has been evaluated. These coals formed part of the IEA-Coal Research project on the Modes of Occurrence of Trace Elements in Coal. Results were compared to a previously reported sequential extraction methodology and showed good agreement for most elements. For particular elements (Be, Mo, Cu, Se and REEs) in specific coals where disagreement was found, it was concluded that the occurrence of rare trace element bearing phases may account for the discrepancy, and modifications to the general procedure must be made to account for these.

  2. Fuel Fraction Analysis of 500 MWth Gas Cooled Fast Reactor with Nitride (UN-PuN) Fuel without Refueling

    NASA Astrophysics Data System (ADS)

    Dewi Syarifah, Ratna; Su'ud, Zaki; Basar, Khairul; Irwanto, Dwi

    2017-01-01

    Nuclear power plants (NPPs) are one of the candidates that can support electricity demand in the world. The Generation IV NPP has four main objectives, i.e. sustainability, economic competitiveness, safety and reliability, and proliferation resistance and physical protection. One Gen-IV reactor type is the Gas Cooled Fast Reactor (GFR). In this study, an analysis of the fuel fraction in a small GFR with nitride fuel has been performed. The calculation was carried out with the SRAC code, using both the Pij and CITATION routines. SRAC2002 is a code system applicable to neutronics analysis of a variety of reactor types; the JENDL-3.2 data library was used. In the SRAC procedure, the fuel pin is first calculated with the Pij routine to produce homogenized data, after which the reactor core is calculated. The fuel fraction was varied from 40% up to 65%. The optimum design of a 500 MWth GFR without refueling over a 10-year burn-up time is reached for radii F1:F2:F3 = 50cm:30cm:30cm, heights F1:F2:F3 = 50cm:40cm:30cm, and plutonium percentages in F1:F2:F3 of 7%:10%:13%. The optimum fuel fraction is 41% with an additional 2% weapons-grade plutonium mixed into the fuel. The excess reactivity in this case is 1.848% and the k-eff value is 1.01883. Higher burn-up is reached when the fuel fraction is low; in this study the 41% fuel fraction produces fissile fuel faster and therefore reaches a higher burn-up level than the other fuel fractions.

  3. Optimising mobile phase composition, its flow-rate and column temperature in HPLC using taboo search.

    PubMed

    Guillaume, Y C; Peyrin, E

    2000-03-06

    A chemometric methodology is proposed to study the separation of seven p-hydroxybenzoic esters in reversed phase liquid chromatography (RPLC). Fifteen experiments were found to be necessary to find a mathematical model which linked a novel chromatographic response function (CRF) with the column temperature, the water fraction in the mobile phase and its flow rate. The CRF optimum was determined using a new algorithm based on Glover's taboo search (TS). A flow-rate of 0.9 ml min⁻¹ with a water fraction of 0.64 in the ACN-water mixture and a column temperature of 10 degrees C gave the most efficient separation conditions. The usefulness of TS was compared with the pure random search (PRS) and simplex search (SS). As demonstrated by calculations, the algorithm avoids entrapment in local minima and continues the search to give a near-optimal final solution. Unlike other methods of global optimisation, this procedure is generally applicable, easy to implement, derivative free, conceptually simple and could be used in the future for much more complex optimisation problems.
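
    A minimal tabu-search sketch over a discretized (temperature, water fraction, flow-rate) grid; the objective below is an invented surrogate to be minimised, standing in for a real chromatographic response function:

        import random

        def tabu_search(objective, grid, n_iter=200, tabu_len=15, seed=0):
            """Minimal tabu search over a discrete parameter grid (illustrative only)."""
            rng = random.Random(seed)
            current = tuple(rng.choice(axis) for axis in grid)
            best, best_val = current, objective(current)
            tabu = [current]
            for _ in range(n_iter):
                # neighbours: move one factor to an adjacent grid level
                neigh = []
                for i, axis in enumerate(grid):
                    j = axis.index(current[i])
                    for dj in (-1, 1):
                        if 0 <= j + dj < len(axis):
                            cand = list(current)
                            cand[i] = axis[j + dj]
                            neigh.append(tuple(cand))
                neigh = [c for c in neigh if c not in tabu] or neigh
                current = min(neigh, key=objective)
                tabu = (tabu + [current])[-tabu_len:]
                if objective(current) < best_val:
                    best, best_val = current, objective(current)
            return best, best_val

        # surrogate objective with a minimum near 10 degrees C, water fraction 0.64, 0.9 ml/min
        crf = lambda p: (p[0] - 10) ** 2 / 100 + (p[1] - 0.64) ** 2 * 50 + (p[2] - 0.9) ** 2 * 10
        grid = [list(range(5, 41, 5)),                              # temperature, deg C
                [round(0.50 + 0.02 * i, 2) for i in range(16)],     # water fraction
                [round(0.5 + 0.1 * i, 1) for i in range(11)]]       # flow rate, ml/min
        print(tabu_search(crf, grid))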

  4. Estimation of number of fatalities caused by toxic gases due to fire in road tunnels.

    PubMed

    Qu, Xiaobo; Meng, Qiang; Liu, Zhiyuan

    2013-01-01

    The quantitative risk assessment (QRA) is one of the explicit requirements under the European Union (EU) Directive (2004/54/EC). As part of this, it is essential to be able to estimate the number of fatalities in different accident scenarios. In this paper, a tangible methodology is developed to estimate the number of fatalities caused by toxic gases due to fire in road tunnels by incorporating traffic flow and the spread of fire in tunnels. First, a deterministic queuing model is proposed to calculate the number of people at risk, by taking into account tunnel geometry, traffic flow patterns, and incident response plans for road tunnels. Second, the Fire Dynamics Simulator (FDS) is used to obtain the temperature and concentrations of CO, CO₂, and O₂. By taking advantage of the additivity of the fractional effective dose (FED) method, fatality rates for different locations in given time periods can be estimated. An illustrative case study is carried out to demonstrate the applicability of the proposed methodology. Copyright © 2012 Elsevier Ltd. All rights reserved.
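
    The additivity exploited in the FED method amounts to summing concentration-time increments against a lethal exposure dose for each toxicant; a sketch for a single gas (the concentration trace and lethal dose below are placeholders, not FDS output):

        def fractional_effective_dose(concentrations_ppm, dt_min, lethal_ct_ppm_min):
            """Additive fractional effective dose for one toxicant.

            concentrations_ppm : concentration at each time step
            dt_min             : time-step length (minutes)
            lethal_ct_ppm_min  : lethal exposure dose (concentration x time) for the toxicant
            FED values of 1 or more are interpreted as a fatal exposure.
            """
            return sum(c * dt_min for c in concentrations_ppm) / lethal_ct_ppm_min

        co_ppm = [800, 1500, 2600, 3400, 4100]  # illustrative CO trace near the fire
        print(fractional_effective_dose(co_ppm, dt_min=1.0, lethal_ct_ppm_min=35000.0))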

  5. Enhanced diesel fuel fraction from waste high-density polyethylene and heavy gas oil pyrolysis using factorial design methodology.

    PubMed

    Joppert, Ney; da Silva, Alexsandro Araujo; da Costa Marques, Mônica Regina

    2015-02-01

    Factorial Design Methodology (FDM) was developed to enhance diesel fuel fraction (C9-C23) from waste high-density polyethylene (HDPE) and Heavy Gas Oil (HGO) through co-pyrolysis. FDM was used for optimization of the following reaction parameters: temperature, catalyst and HDPE amounts. The HGO amount was constant (2.00 g) in all experiments. The model optimum conditions were determined to be temperature of 550 °C, HDPE = 0.20 g and no FCC catalyst. Under such conditions, 94% of pyrolytic oil was recovered, of which diesel fuel fraction was 93% (87% diesel fuel fraction yield), no residue was produced and 6% of noncondensable gaseous/volatile fraction was obtained. Seeking to reduce the cost due to high process temperatures, the impact of using higher catalyst content (25%) with a lower temperature (500 °C) was investigated. Under these conditions, 88% of pyrolytic oil was recovered (diesel fuel fraction yield was also 87%) as well as 12% of the noncondensable gaseous/volatile fraction. No waste was produced in these conditions, being an environmentally friendly approach for recycling the waste plastic. This paper demonstrated the usefulness of using FDM to predict and to optimize diesel fuel fraction yield with a great reduction in the number of experiments. Copyright © 2014 Elsevier Ltd. All rights reserved.

  6. The ALHAMBRA survey: accurate merger fractions derived by PDF analysis of photometrically close pairs

    NASA Astrophysics Data System (ADS)

    López-Sanjuan, C.; Cenarro, A. J.; Varela, J.; Viironen, K.; Molino, A.; Benítez, N.; Arnalte-Mur, P.; Ascaso, B.; Díaz-García, L. A.; Fernández-Soto, A.; Jiménez-Teja, Y.; Márquez, I.; Masegosa, J.; Moles, M.; Pović, M.; Aguerri, J. A. L.; Alfaro, E.; Aparicio-Villegas, T.; Broadhurst, T.; Cabrera-Caño, J.; Castander, F. J.; Cepa, J.; Cerviño, M.; Cristóbal-Hornillos, D.; Del Olmo, A.; González Delgado, R. M.; Husillos, C.; Infante, L.; Martínez, V. J.; Perea, J.; Prada, F.; Quintana, J. M.

    2015-04-01

    Aims: Our goal is to develop and test a novel methodology to compute accurate close-pair fractions with photometric redshifts. Methods: We improved the currently used methodologies to estimate the merger fraction fm from photometric redshifts by (i) using the full probability distribution functions (PDFs) of the sources in redshift space; (ii) including the variation in the luminosity of the sources with z in both the sample selection and the luminosity ratio constraint; and (iii) splitting individual PDFs into red and blue spectral templates to reliably work with colour selections. We tested the performance of our new methodology with the PDFs provided by the ALHAMBRA photometric survey. Results: The merger fractions and rates from the ALHAMBRA survey agree very well with those from spectroscopic work for both the general population and red and blue galaxies. With the merger rate of bright (MB ≤ -20-1.1z) galaxies evolving as (1 + z)n, the power-law index n is higher for blue galaxies (n = 2.7 ± 0.5) than for red galaxies (n = 1.3 ± 0.4), confirming previous results. Integrating the merger rate over cosmic time, we find that the average number of mergers per galaxy since z = 1 is Nmred = 0.57 ± 0.05 for red galaxies and Nmblue = 0.26 ± 0.02 for blue galaxies. Conclusions: Our new methodology statistically exploits all the available information provided by photometric redshift codes and yields accurate measurements of the merger fraction by close pairs using photometric redshifts alone. Current and future photometric surveys will benefit from this new methodology. Based on observations collected at the German-Spanish Astronomical Center, Calar Alto, jointly operated by the Max-Planck-Institut für Astronomie (MPIA) at Heidelberg and the Instituto de Astrofísica de Andalucía (CSIC). The catalogues, probabilities, and figures of the ALHAMBRA close pairs detected in Sect. 5.1 are available at https://cloud.iaa.csic.es/alhambra/catalogues/ClosePairs

  7. Radioactive waste disposal fees-Methodology for calculation

    NASA Astrophysics Data System (ADS)

    Bemš, Július; Králík, Tomáš; Kubančák, Ján; Vašíček, Jiří; Starý, Oldřich

    2014-11-01

    This paper summarizes the methodological approach used for calculating the fee for low- and intermediate-level radioactive waste disposal and for spent fuel disposal. The methodology itself is based on simulation of the cash flows related to the operation of the waste disposal system. The paper includes a demonstration of the methodology applied to the conditions of the Czech Republic.
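
    In its simplest levelised form, such a cash-flow simulation reduces to discounting projected disposal costs against the electricity generation on which the fee is levied; a sketch with invented figures (the actual methodology is considerably more detailed):

        def disposal_fee_per_kwh(annual_costs, annual_kwh, discount_rate):
            """Levelised fee such that discounted fee revenues cover discounted disposal costs."""
            npv_costs = sum(c / (1.0 + discount_rate) ** t for t, c in enumerate(annual_costs))
            npv_kwh = sum(q / (1.0 + discount_rate) ** t for t, q in enumerate(annual_kwh))
            return npv_costs / npv_kwh

        # illustrative 3-year horizon; real models span the full repository lifetime
        print(disposal_fee_per_kwh([5e7, 6e7, 8e7], [3e10, 3e10, 3e10], 0.03))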

  8. 42 CFR 484.230 - Methodology used for the calculation of the low-utilization payment adjustment.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... 42 Public Health 5 2010-10-01 2010-10-01 false Methodology used for the calculation of the low... Prospective Payment System for Home Health Agencies § 484.230 Methodology used for the calculation of the low... amount is determined by using cost data set forth in § 484.210(a) and adjusting by the appropriate wage...

  9. Ab initio crystal structure prediction of magnesium (poly)sulfides and calculation of their NMR parameters.

    PubMed

    Mali, Gregor

    2017-03-01

    Ab initio prediction of sensible crystal structures can be regarded as a crucial task in the quickly developing methodology of NMR crystallography. In this contribution, an evolutionary algorithm was used for the prediction of magnesium (poly)sulfide crystal structures with various compositions. The employed approach successfully identified all three experimentally detected forms of MgS, i.e. the stable rocksalt form and the metastable wurtzite and zincblende forms. Among magnesium polysulfides with a higher content of sulfur, the most probable structure with the lowest formation energy was found to be MgS₂, exhibiting a modified rocksalt structure, in which S²⁻ anions were replaced by S₂²⁻ dianions. Magnesium polysulfides with even larger fractions of sulfur were not predicted to be stable. For the lowest-energy structures, ²⁵Mg quadrupolar coupling constants and chemical shift parameters were calculated using the density functional theory approach. The calculated NMR parameters could be well rationalized by the symmetries of the local magnesium environments, by the coordination of magnesium cations and by the nature of the surrounding anions. In the future, these parameters could serve as a reference for the experimentally determined ²⁵Mg NMR parameters of magnesium sulfide species.

  10. The Oil Drop Experiment: An Illustration of Scientific Research Methodology and its Implications for Physics Textbooks

    ERIC Educational Resources Information Center

    Rodriguez, Maria A.; Niaz, Mansoor

    2004-01-01

    The objectives of this study are: (1) evaluation of the methodology used in recent search for particles with fractional electrical charge (quarks) and its implications for understanding the scientific research methodology of Millikan; (2) evaluation of 43 general physics textbooks and 11 laboratory manuals, with respect to the oil drop experiment,…

  11. Comparing Methodologies for Evaluating Emergency Medical Services Ground Transport Access to Time-critical Emergency Services: A Case Study Using Trauma Center Care.

    PubMed

    Doumouras, Aristithes G; Gomez, David; Haas, Barbara; Boyes, Donald M; Nathens, Avery B

    2012-09-01

    The regionalization of medical services has resulted in improved outcomes and greater compliance with existing guidelines. For certain "time-critical" conditions intimately associated with emergency medicine, early intervention has demonstrated mortality benefits. For these conditions, then, appropriate triage within a regionalized system at first diagnosis is paramount, ideally occurring in the field by emergency medical services (EMS) personnel. Therefore, EMS ground transport access is an important metric in the ongoing evaluation of a regionalized care system for time-critical emergency services. To our knowledge, no studies have demonstrated how methodologies for calculating EMS ground transport access differ in their estimates of access over the same study area for the same resource. This study uses two methodologies to calculate EMS ground transport access to trauma center care in a single study area to explore their manifestations and critically evaluate the differences between the methodologies. Two methodologies were compared in their estimations of EMS ground transport access to trauma center care: a routing methodology (RM) and an as-the-crow-flies methodology (ACFM). These methodologies were adaptations of the only two methodologies that had been previously used in the literature to calculate EMS ground transport access to time-critical emergency services across the United States. The RM and ACFM were applied to the nine Level I and Level II trauma centers within the province of Ontario by creating trauma center catchment areas at 30, 45, 60, and 120 minutes and calculating the population and area encompassed by the catchments. Because the methodologies were identical for measuring air access, this study looks specifically at EMS ground transport access. Catchments for the province were created for each methodology at each time interval, and their populations and areas were significantly different at all time periods. Specifically, the RM calculated significantly larger populations at every time interval while the ACFM calculated larger catchment area sizes. This trend is counterintuitive (i.e., larger catchment should mean higher populations), and it was found to be most disparate at the shortest time intervals (under 60 minutes). Through critical evaluation of the differences, the authors elucidated that the ACFM could calculate road access in areas with no roads and overestimates access in low-density areas compared to the RM, potentially affecting delivery of care decisions. Based on these results, the authors believe that future methodologies for calculating EMS ground transport access must incorporate a continuous and valid route through the road network as well as use travel speeds appropriate to the road segments traveled; alternatively, we feel that variation in methods for calculating road distances would have little effect on realized access. Overall, as more complex models for calculating EMS ground transport access become used, there needs to be a standard methodology to improve and to compare it to. Based on these findings, the authors believe that this should be the RM. © 2012 by the Society for Academic Emergency Medicine.
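
    For reference, the as-the-crow-flies ingredient of an ACFM-style estimate reduces to a great-circle distance and a single assumed speed, whereas a routing methodology accumulates travel time over actual road segments; a sketch with arbitrary coordinates and an assumed speed:

        import math

        def as_the_crow_flies_km(lat1, lon1, lat2, lon2):
            """Great-circle (haversine) distance in kilometres."""
            r = 6371.0
            p1, p2 = math.radians(lat1), math.radians(lat2)
            dp, dl = math.radians(lat2 - lat1), math.radians(lon2 - lon1)
            a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
            return 2.0 * r * math.asin(math.sqrt(a))

        def acfm_travel_minutes(distance_km, assumed_speed_kmh=80.0):
            """Crude travel time from straight-line distance and one assumed speed."""
            return 60.0 * distance_km / assumed_speed_kmh

        d = as_the_crow_flies_km(43.653, -79.383, 44.389, -79.690)  # arbitrary points
        print(d, acfm_travel_minutes(d))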

  12. Optimization of antioxidant activity by response surface methodology in hydrolysates of jellyfish (Rhopilema esculentum) umbrella collagen.

    PubMed

    Zhuang, Yong-liang; Zhao, Xue; Li, Ba-fang

    2009-08-01

    To optimize the hydrolysis conditions to prepare hydrolysates of jellyfish umbrella collagen with the highest hydroxyl radical scavenging activity, collagen extracted from jellyfish umbrella was hydrolyzed with trypsin, and response surface methodology (RSM) was applied. The optimum conditions obtained from experiments were pH 7.75, temperature (T) 48.77 degrees C, and enzyme-to-substrate ratio ([E]/[S]) 3.50%. The analysis of variance in RSM showed that pH and [E]/[S] were important factors that significantly affected the process (P<0.05 and P<0.01, respectively). The hydrolysates of jellyfish umbrella collagen were fractionated by high performance liquid chromatography (HPLC), and three fractions (HF-1>3000 Da, 1000 Da

  13. Optimization of antioxidant activity by response surface methodology in hydrolysates of jellyfish (Rhopilema esculentum) umbrella collagen*

    PubMed Central

    Zhuang, Yong-liang; Zhao, Xue; Li, Ba-fang

    2009-01-01

    To optimize the hydrolysis conditions to prepare hydrolysates of jellyfish umbrella collagen with the highest hydroxyl radical scavenging activity, collagen extracted from jellyfish umbrella was hydrolyzed with trypsin, and response surface methodology (RSM) was applied. The optimum conditions obtained from experiments were pH 7.75, temperature (T) 48.77 °C, and enzyme-to-substrate ratio ([E]/[S]) 3.50%. The analysis of variance in RSM showed that pH and [E]/[S] were important factors that significantly affected the process (P<0.05 and P<0.01, respectively). The hydrolysates of jellyfish umbrella collagen were fractionated by high performance liquid chromatography (HPLC), and three fractions (HF-1>3000 Da, 1000 Da

  14. 76 FR 59896 - Wage Methodology for the Temporary Non-Agricultural Employment H-2B Program; Postponement of...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-09-28

    ... Wage Rule revised the methodology by which we calculate the prevailing wages to be paid to H-2B workers... methodology by which we calculate the prevailing wages to be paid to H-2B workers and United States (U.S... concerning the calculation of the prevailing wage rate in the H-2B program. CATA v. Solis, Dkt. No. 103-1...

  15. New methodology for fast prediction of wheel wear evolution

    NASA Astrophysics Data System (ADS)

    Apezetxea, I. S.; Perez, X.; Casanueva, C.; Alonso, A.

    2017-07-01

    In railway applications, wear prediction at the wheel-rail interface is a fundamental matter for studying problems such as wheel lifespan and the evolution of vehicle dynamic characteristics with time. However, one of the principal drawbacks of the existing methodologies for calculating the wear evolution is the computational cost. This paper proposes a new wear prediction methodology with a reduced computational cost. The methodology is based on two main steps: the first is the substitution of the calculations over the whole network by the calculation of the contact conditions at certain characteristic points, from whose results the wheel wear evolution can be inferred. The second is the substitution of the dynamic calculation (time-integration calculations) by a quasi-static calculation (the solution of the quasi-static situation of a vehicle at a certain point, which is equivalent to neglecting the acceleration terms in the dynamic equations). These simplifications allow a significant reduction of computational cost while maintaining an acceptable level of accuracy (errors of the order of 5-10%). Several case studies are analysed in the paper with the objective of assessing the proposed methodology. The results obtained in the case studies allow us to conclude that the proposed methodology is valid for an arbitrary vehicle running through an arbitrary track layout.

  16. Effects of Intervention to Improve At-Risk Fourth Graders' Understanding, Calculations, and Word Problems with Fractions

    ERIC Educational Resources Information Center

    Fuchs, Lynn S.; Schumacher, Robin F.; Long, Jessica; Namkung, Jessica; Malone, Amelia S.; Wang, Amber; Hamlett, Carol L.; Jordan, Nancy C.; Siegler, Robert S.; Changas, Paul

    2016-01-01

    The purposes of this study were to (a) investigate the efficacy of a core fraction intervention program on understanding and calculation skill and (b) isolate the effects of different forms of fraction word-problem (WP) intervention delivered as part of the larger program. At-risk 4th graders (n = 213) were randomly assigned at the individual…

  17. Effects of Intervention to Improve At-Risk Fourth Graders' Understanding, Calculations, and Word Problems with Fractions

    ERIC Educational Resources Information Center

    Fuchs, Lynn S.; Schumacher, Robin F.; Long, Jessica; Namkung, Jessica; Malone, Amelia S.; Wang, Amber; Hamlett, Carol L.; Jordan, Nancy C.; Siegler, Robert S.; Changas, Paul

    2016-01-01

    The purposes of this study were to (a) investigate the efficacy of a core fraction intervention program on understanding and calculation skill and (b) isolate the effects of different forms of fraction word-problem (WP) intervention. At-risk fourth graders (n = 213) were randomly assigned to the school's business-as-usual program, or one of two…

  18. Measurement of the spatially distributed temperature and soot loadings in a laminar diffusion flame using a Cone-Beam Tomography technique

    NASA Astrophysics Data System (ADS)

    Zhao, Huayong; Williams, Ben; Stone, Richard

    2014-01-01

    A new low-cost optical diagnostic technique, called Cone Beam Tomographic Three Colour Spectrometry (CBT-TCS), has been developed to measure the planar distributions of temperature, soot particle size, and soot volume fraction in a co-flow axi-symmetric laminar diffusion flame. The image of a flame is recorded by a colour camera, and then by using colour interpolation and applying a cone beam tomography algorithm, a colour map can be reconstructed that corresponds to a diametral plane. Look-up tables calculated using Planck's law and different scattering models are then employed to deduce the temperature, approximate average soot particle size and soot volume fraction in each voxel (volumetric pixel). A sensitivity analysis of the look-up tables shows that the results have a high temperature resolution but a relatively low soot particle size resolution. The assumptions underlying the technique are discussed in detail. Sample data from an ethylene laminar diffusion flame are compared with data in the literature for similar flames. The comparison shows very consistent temperature and soot volume fraction profiles. Further analysis indicates that the differences seen in comparison with published results are within the measurement uncertainties. This methodology is ready to be applied to measure 3D data by capturing multiple flame images from different angles for non-axisymmetric flames.

  19. On the singular perturbations for fractional differential equation.

    PubMed

    Atangana, Abdon

    2014-01-01

    The goal of this paper is to examine the possible extension of the singular perturbation differential equation to the concept of fractional order derivative. To achieve this, we present a review of the concept of fractional calculus. We make use of the Laplace transform operator to derive exact solutions of singular perturbation fractional linear differential equations. We then use three analytical methods to present exact and approximate solutions of the singular perturbation fractional, nonlinear, nonhomogeneous differential equation. These methods include the regular perturbation method, a recent development of the variational iteration method, and the homotopy decomposition method.

  20. [Elimination of toxic compounds, biological evaluation and partial characterization of the protein from jojoba meal (Simmondsia chinensis [Link] Schneider].

    PubMed

    Medina Juárez, L A; Trejo González, A

    1989-12-01

    The purpose of this study was to establish a new methodology to remove the toxic compounds present in jojoba meal and flour, to perform the biological evaluation of the detoxified products, and to chemically characterize the protein fractions. Jojoba meal and seed without testa were defatted with hexane and detoxified with a 7:3 isopropanol-water mixture, which removed 86% of the total phenolic compounds and 100% of the simmondsins originally present; the resulting products had reduced bitterness and caused no deaths in experimental animals. NPR values obtained for diets containing such products were significantly different from those obtained with the casein control (p < 0.05). Total protein was made up of three different fractions: the water-soluble fraction was the most abundant (61.8%), followed by the salt-soluble (23.6%) and the alkaline-soluble fraction (14.6%). The nitrogen solubility curves showed that the isoelectric point for the water-soluble and salt-soluble fractions was pH 3.0, while that of the alkaline fraction fell in the range of 4.5-5.0. All fractions had a maximum solubility at pH 7.0. The methodology reported here offers a viable solution to eliminate toxic compounds from jojoba meal or seeds, and upgrades the potential use of such products as animal feed or raw material for the production of protein isolates.

  1. Qualitative and Quantitative Analysis of Proteome and Peptidome of Human Follicular Fluid Using Multiple Samples from Single Donor with LC-MS and SWATH Methodology.

    PubMed

    Lewandowska, Aleksandra E; Macur, Katarzyna; Czaplewska, Paulina; Liss, Joanna; Łukaszuk, Krzysztof; Ołdziej, Stanisław

    2017-08-04

    Human follicular fluid (hFF) is a natural environment of oocyte maturation, and some components of hFF could be used to judge oocyte capability for fertilization and further development. In our pilot small-scale study three samples from four donors (12 samples in total) were analyzed to determine which hFF proteins/peptides could be used to differentiate individual oocytes and which are patient-specific. Ultrafiltration was used to fractionate hFF to high-molecular-weight (HMW) proteome (>10 kDa) and low-molecular-weight (LMW) peptidome (<10 kDa) fractions. HMW and LMW compositions were analyzed using LC-MS in SWATH data acquisition and processing methodology. In total we were able to identify 158 proteins, from which 59 were never reported before as hFF components. 55 (45 not reported before) proteins were found by analyzing LMW fraction, 67 (14 not reported before) were found by analyzing HMW fraction, and 36 were identified in both fractions of hFF. We were able to perform quantitative analysis for 72 proteins from HMW fraction of hFF. We found that concentrations of 11 proteins varied substantially among hFF samples from single donors, and those proteins are promising targets to identify biomarkers useful in oocyte quality assessment.

  2. Fuel and Carbon Dioxide Emissions Savings Calculation Methodology for Combined Heat and Power Systems

    EPA Pesticide Factsheets

    This paper provides the EPA Combined Heat and Power Partnership's recommended methodology for calculating the fuel and carbon dioxide emissions savings of CHP compared with separate heat and power (SHP) generation, which serves as the basis for the EPA's CHP emissions calculator.
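
    The structure of such a comparison can be sketched in a few lines: the fuel that separate heat and power would consume is estimated from displaced grid electricity and an on-site boiler, and CHP fuel use and emissions are subtracted. The Python sketch below uses placeholder efficiencies and emission factors chosen purely for illustration; they are not the Partnership's recommended default values.

        # Illustrative CHP-versus-SHP comparison. All efficiency and emission-factor
        # values are placeholders, not the EPA calculator's defaults.
        def chp_savings(power_mwh, heat_mmbtu, fuel_chp_mmbtu,
                        grid_eff=0.33, grid_losses=0.05, boiler_eff=0.80,
                        ef_grid_lb_per_mwh=1500.0, ef_fuel_lb_per_mmbtu=117.0):
            """Return (fuel savings in MMBtu, CO2 savings in lb) versus separate heat and power."""
            fuel_shp_power = power_mwh * 3.412 / (grid_eff * (1.0 - grid_losses))  # displaced grid fuel
            fuel_shp_heat = heat_mmbtu / boiler_eff                                # displaced boiler fuel
            fuel_savings = (fuel_shp_power + fuel_shp_heat) - fuel_chp_mmbtu
            co2_shp = (power_mwh / (1.0 - grid_losses)) * ef_grid_lb_per_mwh \
                      + fuel_shp_heat * ef_fuel_lb_per_mmbtu
            co2_chp = fuel_chp_mmbtu * ef_fuel_lb_per_mmbtu
            return fuel_savings, co2_shp - co2_chp

        print(chp_savings(power_mwh=10_000, heat_mmbtu=60_000, fuel_chp_mmbtu=125_000))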

  3. Interpretation of open system petrogenetic processes: Phase equilibria constraints on magma evolution

    NASA Astrophysics Data System (ADS)

    Defant, Marc J.; Nielsen, Roger L.

    1990-01-01

    We have used a computer model (TRACES) to simulate low pressure differentiation of natural basaltic magmas in an attempt to investigate the chemical dynamics of open system magmatic processes. Our results, in the form of simulated liquid lines of descent and the calculated equilibrium mineralogy, were determined for perfect fractional crystallization; fractionation paired with recharge and eruption (PRF); fractionation paired with assimilation (AFC); and fractionation paired with recharge, eruption, and assimilation (FEAR). These simulations were calculated in an attempt to assess the effects of combinations of petrogenetic processes on major and trace element evolution of natural systems and to test techniques that have been used to decipher the relative roles of these processes. If the results of PRF calculations are interpreted in terms of a mass balance based fractionation model (e.g., Bryan et al., 1969), it is possible to generate low residuals even if one assumes that fractional crystallization was the only active process. In effect, the chemical consequences of recharge are invisible to mass balance models. Pearce element ratio analyses, however, can effectively discern the effects of PRF versus simple fractionation. The fractionating mineral proportions, and therefore the bulk distribution coefficients (D̄) of a differentiating system, are dependent on the recharge or assimilation rate. Comparison of the results of simulations assuming constant D̄ with the results calculated by TRACES shows that the steady state liquid concentrations of some elements can differ by a factor of 2 to 5. If the PRF simulation is periodic, with episodes of mixing separated by intervals of fractionation, parallel liquidus mineral control lines are produced. Most of these control lines do not project back to the parental composition. This must be an important consideration when attempting to calculate a potential parental magma for any natural suite where magma chamber recharge has occurred. Most basaltic magmas cannot evolve to high silica compositions without magnetite fractionation. Small amounts of rhyolite assimilation (assimilation/fractionation < 0.1), however, can drive evolving basalts to more silica rich compositions. If mass balance models are used to interpret these synthetic AFC data, low residuals are obtained if magnetite is added to the crystallizing assemblage. This approach works even for cases where magnetite was not a fractionating phase. Thus, the mass balance results are mathematically correct, but are geologically irrelevant.

  4. Prediction of Microstructure in HAZ of Welds

    NASA Astrophysics Data System (ADS)

    Khurana, S. P.; Yancey, R.; Jung, G.

    2004-06-01

    A modeling technique for predicting the microstructure in the heat-affected zone (HAZ) of hypoeutectoid steels is presented. This technique aims at predicting the phase fractions of ferrite, pearlite, bainite and martensite present in the HAZ after the cool-down of a weld. The austenite formation kinetics and austenite decomposition kinetics are calculated using the transient temperature profile. The thermal profile in the weld and the HAZ is calculated by finite-element analysis (FEA). Two kinds of austenite decomposition models are included. The final phase fractions are predicted with the help of a continuous cooling transformation (CCT) diagram of the material; either the experimental CCT diagram or a mathematically calculated CCT diagram can be used in the calculation of phase fractions.

  5. Calculation of the radiative properties of photosynthetic microorganisms

    NASA Astrophysics Data System (ADS)

    Dauchet, Jérémi; Blanco, Stéphane; Cornet, Jean-François; Fournier, Richard

    2015-08-01

    A generic methodological chain for the predictive calculation of the light-scattering and absorption properties of photosynthetic microorganisms within the visible spectrum is presented here. This methodology has been developed in order to provide the radiative properties needed for the analysis of radiative transfer within photobioreactor processes, with a view to enable their optimization for large-scale sustainable production of chemicals for energy and chemistry. It gathers an electromagnetic model of light-particle interaction along with detailed and validated protocols for the determination of input parameters: morphological and structural characteristics of the studied microorganisms as well as their photosynthetic-pigment content. The microorganisms are described as homogeneous equivalent-particles whose shape and size distribution is characterized by image analysis. The imaginary part of their refractive index is obtained thanks to a new and quite extended database of the in vivo absorption spectra of photosynthetic pigments (that is made available to the reader). The real part of the refractive index is then calculated by using the singly subtractive Kramers-Krönig approximation, for which the anchor point is determined with the Bruggeman mixing rule, based on the volume fraction of the microorganism internal-structures and their refractive indices (extracted from a database). Afterwards, the radiative properties are estimated using the Schiff approximation for spheroidal or cylindrical particles, as a first step toward the description of the complexity and diversity of the shapes encountered within the microbial world. Finally, these predictive results are confronted to experimental normal-hemispherical transmittance spectra for validation. This entire procedure is implemented for Rhodospirillum rubrum, Arthrospira platensis and Chlamydomonas reinhardtii, each representative of the main three kinds of photosynthetic microorganisms, i.e. respectively photosynthetic bacteria, cyanobacteria and eukaryotic microalgae. The obtained results are in very good agreement with the experimental measurements when the shape of the microorganisms is well described (in comparison to the standard volume-equivalent sphere approximation). As a main perspective, the consideration of the helical shape of Arthrospira platensis appears to be a key to an accurate estimation of its radiative properties. On the whole, the presented methodological chain also appears of great interest for other scientific communities such as atmospheric science, oceanography, astrophysics and engineering.

  6. Calcium isotope fractionation between aqueous compounds relevant to low-temperature geochemistry, biology and medicine

    NASA Astrophysics Data System (ADS)

    Moynier, Frédéric; Fujii, Toshiyuki

    2017-03-01

    Stable Ca isotopes are fractionated between the bones, urine and blood of animals and between the soils, roots and leaves of plants by >1000 ppm for the 44Ca/40Ca ratio. These isotopic variations have important implications for understanding Ca transport and fluxes in living organisms; however, the mechanisms of isotopic fractionation are unclear. Here we present ab initio calculations of the isotopic fractionation between various aqueous species of Ca and show that this fractionation can be up to 3000 ppm. We show that the Ca isotopic fractionation between soil solutions and plant roots can be explained by the difference in isotopic fractionation between the different first-shell hydration degrees of Ca2+, and that the isotopic fractionation between roots and leaves is controlled by the precipitation of Ca-oxalates. The isotopic fractionation between blood and urine is due to the complexation of heavy Ca with citrate and oxalates in urine. Calculations are also presented for additional Ca species that may be useful to interpret future Ca isotopic measurements.

  7. Optical and Transport Properties of Organic Molecules: Methods and Applications

    NASA Astrophysics Data System (ADS)

    Strubbe, David Alan

    Organic molecules are versatile and tunable building blocks for technology, in nanoscale and bulk devices. In this dissertation, I will consider some important applications for organic molecules involving optical and transport properties, and develop methods and software appropriate for theoretical calculations of these properties. Specifically, we will consider second-harmonic generation, a nonlinear optical process; photoisomerization, in which absorption of light leads to mechanical motion; charge transport in junctions formed of single molecules; and optical excitations in pentacene, an organic semiconductor with applications in photovoltaics, optoelectronics, and flexible electronics. In the Introduction (Chapter 1), I will give an overview of some phenomenology about organic molecules and these application areas, and discuss the basics of the theoretical methodology I will use: density-functional theory (DFT), time-dependent density-functional theory (TDDFT), and many-body perturbation theory based on the GW approximation. In the subsequent chapters, I will further discuss, develop, and apply this methodology. 2. I will give a pedagogical derivation of the methods for calculating response properties in TDDFT, with particular focus on the Sternheimer equation, as will be used in subsequent chapters. I will review the many different response properties that can be calculated (dynamic and static) and the appropriate perturbations used to calculate them. 3. Standard techniques for calculating response use either integer occupations (as appropriate for a system with an energy gap) or fractional occupations due to a smearing function, used to improve convergence for metallic systems. I will present a generalization which can be used to compute response for a system with arbitrary fractional occupations. 4. Chloroform (CHCl3) is a small molecule commonly used as a solvent in measurements of nonlinear optics. I computed its hyperpolarizability for second-harmonic generation with TDDFT with a real-space grid, finding good agreement with calculations using localized bases and with experimental measurements, and that the response is very long-ranged in space. 5. N C 60 is an endohedral fullerene, a sphere of carbon containing a single N atom inside, which is weakly coupled electronically. I show with TDDFT calculations that a laser pulse can excite the vibrational mode of this N atom, transiently turning on and off the system's ability to undergo second-harmonic generation. The calculated susceptibility is as large as some commercially used frequency-doubling materials. 6. A crucial question in understanding experimental measurements of nonlinear optics and their relation to device performance is the effect of the solution environment on the properties of the isolated molecules. I will consider possible explanations for the large enhancement of the hyperpolarizability of chloroform in solution, demonstrate an ab initio method of calculating electrostatic effects with local-field factors, and derive the equations necessary for a full calculation of liquid chloroform. 7. Many-body perturbation theory, in the GW approximation for quasiparticle band-structure and Bethe-Salpeter equation for optical properties, is a powerful method for calculations in solids, nanostructures, and molecules. The BerkeleyGW code is a freely available implementation of this methodology which has been extensively tested and efficiently parallelized for use on large systems. 8. 
Molecular junctions, in which a single molecule is contacted to two metallic leads, are interesting systems for studying nanoscale transport. I will present a method called DFT+Sigma which approximates many-body perturbation theory to enable accurate and efficient calculations of the conductance of these systems. 9. Azobenzene is a molecule with the unusual property that it can switch reversibly between two different geometries, cis and trans, upon absorption of light. I have calculated the structures of these two forms when adsorbed on the Au(111) surface, to understand scanning tunneling microscope studies and elucidate the switching mechanism on the surface. I have also calculated the conductance of the two forms in a molecular junction. 10. The Seebeck and Peltier thermoelectric effects can interconvert electricity and heat, and are parametrized by the Seebeck coefficient. Standard methods in quantum transport for computing this quantity are problematic numerically. I will show this fact in a simple model and derive a more robust and efficient approach. 11. Pentacene is an organic semiconductor which shows exciton self-trapping in its optical spectra. I will present a method for calculation of excited-state forces with the Bethe-Salpeter equation that can be applied to study the geometrical relaxation that occurs upon absorption of light by pentacene.

  8. 42 CFR 413.337 - Methodology for calculating the prospective payment rates.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ...-STAGE RENAL DISEASE SERVICES; OPTIONAL PROSPECTIVELY DETERMINED PAYMENT RATES FOR SKILLED NURSING FACILITIES Prospective Payment for Skilled Nursing Facilities § 413.337 Methodology for calculating the...

  9. 42 CFR 413.337 - Methodology for calculating the prospective payment rates.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ...-STAGE RENAL DISEASE SERVICES; OPTIONAL PROSPECTIVELY DETERMINED PAYMENT RATES FOR SKILLED NURSING FACILITIES Prospective Payment for Skilled Nursing Facilities § 413.337 Methodology for calculating the...

  10. Permeable Surface Corrections for Ffowcs Williams and Hawkings Integrals

    NASA Technical Reports Server (NTRS)

    Lockard, David P.; Casper, Jay H.

    2005-01-01

    The acoustic prediction methodology discussed herein applies an acoustic analogy to calculate the sound generated by sources in an aerodynamic simulation. Sound is propagated from the computed flow field by integrating the Ffowcs Williams and Hawkings equation on a suitable control surface. Previous research suggests that, for some applications, the integration surface must be placed away from the solid surface to incorporate source contributions from within the flow volume. As such, the fluid mechanisms in the input flow field that contribute to the far-field noise are accounted for by their mathematical projection as a distribution of source terms on a permeable surface. The passage of nonacoustic disturbances through such an integration surface can result in significant error in an acoustic calculation. A correction for the error is derived in the frequency domain using a frozen gust assumption. The correction is found to work reasonably well in several test cases where the error is a small fraction of the actual radiated noise. However, satisfactory agreement has not been obtained between noise predictions using the solution from a three-dimensional, detached-eddy simulation of flow over a cylinder.

  11. Advanced Space Propulsion System Flowfield Modeling

    NASA Technical Reports Server (NTRS)

    Smith, Sheldon

    1998-01-01

    Solar thermal upper stage propulsion systems currently under development utilize small low chamber pressure/high area ratio nozzles. Consequently, the resulting flow in the nozzle is highly viscous, with the boundary layer flow comprising a significant fraction of the total nozzle flow area. Conventional uncoupled flow methods which treat the nozzle boundary layer and inviscid flowfield separately by combining the two calculations via the influence of the boundary layer displacement thickness on the inviscid flowfield are not accurate enough to adequately treat highly viscous nozzles. Navier Stokes models such as VNAP2 can treat these flowfields but cannot perform a vacuum plume expansion for applications where the exhaust plume produces induced environments on adjacent structures. This study is built upon recently developed artificial intelligence methods and user interface methodologies to couple the VNAP2 model for treating viscous nozzle flowfields with a vacuum plume flowfield model (RAMP2) that is currently a part of the Plume Environment Prediction (PEP) Model. This study integrated the VNAP2 code into the PEP model to produce an accurate, practical and user friendly tool for calculating highly viscous nozzle and exhaust plume flowfields.

  12. Systematic void fraction studies with RELAP5, FRANCESCA and HECHAN

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Stosic, Z.; Preusche, G.

    1996-08-01

    In enhancing the scope of standard thermal-hydraulic code applications beyond their capabilities, i.e. coupling with a one- and/or three-dimensional kinetics core model, the void fraction transferred from thermal-hydraulics to the core model plays a determining role in the normal operating range and at high core flow, as the generated heat and axial power profiles are direct functions of the void distribution in the core. Hence, it is very important to know whether the void-quality models in the programs which have to be coupled are compatible, to allow the interactive exchange of data which are based on these constitutive void-quality relations. The presented void fraction study is performed in order to give the basis for the conclusion whether a transient core simulation using the RELAP5 void fractions can calculate the axial power shapes adequately. Because of that, the void fractions calculated with RELAP5 are compared with those calculated by the BWR safety code for licensing, FRANCESCA, and by the best-estimate model for pre- and post-dryout calculation in a BWR heated channel, HECHAN. In addition, a comparison with standard experimental void-quality benchmark tube data is performed for the HECHAN code.

  13. 76 FR 68385 - Approval and Promulgation of Implementation Plans; New Mexico; Albuquerque/Bernalillo County...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-11-04

    ... NMAC) addition of in subsections methodology for (A) and (B). fugitive dust control permits, revised... fee Fee Calculations and requirements for Procedures. fugitive dust control permits. 9/7/2004 Section... schedule based on acreage, add and update calculation methodology used to calculate non- programmatic dust...

  14. On the Singular Perturbations for Fractional Differential Equation

    PubMed Central

    Atangana, Abdon

    2014-01-01

    The goal of this paper is to examine the possible extension of the singular perturbation differential equation to the concept of the fractional order derivative. To achieve this, we present a review of the concept of fractional calculus. We make use of the Laplace transform operator to derive the exact solution of singular perturbation fractional linear differential equations. We use three analytical methods to present exact and approximate solutions of the singular perturbation fractional, nonlinear, nonhomogeneous differential equation: the regular perturbation method, the new development of the variational iteration method, and the homotopy decomposition method. PMID:24683357

  15. COD Fractions in Mechanical-Biological Wastewater Treatment Plant

    NASA Astrophysics Data System (ADS)

    Płuciennik-Koropczuk, Ewelina; Jakubaszek, Anita; Myszograj, Sylwia; Uszakiewicz, Sylwia

    2017-03-01

    The paper presents the results of studies concerning the determination of COD fractions in raw, mechanically treated and biologically treated wastewater. The test object was a wastewater treatment plant with a capacity of over 20,000 PE. The results were compared with the data adopted in the ASM models. During the investigation the following COD fractions were determined: dissolved non-biodegradable SI, dissolved easily biodegradable SS, slowly degradable in organic suspension XS, and non-biodegradable in organic suspension XI. The methodology for determining the COD fractions was based on the ATV-A 131 guidelines. The real percentages of each fraction in the total COD of raw wastewater differ from the values adopted in the ASM models.
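
    As a hedged illustration of this kind of fractionation, the sketch below splits total COD into the four fractions from routine soluble and total COD measurements, taking SI from the effluent soluble COD and assuming an inert share of the particulate COD. The concentrations and the inert-share factor are illustrative assumptions, not values from the paper or prescribed by ATV-A 131.

        # Hedged sketch of a simple COD fractionation: SI from effluent soluble COD,
        # SS from influent soluble COD, XI as an assumed share of particulate COD,
        # XS as the remainder. The factor f_xi and all concentrations are illustrative.
        def cod_fractions(cod_total, cod_soluble_influent, cod_soluble_effluent, f_xi=0.3):
            si = cod_soluble_effluent                      # inert soluble COD
            ss = cod_soluble_influent - si                 # readily biodegradable COD
            particulate = cod_total - cod_soluble_influent
            xi = f_xi * particulate                        # inert particulate COD (assumed share)
            xs = particulate - xi                          # slowly biodegradable COD
            return {"SI": si, "SS": ss, "XI": xi, "XS": xs}

        fractions = cod_fractions(cod_total=600, cod_soluble_influent=240, cod_soluble_effluent=30)
        for name, value in fractions.items():
            print(f"{name}: {value:.0f} mg O2/L ({value / 600:.0%} of total COD)")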

  16. Self-consistent implementation of ensemble density functional theory method for multiple strongly correlated electron pairs

    DOE PAGES

    Filatov, Michael; Liu, Fang; Kim, Kwang S.; ...

    2016-12-22

    Here, the spin-restricted ensemble-referenced Kohn-Sham (REKS) method is based on an ensemble representation of the density and is capable of correctly describing the non-dynamic electron correlation stemming from (near-)degeneracy of several electronic configurations. The existing REKS methodology describes systems with two electrons in two fractionally occupied orbitals. In this work, the REKS methodology is extended to treat systems with four fractionally occupied orbitals accommodating four electrons, and a self-consistent implementation of the REKS(4,4) method with simultaneous optimization of the orbitals and their fractional occupation numbers is reported. The new method is applied to a number of molecular systems where simultaneous dissociation of several chemical bonds takes place, as well as to the singlet ground states of the organic tetraradicals 2,4-didehydrometaxylylene and 1,4,6,9-spiro[4.4]nonatetrayl.

  17. Variance change point detection for fractional Brownian motion based on the likelihood ratio test

    NASA Astrophysics Data System (ADS)

    Kucharczyk, Daniel; Wyłomańska, Agnieszka; Sikora, Grzegorz

    2018-01-01

    Fractional Brownian motion is one of the main stochastic processes used for describing the long-range dependence phenomenon in self-similar processes. It appears that for many real time series, characteristics of the data change significantly over time. Such behaviour can be observed in many applications, including physical and biological experiments. In this paper, we present a new technique for critical change point detection for cases where the data under consideration are driven by fractional Brownian motion with a time-changed diffusion coefficient. The proposed methodology is based on the likelihood ratio approach and represents an extension of a similar methodology used for Brownian motion, a process with independent increments. We also propose a statistical test for assessing the significance of the estimated critical point. In addition, an extensive simulation study is provided to test the performance of the proposed method.

  18. How can activity-based costing methodology be performed as a powerful tool to calculate costs and secure appropriate patient care?

    PubMed

    Lin, Blossom Yen-Ju; Chao, Te-Hsin; Yao, Yuh; Tu, Shu-Min; Wu, Chun-Ching; Chern, Jin-Yuan; Chao, Shiu-Hsiung; Shaw, Keh-Yuong

    2007-04-01

    Previous studies have shown the advantages of using activity-based costing (ABC) methodology in the health care industry. The potential value of ABC methodology in health care derives from its more accurate cost calculation compared to traditional step-down costing, and from the potential to evaluate the quality or effectiveness of health care based on health care activities. This project used ABC methodology to profile the cost structure of inpatients undergoing surgical procedures at the Department of Colorectal Surgery in a public teaching hospital, and to identify missing or inappropriate clinical procedures. We found that ABC methodology was able to calculate costs accurately and to identify several missing pre- and post-surgical nursing education activities in the course of treatment.

  19. A methodology for calculating transport emissions in cities with limited traffic data: Case study of diesel particulates and black carbon emissions in Murmansk.

    PubMed

    Kholod, N; Evans, M; Gusev, E; Yu, S; Malyshev, V; Tretyakova, S; Barinov, A

    2016-03-15

    This paper presents a methodology for calculating exhaust emissions from on-road transport in cities with low-quality traffic data and outdated vehicle registries. The methodology consists of data collection approaches and emission calculation methods. For data collection, the paper suggests using video survey and parking lot survey methods developed for the International Vehicular Emissions model. Additional sources of information include data from the largest transportation companies, vehicle inspection stations, and official vehicle registries. The paper suggests using the European Computer Programme to Calculate Emissions from Road Transport (COPERT) 4 model to calculate emissions, especially in countries that implemented European emissions standards. If available, the local emission factors should be used instead of the default COPERT emission factors. The paper also suggests additional steps in the methodology to calculate emissions only from diesel vehicles. We applied this methodology to calculate black carbon emissions from diesel on-road vehicles in Murmansk, Russia. The results from Murmansk show that diesel vehicles emitted 11.7 tons of black carbon in 2014. The main factors determining the level of emissions are the structure of the vehicle fleet and the level of vehicle emission controls. Vehicles without controls emit about 55% of black carbon emissions. Copyright © 2015 Elsevier B.V. All rights reserved.
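
    The bottom-up structure behind such fleet estimates, vehicle numbers times annual mileage times technology-specific emission factors summed over classes, can be sketched as below. The vehicle classes, activity data and black carbon emission factors are invented for illustration; they are not COPERT values or Murmansk data.

        # Bottom-up fleet emission estimate: vehicles x annual mileage x emission factor,
        # summed over classes. All figures below are illustrative placeholders.
        fleet = [
            # (class, vehicles, km per vehicle per year, BC emission factor in g/km)
            ("diesel bus, no control",    400, 60_000, 0.35),
            ("diesel bus, Euro IV",       250, 60_000, 0.05),
            ("diesel truck, no control",  900, 40_000, 0.25),
            ("diesel car, Euro 5",       5000, 15_000, 0.002),
        ]

        total_g = sum(n * km * ef for _, n, km, ef in fleet)
        print(f"total black carbon: {total_g / 1e6:.1f} t/yr")
        for name, n, km, ef in fleet:
            print(f"  {name:25s} {n * km * ef / total_g:6.1%}")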

  20. Electron absorbed fractions of energy and S-values in an adult human skeleton based on µCT images of trabecular bone

    NASA Astrophysics Data System (ADS)

    Kramer, R.; Richardson, R. B.; Cassola, V. F.; Vieira, J. W.; Khoury, H. J.; Lira, C. A. B. de O.; Robson Brown, K.

    2011-03-01

    When the human body is exposed to ionizing radiation, among the soft tissues at risk are the active marrow (AM) and the bone endosteum (BE) located in tiny, irregular cavities of trabecular bone. Determination of absorbed fractions (AFs) of energy or absorbed dose in the AM and the BE represent one of the major challenges of dosimetry. Recently, at the Department of Nuclear Energy at the Federal University of Pernambuco, a skeletal dosimetry method based on µCT images of trabecular bone introduced into the spongiosa voxels of human phantoms has been developed and applied mainly to external exposure to photons. This study uses the same method to calculate AFs of energy and S-values (absorbed dose per unit activity) for electron-emitting radionuclides known to concentrate in skeletal tissues. The modelling of the skeletal tissue regions follows ICRP110, which defines the BE as a 50 µm thick sub-region of marrow next to the bone surfaces. The paper presents mono-energetic AFs for the AM and the BE for eight different skeletal regions for electron source energies between 1 keV and 10 MeV. The S-values are given for the beta emitters 14C, 59Fe, 131I, 89Sr, 32P and 90Y. Comparisons with results from other investigations showed good agreement provided that differences between methodologies and trabecular bone volume fractions were properly taken into account. Additionally, a comparison was made between specific AFs of energy in the BE calculated for the actual 50 µm endosteum and the previously recommended 10 µm endosteum. The increase in endosteum thickness leads to a decrease of the endosteum absorbed dose by up to 3.7 fold when bone is the source region, while absorbed dose increases by ~20% when the beta emitters are in marrow.
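
    How an S-value follows from an absorbed fraction can be illustrated with the basic MIRD relation S = (energy emitted per decay × absorbed fraction) / target mass. The mean beta energy, absorbed fraction and marrow mass in the sketch below are rough placeholders, not values taken from the paper.

        # MIRD-style relation: S = (energy emitted per decay) x (absorbed fraction) / target mass.
        # Mean beta energy, absorbed fraction and marrow mass are rough placeholders.
        MEV_TO_J = 1.602176634e-13

        def s_value(mean_energy_mev_per_decay, absorbed_fraction, target_mass_kg):
            """Absorbed dose to the target per decay in the source region, in Gy."""
            return mean_energy_mev_per_decay * MEV_TO_J * absorbed_fraction / target_mass_kg

        # A 90Y-like beta emitter (mean beta energy ~0.93 MeV) irradiating active marrow
        print(f"S = {s_value(0.93, absorbed_fraction=0.4, target_mass_kg=0.05):.2e} Gy per decay")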

  1. Pathways to fraction learning: Numerical abilities mediate the relation between early cognitive competencies and later fraction knowledge.

    PubMed

    Ye, Ai; Resnick, Ilyse; Hansen, Nicole; Rodrigues, Jessica; Rinne, Luke; Jordan, Nancy C

    2016-12-01

    The current study investigated the mediating role of number-related skills in the developmental relationship between early cognitive competencies and later fraction knowledge using structural equation modeling. Fifth-grade numerical skills (i.e., whole number line estimation, non-symbolic proportional reasoning, multiplication, and long division skills) mapped onto two distinct factors: magnitude reasoning and calculation. Controlling for participants' (N=536) demographic characteristics, these two factors fully mediated relationships between third-grade general cognitive competencies (attentive behavior, verbal and nonverbal intellectual abilities, and working memory) and sixth-grade fraction knowledge (concepts and procedures combined). However, specific developmental pathways differed by type of fraction knowledge. Magnitude reasoning ability fully mediated paths from all four cognitive competencies to knowledge of fraction concepts, whereas calculation ability fully mediated paths from attentive behavior and verbal ability to knowledge of fraction procedures (all with medium to large effect sizes). These findings suggest that there are partly overlapping, yet distinct, developmental pathways from cognitive competencies to general fraction knowledge, fraction concepts, and fraction procedures. Copyright © 2016 Elsevier Inc. All rights reserved.

  2. User's guide for vectorized code EQUIL for calculating equilibrium chemistry on Control Data STAR-100 computer

    NASA Technical Reports Server (NTRS)

    Kumar, A.; Graves, R. A., Jr.; Weilmuenster, K. J.

    1980-01-01

    A vectorized code, EQUIL, was developed for calculating the equilibrium chemistry of a reacting gas mixture on the Control Data STAR-100 computer. The code provides species mole fractions, mass fractions, and the thermodynamic and transport properties of the mixture for a given temperature, pressure, and set of elemental mass fractions. The code is set up for a system of elements consisting of electrons, H, He, C, O, and N. In all, 24 chemical species are included.
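
    One of the routine conversions such a code performs, from mole fractions to mass fractions and a mixture molar mass, is sketched below. This is the textbook relation, not EQUIL source code, and the species list and mole fractions are illustrative.

        # Textbook conversion from mole fractions to mass fractions: y_i = x_i*M_i / sum_j x_j*M_j.
        # Species and mole fractions below are illustrative for a high-temperature air-like mixture.
        def mass_fractions(mole_fractions, molar_masses):
            m_mix = sum(x * m for x, m in zip(mole_fractions, molar_masses))
            return [x * m / m_mix for x, m in zip(mole_fractions, molar_masses)], m_mix

        species = ["N2", "O2", "NO", "N", "O"]
        x = [0.72, 0.10, 0.05, 0.03, 0.10]              # mole fractions (sum to 1)
        M = [28.013, 31.999, 30.006, 14.007, 15.999]    # molar masses, g/mol

        y, m_mix = mass_fractions(x, M)
        print(f"mixture molar mass: {m_mix:.3f} g/mol")
        for s, yi in zip(species, y):
            print(f"  {s:3s} mass fraction {yi:.4f}")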

  3. Evaluation of planar halogenated and polycyclic aromatic hydrocarbons in estuarine sediments using ethoxyresorufin-O-deethylase induction of H4IIE cells

    USGS Publications Warehouse

    Gale, R.W.; Long, E.R.; Schwartz, T.R.; Tillitt, D.E.

    2000-01-01

    Polycyclic aromatic hydrocarbons (PAHs) and planar halogenated hydrocarbons (PHHs), including polychlorinated dibenzo-p-dioxins (PCDDs), dibenzofurans (PCDFs), and biphenyls (PCBs), were determined in fractionated sediment extracts from the Hudson-Raritan estuary and Newark Bay, New Jersey, USA, as part of a comprehensive risk assessment. Contributions of PCDDs/PCDFs, PCBs, and PAHs to the total toxic equivalents (TEQs) were measured using an H4IIE bioassay and calculated from instrumentally determined concentrations using international toxic equivalency factors. The H4IIE TEQs of whole and fractionated extracts were compared to calculated TEQs to investigate the applicability of the bioassay approach for evaluating 7-ethoxyresorufin-O-deethylase induction by PHHs and PAHs present together in complex mixtures. Although 2,3,7,8-tetrachlorodibenzo-p-dioxin contributed from 41 to 79% of the calculated TEQs from PCDDs/PCDFs and planar PCBs in all sediments sampled, the PAH-containing fractions accounted for >80% of the total TEQs determined either instrumentally or by bioassay. Calculated TEQs from PAHs, based on reported toxic equivalency factors for only seven PAHs, were severalfold greater than the bioassay-derived TEQs of PAH-only fractions of the sediment extracts. Significant correlations were observed between bioassay and instrumentally determined toxic equivalents in the more purified fractions, but not in fractions purified by size-exclusion or argentation chromatography alone.

  4. The effect of macro-bending on power confinement factor in single mode fibers

    NASA Astrophysics Data System (ADS)

    Waluyo, T. B.; Bayuwati, D.; Mulyanto, I.

    2018-03-01

    One of the methods to determine the macro-bending effect in a single mode fiber is to calculate its power loss coefficient. We describe an alternative method based on the equation for the fractional power in the fiber core. Knowing the fiber parameters, such as its core radius, refractive indices, and operating wavelength, we can calculate the V-number and the fractional power in the core. Because the values of the fiber refractive indices and the propagation constant are affected by bending, we can calculate the fractional power in the core as a function of the bending radius. We calculate the fractional power in the core of an SMF28 and an SM600 fiber and, to verify our calculation, we measure the transmission loss using an optical spectrum analyzer. Our calculations and experimental results showed that for the SMF28 fiber there is about 4% power loss due to bending at 633 nm, about 8% at 1310 nm, about 20% at 1550 nm, and about 60% at 1064 nm. For the SM600 fiber there is about 6% power loss due to bending at 633 nm and about 11% at 850 nm, and this fiber is not suitable for operating wavelengths beyond 1000 nm.
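
    For the straight-fibre part of such a calculation, the V-number and the fraction of power confined to the core can be estimated with the Gaussian mode-field (Marcuse) approximation, as sketched below. The fibre parameters are illustrative SMF28-like values, and the bending-induced change of the effective index, which the paper models, is not included here.

        # V-number and core-confined power fraction of the LP01 mode in the Gaussian
        # mode-field (Marcuse) approximation, for a straight fibre. The approximation
        # is intended for single-mode operation (V below about 2.405); bending effects
        # are not modelled and the fibre parameters are illustrative.
        import math

        def core_power_fraction(a_um, n_core, n_clad, wavelength_um):
            na = math.sqrt(n_core**2 - n_clad**2)
            v = 2 * math.pi * a_um * na / wavelength_um
            w = a_um * (0.65 + 1.619 / v**1.5 + 2.879 / v**6)    # Marcuse mode-field radius
            gamma = 1.0 - math.exp(-2.0 * (a_um / w)**2)          # fraction of power in core
            return v, gamma

        for lam in (1.31, 1.55):
            v, gamma = core_power_fraction(a_um=4.1, n_core=1.4494, n_clad=1.4447, wavelength_um=lam)
            print(f"lambda = {lam} um: V = {v:.2f}, core power fraction = {gamma:.2f}")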

  5. Risk assessment for the mercury polluted site near a pesticide plant in Changsha, Hunan, China.

    PubMed

    Dong, Haochen; Lin, Zhijia; Wan, Xiang; Feng, Liu

    2017-02-01

    The distribution characteristics of mercury fractions at a site near a pesticide plant were investigated, with total mercury concentrations ranging from 0.0250 to 44.3 mg kg -1 . Mercury bound to organic matter and residual mercury were the main fractions, and the most mobile fractions accounted for only 5.9%-9.7%, indicating a relatively low degree of potential risk. The relationships between mercury fractions and soil physicochemical properties were analysed. The results demonstrated that organic matter was one of the most important factors in soil fraction distribution, and both OM and soil pH appeared to have a significant influence on the Fe/Mn-oxide-bound mercury. Together with the methodology of partial correlation analysis, the concept and model of delayed geochemical hazard (DGH) was introduced to reveal the potential transformation paths and chain reactions among different mercury fractions and therefore to give a better understanding of risk development. The results showed that the site may be classified as a low-risk site for mercury DGH, with a probability of 10.5%, but that it tends readily toward mercury DGH development because of the low critical points for DGH burst. In summary, this study provides a methodology for site risk assessment in terms of static risk and risk development. Copyright © 2016 Elsevier Ltd. All rights reserved.

  6. Multiphase flow calculation software

    DOEpatents

    Fincke, James R.

    2003-04-15

    Multiphase flow calculation software and computer-readable media carrying computer executable instructions for calculating liquid and gas phase mass flow rates of high void fraction multiphase flows. The multiphase flow calculation software employs various given, or experimentally determined, parameters in conjunction with a plurality of pressure differentials of a multiphase flow, preferably supplied by a differential pressure flowmeter or the like, to determine liquid and gas phase mass flow rates of the high void fraction multiphase flows. Embodiments of the multiphase flow calculation software are suitable for use in a variety of applications, including real-time management and control of an object system.

  7. Analysis of discrete and continuous distributions of ventilatory time constants from dynamic computed tomography

    NASA Astrophysics Data System (ADS)

    Doebrich, Marcus; Markstaller, Klaus; Karmrodt, Jens; Kauczor, Hans-Ulrich; Eberle, Balthasar; Weiler, Norbert; Thelen, Manfred; Schreiber, Wolfgang G.

    2005-04-01

    In this study, an algorithm was developed to measure the distribution of pulmonary time constants (TCs) from dynamic computed tomography (CT) data sets during a sudden airway pressure step up. Simulations with synthetic data were performed to test the methodology as well as the influence of experimental noise. Furthermore the algorithm was applied to in vivo data. In five pigs sudden changes in airway pressure were imposed during dynamic CT acquisition in healthy lungs and in a saline lavage ARDS model. The fractional gas content in the imaged slice (FGC) was calculated by density measurements for each CT image. Temporal variations of the FGC were analysed assuming a model with a continuous distribution of exponentially decaying time constants. The simulations proved the feasibility of the method. The influence of experimental noise could be well evaluated. Analysis of the in vivo data showed that in healthy lungs ventilation processes can be more likely characterized by discrete TCs whereas in ARDS lungs continuous distributions of TCs are observed. The temporal behaviour of lung inflation and deflation can be characterized objectively using the described new methodology. This study indicates that continuous distributions of TCs reflect lung ventilation mechanics more accurately compared to discrete TCs.
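
    A common density-based estimate of fractional gas content treats a voxel at -1000 HU as pure gas and one near 0 HU as gas-free tissue, interpolating linearly in between; whether the study used exactly these reference values is an assumption, and the synthetic slices in the sketch below are purely illustrative.

        # Density-based estimate of fractional gas content (FGC): linear interpolation
        # between pure gas (-1000 HU) and gas-free tissue (~0 HU). Reference values and
        # the synthetic slices are illustrative assumptions.
        import numpy as np

        def fractional_gas_content(hu, hu_gas=-1000.0, hu_tissue=0.0):
            frac = (hu_tissue - hu) / (hu_tissue - hu_gas)
            return float(np.clip(frac, 0.0, 1.0).mean())

        rng = np.random.default_rng(1)
        early = rng.normal(loc=-450.0, scale=120.0, size=(256, 256))   # shortly after pressure step
        late = rng.normal(loc=-650.0, scale=120.0, size=(256, 256))    # after slow compartments fill
        print(f"FGC early: {fractional_gas_content(early):.2f}, FGC late: {fractional_gas_content(late):.2f}")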

  8. The study on biomass fraction estimate methodology of municipal solid waste incinerator in Korea.

    PubMed

    Kang, Seongmin; Kim, Seungjin; Lee, Jeongwoo; Yun, Hyunki; Kim, Ki-Hyun; Jeon, Eui-Chan

    2016-10-01

    In Korea, the amount of greenhouse gases released due to waste was 14,800,000 t CO2eq in 2012, up from 5,000,000 t CO2eq in 2010. This included the amount released due to incineration, which has gradually increased since 2010; incineration was found to be the biggest contributor, with 7,400,000 t CO2eq released in 2012. Therefore, with regard to the trading of greenhouse gas emissions initiated in 2015 and the writing of the national inventory report, it is important to increase the reliability of the measurements related to the incineration of waste materials. This research explored methods for estimating the biomass fraction at Korean MSW incinerator facilities and compared the biomass fractions obtained with the different estimation methods. The biomass fraction was estimated by the method using the default values of fossil carbon fraction suggested by the IPCC, the method using the solid waste composition, and the method using incinerator flue gas. The highest biomass fractions in Korean municipal solid waste incinerator facilities were estimated by the IPCC default method, followed by the MSW analysis method and the flue gas analysis method. Therefore, the difference in the biomass fraction estimate was greatest between the IPCC default and the flue gas analysis methods, while the difference between the MSW analysis and flue gas analysis methods was smaller. This suggests that the IPCC default method cannot reflect the characteristics of Korean waste incinerator facilities and Korean MSW. Incineration is one of the most effective methods for disposal of municipal solid waste (MSW). This paper investigates the applicability of using biomass content to estimate the amount of CO2 released, and compares the biomass contents determined by different methods in order to establish a method for estimating biomass in the MSW incinerator facilities of Korea. After analyzing the biomass contents of the collected solid waste samples and the flue gas samples, the results were compared with the Intergovernmental Panel on Climate Change (IPCC) method; the comparison indicates that the flue gas analysis method is preferable to the IPCC method for calculating the biomass fraction. These findings are valuable for the design and operation of new incineration power plants, especially for the estimation of greenhouse gas emissions.
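
    The waste-composition route can be illustrated as below: each component's carbon is weighted by a fossil carbon fraction and the biogenic remainder gives the biomass fraction, in the spirit of the IPCC default approach. The composition shares and factors shown are illustrative and are not the study's Korean data or official IPCC defaults.

        # Waste-composition route to a biomass fraction: weight each component's carbon
        # by its fossil carbon fraction (FCF) and take the biogenic remainder.
        # Composition shares and factors are illustrative only.
        composition = {
            # component: (wet-mass share, dry-matter content, carbon fraction of dry matter,
            #             fossil fraction of that carbon)
            "paper":    (0.25, 0.90, 0.46, 0.01),
            "food":     (0.30, 0.40, 0.38, 0.00),
            "plastics": (0.20, 1.00, 0.75, 1.00),
            "textiles": (0.10, 0.80, 0.50, 0.20),
            "wood":     (0.15, 0.85, 0.50, 0.00),
        }

        total_c = sum(w * dm * cf for w, dm, cf, _ in composition.values())
        fossil_c = sum(w * dm * cf * fcf for w, dm, cf, fcf in composition.values())
        print(f"carbon per kg wet waste: {total_c:.3f} kg")
        print(f"biomass (biogenic) fraction of carbon: {1.0 - fossil_c / total_c:.2f}")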

  9. Quantitative first-principles theory of interface absorption in multilayer heterostructures

    DOE PAGES

    Hachtel, Jordan A.; Sachan, Ritesh; Mishra, Rohan; ...

    2015-09-03

    The unique chemical bonds and electronic states of interfaces result in optical properties that are different from those of the constituent bulk materials. In the nanoscale regime, the interface effects can be dominant and impact the optical response of devices. Using density functional theory (DFT), the interface effects can be calculated, but DFT is computationally limited to small systems. In this paper, we describe a method to combine DFT with macroscopic methodologies to extract the interface effect on absorption in a consistent and quantifiable manner. The extracted interface effects are independent parameters and can be applied to more complicated systems. Finally, we demonstrate, using NiSi 2/Si heterostructures, that by varying the relative volume fractions of interface and bulk, we can tune the spectral range of the heterostructure absorption.

  10. Study of the AC machines winding having fractional q

    NASA Astrophysics Data System (ADS)

    Bespalov, V. Y.; Sidorov, A. O.

    2018-02-01

    Winding schemes with a fractional number of slots per pole and phase q have been known and used for a long time. However, the literature on low-noise machine design recommends against their use. Nevertheless, fractional-q windings have been realized in many special AC electrical machine applications, allowing their performance, including the vibroacoustic behaviour, to be improved. This paper deals with the harmonic analysis of windings having integer and fractional q in permanent magnet synchronous motors; a comparison of their characteristics is performed and the frequencies of subharmonics are revealed. An optimal winding pitch is found that reduces the amplitudes of the subharmonics. Distribution factors for subharmonics, fractional and high-order harmonics are calculated and the results are analysed, allowing recommendations to be given on how to calculate distribution factors for different harmonics when q is fractional.
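
    The distribution-factor calculation mentioned above can be sketched as follows: for an integer q the classic 60-degree phase-belt formula kd(nu) = sin(nu*q*gamma/2)/(q*sin(nu*gamma/2)) with slot angle gamma = pi/(3q) applies, and for a fractional q = Z/N in lowest terms a common textbook treatment substitutes the numerator Z for q. Treating that substitution as valid for the specific windings studied in the paper is an assumption.

        # Distribution factor k_d for harmonic order nu. For fractional q = Z/N (lowest
        # terms) the numerator Z is substituted for q, a common textbook treatment whose
        # applicability to a specific winding should be checked case by case.
        import math
        from fractions import Fraction

        def distribution_factor(q, nu=1):
            z = Fraction(q).limit_denominator(1000).numerator   # effective slots per pole and phase
            gamma = math.pi / (3 * z)                            # electrical slot angle
            return math.sin(nu * z * gamma / 2) / (z * math.sin(nu * gamma / 2))

        for q in (2, 3, Fraction(3, 2), Fraction(5, 2)):
            factors = {nu: round(distribution_factor(q, nu), 3) for nu in (1, 5, 7)}
            print(f"q = {q}: {factors}")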

  11. Calculation and mitigation of isotopic interferences in liquid chromatography-mass spectrometry/mass spectrometry assays and its application in supporting microdose absolute bioavailability studies.

    PubMed

    Gu, Huidong; Wang, Jian; Aubry, Anne-Françoise; Jiang, Hao; Zeng, Jianing; Easter, John; Wang, Jun-sheng; Dockens, Randy; Bifano, Marc; Burrell, Richard; Arnold, Mark E

    2012-06-05

    A methodology for the accurate calculation and mitigation of isotopic interferences in liquid chromatography-mass spectrometry/mass spectrometry (LC-MS/MS) assays and its application in supporting microdose absolute bioavailability studies are reported for the first time. For simplicity, this calculation methodology and the strategy to minimize the isotopic interference are demonstrated using a simple molecule entity, then applied to actual development drugs. The exact isotopic interferences calculated with this methodology were often much less than the traditionally used, overestimated isotopic interferences simply based on the molecular isotope abundance. One application of the methodology is the selection of a stable isotopically labeled internal standard (SIL-IS) for an LC-MS/MS bioanalytical assay. The second application is the selection of an SIL analogue for use in intravenous (i.v.) microdosing for the determination of absolute bioavailability. In the case of microdosing, the traditional approach of calculating isotopic interferences can result in selecting a labeling scheme that overlabels the i.v.-dosed drug or leads to incorrect conclusions on the feasibility of using an SIL drug and analysis by LC-MS/MS. The methodology presented here can guide the synthesis by accurately calculating the isotopic interferences when labeling at different positions, using different selective reaction monitoring (SRM) transitions or adding more labeling positions. This methodology has been successfully applied to the selection of the labeled i.v.-dosed drugs for use in two microdose absolute bioavailability studies, before initiating the chemical synthesis. With this methodology, significant time and cost saving can be achieved in supporting microdose absolute bioavailability studies with stable labeled drugs.
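
    A minimal version of the "molecular isotope abundance" estimate that the abstract says overestimates interference is sketched below: the probability that an unlabeled molecule with a given number of carbons carries exactly k extra-mass 13C atoms, and so overlaps an M+k labeled analogue at the precursor-ion level. The paper's exact methodology also accounts for the SRM product-ion transition and for the isotopes of other elements, which this sketch ignores.

        # Probability that an unlabeled molecule with n_carbons carbons carries exactly
        # k 13C atoms (isotopes of H, N, O, S ignored), i.e. the naive precursor-level
        # interference on an M+k stable-isotope-labeled analogue.
        from math import comb

        P13C = 0.0107  # natural 13C abundance

        def prob_k_heavy_carbons(n_carbons, k):
            return comb(n_carbons, k) * P13C**k * (1.0 - P13C)**(n_carbons - k)

        for k in (2, 3, 4):   # 2-, 3- and 4-Da labels on a hypothetical 30-carbon molecule
            print(f"M+{k} isotopologue abundance: {prob_k_heavy_carbons(30, k):.2e}")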

  12. Recalculation of dose for each fraction of treatment on TomoTherapy.

    PubMed

    Thomas, Simon J; Romanchikova, Marina; Harrison, Karl; Parker, Michael A; Bates, Amy M; Scaife, Jessica E; Sutcliffe, Michael P F; Burnet, Neil G

    2016-01-01

    The VoxTox study, linking delivered dose to toxicity, requires recalculation of typically 20-37 fractions per patient for nearly 2000 patients. This requires a non-interactive interface permitting batch calculation with multiple computers. Data are extracted from the TomoTherapy® archive and processed using the computational task-management system GANGA. Doses are calculated for each fraction of radiotherapy using the daily megavoltage (MV) CT images. The calculated dose cube is saved as a Digital Imaging and Communications in Medicine (DICOM) RTDOSE object, which can then be read by utilities that calculate dose-volume histograms or dose surface maps. The rectum is delineated on daily MV images using an implementation of the Chan-Vese algorithm. On a cluster of up to 117 central processing units, dose cubes for all fractions of 151 patients took 12 days to calculate. Outlining the rectum on all slices and fractions for 151 patients took 7 h. We also present results of the Hounsfield unit (HU) calibration of TomoTherapy MV images, measured over an 8-year period, showing that the HU calibration has become less variable over time, with no large changes observed after 2011. We have developed a system for automatic recalculation of TomoTherapy dose distributions. This does not tie up the clinically needed planning system but can be run on a cluster of independent machines, enabling recalculation of delivered dose without user intervention. The use of a task-management system for automation of dose calculation and outlining enables the work to be scaled up to the level required for large studies.
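
    The dose-volume histogram step applied to a recalculated dose cube can be sketched with plain NumPy as below; the synthetic dose cube and structure mask are illustrative, and this is not the VoxTox pipeline or the TomoTherapy archive interface.

        # Cumulative DVH metrics for a structure in a recalculated dose cube
        # (synthetic data; not the VoxTox pipeline).
        import numpy as np

        def dvh_metrics(dose, mask, dose_levels):
            """Return {level: % of structure volume receiving at least that dose}."""
            d = dose[mask]
            return {lvl: float((d >= lvl).mean() * 100.0) for lvl in dose_levels}

        rng = np.random.default_rng(0)
        dose = rng.gamma(shape=8.0, scale=0.25, size=(64, 64, 64))   # Gy per fraction, synthetic
        mask = np.zeros_like(dose, dtype=bool)
        mask[20:40, 25:35, 25:35] = True                             # rectum-like region

        print(f"mean dose to structure: {dose[mask].mean():.2f} Gy")
        for level, pct in dvh_metrics(dose, mask, (1.0, 2.0, 3.0)).items():
            print(f"V{level:g}Gy = {pct:.1f}%")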

  13. Quantitative evaluation of an air-monitoring network using atmospheric transport modeling and frequency of detection methods

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rood, Arthur S.; Sondrup, A. Jeffrey; Ritter, Paul D.

    A methodology to quantify the performance of an air monitoring network in terms of frequency of detection has been developed. The methodology utilizes an atmospheric transport model to predict air concentrations of radionuclides at the samplers for a given release time and duration. Frequency of detection is defined as the fraction of "events" that result in a detection at either a single sampler or a network of samplers. An "event" is defined as a release of finite duration that begins on a given day and hour of the year from a facility with the potential to emit airborne radionuclides. Another metric of interest is the network intensity, which is defined as the fraction of samplers in the network that have a positive detection for a given event. The frequency of detection methodology allows for evaluation of short-term releases that include effects of short-term variability in meteorological conditions. The methodology was tested using the U.S. Department of Energy Idaho National Laboratory (INL) Site ambient air monitoring network consisting of 37 low-volume air samplers in 31 different locations covering a 17,630 km2 region. Releases from six major INL facilities distributed over an area of 1,435 km2 were modeled and included three stack sources and eight ground-level sources. A Lagrangian puff air dispersion model (CALPUFF) was used to model atmospheric transport. The model was validated using historical 125Sb releases and measurements. Relevant one-week release quantities from each emission source were calculated based on a dose of 1.9 × 10-4 mSv at a public receptor (0.01 mSv assuming the release persists over a year). Important radionuclides considered include 241Am, 137Cs, 238Pu, 239Pu, 90Sr, and tritium. Results show the detection frequency is over 97.5% for the entire network considering all sources and radionuclides. Network intensities ranged from 3.75% to 62.7%. Evaluation of individual samplers indicated some samplers were poorly situated and add little to the overall effectiveness of the network. As a result, using the frequency of detection methods, optimum sampler placements were simulated that could substantially improve the performance and efficiency of the network.
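
    The two network metrics defined above reduce to simple fractions once a matrix of simulated detections (events × samplers) is available, as sketched below on synthetic data; the detection probabilities are invented and are not the INL CALPUFF results.

        # Frequency of detection and network intensity from a boolean detection matrix
        # (events x samplers). Synthetic detection probabilities; not the INL results.
        import numpy as np

        rng = np.random.default_rng(0)
        n_events, n_samplers = 8760, 37                    # e.g. one release event per hour of a year
        detections = rng.random((n_events, n_samplers)) < 0.08

        frequency_of_detection = detections.any(axis=1).mean()   # events seen by at least one sampler
        network_intensity = detections.mean(axis=1)              # per-event fraction of detecting samplers
        sampler_hit_rate = detections.mean(axis=0)               # how often each sampler contributes

        print(f"frequency of detection: {frequency_of_detection:.1%}")
        print(f"network intensity: mean {network_intensity.mean():.1%}, max {network_intensity.max():.1%}")
        print(f"least useful sampler detects in {sampler_hit_rate.min():.1%} of events")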

  15. Retrospective dose assessment for the population living in areas of local fallout from the Semipalatinsk Nuclear Test Site Part II: Internal exposure to thyroid.

    PubMed

    Gordeev, Konstantin; Shinkarev, Sergey; Ilyin, Leonid; Bouville, André; Hoshi, Masaharu; Luckyanov, Nickolas; Simon, Steven L

    2006-02-01

    A methodology to assess internal exposure of the thyroid to radioiodines for residents living in settlements located in the vicinity of the Semipalatinsk Nuclear Test Site is described; it is the result of many years of research, primarily at the Moscow Institute of Biophysics. This methodology introduces two important concepts. First, the biologically active fraction is defined as the fraction of the total activity carried on fallout particles with diameter less than 50 microns; that fraction is retained by vegetation and will ultimately result in contamination of dairy products. Second, the relative distance is derived as a dimensionless quantity from information on test yield, maximum cloud height, and average wind velocity, and describes how the biologically active fraction is distributed with distance from the site of the explosion. The parameter is derived in such a way that, at locations with equal values of the relative distance, the biologically active fraction will be the same for any test. Estimates of internal thyroid exposure for the residents of Dolon and Kanonerka villages, for which the external exposure was assessed in a companion paper (Gordeev et al. 2006) at this conference, are presented. The main sources of uncertainty in the estimates are identified.

  16. Analytical methodologies for broad metabolite coverage of exhaled breath condensate.

    PubMed

    Aksenov, Alexander A; Zamuruyev, Konstantin O; Pasamontes, Alberto; Brown, Joshua F; Schivo, Michael; Foutouhi, Soraya; Weimer, Bart C; Kenyon, Nicholas J; Davis, Cristina E

    2017-09-01

    Breath analysis has been gaining popularity as a non-invasive technique that is amenable to a broad range of medical uses. One of the persistent problems hampering the wide application of the breath analysis method is measurement variability of metabolite abundances stemming from differences in both sampling and analysis methodologies used in various studies. Mass spectrometry has been a method of choice for comprehensive metabolomic analysis. For the first time in the present study, we juxtapose the most commonly employed mass spectrometry-based analysis methodologies and directly compare the resultant coverages of detected compounds in exhaled breath condensate in order to guide methodology choices for exhaled breath condensate analysis studies. Four methods were explored to broaden the range of measured compounds across both the volatile and non-volatile domain. Liquid phase sampling with polyacrylate Solid-Phase MicroExtraction fiber, liquid phase extraction with a polydimethylsiloxane patch, and headspace sampling using Carboxen/Polydimethylsiloxane Solid-Phase MicroExtraction (SPME) followed by gas chromatography mass spectrometry were tested for the analysis of volatile fraction. Hydrophilic interaction liquid chromatography and reversed-phase chromatography high performance liquid chromatography mass spectrometry were used for analysis of non-volatile fraction. We found that liquid phase breath condensate extraction was notably superior compared to headspace extraction and differences in employed sorbents manifested altered metabolite coverages. The most pronounced effect was substantially enhanced metabolite capture for larger, higher-boiling compounds using polyacrylate SPME liquid phase sampling. The analysis of the non-volatile fraction of breath condensate by hydrophilic and reverse phase high performance liquid chromatography mass spectrometry indicated orthogonal metabolite coverage by these chromatography modes. We found that the metabolite coverage could be enhanced significantly with the use of organic solvent as a device rinse after breath sampling to collect the non-aqueous fraction as opposed to neat breath condensate sample. Here, we show the detected ranges of compounds in each case and provide a practical guide for methodology selection for optimal detection of specific compounds. Copyright © 2017 Elsevier B.V. All rights reserved.

  17. Video-Based Intervention in Teaching Fraction Problem-Solving to Students with Autism Spectrum Disorder

    ERIC Educational Resources Information Center

    Yakubova, Gulnoza; Hughes, Elizabeth M.; Hornberger, Erin

    2015-01-01

    The purpose of this study was to determine the effectiveness of a point-of-view video modeling intervention to teach mathematics problem-solving when working on word problems involving subtracting mixed fractions with uncommon denominators. Using a multiple-probe across students design of single-case methodology, three high school students with…

  18. Video-Based Intervention in Teaching Fraction Problem-Solving to Students with Autism Spectrum Disorder.

    PubMed

    Yakubova, Gulnoza; Hughes, Elizabeth M; Hornberger, Erin

    2015-09-01

    The purpose of this study was to determine the effectiveness of a point-of-view video modeling intervention to teach mathematics problem-solving when working on word problems involving subtracting mixed fractions with uncommon denominators. Using a multiple-probe across students design of single-case methodology, three high school students with ASD completed the study. All three students demonstrated greater accuracy in solving fraction word problems and maintained accuracy levels at a 1-week follow-up.

  19. Equilibrium fractionation of H and O isotopes in water from path integral molecular dynamics

    NASA Astrophysics Data System (ADS)

    Pinilla, Carlos; Blanchard, Marc; Balan, Etienne; Ferlat, Guillaume; Vuilleumier, Rodolphe; Mauri, Francesco

    2014-06-01

    The equilibrium fractionation factor between two phases is of importance for the understanding of many planetary and environmental processes. Although thermodynamic equilibrium can be achieved between minerals at high temperature, many natural processes involve reactions between liquids or aqueous solutions and solids. For crystals, the fractionation factor α can be theoretically determined using a statistical thermodynamic approach based on the vibrational properties of the phases. These calculations are mostly performed in the harmonic approximation, using empirical or ab-initio force fields. In the case of aperiodic and dynamic systems such as liquids or solutions, similar calculations can be done using finite-size molecular clusters or snapshots obtained from molecular dynamics (MD) runs. It is however difficult to assess the effect of these approximate models on the isotopic fractionation properties. In this work we present a systematic study of the calculation of the D/H and 18O/16O equilibrium fractionation factors in water for the liquid/vapour and ice/vapour phases using several levels of theory within the simulations. Namely, we use a thermodynamic integration approach based on Path Integral MD calculations (PIMD) and an empirical potential model of water. Compared with standard MD, PIMD takes into account quantum effects in the thermodynamic modeling of systems and the exact fractionation factor for a given potential can be obtained. We compare these exact results with those of modeling strategies usually used, which involve the mapping of the quantum system on its harmonic counterpart. The results show the importance of including configurational disorder for the estimation of isotope fractionation in liquid phases. In addition, the convergence of the fractionation factor as a function of parameters such as the size of the simulated system and multiple isotope substitution is analyzed, showing that isotope fractionation is essentially a local effect in the investigated system.
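
    For context, the "harmonic counterpart" mapping mentioned above reduces, for each phase, to a Bigeleisen-Mayer product over the vibrational frequencies of the light and heavy isotopologues; the equilibrium fractionation factor between two phases is then the ratio of their beta factors. The Python sketch below evaluates that product for a pair of purely hypothetical frequencies; it is illustrative only and is not the thermodynamic-integration PIMD calculation described in the abstract.

        import math

        H = 6.62607015e-34      # Planck constant, J s
        K_B = 1.380649e-23      # Boltzmann constant, J/K
        C = 2.99792458e10       # speed of light, cm/s (frequencies in cm^-1)

        def beta_factor(freqs_light_cm, freqs_heavy_cm, temperature_k):
            """Harmonic reduced partition function ratio (Bigeleisen-Mayer) of a
            phase, from its vibrational frequencies computed with the light and
            the heavy isotope; alpha between phases A and B is beta_A / beta_B."""
            beta = 1.0
            for nu_l, nu_h in zip(freqs_light_cm, freqs_heavy_cm):
                u_l = H * C * nu_l / (K_B * temperature_k)
                u_h = H * C * nu_h / (K_B * temperature_k)
                beta *= (u_h / u_l) * math.exp((u_l - u_h) / 2.0) \
                        * (1.0 - math.exp(-u_l)) / (1.0 - math.exp(-u_h))
            return beta

        # Purely illustrative frequencies (cm^-1) for a light/heavy isotopologue pair
        print(beta_factor([3650.0, 1590.0], [3630.0, 1580.0], 300.0))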

  20. Scalar mixing and strain dynamics methodologies for PIV/LIF measurements of vortex ring flows

    NASA Astrophysics Data System (ADS)

    Bouremel, Yann; Ducci, Andrea

    2017-01-01

    Fluid mixing operations are central to possibly all chemical, petrochemical, and pharmaceutical industries, whether related to biphasic blending in polymerisation processes, cell suspension for biopharmaceutical production, or fractionation of complex oil mixtures. This work aims at providing a fundamental understanding of the mixing and stretching dynamics occurring in a reactor in the presence of a vortical structure, and the vortex ring was selected as a flow paradigm of vortices commonly encountered in stirred and shaken reactors in laminar flow conditions. High resolution laser induced fluorescence and particle image velocimetry measurements were carried out to fully resolve the flow dissipative scales and provide a complete data set to fully assess macro- and micro-mixing characteristics. The analysis builds upon the Lamb-Oseen vortex work of Meunier and Villermaux ["How vortices mix," J. Fluid Mech. 476, 213-222 (2003)] and the engulfment model of Baldyga and Bourne ["Simplification of micromixing calculations. I. Derivation and application of new model," Chem. Eng. J. 42, 83-92 (1989); "Simplification of micromixing calculations. II. New applications," ibid. 42, 93-101 (1989)], which are valid for diffusion-free conditions, and a comparison is made between three methodologies to assess mixing characteristics. The first method is commonly used in macro-mixing studies and is based on a control area analysis by estimating the variation in time of the concentration standard deviation, while the other two are formulated to provide an insight into local segregation dynamics, by using either an iso-concentration approach or an iso-concentration gradient approach to take diffusion into account.

  1. Distribution of electron density and magnetocapacitance in the regime of the fractional quantum Hall effect

    NASA Astrophysics Data System (ADS)

    Pikus, F. G.; Efros, A. L.

    1993-06-01

    A two-dimensional electron liquid (TDEL), subjected to a smooth random potential, is studied in the regime of the fractional quantum Hall effect. An analytical theory of the nonlinear screening is presented for the case when the fractional gap is much less than the magnitude of the unscreened random potential. In this ``narrow-gap approximation'' (NGA), we calculate the electron density distribution function, the fraction of the TDEL which is in the incompressible state, and the thermodynamic density of states. The magnetocapacitance is calculated to compare with the recent experiments. The NGA is found to be not accurate enough to describe the data. The results for larger fractional gaps are obtained by computer modeling. To fit the recent experimental data we have also taken into account the anyon-anyon interaction in the vicinity of a fractional singularity.

  2. Reference limits for urinary fractional excretion of electrolytes in adult non-racing Greyhound dogs.

    PubMed

    Bennett, S L; Abraham, L A; Anderson, G A; Holloway, S A; Parry, B W

    2006-11-01

    To determine reference limits for urinary fractional excretion of electrolytes in Greyhound dogs. Urinary fractional excretion was calculated using a spot clearance method preceded by a 16 to 20 hour fast in 48 Greyhound dogs. Raw data were analysed using the bootstrap estimate to calculate the reference limits. The observed range for urinary fractional excretion in Greyhound dogs was 0.0 to 0.77% for sodium, 0.9 to 14.7% for potassium, 0 to 0.66% for chloride, 0.03 to 0.22% for calcium and 0.4 to 20.1% for phosphate. Expressed as percentages, the suggested reference limits for fractional excretion in Greyhound dogs are as follows: sodium < or = 0.72, potassium < or = 12.2, chloride < or = 0.55, calcium < or = 0.13 and phosphate < or = 16.5. Veterinary practitioners may use these reference limits for urinary electrolyte fractional excretion when investigating renal tubular disease in Greyhound dogs.
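
    For reference, the spot fractional excretion underlying these limits is the usual creatinine-indexed clearance ratio, FE_x(%) = (U_x x P_Cr) / (P_x x U_Cr) x 100, which avoids the need for a timed urine collection. A minimal sketch follows; the function and the example concentrations are illustrative and are not taken from the study.

        def fractional_excretion(u_x, p_x, u_cr, p_cr):
            """Spot urinary fractional excretion of electrolyte x, in percent.

            u_x, p_x   : urine and plasma concentrations of the electrolyte
            u_cr, p_cr : urine and plasma creatinine concentrations
            All four values must share consistent units.
            """
            return (u_x * p_cr) / (p_x * u_cr) * 100.0

        # Hypothetical values giving a result below the suggested sodium limit of 0.72%
        print(round(fractional_excretion(u_x=40.0, p_x=145.0, u_cr=250.0, p_cr=1.2), 2))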

  3. 76 FR 82115 - Wage Methodology for the Temporary Non-Agricultural Employment H-2B Program; Delay of Effective Date

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-12-30

    ... year (FY) 2012. The Wage Rule revised the methodology by which we calculate the prevailing wages to be... 19, 2011, 76 FR 3452. The Wage Rule revised the methodology by which we calculate the prevailing... November 30, 2011. When the Wage Rule goes into effect, it will supersede and make null the prevailing wage...

  4. SU-F-BRF-12: Investigating Dosimetric Effects of Inter-Fraction Deformation in Lung Cancer Stereotactic Body Radiotherapy (SBRT)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jia, J; Tian, Z; Gu, X

    2014-06-15

    Purpose: We studied dosimetric effects of inter-fraction deformation in lung stereotactic body radiotherapy (SBRT), in order to investigate the necessity of adaptive re-planning for lung SBRT treatments. Methods: Six lung cancer patients with different treatment fractions were retrospectively investigated. All the patients were immobilized and localized with a stereotactic body frame and were treated under cone-beam CT (CBCT) image guidance at each fraction. We calculated the actual delivered dose of the treatment plan using the up-to-date patient geometry of each fraction, and compared the dose with the intended plan dose to investigate the dosimetric effects of the inter-fraction deformation. Deformable registration was carried out between the treatment planning CT and the CBCT of each fraction to obtain a deformed planning CT for more accurate dose calculations of the delivered dose. The extent of the inter-fraction deformation was also evaluated by calculating the dice similarity coefficient between the delineated structures on the planning CT and those on the deformed planning CT. Results: The average dice coefficients for PTV, spinal cord, esophagus were 0.87, 0.83 and 0.69, respectively. The volume of PTV covered by prescription dose was decreased by 23.78% on average for all fractions and all patients. For spinal cord and esophagus, the volumes covered by the constraint dose were increased by 4.57% and 3.83%. The maximum dose was also increased by 4.11% for spinal cord and 4.29% for esophagus. Conclusion: Due to inter-fraction deformation, large deterioration was found in both PTV coverage and OAR sparing, which demonstrated the need for adaptive re-planning of lung SBRT cases to improve target coverage while reducing radiation dose to nearby normal tissues.
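
    The extent of deformation is scored here with the Dice similarity coefficient, 2|A∩B| / (|A| + |B|), computed between a structure delineated on the planning CT and the same structure on the deformed planning CT. A small self-contained sketch of that calculation on binary masks is given below; the toy masks are of course not patient data.

        import numpy as np

        def dice_coefficient(mask_a, mask_b):
            """Dice similarity coefficient between two binary structure masks."""
            a = np.asarray(mask_a, dtype=bool)
            b = np.asarray(mask_b, dtype=bool)
            overlap = np.logical_and(a, b).sum()
            total = a.sum() + b.sum()
            return 2.0 * overlap / total if total > 0 else 1.0

        # Toy example: two overlapping "structures" on a small grid
        planning = np.zeros((10, 10), dtype=bool); planning[2:7, 2:7] = True
        fraction_n = np.zeros((10, 10), dtype=bool); fraction_n[3:8, 3:8] = True
        print(round(dice_coefficient(planning, fraction_n), 2))  # ~0.64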

  5. 40 CFR 98.213 - Calculating GHG emissions.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... calcination fractions with Equation U-1 of this section. Where: ECO2 = Annual CO2 mass emissions... ton carbonate consumed. Fi = Fraction calcination achieved for each particular carbonate type i (decimal fraction). As an alternative to measuring the calcination fraction, a value of 1.0 can be used. n...

  6. Error Patterns with Fraction Calculations at Fourth Grade as a Function of Students' Mathematics Achievement Status

    ERIC Educational Resources Information Center

    Schumacher, Robin F.; Malone, Amelia S.

    2017-01-01

    The goal of this study was to describe fraction-calculation errors among fourth-grade students and to determine whether error patterns differed as a function of problem type (addition vs. subtraction; like vs. unlike denominators), orientation (horizontal vs. vertical), or mathematics-achievement status (low-, average-, or high-achieving). We…

  7. Calculation methods to perform mass balances of micropollutants in sewage treatment plants. application to pharmaceutical and personal care products (PPCPs).

    PubMed

    Carballa, Marta; Omil, Francisco; Lema, Juan M

    2007-02-01

    Two different methods are proposed to perform the mass balance calculations of micropollutants in sewage treatment plants (STPs). The first method uses the measured data in both the liquid and the sludge phase, and the second one uses the solid-water distribution coefficient (Kd) to calculate the concentrations in the sludge from those measured in the liquid phase. The proposed methodologies facilitate the identification of the main mechanisms involved in the elimination of micropollutants. Both methods are applied for determining mass balances of selected pharmaceutical and personal care products (PPCPs) and their results are discussed. In that way, the fate of 2 musks (galaxolide and tonalide), 3 pharmaceuticals (ibuprofen, naproxen, and sulfamethoxazole), and 2 natural estrogens (estrone and 17beta-estradiol) has been investigated along the different water and sludge treatment units of a STP. Ibuprofen, naproxen, and sulfamethoxazole are biologically degraded in the aeration tank (50-70%), while musks are equally sorbed to the sludge and degraded. In contrast, estrogens are not removed in the STP studied. About 40% of the initial load of pharmaceuticals passes through the plant unaltered, with the fraction associated with sludge lower than 0.5%. In contrast, between 20 and 40% of the initial load of musks leaves the plant associated with solids, with less than 10% present in the final effluent. The results obtained show that the conclusions concerning the efficiency of micropollutant removal in a particular STP may be seriously affected by the calculation method used.
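
    As a concrete illustration of the second calculation method, the sludge-phase concentration can be estimated from the measured liquid-phase concentration as C_sludge = Kd x C_liquid, and the two phases can then be combined into a load-based mass balance. The sketch below uses hypothetical numbers and names; it is only a schematic of the approach, not the authors' implementation.

        def sludge_concentration_from_kd(c_liquid_ug_per_L, kd_L_per_kg):
            """Estimate the sludge-phase concentration (ug per kg dry solids) from the
            measured liquid-phase concentration and a solid-water distribution
            coefficient Kd (the second method described above)."""
            return c_liquid_ug_per_L * kd_L_per_kg

        def load_ug_per_day(c_liquid_ug_per_L, flow_m3_per_day,
                            c_sludge_ug_per_kg, sludge_kg_per_day):
            """Total micropollutant load carried by the water and sludge streams."""
            water = c_liquid_ug_per_L * flow_m3_per_day * 1000.0   # m3 -> L
            solids = c_sludge_ug_per_kg * sludge_kg_per_day
            return water + solids

        # Hypothetical musk-like compound with Kd ~ 2000 L/kg
        c_sludge = sludge_concentration_from_kd(0.5, 2000.0)
        print(c_sludge)                                        # 1000 ug/kg dry solids
        print(load_ug_per_day(0.5, 10000.0, c_sludge, 2000.0))  # combined ug/day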

  8. Predicting the ash behavior during biomass combustion in FBC conditions by combining advanced fuel analyses with thermodynamic multicomponent equilibrium calculations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Skrifvars, B.J.; Blomquist, J.P.; Hupa, M.

    1998-12-31

    Previous work at Åbo Akademi University has focused on identification and quantification of various sintering mechanisms which are relevant for problematic ash behavior during biomass combustion in fluidized bed combustion conditions, and on multi-component multi-phase thermodynamic phase equilibrium calculations of ash chemistry in these conditions. In both areas new information has been developed and useful modeling capabilities have been created. Based on the previous work, the authors now present a novel approach of using a combination of an advanced fuel analysis method and thermodynamic phase equilibrium calculations to predict the chemical and thermal behavior of the ash when firing biomass. Four different fuels [coal, forest residues, wood chips, and a mixture of forest residue and wood chips] were analyzed using the chemical fractionation analysis technique. Based on the results from these analyses, the authors formed two different ash fractions, (1) one fine sized fraction consisting of those elements found in the water and weak acid leach, and (2) a coarse ash particle fraction consisting of those elements found in the strong acid leach and non-leachable rest. The small sized ash fraction was then assumed to be carried up with the flue gases and consequently formed the base for any ash related problems in the flue gas channel. This fraction was therefore analyzed on its chemical and thermal behavior using multi-component multi-phase equilibrium calculations, by which the composition and the melting behavior were estimated as a function of the temperature. The amount of melt, which has earlier been found to be strongly related to problematic ash behavior, was finally expressed as a function of the temperature for the fraction. The coarse fraction was treated separately. Here the authors estimate the composition only. The paper discusses the results and their relevance to full scale combustion.

  9. Statistical Ring Opening Metathesis Copolymerization of Norbornene and Cyclopentene by Grubbs' 1st-Generation Catalyst.

    PubMed

    Nikovia, Christiana; Maroudas, Andreas-Philippos; Goulis, Panagiotis; Tzimis, Dionysios; Paraskevopoulou, Patrina; Pitsikalis, Marinos

    2015-08-27

    Statistical copolymers of norbornene (NBE) with cyclopentene (CP) were prepared by ring-opening metathesis polymerization, employing the 1st-generation Grubbs' catalyst, in the presence or absence of triphenylphosphine, PPh₃. The reactivity ratios were estimated using the Finemann-Ross, inverted Finemann-Ross, and Kelen-Tüdos graphical methods, along with the computer program COPOINT, which evaluates the parameters of binary copolymerizations from comonomer/copolymer composition data by integrating a given copolymerization equation in its differential form. Structural parameters of the copolymers were obtained by calculating the dyad sequence fractions and the mean sequence length, which were derived using the monomer reactivity ratios. The kinetics of thermal decomposition of the copolymers along with the respective homopolymers was studied by thermogravimetric analysis within the framework of the Ozawa-Flynn-Wall and Kissinger methodologies. Finally, the effect of triphenylphosphine on the kinetics of copolymerization, the reactivity ratios, and the kinetics of thermal decomposition were examined.
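
    The structural parameters mentioned above follow from the terminal (Mayo-Lewis) model: with feed ratio x = [M1]/[M2] and reactivity ratios r1 and r2, the mean sequence lengths are n1 = 1 + r1·x and n2 = 1 + r2/x, and the dyad fractions follow from the monomer addition probabilities. The sketch below shows that arithmetic with illustrative values; the numbers are not the NBE/CP reactivity ratios reported in the paper.

        def copolymer_sequence_stats(r1, r2, x):
            """Terminal-model sequence statistics for a binary copolymerization.

            r1, r2 : monomer reactivity ratios
            x      : feed ratio [M1]/[M2]
            Returns mean sequence lengths (n1, n2) and dyad fractions (F11, F12+F21, F22).
            """
            p11 = r1 * x / (r1 * x + 1.0)        # probability that an M1* end adds M1
            p12 = 1.0 - p11
            p22 = r2 / (r2 + x)                  # probability that an M2* end adds M2
            p21 = 1.0 - p22
            n1 = 1.0 + r1 * x                    # mean length of M1 sequences
            n2 = 1.0 + r2 / x                    # mean length of M2 sequences
            pi1 = p21 / (p12 + p21)              # stationary fraction of M1-ended chain ends
            pi2 = 1.0 - pi1
            dyads = (pi1 * p11, pi1 * p12 + pi2 * p21, pi2 * p22)
            return n1, n2, dyads

        # Illustrative numbers only (not the values estimated in the paper)
        print(copolymer_sequence_stats(r1=2.0, r2=0.5, x=1.0))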

  10. AREVA Team Develops Sump Strainer Blockage Solution for PWRs

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Phan, Ray

    2006-07-01

    The purpose of this paper is to discuss the methodology, testing challenges, and results of testing that a team of experts from Areva NP, Alden Research Laboratory, Inc (ALDEN), and Performance Contracting Inc. (PCI) has developed. The team is currently implementing a comprehensive solution to the issue of Emergency Core Cooling System (ECCS) sump strainer blockage facing Pressurized Water Reactor (PWR) Nuclear Plants. The team has successfully demonstrated two key results from the testing of passive Sure-Flow™ strainers, which were designed to distribute the required flow over a large surface area resulting in extremely low approach velocities. First, the actual head loss (pressure drop) as tested, across the prototype strainers, was much lower than the calculated head loss using the Nuclear Regulatory Commission (NRC) approved NUREG/CR-6224 head loss correlation. Second, the penetration fractions were much lower than those seen in the NRC sponsored debris penetration tests. (author)

  11. Methodological studies on the VVER-440 control assembly calculations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hordosy, G.; Kereszturi, A.; Maraczy, C.

    1995-12-31

    The control assembly regions of VVER-440 reactors are represented by 2-group albedo matrices in the global calculations of the KARATE code system. Some methodological aspects of calculating albedo matrices with the COLA transport code are presented. Illustrations are given how these matrices depend on the relevant parameters describing the boron steel and steel regions of the control assemblies. The calculation of the response matrix for a node consisting of two parts filled with different materials is discussed.

  12. Estimation of design space for an extrusion-spheronization process using response surface methodology and artificial neural network modelling.

    PubMed

    Sovány, Tamás; Tislér, Zsófia; Kristó, Katalin; Kelemen, András; Regdon, Géza

    2016-09-01

    The application of the Quality by Design principles is one of the key issues of recent pharmaceutical development. In the past decade a great deal of knowledge has been collected about the practical realization of the concept, but many questions remain unanswered. The key requirement of the concept is the mathematical description of the effect of the critical factors and their interactions on the critical quality attributes (CQAs) of the product. The process design space (PDS) is usually determined by the use of design of experiment (DoE) based response surface methodologies (RSM), but inaccuracies in the applied polynomial models often result in over- or underestimation of the real trends and changes, making the calculations uncertain, especially in the edge regions of the PDS. Supplementing RSM with artificial neural network (ANN) based models is therefore a commonly used way to reduce the uncertainties. Nevertheless, since individual studies tend to focus on a single DoE, there is a lack of comparative studies on different experimental layouts. Therefore, the aim of the present study was to investigate the effect of different DoE layouts (2 level full factorial, Central Composite, Box-Behnken, 3 level fractional and 3 level full factorial design) on model predictability and to compare model sensitivities according to the organization of the experimental data set. It was revealed that the size of the design space could differ by more than 40% when calculated with different polynomial models, which was associated with a considerable shift in its position when higher level layouts were applied. The shift was more considerable when the calculation was based on RSM. The model predictability was also better with ANN based models. Nevertheless, both modelling methods exhibit considerable sensitivity to the organization of the experimental data set, and design layouts in which the extreme factor levels are better represented are recommended. Copyright © 2016 Elsevier B.V. All rights reserved.
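
    The polynomial backbone of an RSM analysis of this kind is an ordinary least-squares fit of a quadratic surface to the DoE responses. A minimal two-factor sketch is given below on a 3-level full factorial layout in coded units; the data are synthetic and the code is only meant to make the fitted model explicit, not to reproduce the study's calculations.

        import numpy as np

        def fit_quadratic_rsm(X, y):
            """Fit a two-factor quadratic response surface
            y ~ b0 + b1*x1 + b2*x2 + b12*x1*x2 + b11*x1^2 + b22*x2^2
            by ordinary least squares (the polynomial part of an RSM analysis)."""
            x1, x2 = X[:, 0], X[:, 1]
            A = np.column_stack([np.ones_like(x1), x1, x2, x1 * x2, x1 ** 2, x2 ** 2])
            coeffs, *_ = np.linalg.lstsq(A, y, rcond=None)
            return coeffs

        # Toy data on a 3-level full factorial layout in coded units (-1, 0, +1)
        levels = np.array([-1.0, 0.0, 1.0])
        X = np.array([[a, b] for a in levels for b in levels])
        true = lambda x1, x2: 5 + 2*x1 - x2 + 0.5*x1*x2 - 1.5*x1**2 + 0.8*x2**2
        y = true(X[:, 0], X[:, 1])
        print(np.round(fit_quadratic_rsm(X, y), 3))  # recovers the assumed coefficients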

  13. Fractionation of Forest Residues of Douglas-fir for Fermentable Sugar Production by SPORL Pretreatment

    Treesearch

    Chao Zhang; J.Y. Zhu; Roland Gleisner; John Sessions

    2012-01-01

    Douglas-fir (Pseudotsuga menziesii) forest residues were physically fractionated through sieving. The bark and wood were separated for large-sized fractions (>12.7 mm), and their contents were determined. The chemical compositions of the large fractions were calculated based on the contents and chemical compositions of the bark and wood. The...

  14. 76 FR 71431 - Civil Penalty Calculation Methodology

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-11-17

    ... DEPARTMENT OF TRANSPORTATION Federal Motor Carrier Safety Administration Civil Penalty Calculation... is currently evaluating its civil penalty methodology. Part of this evaluation includes a forthcoming... civil penalties. UFA takes into account the statutory penalty factors under 49 U.S.C. 521(b)(2)(D). The...

  15. Error Patterns with Fraction Calculations at Fourth Grade as a Function of Students' Mathematics Achievement Status

    ERIC Educational Resources Information Center

    Schumacher, Robin F.; Malone, Amelia S.

    2017-01-01

    The goal of the present study was to describe fraction-calculation errors among 4th-grade students and determine whether error patterns differed as a function of problem type (addition vs. subtraction; like vs. unlike denominators), orientation (horizontal vs. vertical), or mathematics-achievement status (low- vs. average- vs. high-achieving). We…

  16. Cognitive Predictors of Calculations and Number Line Estimation with Whole Numbers and Fractions among At-Risk Students

    ERIC Educational Resources Information Center

    Namkung, Jessica M.; Fuchs, Lynn S.

    2015-01-01

    The purpose of this study was to examine the cognitive predictors of calculations and number line estimation with whole numbers and fractions. At-risk 4th-grade students (N = 139) were assessed on 7 domain-general abilities (i.e., working memory, processing speed, concept formation, language, attentive behavior, and nonverbal reasoning) and…

  17. Cognitive Predictors of Calculations and Number Line Estimation with Whole Numbers and Fractions among At-Risk Students

    ERIC Educational Resources Information Center

    Namkung, Jessica M.; Fuchs, Lynn S.

    2016-01-01

    The purpose of this study was to examine the cognitive predictors of calculations and number line estimation with whole numbers and fractions. At-risk 4th-grade students (N = 139) were assessed on 6 domain-general abilities (i.e., working memory, processing speed, concept formation, language, attentive behavior, and nonverbal reasoning) and…

  18. Modeling the Spray Forming of H13 Steel Tooling

    NASA Astrophysics Data System (ADS)

    Lin, Yaojun; McHugh, Kevin M.; Zhou, Yizhang; Lavernia, Enrique J.

    2007-07-01

    On the basis of a numerical model, the temperature and liquid fraction of spray-formed H13 tool steel are calculated as a function of time. Results show that a preheated substrate at the appropriate temperature can lead to very low porosity by increasing the liquid fraction in the deposited steel. The calculated cooling rate can lead to a microstructure consisting of martensite, lower bainite, retained austenite, and proeutectoid carbides in as-spray-formed material. In the temperature range between the solidus and liquidus temperatures, the calculated temperature of the spray-formed material increases with increasing substrate preheat temperature, resulting in a very low porosity by increasing the liquid fraction of the deposited steel. In the temperature region where austenite decomposition occurs, the substrate preheat temperature has a negligible influence on the cooling rate of the spray-formed material. On the basis of the calculated results, it is possible to generate sufficient liquid fraction during spray forming by using a high growth rate of the deposit without preheating the substrate, and the growth rate of the deposit has almost no influence on the cooling rate in the temperature region of austenite decomposition.

  19. Discrete Fractional Component Monte Carlo Simulation Study of Dilute Nonionic Surfactants at the Air-Water Interface.

    PubMed

    Yoo, Brian; Marin-Rimoldi, Eliseo; Mullen, Ryan Gotchy; Jusufi, Arben; Maginn, Edward J

    2017-09-26

    We present a newly developed Monte Carlo scheme to predict bulk surfactant concentrations and surface tensions at the air-water interface for various surfactant interfacial coverages. Since the concentration regimes of these systems of interest are typically very dilute (≪10⁻⁵ mol. frac.), Monte Carlo simulations with the use of insertion/deletion moves can provide the ability to overcome finite system size limitations that often prohibit the use of modern molecular simulation techniques. In performing these simulations, we use the discrete fractional component Monte Carlo (DFCMC) method in the Gibbs ensemble framework, which allows us to separate the bulk and air-water interface into two separate boxes and efficiently swap tetraethylene glycol surfactants C10E4 between boxes. Combining this move with preferential translations, volume biased insertions, and Wang-Landau biasing vastly enhances sampling and helps overcome the classical "insertion problem", often encountered in non-lattice Monte Carlo simulations. We demonstrate that this methodology is both consistent with the original molecular thermodynamic theory (MTT) of Blankschtein and co-workers, as well as their recently modified theory (MD/MTT), which incorporates the results of surfactant infinite dilution transfer free energies and surface tension calculations obtained from molecular dynamics simulations.

  20. Quantification of dose uncertainties for the bladder in prostate cancer radiotherapy based on dominant eigenmodes

    NASA Astrophysics Data System (ADS)

    Rios, Richard; Acosta, Oscar; Lafond, Caroline; Espinosa, Jairo; de Crevoisier, Renaud

    2017-11-01

    In radiotherapy for prostate cancer, the planning dose to the bladder may be a poor surrogate of the actual delivered dose, as the bladder presents the largest inter-fraction shape variations during treatment. This paper presents PCA models as a virtual tool to estimate dosimetric uncertainties for the bladder produced by motion and deformation between fractions. Our goal is to propose a methodology to determine the minimum number of modes required to quantify dose uncertainties of the bladder for motion/deformation models based on PCA. We trained individual PCA models using the bladder contours available from three patients with a planning computed tomography (CT) and on-treatment cone-beam CTs (CBCTs). Based on the above models and via deformable image registration (DIR), we estimated two accumulated doses: firstly, an accumulated dose obtained by integrating the planning dose over the Gaussian probability distribution of the PCA model; and secondly, an accumulated dose obtained by simulating treatment courses via a Monte Carlo approach. We also computed a reference accumulated dose for each patient using his available images via DIR. Finally, we compared the planning dose with the three accumulated doses, and we calculated local dose variability and dose-volume histogram uncertainties.
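
    A minimal outline of a PCA shape model of this kind: stack corresponding surface points from each fraction, extract the dominant eigenmodes of their covariance, and draw Gaussian-distributed mode coefficients to generate plausible bladder geometries for Monte Carlo dose accumulation. The sketch below uses random numbers in place of contours and is purely illustrative of the structure of such a model, not the authors' code.

        import numpy as np

        def pca_shape_modes(shapes):
            """PCA of organ surfaces sampled with corresponding points.

            shapes : array (n_fractions, n_points*3), each row a flattened point cloud.
            Returns the mean shape, eigenvalues and eigenvectors (dominant modes first).
            """
            mean = shapes.mean(axis=0)
            centered = shapes - mean
            cov = np.cov(centered, rowvar=False)
            evals, evecs = np.linalg.eigh(cov)
            order = np.argsort(evals)[::-1]
            # clip tiny negative eigenvalues produced by round-off
            return mean, np.clip(evals[order], 0.0, None), evecs[:, order]

        def sample_shape(mean, evals, evecs, n_modes, rng):
            """Draw one plausible geometry from the Gaussian PCA model truncated
            to the first n_modes modes (as used for Monte Carlo dose accumulation)."""
            coeffs = rng.standard_normal(n_modes) * np.sqrt(evals[:n_modes])
            return mean + evecs[:, :n_modes] @ coeffs

        rng = np.random.default_rng(0)
        shapes = rng.standard_normal((8, 30))   # 8 fractions, 10 surface points (x, y, z)
        mean, evals, evecs = pca_shape_modes(shapes)
        print(sample_shape(mean, evals, evecs, n_modes=3, rng=rng).shape)  # (30,)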

  1. SPENVIS Implementation of End-of-Life Solar Cell Calculations Using the Displacement Damage Dose Methodology

    NASA Technical Reports Server (NTRS)

    Walters, Robert; Summers, Geoffrey P.; Warner, Jeffrey H.; Messenger, Scott; Lorentzen, Justin R.; Morton, Thomas; Taylor, Stephen J.; Evans, Hugh; Heynderickx, Daniel; Lei, Fan

    2007-01-01

    This paper presents a method for using the SPENVIS on-line computational suite to implement the displacement damage dose (D(sub d)) methodology for calculating end-of-life (EOL) solar cell performance for a specific space mission. This paper builds on our previous work that has validated the D(sub d) methodology against both measured space data [1,2] and calculations performed using the equivalent fluence methodology developed by NASA JPL [3]. For several years, the space solar community has considered general implementation of the D(sub d) method, but no computer program exists to enable this implementation. In a collaborative effort, NRL, NASA and OAI have produced the Solar Array Verification and Analysis Tool (SAVANT) under NASA funding, but this program has not progressed beyond the beta-stage [4]. The SPENVIS suite with the Multi Layered Shielding Simulation Software (MULASSIS) contains all of the necessary components to implement the Dd methodology in a format complementary to that of SAVANT [5]. NRL is currently working with ESA and BIRA to include the Dd method of solar cell EOL calculations as an integral part of SPENVIS. This paper describes how this can be accomplished.
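
    For orientation, the commonly published single-curve form of the displacement damage dose approach expresses the end-of-life remaining factor of a cell parameter as P/P0 = 1 - C*log10(1 + Dd/Dx), with C and Dx fitted for each cell technology. The sketch below evaluates that expression with made-up parameters; it is not the SPENVIS/MULASSIS implementation discussed in the paper.

        import math

        def remaining_factor(dd, c, dx):
            """Displacement damage dose degradation curve in its commonly published
            form: P/P0 = 1 - C*log10(1 + Dd/Dx).
            dd, dx in MeV/g; c and dx are technology-specific fit parameters."""
            return 1.0 - c * math.log10(1.0 + dd / dx)

        # Illustrative fit parameters only (not values from the paper)
        for dd in (1e8, 1e9, 1e10):
            print(f"Dd = {dd:.0e} MeV/g -> P/P0 = {remaining_factor(dd, c=0.20, dx=3e9):.3f}")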

  2. Fractions: Activities and Exercises for Teaching Fractions in Secondary Schools. Series of Caribbean Volunteer Publications, No. 4.

    ERIC Educational Resources Information Center

    Voluntary Services Overseas, Castries (St. Lucia).

    This document contains materials from a half day workshop held at Petit Secondary School for mathematics teachers at Petit Bordel and Troumaca Ontario Secondary School on the island of St. Vincent in the Caribbean. This book advocates the use of activity-based mathematics as a teaching methodology in secondary schools and demonstrates the use of…

  3. Investigation of nanoparticle agglomeration on the effective thermal conductivity of a composite material

    NASA Astrophysics Data System (ADS)

    Webb, Anthony J.

    Phase Change Materials (PCMs), like paraffin wax, can be used for passive thermal management of portable electronics if their overall bulk thermal conductivity is increased through the addition of highly conducting nanoparticles. Finite Element Analysis (FEA) is used to investigate the influence of nanoparticle agglomeration on the overall conductive thermal transport in a nanoenhanced composite by dictating the thermal conductivity of individual elements according to their local inclusion volume fraction and characteristics inside a low conducting PCM matrix. The inclusion density distribution is dictated by an agglomeration factor, and the effective thermal conductivity of each element is calculated from the nanoparticle volume fraction using a method similar to the Representative Volume Element (RVE) methodology. FEA studies are performed for 2-D and 3-D models. In the 2-D model, the grain boundary is fixed at x = 0 for simplicity. For the 3-D model, the grain boundary geometry is randomly varied. A negligible 2-D effect on thermal transport in the 2-D model is seen, so a 1-D thermal resistance network is created for comparison, and the results agree within 4%. The influence of the agglomeration factor and contact Biot number on the overall bulk thermal conductivity is determined by applying Fourier's Law on the entire simulated composite. For the 2-D and 3-D models with a contact Biot number above 1, the overall bulk thermal conductivity decreases prior to the percolation threshold being met and then increases with increasing agglomeration. Finally, a MATLAB-based image processing tool is created to estimate the agglomeration factor based on an experimental image of a nanoparticle distribution, with a calculated approximate agglomeration value of β·L = 5, which results in a bulk thermal conductivity of 0.278 W/(m-K).
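
    For a rough sense of scale, and as a much simpler alternative to the element-wise RVE/FEA treatment described above, a classical Maxwell-Garnett mixing rule for well-dispersed spherical inclusions can be evaluated directly. The sketch below does so for a paraffin-like matrix with hypothetical particle properties; it is not the thesis model.

        def maxwell_garnett_k(k_matrix, k_particle, phi):
            """Maxwell-Garnett effective thermal conductivity for well-dispersed
            spherical inclusions at volume fraction phi (a common closed-form
            alternative to element-wise RVE estimates)."""
            num = k_particle + 2.0 * k_matrix + 2.0 * phi * (k_particle - k_matrix)
            den = k_particle + 2.0 * k_matrix - phi * (k_particle - k_matrix)
            return k_matrix * num / den

        # Paraffin-like matrix (0.25 W/m-K) with highly conducting particles (200 W/m-K)
        for phi in (0.01, 0.05, 0.10):
            print(phi, round(maxwell_garnett_k(0.25, 200.0, phi), 3))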

  4. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jena, Puru; Kandalam, Anil K.; Christian, Theresa M.

    Gallium phosphide bismide (GaP1-xBix) epilayers with bismuth fractions from 0.9% to 3.2%, as calculated from lattice parameter measurements, were studied with Rutherford backscattering spectrometry (RBS) to directly measure bismuth incorporation. The total bismuth fractions found by RBS were higher than expected from the lattice parameter calculations. Furthermore, in one analyzed sample grown by molecular beam epitaxy at 300 degrees C, 55% of incorporated bismuth was found to occupy interstitial sites. We discuss implications of this high interstitial incorporation fraction and its possible relationship to x-ray diffraction and photoluminescence measurements of GaP0.99Bi0.01.

  5. Aqueous solubility calculation for petroleum mixtures in soil using comprehensive two-dimensional gas chromatography analysis data.

    PubMed

    Mao, Debin; Lookman, Richard; Van De Weghe, Hendrik; Vanermen, Guido; De Brucker, Nicole; Diels, Ludo

    2009-04-03

    An assessment of the aqueous solubility (leaching potential) of soil contamination with petroleum hydrocarbons (TPH) is important in the context of the evaluation of (migration) risks and soil/groundwater remediation. Field measurements using monitoring wells often overestimate real TPH concentrations when pure oil is present in the screened interval of the well. This paper presents a method to calculate TPH equilibrium concentrations in groundwater using soil analysis by high-performance liquid chromatography followed by comprehensive two-dimensional gas chromatography (HPLC-GC×GC). The oil in the soil sample is divided into 79 defined hydrocarbon fractions on two GC×GC color plots. To each of these fractions a representative water solubility is assigned. Overall equilibrium water solubility of the non-aqueous phase liquid (NAPL) present in the sample and the water phase's chemical composition (in terms of the 79 fractions defined) are then calculated using Raoult's law. The calculation method was validated using soil spiked with 13 different TPH mixtures and 1 field-contaminated soil. Measured water solubilities from a column recirculation equilibration experiment agreed well with calculated equilibrium concentrations and water phase TPH composition.
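
    The Raoult's-law step reduces to a mole-fraction-weighted sum of the pure-fraction solubilities, C_w = Σ x_i·S_i over the defined hydrocarbon fractions. The sketch below shows that sum for three hypothetical fractions rather than the 79 used in the paper.

        def equilibrium_water_concentration(fractions):
            """Raoult's-law estimate of the aqueous concentration in equilibrium
            with a NAPL described by (mole_fraction, pure_compound_solubility)
            pairs, one per hydrocarbon fraction."""
            return sum(x_i * s_i for x_i, s_i in fractions)

        # Hypothetical NAPL composition (mole fraction, solubility in mg/L)
        napl = [
            (0.10, 1.8),      # light aromatic-like fraction
            (0.30, 0.05),     # mid-range aliphatic-like fraction
            (0.60, 0.001),    # heavy fraction, nearly insoluble
        ]
        print(round(equilibrium_water_concentration(napl), 4), "mg/L")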

  6. Molecular dynamics simulation for the test of calibrated OPLS-AA force field for binary liquid mixture of tri-iso-amyl phosphate and n-dodecane.

    PubMed

    Das, Arya; Ali, Sk Musharaf

    2018-02-21

    Tri-isoamyl phosphate (TiAP) has been proposed as an alternative to tri-butyl phosphate (TBP) in the Plutonium Uranium Extraction (PUREX) process. Recently, we have successfully calibrated and tested all-atom optimized potentials for liquid simulations using Mulliken partial charges for pure TiAP, TBP, and dodecane by performing molecular dynamics (MD) simulation. It is of immense importance to extend this potential for the various molecular properties of TiAP and TiAP/n-dodecane binary mixtures using MD simulation. Earlier, efforts were devoted to finding a suitable force field that can explain both structural and dynamical properties by empirical parameterization. Therefore, the present MD study reports the structural, dynamical, and thermodynamical properties of TiAP-dodecane mixtures over the entire mole fraction range of 0-1 employing our calibrated Mulliken embedded optimized potentials for liquid simulation (OPLS) force field. The calculated electric dipole moment of TiAP was seen to be almost unaffected by the TiAP concentration in the dodecane diluent. The calculated liquid densities of the TiAP-dodecane mixture are in good agreement with the experimental data. The mixture densities at different temperatures were also studied and were found to decrease with temperature, as expected. The plot of diffusivities for TiAP and dodecane against mole fraction in the binary mixture intersects at a composition in the range of 25%-30% of TiAP in dodecane, which is very close to the TBP/n-dodecane composition used in the PUREX process. The excess volume of mixing was found to be positive for the entire range of mole fraction and the excess enthalpy of mixing was shown to be endothermic for the TBP/n-dodecane mixture as well as TiAP/n-dodecane mixture as reported experimentally. The spatial pair correlation functions are evaluated between TiAP-TiAP and TiAP-dodecane molecules. Further, shear viscosity has been computed by performing the non-equilibrium molecular dynamics employing the periodic perturbation method. The calculated shear viscosity of the binary mixture is found to be in excellent agreement with the experimental values. The use of the newly calibrated OPLS force field embedding Mulliken charges is shown to be equally reliable in predicting the structural and dynamical properties for the mixture without incorporating any arbitrary scaling in the force field or Lennard-Jones parameters. Further, the present MD simulation results demonstrate that the Stokes-Einstein relation breaks down at the molecular level. The present methodology might be adopted to evaluate the liquid state properties of an aqueous-organic biphasic system, which is of great significance in interfacial science and technology.

  7. Molecular dynamics simulation for the test of calibrated OPLS-AA force field for binary liquid mixture of tri-iso-amyl phosphate and n-dodecane

    NASA Astrophysics Data System (ADS)

    Das, Arya; Ali, Sk. Musharaf

    2018-02-01

    Tri-isoamyl phosphate (TiAP) has been proposed as an alternative to tri-butyl phosphate (TBP) in the Plutonium Uranium Extraction (PUREX) process. Recently, we have successfully calibrated and tested all-atom optimized potentials for liquid simulations using Mulliken partial charges for pure TiAP, TBP, and dodecane by performing molecular dynamics (MD) simulation. It is of immense importance to extend this potential for the various molecular properties of TiAP and TiAP/n-dodecane binary mixtures using MD simulation. Earlier, efforts were devoted to finding a suitable force field that can explain both structural and dynamical properties by empirical parameterization. Therefore, the present MD study reports the structural, dynamical, and thermodynamical properties of TiAP-dodecane mixtures over the entire mole fraction range of 0-1 employing our calibrated Mulliken embedded optimized potentials for liquid simulation (OPLS) force field. The calculated electric dipole moment of TiAP was seen to be almost unaffected by the TiAP concentration in the dodecane diluent. The calculated liquid densities of the TiAP-dodecane mixture are in good agreement with the experimental data. The mixture densities at different temperatures were also studied and were found to decrease with temperature, as expected. The plot of diffusivities for TiAP and dodecane against mole fraction in the binary mixture intersects at a composition in the range of 25%-30% of TiAP in dodecane, which is very close to the TBP/n-dodecane composition used in the PUREX process. The excess volume of mixing was found to be positive for the entire range of mole fraction and the excess enthalpy of mixing was shown to be endothermic for the TBP/n-dodecane mixture as well as TiAP/n-dodecane mixture as reported experimentally. The spatial pair correlation functions are evaluated between TiAP-TiAP and TiAP-dodecane molecules. Further, shear viscosity has been computed by performing the non-equilibrium molecular dynamics employing the periodic perturbation method. The calculated shear viscosity of the binary mixture is found to be in excellent agreement with the experimental values. The use of the newly calibrated OPLS force field embedding Mulliken charges is shown to be equally reliable in predicting the structural and dynamical properties for the mixture without incorporating any arbitrary scaling in the force field or Lennard-Jones parameters. Further, the present MD simulation results demonstrate that the Stokes-Einstein relation breaks down at the molecular level. The present methodology might be adopted to evaluate the liquid state properties of an aqueous-organic biphasic system, which is of great significance in interfacial science and technology.

  8. Monte Carlo calculations of the cellular S-values for α-particle-emitting radionuclides incorporated into the nuclei of cancer cells of the MDA-MB231, MCF7 and PC3 lines.

    PubMed

    Rojas-Calderón, E L; Ávila, O; Ferro-Flores, G

    2018-05-01

    S-values (dose per unit of cumulated activity) for alpha particle-emitting radionuclides and monoenergetic alpha sources placed in the nuclei of three cancer cell models (MCF7, MDA-MB231 breast cancer cells and PC3 prostate cancer cells) were obtained by Monte Carlo simulation. The MCNPX code was used to calculate the fraction of energy deposited in the subcellular compartments due to the alpha sources in order to obtain the S-values. A comparison with internationally accepted S-values reported by the MIRD Cellular Committee for alpha sources in three sizes of spherical cells was also performed, leading to an agreement within 4% when an extended alpha source uniformly distributed in the nucleus is simulated. This result allowed us to apply the Monte Carlo methodology to evaluate S-values for alpha particles in cancer cells. The calculation of S-values for the nucleus, cytoplasm and membrane of cancer cells considering their particular geometry, distribution of the radionuclide source and chemical composition by means of Monte Carlo simulation provides a good approach for dosimetry assessment of alpha emitters inside cancer cells. Results from this work provide information and tools that may help researchers in the selection of appropriate radiopharmaceuticals in alpha-targeted cancer therapy and improve its dosimetry evaluation. Copyright © 2018 Elsevier Ltd. All rights reserved.
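
    In MIRD notation the quantity being tabulated is S = Σ_i y_i·E_i·φ_i / m, the absorbed dose to the target per nuclear transformation, with yields y_i, emission energies E_i, absorbed fractions φ_i and target mass m. The sketch below evaluates it for a single hypothetical alpha line and an assumed absorbed fraction; the numbers are illustrative and are not the MCNPX results of the study.

        import math

        MEV_TO_J = 1.602176634e-13

        def s_value_gy_per_decay(emissions, absorbed_fraction, target_mass_kg):
            """MIRD-style S value (Gy per decay): S = sum_i y_i * E_i * phi_i / m,
            with yields y_i, energies E_i (MeV), absorbed fraction phi_i in the
            target, and target mass m (kg)."""
            energy_j = sum(y * e * MEV_TO_J * absorbed_fraction for y, e in emissions)
            return energy_j / target_mass_kg

        # Hypothetical single alpha line (yield 1.0, 6 MeV), water-density nucleus of 8 um radius
        nucleus_mass = 1000.0 * (4.0 / 3.0) * math.pi * (8e-6) ** 3
        print(f"{s_value_gy_per_decay([(1.0, 6.0)], 0.4, nucleus_mass):.3f} Gy per decay")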

  9. Recoilless fractions calculated with the nearest-neighbour interaction model by Kagan and Maslow

    NASA Astrophysics Data System (ADS)

    Kemerink, G. J.; Pleiter, F.

    1986-08-01

    The recoilless fraction is calculated for a number of Mössbauer atoms that are natural constituents of HfC, TaC, NdSb, FeO, NiO, EuO, EuS, EuSe, EuTe, SnTe, PbTe and CsF. The calculations are based on a model developed by Kagan and Maslow for binary compounds with rocksalt structure. With the exception of SnTe and, to a lesser extent, PbTe, the results are in reasonable agreement with the available experimental data and values derived from other models.

  10. Notice of Data Availability for Federal Implementation Plans To Reduce Interstate Transport of Fine Particulate Matter and Ozone: Request for Comment (76 FR 1109)

    EPA Pesticide Factsheets

    This NODA requests public comment on two alternative allocation methodologies for existing units, on the unit-level allocations calculated using those alternative methodologies, on the data supporting the calculations, and on any resulting implications.

  11. 40 CFR Table Nn-1 to Subpart Hh of... - Default Factors for Calculation Methodology 1 of This Subpart

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... Calculation Methodology 1 of This Subpart. Fuel | Default high heating value factor | Default CO2 emission factor (kg CO2/MMBtu): Natural Gas, 1.028 MMBtu/Mscf, 53.02; Propane, 3.822 MMBtu/bbl, 61.46; Normal butane, 4.242...

  12. 40 CFR Table Nn-1 to Subpart Hh of... - Default Factors for Calculation Methodology 1 of This Subpart

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... Calculation Methodology 1 of This Subpart. Fuel | Default high heating value factor | Default CO2 emission factor (kg CO2/MMBtu): Natural Gas, 1.028 MMBtu/Mscf, 53.02; Propane, 3.822 MMBtu/bbl, 61.46; Normal butane, 4.242...

  13. 40 CFR Table Nn-1 to Subpart Hh of... - Default Factors for Calculation Methodology 1 of This Subpart

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... Calculation Methodology 1 of This Subpart. Fuel | Default high heating value factor | Default CO2 emission factor (kg CO2/MMBtu): Natural Gas, 1.028 MMBtu/Mscf, 53.02; Propane, 3.822 MMBtu/bbl, 61.46; Normal butane, 4.242...

  14. Isolation of Autolysosomes from Tobacco BY-2 Cells.

    PubMed

    Takatsuka, Chihiro; Inoue-Aono, Yuko; Moriyasu, Yuji

    2017-01-01

    Autolysosomes are organelles that sequester and degrade a portion of the cytoplasm during autophagy. Although autophagosomes are short lived compared to other organelles such as mitochondria, plastids, and peroxisomes, many autolysosomes accumulate in tobacco BY-2 cells cultured under sucrose starvation conditions in the presence of a cysteine protease inhibitor. We here describe our methodology for isolating autolysosomes from BY-2 cells by conventional cell fractionation using a Percoll gradient. The autolysosome fraction separates clearly from fractions containing mitochondria and peroxisomes. It contains acid phosphatase, vacuolar H+-ATPase, and protease activity. Electron micrographs show that the fraction contains partially degraded cytoplasm seen in autolysosomes before isolation although an autolysosome structure is only partially preserved.

  15. On the upper ocean turbulent dissipation rate due to microscale breakers and small whitecaps

    NASA Astrophysics Data System (ADS)

    Banner, Michael L.; Morison, Russel P.

    2018-06-01

    In ocean wave modelling, accurately computing the evolution of the wind-wave spectrum depends on the source terms and the spectral bandwidth used. The wave dissipation rate source term which spectrally quantifies wave breaking and other dissipative processes remains poorly understood, including the spectral bandwidth needed to capture the essential model physics. The observational study of Sutherland and Melville (2015a) investigated the relative dissipation rate contributions of breaking waves, from large-scale whitecaps to microbreakers. They concluded that a large fraction of wave energy was dissipated by microbreakers. However, in strong contrast with their findings, our analysis of their data and other recent data sets shows that for young seas, microbreakers and small whitecaps contribute only a small fraction of the total breaking wave dissipation rate. For older seas, we find microbreakers and small whitecaps contribute a large fraction of the breaking wave dissipation rate, but this is only a small fraction of the total dissipation rate, which is now dominated by non-breaking contributions. Hence, for all the wave age conditions observed, microbreakers make an insignificant contribution to the total wave dissipation rate in the wave boundary layer. We tested the sensitivity of the results to the SM15a whitecap analysis methodology by transforming the SM15a breaking data using our breaking crest processing methodology. This resulted in the small-scale breaking waves making an even smaller contribution to the total wave dissipation rate, and so the result is independent of the breaker processing methodology. Comparison with other near-surface total TKE dissipation rate observations also support this conclusion. These contributions to the spectral dissipation rate in ocean wave models are small and need not be explicitly resolved.

  16. A novel inversion method to calculate the mass fraction of coated refractory black carbon using a centrifugal particle mass analyzer and single particle soot photometer

    NASA Astrophysics Data System (ADS)

    Irwin, M.; Broda, K.; Olfert, J. S.; Schill, G. P.; McMeeking, G. R.; Schnitzler, E.; Jäger, W.

    2016-12-01

    Refractory black carbon (rBC) has important atmospheric impacts due to its ability to absorb light, and its interactions with light are partly governed by the acquisition of coatings or other mixing processes. Here, a novel inversion method is presented which derives the mass fraction of coated rBC using a coupled centrifugal particle mass analyzer (CPMA) and single particle soot photometer (SP2). The CPMA selects particles of a known mass-to-charge ratio, and the SP2 detects the mass of rBC in each individual particle. The results of the inversion are the simultaneous number distributions of both rBC mass and total particle mass. Practically, the distribution can be integrated to find properties of the total aerosol population, for example, i) mass fraction of coating and ii) mass of coating on a particle of known total mass. This was demonstrated via smog chamber experiments. Initially, particles in the chamber were pure rBC, produced from a methane burner and passed through a diffusion dryer and thermal denuder. An organic (non-rBC) coating was then grown onto the aerosol over several hours via photooxidation with p-xylene. The CPMA-SP2 coupled system sampled the aerosol over the reaction period as the coating grew. The CPMA was sequentially stepped over a mass range from 0.3 to 28 fg and the SP2 measured the mass of rBC in each individual CPMA-classified particle. The number and mass distributions were constructed using the inversion. As expected, the mass and number distributions of rBC and total mass were equivalent for uncoated particles. As the non-rBC coating thickness increased over time, a shift in the number distribution towards higher total mass was observed. At the end of the experiment, fresh rBC (i.e. uncoated, bare particles) was injected into the chamber, creating an external mixture of coated and uncoated particles. This external mixture was clearly resolved in the number distribution of rBC and total particle mass. It is expected that the CPMA-SP2 methodology and inversion technique would be useful for field measurements where the rBC mass fraction, and mixing state of rBC-containing particles, could be accurately measured continuously. This methodology is not limited to evaluating coating mass; unlike SP2-only methods, it gives an unambiguous measure of any non-rBC material mixed with the particle.
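
    The per-particle quantity behind these population-level distributions is simple: with the CPMA giving the total particle mass and the SP2 the rBC core mass, the coating mass fraction is (m_total - m_rBC) / m_total. A trivial sketch with made-up masses:

        def coating_mass_fraction(total_particle_mass_fg, rbc_mass_fg):
            """Mass fraction of non-rBC material on a CPMA-classified particle
            whose rBC core mass was measured by the SP2."""
            coating = total_particle_mass_fg - rbc_mass_fg
            return coating / total_particle_mass_fg

        # A 5 fg particle whose rBC core is 2 fg carries a 60% coating mass fraction
        print(coating_mass_fraction(5.0, 2.0))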

  17. Tutorial for Collecting and Processing Images of Composite Structures to Determine the Fiber Volume Fraction

    NASA Technical Reports Server (NTRS)

    Conklin, Lindsey

    2017-01-01

    Fiber-reinforced composite structures have become more common in aerospace components due to their light weight and structural efficiency. In general, the strength and stiffness of a composite structure are directly related to the fiber volume fraction, which is defined as the ratio of fiber volume to total volume of the composite. The most common method to measure the fiber volume fraction is acid digestion, which is useful when the total weight of the composite and the weight of the fibers can easily be obtained. However, acid digestion is a destructive test, so the material will no longer be available for additional characterization. It can also be difficult to machine out specific components of a composite structure with complex geometry for acid digestion. These disadvantages of acid digestion led the author to develop a method to calculate the fiber volume fraction. The developed method uses optical microscopy to calculate the fiber area fraction based on images of the cross section of the composite. The fiber area fraction and fiber volume fraction are taken to be equivalent, based on the assumption that the shape and size of the fibers are consistent through the depth of the composite. This tutorial explains the developed method for optically determining fiber area fraction performed at NASA Langley Research Center.
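
    A minimal stand-in for such an optical procedure is to threshold a grayscale cross-section image and take the fraction of pixels classified as fiber as the area fraction. The sketch below builds a synthetic micrograph and applies that threshold; it is only illustrative and is not the NASA Langley tool described in the tutorial.

        import numpy as np

        def fiber_area_fraction(gray_image, threshold):
            """Estimate the fiber area fraction from a cross-section micrograph by
            counting pixels brighter than a threshold (fibers often image brighter
            than the surrounding matrix)."""
            fibers = np.asarray(gray_image) > threshold
            return fibers.sum() / fibers.size

        # Synthetic "micrograph": dark matrix with three bright circular fibers
        rng = np.random.default_rng(1)
        img = rng.normal(60, 10, size=(200, 200))                   # matrix background
        yy, xx = np.mgrid[0:200, 0:200]
        for cx, cy in [(50, 50), (150, 60), (100, 140)]:
            img[(xx - cx) ** 2 + (yy - cy) ** 2 < 20 ** 2] = 200    # fiber cross sections
        print(round(fiber_area_fraction(img, threshold=128), 3))    # ~0.09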

  18. Data report on variations in the composition of sea ice during MIZEX/East'83 with the Nimbus-7 SMMR

    NASA Technical Reports Server (NTRS)

    Gloersen, P.

    1984-01-01

    Data acquired with the scanning multichannel microwave radiometer (SMMR) on board the Nimbus-7 satellite for a six-week period including the 1983 MIZEX in Fram Strait were analyzed with the use of a previously developed procedure for calculating sea ice concentration, multiyear fraction, and ice temperature. These calculations can be compared with independent observations made on the surface and from aircraft in order to check the validity of the calculations based on SMMR data. The calculation of multiyear fraction, which was known earlier to be invalid near the melting point of sea ice, was of particular interest during this period. The indication of multiyear ice was found to disappear a number of times, presumably corresponding to freeze/thaw cycles which occurred in this time period. Both grid-print maps and grey-scale images of total sea ice concentration and multiyear sea ice fraction for the entire period are included.

  19. Co-pyrolysis characteristic of biomass and bituminous coal.

    PubMed

    Li, Shuaidan; Chen, Xueli; Liu, Aibin; Wang, Li; Yu, Guangsuo

    2015-03-01

    Co-pyrolysis characteristics of biomass and bituminous coal have been studied in this work. The temperature was up to 900°C with heating rates of 10, 15, 20, 25 and 30°C/min. Rice straw, saw dust, microcrystalline cellulose, lignin and Shenfu bituminous coal were chosen as samples. Six different biomass ratios were used. The individual thermal behavior of each sample was obtained. The experimental weight fractions of the blended samples were compared with the calculated values. The results show that the weight fractions of the blended samples differ from the calculated ones during the co-pyrolysis process. With increasing biomass ratio, the relative deviations between experimental and calculated weight fractions become larger. The H/C molar ratio and heat transfer properties of the biomass affect the interaction between biomass and coal. The maximum degradation rates are slower than the calculated ones. The activation energy distributions also changed when biomass was added to the coal. Copyright © 2014 Elsevier Ltd. All rights reserved.
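
    The "calculated" blend curves referred to above are typically the no-interaction estimate: a mass-weighted sum of the residual weight fractions of the individual components at matched temperatures, so that deviations of the measured blend curve from this sum indicate interaction between the fuels. A sketch with toy curves (not the paper's data):

        import numpy as np

        def calculated_blend_tg(weight_biomass, tg_biomass, tg_coal):
            """Calculated TG curve of a blend assuming no interaction between the
            components: a mass-weighted sum of the individual residual weight
            fractions measured at the same temperatures."""
            return weight_biomass * np.asarray(tg_biomass) \
                   + (1.0 - weight_biomass) * np.asarray(tg_coal)

        # Toy residual-weight curves (fractions of initial mass) at matched temperatures
        tg_biomass = np.array([1.00, 0.80, 0.45, 0.30, 0.25])
        tg_coal    = np.array([1.00, 0.97, 0.90, 0.75, 0.70])
        print(calculated_blend_tg(0.3, tg_biomass, tg_coal))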

  20. Aggregation Number in Water/n-Hexanol Molecular Clusters Formed in Cyclohexane at Different Water/n-Hexanol/Cyclohexane Compositions Calculated by Titration 1H NMR.

    PubMed

    Flores, Mario E; Shibue, Toshimichi; Sugimura, Natsuhiko; Nishide, Hiroyuki; Moreno-Villoslada, Ignacio

    2017-11-09

    Upon titration of n-hexanol/cyclohexane mixtures of different molar compositions with water, water/n-hexanol clusters are formed in cyclohexane. Here, we develop a new method to estimate the water and n-hexanol aggregation numbers in the clusters that combines integration analysis in one-dimensional 1H NMR spectra, diffusion coefficients calculated by diffusion-ordered NMR spectroscopy, and further application of the Stokes-Einstein equation to calculate the hydrodynamic volume of the clusters. Aggregation numbers of 5-15 molecules of n-hexanol per cluster in the absence of water were observed in the whole range of n-hexanol/cyclohexane molar fractions studied. After saturation with water, aggregation numbers of 6-13 n-hexanol and 0.5-5 water molecules per cluster were found. O-H and O-O atom distances related to hydrogen bonds between donor/acceptor molecules were theoretically calculated using density functional theory. The results show that at low n-hexanol molar fractions, where a robust hydrogen-bond network is held between n-hexanol molecules, addition of water makes the intermolecular O-O atom distance shorter, reinforcing molecular association in the clusters, whereas at high n-hexanol molar fractions, where dipole-dipole interactions dominate, addition of water makes the intermolecular O-O atom distance longer, weakening the cluster structure. This correlates with experimental NMR results, which show an increase in the size and aggregation number in the clusters upon addition of water at low n-hexanol molar fractions, and a decrease of these magnitudes at high n-hexanol molar fractions. In addition, water produces an increase in the proton exchange rate between donor/acceptor molecules at all n-hexanol molar fractions.
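
    The Stokes-Einstein step converts each DOSY diffusion coefficient into a hydrodynamic radius, r_H = k_B·T / (6π·η·D), and hence a hydrodynamic volume from which aggregation numbers can be estimated. The sketch below uses illustrative values for D, viscosity and temperature rather than the measured ones.

        import math

        K_B = 1.380649e-23  # Boltzmann constant, J/K

        def hydrodynamic_radius_m(diffusion_m2_s, viscosity_pa_s, temperature_k):
            """Stokes-Einstein estimate of the hydrodynamic radius from a DOSY
            diffusion coefficient: r_H = k_B*T / (6*pi*eta*D)."""
            return K_B * temperature_k / (6.0 * math.pi * viscosity_pa_s * diffusion_m2_s)

        def hydrodynamic_volume_m3(radius_m):
            return (4.0 / 3.0) * math.pi * radius_m ** 3

        # Illustrative values: D ~ 5e-10 m2/s in cyclohexane (eta ~ 0.9 mPa s) at 298 K
        r = hydrodynamic_radius_m(5e-10, 0.9e-3, 298.0)
        print(f"r_H = {r*1e9:.2f} nm, V_H = {hydrodynamic_volume_m3(r)*1e27:.2f} nm^3")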

  1. First-principles investigations of equilibrium Ca, Mg, Si and O isotope fractionations between silicate melts and minerals

    NASA Astrophysics Data System (ADS)

    Qi, Y.; Liu, X.; Kang, J.; He, L.

    2017-12-01

    Equilibrium isotope fractionation factors are essential for using stable isotope data to study many geoscience processes such as planetary differentiation and mantle evolution. The mass-dependent equilibrium isotope fractionation is primarily controlled by the difference in bond energies triggered by the isotope substitution. With the recent advances in computational capabilities, first-principles calculation has become a reliable tool to investigate equilibrium isotopic fractionations, greatly improving our understanding of the factors controlling isotope fractionations. It is important to understand the isotope fractionation between melts and minerals because magmatism is critical for creating and shaping the Earth. However, because isotope fractionation between melts and minerals is small at high temperature, it is difficult to experimentally calibrate such a small signature. Due to the disordered and dynamic character of melts, calculations of equilibrium isotope fractionation of melts are more challenging than those for gaseous molecules or minerals. Here, we apply the first-principles molecular dynamics method to calculate equilibrium Ca, Mg, Si, and O isotope fractionations between silicate melts and minerals. Our results show that equilibrium Mg, Si, and O isotope fractionations between olivine and pure Mg2SiO4 melt are close to zero at high temperature (e.g. δ26Mgmelt-ol = 0.03 ± 0.04‰, δ30Simelt-ol = -0.06 ± 0.07‰, δ18Omelt-ol = 0.07 ± 0.08‰ at 1500 K). Equilibrium Ca, Mg, Si, and O isotope fractionations between diopside and basalt melt (67% CaMgSi2O6 + 33% CaAl2Si2O8) are also negligible at high temperature (e.g. δ44/40Camelt-cpx = -0.01 ± 0.02‰, δ26Mgmelt-cpx = -0.05 ± 0.14‰, δ30Simelt-cpx = 0.04 ± 0.04‰, δ18Omelt-cpx = 0.03 ± 0.07‰ at 1500 K). These results are consistent with the observations in natural samples that there is no significant Ca, Mg, Si, and O isotope fractionation during mantle partial melting, demonstrating the reliability of our methods. Thus, our results can be used to understand stable isotope fractionation during partial melting of mantle peridotite or fractional crystallization during magmatic differentiation. The first-principles molecular dynamics method is a promising tool to obtain equilibrium fractionation factors for more isotope systems and for complex liquids.

  2. Cognitive Profiles Associated with Responsiveness to Fraction Intervention

    ERIC Educational Resources Information Center

    Krowka, Sarah K.; Fuchs, Lynn S.

    2017-01-01

    This study examined differences in cognitive processing between 4th-grade students who respond adequately, as opposed to inadequately, to intervention on 3 fraction outcomes: number-line estimation, calculation, and word problems. Students were assessed on 7 cognitive processes and on the 3 fraction outcomes. Students were grouped as adequate or…

  3. The Future of Fractions

    ERIC Educational Resources Information Center

    Usiskin, Zalman P.

    2007-01-01

    In the 1970s, the movement to the metric system (which has still not completely occurred in the United States) and the advent of hand-held calculators led some to speculate that decimal representation of numbers would render fractions obsolete. This provocative proposition stimulated Zalman Usiskin to write "The Future of Fractions" in 1979. He…

  4. A novel method for flow pattern identification in unstable operational conditions using gamma ray and radial basis function.

    PubMed

    Roshani, G H; Nazemi, E; Roshani, M M

    2017-05-01

    Changes in fluid properties (especially density) strongly affect the performance of radiation-based multiphase flow meters and can cause errors in recognizing the flow pattern and determining the void fraction. In this work, we propose a methodology based on a combination of multi-beam gamma-ray attenuation and dual-modality densitometry techniques using RBF neural networks in order to recognize the flow regime and determine the void fraction in gas-liquid two-phase flows independent of changes in the liquid phase. The proposed system consists of one 137Cs source, two transmission detectors and one scattering detector. The registered counts in the two transmission detectors were used as the inputs of one primary Radial Basis Function (RBF) neural network for recognizing the flow regime independent of liquid phase density. Then, after flow regime identification, three RBF neural networks were utilized for determining the void fraction independent of liquid phase density. The registered counts in the scattering detector and the first transmission detector were used as the inputs of these three RBF neural networks. Using this simple methodology, all the flow patterns were correctly recognized and the void fraction was predicted independent of liquid phase density with a mean relative error (MRE) of less than 3.28%. Copyright © 2017 Elsevier Ltd. All rights reserved.
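
    As an illustration of the regression step, a minimal Gaussian RBF network can be written in a few lines of NumPy; the centers, width and synthetic training data below are assumptions, not the authors' detector configuration.

```python
# Minimal NumPy sketch of a Gaussian radial basis function (RBF) regressor of the kind
# used in the paper; detector counts would be the inputs and void fraction the target.
import numpy as np


def rbf_design(X, centers, width):
    """Gaussian RBF design matrix for samples X (n, d) and centers (m, d)."""
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
    return np.exp(-d2 / (2.0 * width ** 2))


def rbf_fit(X, y, centers, width):
    """Least-squares output weights for a Gaussian RBF network."""
    Phi = rbf_design(X, centers, width)
    w, *_ = np.linalg.lstsq(Phi, y, rcond=None)
    return w


def rbf_predict(X, centers, width, w):
    return rbf_design(X, centers, width) @ w


# Toy usage: two transmission-detector counts -> void fraction (synthetic numbers).
X_train = np.random.rand(50, 2)
y_train = X_train.sum(axis=1) / 2.0
centers = X_train[:10]
w = rbf_fit(X_train, y_train, centers, width=0.3)
pred = rbf_predict(X_train, centers, width=0.3, w=w)
```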

  5. SU-E-T-385: 4D Radiobiology

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fourkal, E; Hossain, M; Veltchev, I

    2014-06-01

    Purpose: The linear-quadratic model is the most prevalent model for planning dose fractionation in radiation therapy in low dose-per-fraction regimens. However, for the high-dose fractions used in SRS/SBRT/HDR treatments the LQ model does not yield accurate predictions, because it neglects the reduction in the number of sublethal lesions as a result of their conversion to lethal lesions with subsequent irradiation. Proper accounting for this reduction in the number of sublethally damaged lesions leads to a dependence of the survival fraction on the temporal structure of the dose. The main objective of this work is to show that the functional dependence of the dose rate on time in each voxel is an important additional factor that can significantly influence the TCP. Methods: Two SBRT lung plans were used to calculate the TCPs for the same patient. One plan is a 3D conformal plan and the other is an IMRT plan. Both plans are normalized so that 99.5% of the PTV volume receives the same prescription dose of 50 Gy in 5 fractions. The dose rate in each individual voxel is calculated as a function of treatment time and subsequently used in the calculation of TCP. Results: The calculated TCPs show that shorter delivery times lead to greater TCP, despite all delivery times being short compared to the repair half-time for sublethal lesions. Furthermore, the calculated TCP(IMRT) = 0.308 for the IMRT plan is smaller than TCP(3D) = 0.425 for the 3D conformal plan, even though the IMRT plan shows greater tumor hot spots and equal PTV coverage. The calculated TCPs are considerably lower than those based on the LQ model, for which TCP = 1 for both plans. Conclusion: The functional dependence of the voxel-by-voxel dose rate on time may be an important factor in predicting the treatment outcome and cannot be neglected in radiobiological modeling.
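
    For comparison, the baseline LQ/Poisson TCP calculation that this work argues against is straightforward; the sketch below uses assumed radiobiological parameters and ignores the dose-rate and repair kinetics that are the point of the abstract.

```python
# Hedged sketch of the baseline LQ/Poisson TCP bookkeeping; alpha, beta and the
# clonogen number are illustrative placeholders, and complete inter-fraction repair
# with no temporal/dose-rate effects is assumed.
import math


def lq_survival(dose_per_fraction, n_fractions, alpha, beta):
    """LQ surviving fraction for n equal fractions with full inter-fraction repair."""
    return math.exp(-n_fractions * (alpha * dose_per_fraction
                                    + beta * dose_per_fraction ** 2))


def poisson_tcp(n_clonogens, surviving_fraction):
    """Poisson tumour control probability from the expected surviving clonogens."""
    return math.exp(-n_clonogens * surviving_fraction)


# e.g. 5 x 10 Gy, alpha = 0.3 /Gy, alpha/beta = 10 Gy, 1e7 clonogens (assumed values):
sf = lq_survival(10.0, 5, alpha=0.3, beta=0.03)
tcp = poisson_tcp(1e7, sf)
```

    With these illustrative numbers the Poisson TCP is essentially 1, consistent with the LQ-based value quoted above for both plans.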

  6. Thermal and mass implications of magmatic evolution in the Lassen volcanic region, California, and minimum constraints on basalt influx to the lower crust

    USGS Publications Warehouse

    Guffanti, M.; Clynne, M.A.; Muffler, L.J.P.

    1996-01-01

    We have analyzed the heat and mass demands of a petrologic model of basalt-driven magmatic evolution in which variously fractionated mafic magmas mix with silicic partial melts of the lower crust. We have formulated steady state heat budgets for two volcanically distinct areas in the Lassen region: the large, late Quaternary, intermediate to silicic Lassen volcanic center and the nearby, coeval, less evolved Caribou volcanic field. At Caribou volcanic field, heat provided by cooling and fractional crystallization of 52 km3 of basalt is more than sufficient to produce 10 km3 of rhyolitic melt by partial melting of lower crust. Net heat added by basalt intrusion at Caribou volcanic field is equivalent to an increase in lower crustal heat flow of ~7 mW m-2, indicating that the field is not a major crustal thermal anomaly. Addition of cumulates from fractionation is offset by removal of erupted partial melts. A minimum basalt influx of 0.3 km3 (km2 Ma)-1 is needed to supply Caribou volcanic field. Our methodology does not fully account for an influx of basalt that remains in the crust as derivative intrusives. On the basis of comparison to deep heat flow, the input of basalt could be ~3 to 7 times the amount we calculate. At Lassen volcanic center, at least 203 km3 of mantle-derived basalt is needed to produce 141 km3 of partial melt and drive the volcanic system. Partial melting mobilizes lower crustal material, augmenting the magmatic volume available for eruption at Lassen volcanic center; thus the erupted volume of 215 km3 exceeds the calculated basalt input of 203 km3. The minimum basalt input of 1.6 km3 (km2 Ma)-1 is >5 times the minimum influx to the Caribou volcanic field. Basalt influx high enough to sustain considerable partial melting, coupled with a locally high extension rate, is a crucial factor in the development of Lassen volcanic center; in contrast, Caribou volcanic field has failed to develop into a large silicic center primarily because basalt supply there has been insufficient.

  7. 78 FR 25116 - Self-Regulatory Organizations; Chicago Mercantile Exchange Inc.; Notice of Filing of Proposed...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-04-29

    ... Liquidity Factor of CME's CDS Margin Methodology April 23, 2013. Pursuant to Section 19(b)(1) of the... additions; bracketed text indicates deletions. * * * * * CME CDS Liquidity Margin Factor Calculation Methodology The Liquidity Factor will be calculated as the sum of two components: (1) A concentration charge...

  8. 77 FR 77160 - Self-Regulatory Organizations; Chicago Mercantile Exchange Inc.; Notice of Filing of Proposed...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-12-31

    ... Liquidity Factor of CME's CDS Margin Methodology December 21, 2012. Pursuant to Section 19(b)(1) of the.... * * * * * CME CDS Liquidity Margin Factor Calculation Methodology The Liquidity Factor will be calculated as the... Liquidity Factor using the current Gross Notional Function with the following modifications: (1) the...

  9. Branching ratio and polarization of B→ρ(ω)ρ(ω) decays in perturbative QCD approach

    NASA Astrophysics Data System (ADS)

    Li, Ying; Lü, Cai-Dian

    2006-01-01

    In this work, we calculate the branching ratios, polarization fractions and CP asymmetry parameters of the decay modes B→ρ(ω)ρ(ω) in the perturbative QCD approach, which is based on kT factorization. We find that the branching ratios of B0→ρ+ρ-, B+→ρ+ρ0, and B+→ρ+ω are of the order of 10-5, and their longitudinal polarization fractions are more than 90%. These results agree with BaBar’s measurements. We also calculate the branching ratios and polarization fractions of B0→ρ0ρ0, B0→ρ0ω, and B0→ωω decays. We find that their longitudinal polarization fractions are suppressed to 60-80% due to a small color-suppressed tree contribution. The dominant penguin and nonfactorizable tree contributions contribute equally to the longitudinal and transverse polarizations, which will be tested in future experiments. We predict the CP asymmetries of B0→ρ+ρ- and B+→ρ+ρ0, which will be measured at the B factories.

  10. FRAC-IN-THE-BOX utilization

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Collins, D.G.; West, J.T.

    FRAC-IN-THE-BOX is a computer code developed to calculate the fractions of rectangular parallelepiped mesh cell volumes that are intersected by combinatorial geometry type zones. The geometry description used in the code is a subset of the combinatorial geometry used in SABRINA. The input file may be read into SABRINA and three-dimensional plots made of the input geometry. The volume fractions for those portions of the geometry that are too complicated to describe with the geometry routines provided in FRAC-IN-THE-BOX may be calculated in SABRINA and merged with the volume fractions computed for the remainder of the geometry. 21 figs., 1 tab.
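
    The sketch below is not the FRAC-IN-THE-BOX algorithm itself; it is a hedged Monte Carlo estimate of the same quantity (the fraction of a mesh cell occupied by a zone), with a hypothetical point-membership test standing in for the combinatorial geometry.

```python
# Monte Carlo estimate of the fraction of a rectangular-parallelepiped mesh cell
# occupied by a geometry zone. `inside_zone` is a user-supplied membership test.
import random


def cell_volume_fraction(inside_zone, cell_min, cell_max, n_samples=100_000):
    hits = 0
    for _ in range(n_samples):
        point = [random.uniform(lo, hi) for lo, hi in zip(cell_min, cell_max)]
        hits += bool(inside_zone(*point))
    return hits / n_samples


# Toy usage: fraction of the unit cell inside a sphere of radius 1 centred at the origin.
frac = cell_volume_fraction(lambda x, y, z: x*x + y*y + z*z < 1.0,
                            (0.0, 0.0, 0.0), (1.0, 1.0, 1.0))
# Expected ~ pi/6 ≈ 0.524.
```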

  11. Chaos Suppression in Fractional order Permanent Magnet Synchronous Generator in Wind Turbine Systems

    NASA Astrophysics Data System (ADS)

    Rajagopal, Karthikeyan; Karthikeyan, Anitha; Duraisamy, Prakash

    2017-06-01

    In this paper we investigate the control of a three-dimensional non-autonomous fractional-order uncertain model of a permanent magnet synchronous generator (PMSG) via an adaptive control technique. We derive a dimensionless fractional-order model of the PMSG from the integer-order model presented in the literature. Various dynamic properties of the fractional-order model, such as eigenvalues, Lyapunov exponents, bifurcation and bicoherence, are investigated. The chaotic behavior of the system for various orders of the fractional calculus is presented. An adaptive controller is derived to suppress the chaotic oscillations of the fractional-order model. As direct Lyapunov stability analysis of the robust controller is difficult for a fractional-order first derivative, we derive a new lemma to analyze the stability of the system. Numerical simulations of the proposed chaos suppression methodology are given to prove the analytical results, through which we show that, for the derived adaptive controller and the parameter update law, the origin of the system is asymptotically stable for any bounded initial conditions.

  12. A Statistical Treatment of Bioassay Pour Fractions

    NASA Technical Reports Server (NTRS)

    Barengoltz, Jack; Hughes, David W.

    2014-01-01

    The binomial probability distribution is used to treat the statistics of a microbiological sample that is split into two parts, with only one part evaluated for spore count. One wishes to estimate the total number of spores in the sample based on the counts obtained from the part that is evaluated (pour fraction). Formally, the binomial distribution is recharacterized as a function of the observed counts (successes), with the total number (trials) an unknown. The pour fraction is the probability of success per spore (trial). This distribution must be renormalized in terms of the total number. Finally, the new renormalized distribution is integrated and mathematically inverted to yield the maximum estimate of the total number as a function of a desired level of confidence ( P(
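
    A minimal sketch of a related calculation, assuming a standard one-sided binomial upper bound rather than the renormalized-distribution inversion described above; the function name and the example numbers are hypothetical.

```python
# Hedged sketch: an upper confidence bound on the total spore count N given k observed
# colonies and a pour fraction p, taking the largest N for which observing <= k spores
# is not too improbable. This illustrates the idea; it is not necessarily the paper's
# renormalized-distribution result.
from scipy.stats import binom


def upper_bound_total(k_observed, pour_fraction, confidence=0.95, n_max=100_000):
    bound = k_observed
    for n in range(k_observed, n_max + 1):
        if binom.cdf(k_observed, n, pour_fraction) >= 1.0 - confidence:
            bound = n
        else:
            break
    return bound


# e.g. 3 spores counted in a 0.25 pour fraction, at 95% confidence (assumed numbers):
n_upper = upper_bound_total(3, 0.25, confidence=0.95)
```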

  13. Estimation's Role in Calculations with Fractions

    ERIC Educational Resources Information Center

    Johanning, Debra I.

    2011-01-01

    Estimation is more than a skill or an isolated topic. It is a thinking tool that needs to be emphasized during instruction so that students will learn to develop algorithmic procedures and meaning for fraction operations. For students to realize when fractions should be added, subtracted, multiplied, or divided, they need to develop a sense of…

  14. On stability of fixed points and chaos in fractional systems.

    PubMed

    Edelman, Mark

    2018-02-01

    In this paper, we propose a method to calculate asymptotically period two sinks and define the range of stability of fixed points for a variety of discrete fractional systems of the order 0<α<2. The method is tested on various forms of fractional generalizations of the standard and logistic maps. Based on our analysis, we make a conjecture that chaos is impossible in the corresponding continuous fractional systems.

  15. White wines aroma recovery and enrichment: Sensory-led aroma selection and consumer perception.

    PubMed

    Lezaeta, Alvaro; Bordeu, Edmundo; Agosin, Eduardo; Pérez-Correa, J Ricardo; Varela, Paula

    2018-06-01

    We developed a sensory-based methodology to aromatically enrich wines using different aromatic fractions recovered during fermentations of Sauvignon Blanc must. By means of threshold determination and generic descriptive analysis using a trained sensory panel, the aromatic fractions were characterized, selected, and clustered. The selected fractions were grouped, re-assessed, and validated by the trained panel. A consumer panel assessed overall liking and answered a CATA question on some enriched wines and their ideal sample. Differences in elicitation rates between non-enriched and enriched wines with respect to the ideal product highlighted product optimization and the role of aromatic enrichment. Enrichment with aromatic fractions increased the aromatic quality of wines and enhanced consumer appreciation. Copyright © 2018. Published by Elsevier Ltd.

  16. Error reduction in three-dimensional metrology combining optical and touch probe data

    NASA Astrophysics Data System (ADS)

    Gerde, Janice R.; Christens-Barry, William A.

    2010-08-01

    Analysis of footwear under the Harmonized Tariff Schedule of the United States (HTSUS) is partly based on identifying the boundary ("parting line") between the "external surface area upper" (ESAU) and the sample's sole. Often, that boundary is obscured. We establish the parting line as the curved intersection between the sample outer surface and its insole surface. The outer surface is determined by discrete point cloud coordinates obtained using a laser scanner. The insole surface is defined by point cloud data, obtained using a touch probe device-a coordinate measuring machine (CMM). Because these point cloud data sets do not overlap spatially, a polynomial surface is fitted to the insole data and extended to intersect a mesh fitted to the outer surface point cloud. This line of intersection defines the ESAU boundary, permitting further fractional area calculations to proceed. The defined parting line location is sensitive to the polynomial used to fit experimental data. Extrapolation to the intersection with the ESAU can heighten this sensitivity. We discuss a methodology for transforming these data into a common reference frame. Three scenarios are considered: measurement error in point cloud coordinates, from fitting a polynomial surface to a point cloud then extrapolating beyond the data set, and error from reference frame transformation. These error sources can influence calculated surface areas. We describe experiments to assess error magnitude, the sensitivity of calculated results on these errors, and minimizing error impact on calculated quantities. Ultimately, we must ensure that statistical error from these procedures is minimized and within acceptance criteria.

  17. Fractional representation theory - Robustness results with applications to finite dimensional control of a class of linear distributed systems

    NASA Technical Reports Server (NTRS)

    Nett, C. N.; Jacobson, C. A.; Balas, M. J.

    1983-01-01

    This paper reviews and extends the fractional representation theory. In particular, new and powerful robustness results are presented. This new theory is utilized to develop a preliminary design methodology for finite dimensional control of a class of linear evolution equations on a Banach space. The design is for stability in an input-output sense, but particular attention is paid to internal stability as well.

  18. Collisional Ionization Equilibrium for Optically Thin Plasmas

    NASA Technical Reports Server (NTRS)

    Bryans, P.; Mitthumsiri, W.; Savin, D. W.; Badnell, N. R.; Gorczyca, T. W.; Laming, J. M.

    2006-01-01

    Reliably interpreting spectra from electron-ionized cosmic plasmas requires accurate ionization balance calculations for the plasma in question. However, much of the atomic data needed for these calculations have not been generated using modern theoretical methods and their reliability are often highly suspect. We have utilized state-of-the-art calculations of dielectronic recombination (DR) rate coefficients for the hydrogenic through Na-like ions of all elements from He to Zn. We have also utilized state-of-the-art radiative recombination (RR) rate coefficient calculations for the bare through Na-like ions of all elements from H to Zn. Using our data and the recommended electron impact ionization data of Mazzotta et al. (1998), we have calculated improved collisional ionization equilibrium calculations. We compare our calculated fractional ionic abundances using these data with those presented by Mazzotta et al. (1998) for all elements from H to Ni, and with the fractional abundances derived from the modern DR and RR calculations of Gu (2003a,b, 2004) for Mg, Si, S, Ar, Ca, Fe, and Ni.

  19. Sorption of the pharmaceuticals carbamazepine and naproxen to dissolved organic matter: role of structural fractions.

    PubMed

    Maoz, Adi; Chefetz, Benny

    2010-02-01

    Pharmaceutical compounds and dissolved organic matter (DOM) are co-introduced into the environment by irrigation with reclaimed wastewater and/or application of biosolids. In this study, we evaluate the role and mechanism of interaction of the pharmaceuticals naproxen and carbamazepine with structural fractions of biosolids-derived DOM. Sorption interactions were estimated from dialysis-bag experiments at different pHs. Sorption of naproxen and carbamazepine by the hydrophobic acid fraction exhibited strong pH-dependence. With both pharmaceuticals, the highest sorption coefficients (K(DOC)) were at pH 4. With the hydrophobic neutral fraction, pH affected only naproxen sorption (decreasing with increasing pH). Among the hydrophilic DOM fractions, the hydrophilic acid fraction exhibited the highest K(DOC) value for carbamazepine, probably due to their bipolar character. In the hydrophilic acid fraction-naproxen system, significant anionic repulsion was observed with increasing pH. The hydrophilic base fraction contains positively charged functional groups. Therefore with increasing ionization of naproxen (with increasing pH), K(DOC) to this fraction increased. The hydrophilic neutral fraction exhibited the lowest K(DOC) with both studied pharmaceuticals. The K(DOC) value of carbamazepine with the bulk DOM sample was higher than the calculated K(DOC) value based on sorption by the individual isolated fractions. The opposite trend was observed with naproxen at pH 8: the calculated K(DOC) value was higher than the value obtained for the bulk DOM. These results demonstrate that DOM fractions interact with each other and do not act as separate sorption domains. (c) 2009 Elsevier Ltd. All rights reserved.
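
    For reference, the partition coefficient quoted above is conventionally normalized to the mass of dissolved organic carbon; a minimal sketch with placeholder numbers (not the study's measurements) is shown below.

```python
# Hedged sketch of the standard K_DOC bookkeeping: the DOM-water partition coefficient
# from a dialysis-type experiment, K_DOC = (bound sorbate per kg organic carbon) /
# (freely dissolved sorbate concentration).
def k_doc(c_total, c_free, doc_kg_per_L):
    """K_DOC in L/kg-C from total and freely dissolved sorbate concentrations
    (same units) and the dissolved organic carbon concentration (kg C per litre)."""
    c_bound = c_total - c_free
    return (c_bound / doc_kg_per_L) / c_free


# e.g. 100 ug/L total, 80 ug/L freely dissolved, 20 mg C/L of DOM (placeholder values):
kdoc = k_doc(c_total=100.0, c_free=80.0, doc_kg_per_L=20e-6)  # ~1.25e4 L/kg-C
```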

  20. Method for beam hardening correction in quantitative computed X-ray tomography

    NASA Technical Reports Server (NTRS)

    Yan, Chye Hwang (Inventor); Whalen, Robert T. (Inventor); Napel, Sandy (Inventor)

    2001-01-01

    Each voxel is assumed to contain exactly two distinct materials, with the volume fraction of each material being iteratively calculated. According to the method, the spectrum of the X-ray beam must be known, and the attenuation spectra of the materials in the object must be known, and be monotonically decreasing with increasing X-ray photon energy. Then, a volume fraction is estimated for the voxel, and the spectrum is iteratively calculated.

  1. Method and apparatus for probing relative volume fractions

    DOEpatents

    Jandrasits, Walter G.; Kikta, Thomas J.

    1998-01-01

    A relative volume fraction probe particularly for use in a multiphase fluid system includes two parallel conductive paths defining therebetween a sample zone within the system. A generating unit generates time varying electrical signals which are inserted into one of the two parallel conductive paths. A time domain reflectometer receives the time varying electrical signals returned by the second of the two parallel conductive paths and, responsive thereto, outputs a curve of impedance versus distance. An analysis unit then calculates the area under the curve, subtracts the calculated area from an area produced when the sample zone consists entirely of material of a first fluid phase, and divides this calculated difference by the difference between an area produced when the sample zone consists entirely of material of the first fluid phase and an area produced when the sample zone consists entirely of material of a second fluid phase. The result is the volume fraction.
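
    The calculation described in the last two sentences reduces to a single ratio; a minimal sketch with assumed area values follows.

```python
# Direct transcription of the stated calculation into a hedged sketch: the relative
# volume fraction from the area under the impedance-distance curve, referenced to the
# areas measured with the sample zone filled entirely by each fluid phase.
def relative_volume_fraction(area_sample, area_phase1, area_phase2):
    """Volume fraction (of the second phase, by the stated construction)."""
    return (area_phase1 - area_sample) / (area_phase1 - area_phase2)


# Example with assumed areas: all-phase-1 reading 10.0, all-phase-2 reading 4.0,
# mixed-sample reading 7.0 -> fraction = 0.5.
vf = relative_volume_fraction(area_sample=7.0, area_phase1=10.0, area_phase2=4.0)
```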

  2. Method and apparatus for probing relative volume fractions

    DOEpatents

    Jandrasits, W.G.; Kikta, T.J.

    1998-03-17

    A relative volume fraction probe particularly for use in a multiphase fluid system includes two parallel conductive paths defining therebetween a sample zone within the system. A generating unit generates time varying electrical signals which are inserted into one of the two parallel conductive paths. A time domain reflectometer receives the time varying electrical signals returned by the second of the two parallel conductive paths and, responsive thereto, outputs a curve of impedance versus distance. An analysis unit then calculates the area under the curve, subtracts the calculated area from an area produced when the sample zone consists entirely of material of a first fluid phase, and divides this calculated difference by the difference between an area produced when the sample zone consists entirely of material of the first fluid phase and an area produced when the sample zone consists entirely of material of a second fluid phase. The result is the volume fraction. 9 figs.

  3. Hollow-fiber flow field-flow fractionation and multi-angle light scattering investigation of the size, shape and metal-release of silver nanoparticles in aqueous medium for nano-risk assessment.

    PubMed

    Marassi, Valentina; Casolari, Sonia; Roda, Barbara; Zattoni, Andrea; Reschiglian, Pierluigi; Panzavolta, Silvia; Tofail, Syed A M; Ortelli, Simona; Delpivo, Camilla; Blosi, Magda; Costa, Anna Luisa

    2015-03-15

    Due to the increased use of silver nanoparticles in industrial-scale manufacturing, consumer products and nanomedicine, reliable measurement of properties such as the size, shape and distribution of these nanoparticles in aqueous medium is critical. These properties affect both functional properties and biological impacts, especially in quantifying associated risks and identifying suitable risk-mediation strategies. The feasibility of on-line coupling of a fractionation technique such as hollow-fiber flow field-flow fractionation (HF5) with a light scattering technique such as MALS (multi-angle light scattering) is investigated here for this purpose. Data obtained from such a fractionation technique and its combination with MALS have been compared with those from more conventional but often complementary techniques, e.g. transmission electron microscopy, dynamic light scattering, atomic absorption spectroscopy, and X-ray fluorescence. The combination of fractionation and multi-angle light scattering techniques has been found to offer an ideal, hyphenated methodology for simultaneous size separation and characterization of silver nanoparticles. The hydrodynamic radii determined by the fractionation technique can be conveniently correlated to the mean diameters determined by multi-angle light scattering, and reliable information on particle morphology in aqueous dispersion has been obtained. The ability to separate silver ions (Ag(+)) from silver nanoparticles (AgNPs) via membrane filtration during size analysis is an added advantage in obtaining quantitative insight into their risk potential. Most importantly, the methodology developed in this article can potentially be extended to similar characterization of metal-based nanoparticles when studying their functional effectiveness and hazard potential. Copyright © 2014 Elsevier B.V. All rights reserved.

  4. Carbon dioxide fluid-flow modeling and injectivity calculations

    USGS Publications Warehouse

    Burke, Lauri

    2011-01-01

    These results were used to classify subsurface formations into three permeability classifications for the probabilistic calculations of storage efficiency and containment risk of the U.S. Geological Survey geologic carbon sequestration assessment methodology. This methodology is currently in use to determine the total carbon dioxide containment capacity of the onshore and State waters areas of the United States.

  5. The Effects of Core Composition on Iron Isotope Fractionation During Planetary Differentiation

    NASA Astrophysics Data System (ADS)

    Elardo, S. M.; Shahar, A.; Caracas, R.; Mock, T. D.; Sio, C. K. I.

    2018-05-01

    High pressure and temperature isotope exchange experiments and density functional theory calculations show how the composition of planetary cores affects the fractionation of iron isotopes during planetary differentiation.

  6. A distorted-wave methodology for electron-ion impact excitation - Calculation for two-electron ions

    NASA Technical Reports Server (NTRS)

    Bhatia, A. K.; Temkin, A.

    1977-01-01

    A distorted-wave program is being developed for calculating the excitation of few-electron ions by electron impact. It uses the exchange approximation to represent the exact initial-state wavefunction in the T-matrix expression for the excitation amplitude. The program has been implemented for excitation of the 2^1,3(S,P) states of two-electron ions. Some of the astrophysical applications of these cross sections, as well as the motivation and requirements of the calculational methodology, are discussed.

  7. Evolutionary Computing Methods for Spectral Retrieval

    NASA Technical Reports Server (NTRS)

    Terrile, Richard; Fink, Wolfgang; Huntsberger, Terrance; Lee, Seugwon; Tisdale, Edwin; VonAllmen, Paul; Tinetti, Geivanna

    2009-01-01

    A methodology for processing spectral images to retrieve information on underlying physical, chemical, and/or biological phenomena is based on evolutionary and related computational methods implemented in software. In a typical case, the solution (the information that one seeks to retrieve) consists of parameters of a mathematical model that represents one or more of the phenomena of interest. The methodology was developed for the initial purpose of retrieving the desired information from spectral image data acquired by remote-sensing instruments aimed at planets (including the Earth). Examples of information desired in such applications include trace gas concentrations, temperature profiles, surface types, day/night fractions, cloud/aerosol fractions, seasons, and viewing angles. The methodology is also potentially useful for retrieving information on chemical and/or biological hazards in terrestrial settings. In this methodology, one utilizes an iterative process that minimizes a fitness function indicative of the degree of dissimilarity between observed and synthetic spectral and angular data. The evolutionary computing methods that lie at the heart of this process yield a population of solutions (sets of the desired parameters) within an accuracy represented by a fitness-function value specified by the user. The evolutionary computing methods (ECM) used in this methodology are Genetic Algorithms and Simulated Annealing, both of which are well-established optimization techniques and have also been described in previous NASA Tech Briefs articles. These are embedded in a conceptual framework, represented in the architecture of the implementing software, that enables automatic retrieval of spectral and angular data and analysis of the retrieved solutions for uniqueness.

  8. Variability estimation of urban wastewater biodegradable fractions by respirometry.

    PubMed

    Lagarde, Fabienne; Tusseau-Vuillemin, Marie-Hélène; Lessard, Paul; Héduit, Alain; Dutrop, François; Mouchel, Jean-Marie

    2005-11-01

    This paper presents a methodology for assessing the variability of biodegradable chemical oxygen demand (COD) fractions in urban wastewaters. Thirteen raw wastewater samples from combined and separate sewers feeding the same plant were characterised, and two optimisation procedures were applied in order to evaluate the variability in biodegradable fractions and related kinetic parameters. Through an overall optimisation on all the samples, a unique kinetic parameter set was obtained with a three-substrate model including an adsorption stage. This method required powerful numerical treatment, but improved the identifiability problem compared to the usual sample-to-sample optimisation. The results showed that the fractionation of samples collected in the combined sewer was much more variable (standard deviation of 70% of the mean values) than the fractionation of the separate sewer samples, and the slowly biodegradable COD fraction was the most significant fraction (45% of the total COD on average). Because these samples were collected under various rain conditions, the standard deviations obtained here on the combined sewer biodegradable fractions could be used as a first estimation of the variability of this type of sewer system.

  9. The evolving intergalactic medium - The uncollapsed baryon fraction in a cold dark matter universe

    NASA Technical Reports Server (NTRS)

    Shapiro, Paul R.; Giroux, Mark L.; Babul, Arif

    1991-01-01

    The time-varying density of the intergalactic medium (IGM) is calculated by coupling detailed numerical calculations of the thermal and ionization balance and radiative transfer in a uniform IGM of H and He to the linearized equations for the growth of density fluctuations in both gases and a dark component in a cold dark matter universe. The IGM density is identified with the collapsed baryon fraction. It is found that even if the IGM is never reheated, a significant fraction of the baryons remain uncollapsed at redshifts of four. If instead the collapsed fraction releases enough ionizing radiation or thermal energy to reionize the IGM by z greater than four as required by the Gunn-Peterson (GP) constraint, the uncollapsed fraction at z of four is even higher. The known quasar distribution is insufficient to supply the ionizing radiation necessary to satisfy the GP constraint in this case and, if stars are instead responsible, a substantial metallicity must have been produced by z of four.

  10. Identification of miRNA-103 in the Cellular Fraction of Human Peripheral Blood as a Potential Biomarker for Malignant Mesothelioma – A Pilot Study

    PubMed Central

    Weber, Daniel G.; Johnen, Georg; Bryk, Oleksandr; Jöckel, Karl-Heinz; Brüning, Thomas

    2012-01-01

    Background To date, no biomarkers with reasonable sensitivity and specificity for the early detection of malignant mesothelioma have been described. The use of microRNAs (miRNAs) as minimally-invasive biomarkers has opened new opportunities for the diagnosis of cancer, primarily because they exhibit tumor-specific expression profiles and have been commonly observed in blood of both cancer patients and healthy controls. The aim of this pilot study was to identify miRNAs in the cellular fraction of human peripheral blood as potential novel biomarkers for the detection of malignant mesothelioma. Methodology/Principal Findings Using oligonucleotide microarrays for biomarker identification the miRNA levels in the cellular fraction of human peripheral blood of mesothelioma patients and asbestos-exposed controls were analyzed. Using a threefold expression change in combination with a significance level of p<0.05, miR-103 was identified as a potential biomarker for malignant mesothelioma. Quantitative real-time PCR (qRT-PCR) was used for validation of miR-103 in 23 malignant mesothelioma patients, 17 asbestos-exposed controls, and 25 controls from the general population. For discrimination of mesothelioma patients from asbestos-exposed controls a sensitivity of 83% and a specificity of 71% were calculated, and for discrimination of mesothelioma patients from the general population a sensitivity of 78% and a specificity of 76%. Conclusions/Significance The results of this pilot study show that miR-103 is characterized by a promising sensitivity and specificity and might be a potential minimally-invasive biomarker for the diagnosis of mesothelioma. In addition, our results support the concept of using the cellular fraction of human blood for biomarker discovery. However, for early detection of malignant mesothelioma the feasibility of miR-103 alone or in combination with other biomarkers needs to be analyzed in a prospective study. PMID:22253921
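
    The quoted sensitivity and specificity follow from the standard confusion-matrix definitions; below is a minimal sketch with an illustrative confusion matrix consistent with the quoted rates (the exact counts are assumptions, not the study's data).

```python
# Hedged sketch of the standard diagnostic-performance definitions.
def sensitivity_specificity(tp, fn, tn, fp):
    """Sensitivity = TP/(TP+FN); specificity = TN/(TN+FP)."""
    return tp / (tp + fn), tn / (tn + fp)


# e.g. 19 of 23 mesothelioma patients testing positive and 12 of 17 asbestos-exposed
# controls testing negative (illustrative counts):
sens, spec = sensitivity_specificity(tp=19, fn=4, tn=12, fp=5)  # ~0.83, ~0.71
```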

  11. Using force-based adaptive resolution simulations to calculate solvation free energies of amino acid sidechain analogues

    NASA Astrophysics Data System (ADS)

    Fiorentini, Raffaele; Kremer, Kurt; Potestio, Raffaello; Fogarty, Aoife C.

    2017-06-01

    The calculation of free energy differences is a crucial step in the characterization and understanding of the physical properties of biological molecules. In the development of efficient methods to compute these quantities, a promising strategy is that of employing a dual-resolution representation of the solvent, specifically using an accurate model in the proximity of a molecule of interest and a simplified description elsewhere. One such concurrent multi-resolution simulation method is the Adaptive Resolution Scheme (AdResS), in which particles smoothly change their resolution on-the-fly as they move between different subregions. Before using this approach in the context of free energy calculations, however, it is necessary to make sure that the dual-resolution treatment of the solvent does not cause undesired effects on the computed quantities. Here, we show how AdResS can be used to calculate solvation free energies of small polar solutes using Thermodynamic Integration (TI). We discuss how the potential-energy-based TI approach combines with the force-based AdResS methodology, in which no global Hamiltonian is defined. The AdResS free energy values agree with those calculated from fully atomistic simulations to within a fraction of kBT. This is true even for small atomistic regions whose size is on the order of the correlation length, or when the properties of the coarse-grained region are extremely different from those of the atomistic region. These accurate free energy calculations are possible because AdResS allows the sampling of solvation shell configurations which are equivalent to those of fully atomistic simulations. The results of the present work thus demonstrate the viability of the use of adaptive resolution simulation methods to perform free energy calculations and pave the way for large-scale applications where a substantial computational gain can be attained.
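
    The thermodynamic integration step itself is independent of the resolution scheme; a minimal sketch assuming pre-computed ensemble averages of dU/dlambda at a few coupling values (placeholder numbers) is given below.

```python
# Minimal sketch of thermodynamic integration: the free energy difference as the
# integral of <dU/dlambda> over the coupling parameter, approximated here with the
# trapezoidal rule on assumed sample data.
import numpy as np


def ti_free_energy(lambdas, mean_dU_dlambda):
    """Delta F = integral_0^1 <dU/dlambda> dlambda, via the trapezoidal rule."""
    return np.trapz(mean_dU_dlambda, lambdas)


# Placeholder ensemble averages at a few lambda windows (kJ/mol, assumed values):
lambdas = np.array([0.0, 0.25, 0.5, 0.75, 1.0])
dU = np.array([-40.0, -32.0, -20.0, -9.0, -2.0])
delta_F = ti_free_energy(lambdas, dU)
```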

  12. A holistic high-throughput screening framework for biofuel feedstock assessment that characterises variations in soluble sugars and cell wall composition in Sorghum bicolor

    PubMed Central

    2013-01-01

    Background A major hindrance to the development of high yielding biofuel feedstocks is the ability to rapidly assess large populations for fermentable sugar yields. Whilst recent advances have outlined methods for the rapid assessment of biomass saccharification efficiency, none take into account the total biomass, or the soluble sugar fraction of the plant. Here we present a holistic high-throughput methodology for assessing sweet Sorghum bicolor feedstocks at 10 days post-anthesis for total fermentable sugar yields including stalk biomass, soluble sugar concentrations, and cell wall saccharification efficiency. Results A mathematical method for assessing whole S. bicolor stalks using the fourth internode from the base of the plant proved to be an effective high-throughput strategy for assessing stalk biomass, soluble sugar concentrations, and cell wall composition and allowed calculation of total stalk fermentable sugars. A high-throughput method for measuring soluble sucrose, glucose, and fructose using partial least squares (PLS) modelling of juice Fourier transform infrared (FTIR) spectra was developed. The PLS prediction was shown to be highly accurate with each sugar attaining a coefficient of determination (R²) of 0.99 with a root mean squared error of prediction (RMSEP) of 11.93, 5.52, and 3.23 mM for sucrose, glucose, and fructose, respectively, which constitutes an error of <4% in each case. The sugar PLS model correlated well with gas chromatography–mass spectrometry (GC-MS) and brix measures. Similarly, a high-throughput method for predicting enzymatic cell wall digestibility using PLS modelling of FTIR spectra obtained from S. bicolor bagasse was developed. The PLS prediction was shown to be accurate with an R² of 0.94 and RMSEP of 0.64 μg mg DW⁻¹ h⁻¹. Conclusions This methodology has been demonstrated as an efficient and effective way to screen large biofuel feedstock populations for biomass, soluble sugar concentrations, and cell wall digestibility simultaneously, allowing a total fermentable yield calculation. It unifies and simplifies previous screening methodologies to produce a holistic assessment of biofuel feedstock potential. PMID:24365407
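
    A minimal sketch of the PLS calibration and RMSEP step, using scikit-learn's PLSRegression on random placeholder spectra; the component count and array shapes are assumptions, not the study's settings.

```python
# Hedged sketch of a PLS-on-FTIR calibration with an RMSEP check on held-out samples.
import numpy as np
from sklearn.cross_decomposition import PLSRegression

rng = np.random.default_rng(0)
X_train = rng.normal(size=(60, 400))   # 60 FTIR spectra, 400 wavenumber points (toy)
y_train = rng.normal(size=(60, 3))     # sucrose, glucose, fructose concentrations (toy)
X_test = rng.normal(size=(20, 400))
y_test = rng.normal(size=(20, 3))

pls = PLSRegression(n_components=10)   # component count is an assumption
pls.fit(X_train, y_train)
y_pred = pls.predict(X_test)

# Root mean squared error of prediction (RMSEP) per analyte:
rmsep = np.sqrt(((y_test - y_pred) ** 2).mean(axis=0))
```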

  13. Modeling nuclear field shift isotope fractionation in crystals

    NASA Astrophysics Data System (ADS)

    Schauble, E. A.

    2013-12-01

    In this study nuclear field shift fractionations in solids (and chemically similar liquids) are estimated using calibrated density functional theory calculations. The nuclear field shift effect is a potential driver of mass-independent isotope fractionation (1,2), especially for elements with high atomic number such as Hg, Tl and U. This effect is caused by the different shapes and volumes of isotopic nuclei, and their interactions with electronic structures and energies. Nuclear field shift isotope fractionations can be estimated with first-principles methods, but the calculations are computationally difficult, limiting most theoretical studies so far to small gas-phase molecules and molecular clusters. Many natural materials of interest are more complex, and it is important to develop ways to estimate field shift effects that can be applied to minerals, solutions, biomolecules, and mineral-solution interfaces. Plane-wave density functional theory, in combination with the projector augmented wave method (DFT-PAW), is much more readily adapted to complex materials than the relativistic all-electron calculations that have been the focus of most previous studies. DFT-PAW is a particularly effective tool for studying crystals with periodic boundary conditions, and may also be incorporated into molecular dynamics simulations of solutions and other disordered phases. Initial calibrations of DFT-PAW calculations against high-level all-electron models of field shift fractionation suggest that this method may be broadly applicable to a variety of elements and types of materials. In addition, the close relationship between the isomer shift of Mössbauer spectroscopy and the nuclear field shift isotope effect makes it possible, at least in principle, to estimate the volume component of field shift fractionations in some species that are too complex even for DFT-PAW models, so long as there is a Mössbauer isotope for the element of interest. Initial results will be presented for calculations of liquid-vapor fractionation of cadmium and mercury, which indicate an affinity for heavy isotopes in the liquid phase. In the case of mercury the results match well with recent experiments. Mössbauer-calibrated fractionation factors will also be presented for tin and platinum species. Platinum isotope behaviour in metals appears to be particularly interesting, with very distinct isotope partitioning behaviour for iron-rich alloys relative to pure platinum metal. References: 1) Bigeleisen, J. (1996) J. Am. Chem. Soc. 118, 3676-3680. 2) Nomura, M., Higuchi, N., Fujii, Y. (1996) J. Am. Chem. Soc. 118, 9127-9130.

  14. AGR-3/4 Irradiation Test Predictions using PARFUME

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Skerjanc, William Frances; Collin, Blaise Paul

    2016-03-01

    PARFUME, a fuel performance modeling code used for high temperature gas reactors, was used to model the AGR-3/4 irradiation test using as-run physics and thermal hydraulics data. The AGR-3/4 test is the combined third and fourth planned irradiations of the Advanced Gas Reactor (AGR) Fuel Development and Qualification Program. The AGR-3/4 test train consists of twelve separate and independently controlled and monitored capsules. Each capsule contains four compacts filled with both uranium oxycarbide (UCO) unaltered “driver” fuel particles and UCO designed-to-fail (DTF) fuel particles. The DTF fraction was specified to be 1×10^-2. This report documents the calculations performed to predict the failure probability of TRISO-coated fuel particles during the AGR-3/4 experiment. In addition, this report documents the calculated source term from both the driver fuel and DTF particles. The calculations include the modeling of the AGR-3/4 irradiation that occurred from December 2011 to April 2014 in the Advanced Test Reactor (ATR) over a total of ten ATR cycles including seven normal cycles, one low power cycle, one unplanned outage cycle, and one Power Axial Locator Mechanism cycle. Results show that failure probabilities are predicted to be low, resulting in zero fuel particle failures per capsule. The primary fuel particle failure mechanism occurred as a result of localized stresses induced by the calculated IPyC cracking. Assuming 1,872 driver fuel particles per compact, the failure probability calculated by PARFUME leads to no predicted particle failure in the AGR-3/4 driver fuel. In addition, the release fractions of the fission products Ag, Cs, and Sr were calculated to vary depending on capsule location and irradiation temperature. The maximum release fraction of Ag occurs in Capsule 7, reaching up to 56% for the driver fuel and 100% for the DTF fuel. The release fractions of the other two fission products, Cs and Sr, are much smaller and in most cases less than 1% for the driver fuel. The notable exception occurs in Capsule 7, where the release fractions for Cs and Sr reach up to 0.73% and 2.4%, respectively, for the driver fuel. For the DTF fuel in Capsule 7, the release fractions for Cs and Sr are estimated to be 100% and 5%, respectively.

  15. Self-registering spread-spectrum barcode method

    DOEpatents

    Cummings, Eric B.; Even Jr., William R.

    2004-11-09

    A novel spread spectrum barcode methodology is disclosed that allows a barcode to be read in its entirety even when a significant fraction or majority of the barcode is obscured. The barcode methodology makes use of registration or clocking information that is distributed along with the encoded user data across the barcode image. This registration information allows for the barcode image to be corrected for imaging distortion such as zoom, rotation, tilt, curvature, and perspective.

  16. Modeling Radioactive Decay Chains with Branching Fraction Uncertainties

    DTIC Science & Technology

    2013-03-01

    moments methods with transmutation matrices. Uncertainty from both half-lives and branching fractions is carried through these calculations by Monte... moment methods, method for sampling from normal distributions for half-life uncertainty, and use of transmutation matrices were leveraged. This... distributions for half-life and branching fraction uncertainties, building decay chains and generating the transmutation matrix (T-matrix
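
    A minimal sketch of the transmutation-matrix idea for a short hypothetical chain; the half-lives and branching fraction below are placeholders, and uncertainty propagation would repeat the calculation with re-sampled values (e.g. drawn from normal distributions).

```python
# Hedged sketch: for a chain A -> B (branching b) -> C, with A also decaying directly
# to C with fraction (1 - b), solve dN/dt = T N using the matrix exponential.
import numpy as np
from scipy.linalg import expm

T_HALF_A, T_HALF_B = 10.0, 2.0      # assumed half-lives (hours)
BRANCH_A_TO_B = 0.7                 # assumed branching fraction A -> B
lam_a, lam_b = np.log(2) / T_HALF_A, np.log(2) / T_HALF_B

# Transmutation matrix for the nuclide vector [A, B, C]:
T = np.array([
    [-lam_a,                        0.0,     0.0],
    [BRANCH_A_TO_B * lam_a,        -lam_b,   0.0],
    [(1.0 - BRANCH_A_TO_B) * lam_a, lam_b,   0.0],
])

n0 = np.array([1.0e6, 0.0, 0.0])    # initial atom populations
n_t = expm(T * 5.0) @ n0            # populations after 5 hours
```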

  17. Relationship between intracellular ice formation in oocytes of the mouse and Xenopus and the physical state of the external medium--a revisit.

    PubMed

    Mazur, Peter; Kleinhans, F W

    2008-02-01

    We have previously reported that intracellular ice formation (IIF) in mouse oocytes suspended in glycerol/PBS solutions or ethylene glycol (EG)/PBS solutions and rapidly cooled to -50 degrees C or below occurs at temperatures where a critical fraction of the external water remains unfrozen [P. Mazur, S. Seki, I.L. Pinn, F.W. Kleinhans, K. Edashige, Extra- and intracellular ice formation in mouse oocytes, Cryobiology 51 (2005) 29-53; P. Mazur, I.L. Pinn, F.W. Kleinhans, The temperature of intracellular ice formation in mouse oocytes vs. the unfrozen fraction at that temperature, Cryobiology 54 (2007) 223-233]. For mouse oocytes in PBS or glycerol/PBS that fraction is 0.06; for oocytes in EG that fraction was calculated to be 0.13, more than double. The fractions unfrozen are computed from ternary phase diagrams. In the previous publication, we used the EG data of Woods et al. [E.J. Woods, M.A.J. Zieger, D.Y. Gao, J.K. Critser, Equations for obtaining melting points for the ternary system ethylene glycol/sodium chloride/Water and their application to cryopreservation., Cryobiology 38 (1999) 403-407]. Since then, we have determined that ternary phase diagrams for EG/NaCl/water synthesized by summing binary phase data for EG/water NaCl/water gives substantially different curves, which seem more realistic [F.W. Kleinhans, P. Mazur, Comparison of actual vs. synthesized ternary phase diagrams for solutes of cryobiological interest, Cryobiology 54 (2007) 212-222]. Unfrozen fractions at the temperatures of IIF computed from these synthesized phase diagrams are about half of those calculated from the Woods et al. data, and are in close agreement with the computations for glycerol; i.e., IIF occurs when about 92-94% of the external water is frozen. A parallel paper was published by Guenther et al. [J.F. Guenther, S. Seki, F.W. Kleinhans, K. Edashige, D.M. Roberts, P. Mazur, Extra-and intra-cellular ice formation in Stage I and II Xenopus laevis oocytes, Cryobiology 52 (2006) 401-416] on IIF in oocytes of the frog Xenopus. It too examined whether the temperatures of IIF were related to the unfrozen fractions at those temperatures. It also used the Woods et al. ternary phase data to calculate the unfrozen fractions for EG solutions. As reported here, once again the values of these unfrozen fractions are substantially different from those calculated using synthesized phase diagrams. With the latter, the unfrozen fractions at IIF become very similar for EG and glycerol.

  18. First-principles investigation of vanadium isotope fractionation in solution and during adsorption

    NASA Astrophysics Data System (ADS)

    Wu, Fei; Qin, Tian; Li, Xuefang; Liu, Yun; Huang, Jen-How; Wu, Zhongqing; Huang, Fang

    2015-09-01

    Equilibrium fractionation factors of vanadium (V) isotopes among tri- (V(III)), tetra- (V(IV)) and penta-valent (V(V)) inorganic V species in aqueous systems and during adsorption of V(V) to goethite are estimated using first-principles calculations. Our results highlight the dependence of V isotope fractionation on valence state and chemical binding environment. The heavy V isotope (51V) is enriched in the main V species following the sequence V(III) < V(IV) < V(V). According to our calculations, at 25 °C the equilibrium isotope fractionation factor between [V5+O2(OH)2]- and [V4+O(H2O)5]2+ (ln α_V(V)-V(IV)) is 3.9‰, and the equilibrium isotope fractionation factor between [V5+O2(OH)2]- and [V3+(OH)3(H2O)3] (ln α_V(V)-V(III)) is 6.4‰. In addition, the isotope fractionation between the +5 valence species [V5+O2(OH)2]- and [V5+O2(H2O)4]+ is 1.5‰ at 25 °C, which is caused by their different bond lengths and coordination numbers (CN). Theoretical calculations also show that the light V isotope (50V) is preferentially adsorbed on the surface of goethite. Our work reveals that V isotopes can be significantly fractionated in the Earth's surface environments due to redox reactions and mineral adsorption, indicating that V isotope data can be used to monitor toxic V(V) attenuation processes through reduction or adsorption in natural water systems. In addition, a simple mass balance model suggests that the V isotope composition of seawater might vary with changes in ambient oxygen levels. Thus our theoretical investigations imply a promising future for V isotopes as a potential new paleo-redox tracer.

  19. Microscopic potential fluctuations in Si-doped AlGaN epitaxial layers with various AlN molar fractions and Si concentrations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kurai, Satoshi, E-mail: kurai@yamaguchi-u.ac.jp; Yamada, Yoichi; Miyake, Hideto

    2016-01-14

    Nanoscopic potential fluctuations of Si-doped AlGaN epitaxial layers with the AlN molar fraction varying from 0.42 to 0.95 and of Si-doped Al0.61Ga0.39N epitaxial layers with Si concentrations of 3.0–37 × 10^17 cm^-3 were investigated by cathodoluminescence (CL) imaging combined with scanning electron microscopy. The spot CL linewidths of the AlGaN epitaxial layers broadened as the AlN molar fraction was increased to 0.7, and then narrowed at higher AlN molar fractions. The experimental linewidths were compared with the theoretical prediction from the alloy broadening model. The trends displayed by our spot CL linewidths were consistent with calculated results at AlN molar fractions of less than about 0.60, but the spot CL linewidths were markedly broader than the calculated linewidths at higher AlN molar fractions. The dependence of the difference between the spot CL linewidth and the calculated line broadening on AlN molar fraction was found to be similar to the dependence of reported S values, indicating that vacancy clusters acted as the origin of the additional line broadening at high AlN molar fractions. The spot CL linewidths of Al0.61Ga0.39N epitaxial layers with the same Al concentration and different Si concentrations were nearly constant over the entire Si concentration range tested. From the comparison of reported S values, the increase of V_Al did not contribute to the linewidth broadening, unlike the case of the V_Al clusters.

  20. Study of water based nanofluid flows in annular tubes using numerical simulation and sensitivity analysis

    NASA Astrophysics Data System (ADS)

    Siadaty, Moein; Kazazi, Mohsen

    2018-04-01

    Convective heat transfer, entropy generation and pressure drop of two water-based nanofluids (Cu-water and Al2O3-water) in horizontal annular tubes are scrutinized by means of computational fluid dynamics (CFD), response surface methodology (RSM) and sensitivity analysis. First, a central composite design is used to set up a series of numerical experiments over the diameter ratio, length-to-diameter ratio, Reynolds number and solid volume fraction. Then, CFD is used to calculate the Nusselt number, Euler number and entropy generation. After that, RSM is applied to fit second-order polynomials to the responses. Finally, sensitivity analysis is conducted to examine the effects of the above-mentioned parameters inside the tube. In total, 62 different cases are examined. The CFD results show that Cu-water and Al2O3-water have the highest and lowest heat transfer rates, respectively. In addition, analysis of variance indicates that an increase in solid volume fraction increases the dimensionless pressure drop for Al2O3-water. Moreover, it has a significant negative effect on the Cu-water Nusselt number and an insignificant effect on the Cu-water Euler number. Analysis of the Bejan number indicates that frictional and thermal entropy generation are the dominant irreversibilities in the Al2O3-water and Cu-water flows, respectively. Sensitivity analysis indicates that the sensitivity of the dimensionless pressure drop to tube length for Cu-water is independent of its diameter ratio at different Reynolds numbers.
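
    A minimal sketch of the RSM fit and a finite-difference sensitivity check, assuming placeholder design points and responses; scikit-learn is used here for convenience and is not stated in the abstract.

```python
# Hedged sketch: fit a second-order polynomial response surface to CFD outputs over
# four coded design factors, then probe the local sensitivity to one factor.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.preprocessing import PolynomialFeatures

rng = np.random.default_rng(1)
X = rng.uniform(size=(62, 4))        # 62 design points, 4 coded factors (toy data)
y = rng.uniform(size=62)             # e.g. Nusselt number from CFD (placeholder)

quad = PolynomialFeatures(degree=2, include_bias=True)
model = LinearRegression(fit_intercept=False).fit(quad.fit_transform(X), y)


def sensitivity(model, quad, x0, j, h=1e-3):
    """Central-difference sensitivity of the fitted surface to factor j at point x0."""
    xp, xm = x0.copy(), x0.copy()
    xp[j] += h
    xm[j] -= h
    fp = model.predict(quad.transform(xp.reshape(1, -1)))[0]
    fm = model.predict(quad.transform(xm.reshape(1, -1)))[0]
    return (fp - fm) / (2 * h)


s_reynolds = sensitivity(model, quad, X[0], j=2)  # sensitivity to the third factor
```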

  1. The multi-scattering model for calculations of positron spatial distribution in the multilayer stacks, useful for conventional positron measurements

    NASA Astrophysics Data System (ADS)

    Dryzek, Jerzy; Siemek, Krzysztof

    2013-08-01

    The spatial distribution of positrons emitted from radioactive isotopes into stacks or layered samples is the subject of this report. It was found that Monte Carlo (MC) simulations using the GEANT4 code are not able to correctly describe the experimental positron fractions in stacks. A mathematical model is proposed for calculating the implantation profile, or the positron fractions in the separate layers or foils that make up a stack. The model takes into account only two processes, i.e., positron absorption and backscattering at interfaces. The mathematical formulas were implemented in a computer program called LYS-1 (layers profile analysis). The theoretical predictions of the model are in good agreement with the results of the MC simulations for a semi-infinite sample. Experimental verification of the model was performed on symmetrical and non-symmetrical stacks of different foils. Good agreement between the experimental and calculated fractions of positrons in the components of a stack was achieved. The experimental implantation profile obtained using the depth-scanning positron implantation technique is also very well described by the theoretical profile obtained within the proposed model. The LYS-1 program also allows us to calculate the fraction of positrons which annihilate in the source, which can be useful in positron spectroscopy.
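
    For orientation, a much simpler textbook approximation than the paper's model is the single-exponential implantation profile, which ignores the backscattering-at-interfaces correction; the empirical absorption coefficient used below (alpha ≈ 16·rho/E_max^1.4) is a commonly quoted form and is an assumption here, as are the example stack parameters.

```python
# Hedged sketch: fractions of positrons stopping in successive layers of a
# single-material stack, using the exponential implantation profile
# P(z) = alpha * exp(-alpha * z). Backscattering and multi-material stacks are ignored.
import math


def layer_fractions(thicknesses_cm, density_g_cm3, e_max_mev):
    """Stopping fraction per layer for foils of one material, listed source-side first."""
    alpha = 16.0 * density_g_cm3 / e_max_mev ** 1.4   # empirical coefficient, cm^-1
    fractions, z = [], 0.0
    for t in thicknesses_cm:
        fractions.append(math.exp(-alpha * z) - math.exp(-alpha * (z + t)))
        z += t
    return fractions  # the remainder, 1 - sum(fractions), passes through the stack


# e.g. a stack of five 25-um aluminium foils with a 22Na source (E_max = 0.545 MeV):
fracs = layer_fractions([25e-4] * 5, density_g_cm3=2.70, e_max_mev=0.545)
```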

  2. Spectral fractionation detection of gold nanorod contrast agents using optical coherence tomography

    PubMed Central

    Jia, Yali; Liu, Gangjun; Gordon, Andrew Y.; Gao, Simon S.; Pechauer, Alex D.; Stoddard, Jonathan; McGill, Trevor J.; Jayagopal, Ashwath; Huang, David

    2015-01-01

    We demonstrate the proof of concept of a novel Fourier-domain optical coherence tomography contrast mechanism using gold nanorod contrast agents and a spectral fractionation processing technique. The methodology detects the spectral shift of the backscattered light from the nanorods by comparing the ratio between the short and long wavelength halves of the optical coherence tomography signal intensity. Spectral fractionation further divides the halves into sub-bands to improve spectral contrast and suppress speckle noise. Herein, we show that this technique can detect gold nanorods in intralipid tissue phantoms. Furthermore, cellular labeling by gold nanorods was demonstrated using retinal pigment epithelial cells in vitro. PMID:25836459
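
    A minimal sketch of the band-ratio idea, assuming a raw spectral interferogram per A-line; real processing would add background subtraction, k-linearization and the sub-band division described above, and which half corresponds to short versus long wavelengths depends on the acquisition order.

```python
# Hedged sketch: split a spectral-domain OCT fringe into two spectral halves,
# reconstruct a depth profile from each, and compare their intensities.
import numpy as np


def band_ratio(spectral_fringe):
    """Short-/long-wavelength intensity ratio per depth bin from one A-line spectrum."""
    n = len(spectral_fringe) // 2
    short_band, long_band = spectral_fringe[:n], spectral_fringe[n:2 * n]
    a_short = np.abs(np.fft.fft(short_band * np.hanning(n)))
    a_long = np.abs(np.fft.fft(long_band * np.hanning(n)))
    return a_short / (a_long + 1e-12)


# Toy usage with a synthetic fringe:
k = np.arange(1024)
ratio = band_ratio(np.cos(2 * np.pi * 0.05 * k))
```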

  3. Methodological reporting of randomized trials in five leading Chinese nursing journals.

    PubMed

    Shi, Chunhu; Tian, Jinhui; Ren, Dan; Wei, Hongli; Zhang, Lihuan; Wang, Quan; Yang, Kehu

    2014-01-01

    Randomized controlled trials (RCTs) are not always well reported, especially in terms of their methodological descriptions. This study aimed to investigate the adherence of methodological reporting complying with CONSORT and explore associated trial level variables in the Chinese nursing care field. In June 2012, we identified RCTs published in five leading Chinese nursing journals and included trials with details of randomized methods. The quality of methodological reporting was measured through the methods section of the CONSORT checklist and the overall CONSORT methodological items score was calculated and expressed as a percentage. Meanwhile, we hypothesized that some general and methodological characteristics were associated with reporting quality and conducted a regression with these data to explore the correlation. The descriptive and regression statistics were calculated via SPSS 13.0. In total, 680 RCTs were included. The overall CONSORT methodological items score was 6.34 ± 0.97 (Mean ± SD). No RCT reported descriptions and changes in "trial design," changes in "outcomes" and "implementation," or descriptions of the similarity of interventions for "blinding." Poor reporting was found in detailing the "settings of participants" (13.1%), "type of randomization sequence generation" (1.8%), calculation methods of "sample size" (0.4%), explanation of any interim analyses and stopping guidelines for "sample size" (0.3%), "allocation concealment mechanism" (0.3%), additional analyses in "statistical methods" (2.1%), and targeted subjects and methods of "blinding" (5.9%). More than 50% of trials described randomization sequence generation, the eligibility criteria of "participants," "interventions," and definitions of the "outcomes" and "statistical methods." The regression analysis found that publication year and ITT analysis were weakly associated with CONSORT score. The completeness of methodological reporting of RCTs in the Chinese nursing care field is poor, especially with regard to the reporting of trial design, changes in outcomes, sample size calculation, allocation concealment, blinding, and statistical methods.

  4. 40 CFR 98.247 - Records that must be retained.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... Tier 4 Calculation Methodology in § 98.37. (b) If you comply with the mass balance methodology in § 98... with § 98.243(c)(4). (2) Start and end times and calculated carbon contents for time periods when off... determining carbon content of feedstock or product. (3) A part of the monitoring plan required under § 98.3(g...

  5. Decimals, Denominators, Demons, Calculators, and Connections

    ERIC Educational Resources Information Center

    Sparrow, Len; Swan, Paul

    2005-01-01

    The authors provide activities for overcoming some fraction misconceptions using calculators specially designed for learners in primary years. The writers advocate use of the calculator as a way to engage children in thinking about mathematics. By engaging with a calculator as part of mathematics learning, children are learning about and using the…

  6. Tomographic imaging of non-local media based on space-fractional diffusion models

    NASA Astrophysics Data System (ADS)

    Buonocore, Salvatore; Semperlotti, Fabio

    2018-06-01

    We investigate a generalized tomographic imaging framework applicable to a class of inhomogeneous media characterized by non-local diffusive energy transport. Under these conditions, the transport mechanism is well described by fractional-order continuum models capable of capturing anomalous diffusion that would otherwise remain undetected when using traditional integer-order models. Although the underlying idea of the proposed framework is applicable to any transport mechanism, the case of fractional heat conduction is presented as a specific example to illustrate the methodology. By using numerical simulations, we show how complex inhomogeneous media involving non-local transport can be successfully imaged if fractional order models are used. In particular, the results show that properly recognizing and accounting for the fractional character of the host medium not only allows increased resolution to be achieved but, in the case of strong and spatially distributed non-locality, is the only viable approach to a successful reconstruction.

  7. Multiplicative noise removal through fractional order tv-based model and fast numerical schemes for its approximation

    NASA Astrophysics Data System (ADS)

    Ullah, Asmat; Chen, Wen; Khan, Mushtaq Ahmad

    2017-07-01

    This paper introduces a fractional-order total variation (FOTV) based model with three different weights in the fractional-order derivative definition for multiplicative noise removal. The fractional-order Euler-Lagrange equation, which is a highly non-linear partial differential equation (PDE), is obtained by minimization of the energy functional for image restoration. Two numerical schemes, namely an iterative scheme based on dual theory and the majorization-minimization algorithm (MMA), are used. To improve the restoration results, we opt for an adaptive parameter selection procedure for the proposed model, applying a trial-and-error method. We report numerical simulations which show the validity and state-of-the-art performance of the fractional-order model in terms of visual improvement as well as an increase in the peak signal-to-noise ratio compared with corresponding methods. Numerical experiments also demonstrate that the MMA-based methodology performs slightly better than the iterative scheme.

  8. A glimpse beneath Antarctic sea ice: observation of platelet-layer thickness and ice-volume fraction with multi-frequency EM

    NASA Astrophysics Data System (ADS)

    Hendricks, S.; Hoppmann, M.; Hunkeler, P. A.; Kalscheuer, T.; Gerdes, R.

    2015-12-01

    In Antarctica, ice crystals (platelets) form and grow in supercooled waters below ice shelves. These platelets rise and accumulate beneath nearby sea ice to form a several-meter-thick sub-ice platelet layer. This special ice type is a unique habitat, influences sea-ice mass and energy balance, and its volume can be interpreted as an indicator for ice-ocean interactions. Although progress has been made in determining and understanding its spatio-temporal variability based on point measurements, an investigation of this phenomenon on a larger scale remains a challenge due to logistical constraints and a lack of suitable methodology. In the present study, we applied a laterally constrained Marquardt-Levenberg inversion to a unique multi-frequency electromagnetic (EM) induction sounding dataset obtained on the ice-shelf influenced fast-ice regime of Atka Bay, eastern Weddell Sea. We adapted the inversion algorithm to incorporate a sensor-specific signal bias, and confirmed the reliability of the algorithm by performing a sensitivity study using synthetic data. We inverted the field data for sea-ice and sub-ice platelet-layer thickness and electrical conductivity, and calculated ice-volume fractions from platelet-layer conductivities using Archie's Law. The thickness results agreed well with drill-hole validation datasets within the uncertainty range, and the ice-volume fraction also yielded plausible results. Our findings imply that multi-frequency EM induction sounding is a suitable approach to efficiently map sea-ice and platelet-layer properties. However, we emphasize that the successful application of this technique requires a break with traditional EM sensor calibration strategies due to the need for absolute calibration with respect to a physical forward model.
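
    The ice-volume fraction step mentioned above reduces to inverting Archie's Law for the seawater-filled pore fraction of the platelet layer. A minimal sketch, assuming a typical cementation exponent (m = 1.5) and illustrative conductivities rather than the study's values:

```python
def ice_volume_fraction(sigma_layer, sigma_seawater, m=1.5):
    """Ice-volume fraction of a sub-ice platelet layer from Archie's Law.

    Archie's Law: sigma_layer = sigma_seawater * phi**m, with phi the
    seawater (pore) volume fraction; the ice-volume fraction is 1 - phi.
    The exponent m = 1.5 is an assumed, typical value.
    """
    phi = (sigma_layer / sigma_seawater) ** (1.0 / m)
    return 1.0 - phi

# Example: inverted platelet-layer conductivity 1.2 S/m against seawater at 2.7 S/m
print(ice_volume_fraction(1.2, 2.7))   # ~0.42
```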

  9. Predicting relationship between magnetostriction and applied field of magnetostrictive composites

    NASA Astrophysics Data System (ADS)

    Guan, Xinchun; Dong, Xufeng; Ou, Jinping

    2008-03-01

    Taking the demagnetization effect into consideration, a model for calculating the magnetostriction of a single particle under an applied field is first built. Then, treating the particle magnetostriction as an eigenstrain and using the Eshelby equivalent inclusion and Mori-Tanaka methods, an approach for calculating the average magnetostriction of the composites under any applied field, up to saturation, is studied. Results calculated with this approach indicate that the saturation magnetostriction of magnetostrictive composites increases with increasing particle aspect ratio and particle volume fraction and with decreasing Young's modulus of the matrix, and that the influence of the applied field on the magnetostriction of the composites becomes more significant at larger particle volume fractions or aspect ratios.

  10. Phase-field simulations of coherent precipitate morphologies and coarsening kinetics

    NASA Astrophysics Data System (ADS)

    Vaithyanathan, Venugopalan

    2002-09-01

    The primary aim of this research is to enhance the fundamental understanding of coherent precipitation reactions in advanced metallic alloys. The emphasis is on a particular class of precipitation reactions which result in ordered intermetallic precipitates embedded in a disordered matrix. These precipitation reactions underlie the development of high-temperature Ni-base superalloys and ultra-light aluminum alloys. The phase-field approach, which has emerged as the method of choice for modeling microstructure evolution, is employed for this research with the focus on factors that control the precipitate morphologies and coarsening kinetics, such as precipitate volume fractions and lattice mismatch between precipitates and matrix. Two types of alloy systems are considered. The first involves L12-ordered precipitates in a disordered cubic matrix, in an attempt to model the gamma' precipitates in Ni-base superalloys and delta' precipitates in Al-Li alloys. The effect of volume fraction on coarsening kinetics of gamma' precipitates was investigated using two-dimensional (2D) computer simulations. With increasing volume fraction, larger fractions of precipitates were found to have smaller aspect ratios in the late stages of coarsening, and the precipitate size distributions became wider and more positively skewed. The most interesting result was associated with the effect of volume fraction on the coarsening rate constant. The coarsening rate constant as a function of volume fraction, extracted from the cubic growth law of the average half-edge length, was found to exhibit three distinct regimes: anomalous behavior, or decreasing rate constant with volume fraction, at small volume fractions (≲20%); volume-fraction-independent or constant behavior for intermediate volume fractions (~20-50%); and the normal behavior, or increasing rate constant with volume fraction, for large volume fractions (≳50%). The second alloy system considered was Al-Cu with the focus on understanding precipitation of metastable tetragonal theta'-Al2Cu in a cubic Al solid solution matrix. In collaboration with Chris Wolverton at Ford Motor Company, a multiscale model, which involves a novel combination of first-principles atomistic calculations with a mesoscale phase-field microstructure model, was developed. Reliable energetics in the form of bulk free energy, interfacial energy and parameters for calculating the elastic energy were obtained using accurate first-principles calculations. (Abstract shortened by UMI.)

  11. A nephron-based model of the kidneys for macro-to-micro α-particle dosimetry

    NASA Astrophysics Data System (ADS)

    Hobbs, Robert F.; Song, Hong; Huso, David L.; Sundel, Margaret H.; Sgouros, George

    2012-07-01

    Targeted α-particle therapy is a promising treatment modality for cancer. Due to the short path-length of α-particles, the potential efficacy and toxicity of these agents is best evaluated by microscale dosimetry calculations instead of whole-organ, absorbed fraction-based dosimetry. Yet time-integrated activity (TIA), the necessary input for dosimetry, can still only be quantified reliably at the organ or macroscopic level. We describe a nephron- and cellular-based kidney dosimetry model for α-particle radiopharmaceutical therapy, more suited to the short range and high linear energy transfer of α-particle emitters, which takes as input kidney or cortex TIA and through a macro to micro model-based methodology assigns TIA to micro-level kidney substructures. We apply a geometrical model to provide nephron-level S-values for a range of isotopes allowing for pre-clinical and clinical applications according to the medical internal radiation dosimetry (MIRD) schema. We assume that the relationship between whole-organ TIA and TIA apportioned to microscale substructures as measured in an appropriate pre-clinical mammalian model also applies to the human. In both, the pre-clinical and the human model, microscale substructures are described as a collection of simple geometrical shapes akin to those used in the Cristy-Eckerman phantoms for normal organs. Anatomical parameters are taken from the literature for a human model, while murine parameters are measured ex vivo. The murine histological slides also provide the data for volume of occupancy of the different compartments of the nephron in the kidney: glomerulus versus proximal tubule versus distal tubule. Monte Carlo simulations are run with activity placed in the different nephron compartments for several α-particle emitters currently under investigation in radiopharmaceutical therapy. The S-values were calculated for the α-emitters and their descendants between the different nephron compartments for both the human and murine models. The renal cortex and medulla S-values were also calculated and the results compared to traditional absorbed fraction calculations. The nephron model enables a more optimal implementation of treatment and is a critical step in understanding toxicity for human translation of targeted α-particle therapy. The S-values established here will enable a MIRD-type application of α-particle dosimetry for α-emitters, i.e. measuring the TIA in the kidney (or renal cortex) will provide meaningful and accurate nephron-level dosimetry.
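
    The final step described above, turning compartment-level time-integrated activities into absorbed doses via S-values, follows the usual MIRD matrix form D(target) = Σ_source TIA(source) · S(target ← source). The sketch below uses hypothetical S-values and activities purely to show the bookkeeping; the actual nephron S-values are those tabulated in the study.

```python
import numpy as np

compartments = ["glomerulus", "proximal_tubule", "distal_tubule"]

# Hypothetical S-value matrix in Gy per (Bq s); rows = targets, columns = sources.
S = np.array([
    [5.0e-12, 1.0e-13, 2.0e-14],
    [1.0e-13, 4.0e-12, 1.5e-13],
    [2.0e-14, 1.5e-13, 3.5e-12],
])

tia = np.array([2.0e9, 6.0e9, 1.0e9])     # time-integrated activity per source (Bq s)

dose = S @ tia                             # absorbed dose per target compartment (Gy)
for name, d in zip(compartments, dose):
    print(f"{name:16s} {d:.4f} Gy")
```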

  12. Development of the voxel computational phantoms of pediatric patients and their application to organ dose assessment

    NASA Astrophysics Data System (ADS)

    Lee, Choonik

    A series of realistic voxel computational phantoms of pediatric patients were developed and then used for the radiation risk assessment for various exposure scenarios. The high-resolution computed tomographic images of live patients were utilized for the development of the five voxel phantoms of pediatric patients: 9-month male, 4-year female, 8-year female, 11-year male, and 14-year male. The phantoms were first developed as head and torso phantoms and then extended into whole body phantoms by utilizing computed tomographic images of a healthy adult volunteer. The whole body phantom series was modified to match the most recent reference anthropometric data reported by the International Commission on Radiological Protection. The phantoms, named the University of Florida series B, are the first complete set of pediatric voxel phantoms having reference organ masses and total heights. As part of the dosimetry study, an investigation of skeletal tissue dosimetry methods was performed for a better understanding of the radiation dose to the active bone marrow and bone endosteum. All of the currently available methodologies were inter-compared and benchmarked with the paired-image radiation transport model. The dosimetric characteristics of the phantoms were investigated by using Monte Carlo simulation of broad parallel external photon beams in anterior-posterior, posterior-anterior, left lateral, right lateral, rotational, and isotropic geometries. Organ dose conversion coefficients were calculated for extensive photon energies and compared with the conventional stylized pediatric phantoms of Oak Ridge National Laboratory. The multi-slice helical computed tomography exams were simulated using a Monte Carlo simulation code for various exam protocols: head, chest, abdomen, pelvis, and chest-abdomen-pelvis studies. The results provided realistic estimates of the effective doses for frequently used protocols in pediatric radiology and are crucial for understanding the radiation risks of patients undergoing computed tomography. Finally, nuclear medicine simulations were performed by calculating specific absorbed fractions for multiple target-source organ pairs via Monte Carlo simulations. Specific absorbed fractions were calculated for both photons and electrons so that they can be used to calculate radionuclide S-values. All of the results were tabulated for future use, and an example dose assessment was performed for selected nuclides administered in nuclear medicine.

  13. Analytical Energy Dispersive X-Ray Fluorescence Measurements with a Scanty Amounts of Plant and Soil Materials

    NASA Astrophysics Data System (ADS)

    Mittal, R.; Rao, P.; Kaur, P.

    2018-01-01

    Elemental evaluations in scanty powdered material have been made using energy dispersive X-ray fluorescence (EDXRF) measurements, for which formulations along with a specific procedure for sample target preparation have been developed. Fractional amount evaluation involves a series of steps: (i) collection of elemental characteristic X-ray counts in EDXRF spectra recorded with different weights of material, (ii) a check for linearity between X-ray counts and material weights, (iii) calculation of elemental fractions from the linear fit, and (iv) linear fitting of the calculated fractions against sample weight and extrapolation to zero weight. The elemental fractions at zero weight are thus free from material self-absorption effects for incident and emitted photons. The analytical procedure, after verification with known synthetic samples of the macro-nutrients potassium and calcium, was used for wheat plant/soil samples obtained from a pot experiment.
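
    Step (iv) above, fitting the calculated fractions against sample weight and extrapolating to zero weight, is a plain linear regression. A minimal sketch with invented numbers:

```python
import numpy as np

weights_mg = np.array([20.0, 40.0, 60.0, 80.0, 100.0])           # sample weights
k_fraction = np.array([0.0310, 0.0295, 0.0281, 0.0268, 0.0252])  # apparent K fractions

slope, intercept = np.polyfit(weights_mg, k_fraction, 1)         # linear fit
print(f"K fraction extrapolated to zero weight: {intercept:.4f}")  # self-absorption-free value
```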

  14. Effective delayed neutron fraction and prompt neutron lifetime of Tehran research reactor mixed-core.

    PubMed

    Lashkari, A; Khalafi, H; Kazeminejad, H

    2013-05-01

    In this work, kinetic parameters of Tehran research reactor (TRR) mixed cores have been calculated. The mixed core configurations are made by replacement of the low enriched uranium control fuel elements with highly enriched uranium control fuel elements in the reference core. The MTR_PC package, a nuclear reactor analysis tool, is used to perform the analysis. Simulations were carried out to compute the effective delayed neutron fraction and prompt neutron lifetime. Calculation of kinetic parameters is necessary for reactivity and power excursion transient analysis. The results of this research show that the effective delayed neutron fraction decreases and the prompt neutron lifetime increases with fuel burn-up. Also, by increasing the number of highly enriched uranium control fuel elements in the reference core, the prompt neutron lifetime increases, but the effective delayed neutron fraction does not show any considerable change.

  15. Effective delayed neutron fraction and prompt neutron lifetime of Tehran research reactor mixed-core

    PubMed Central

    Lashkari, A.; Khalafi, H.; Kazeminejad, H.

    2013-01-01

    In this work, kinetic parameters of Tehran research reactor (TRR) mixed cores have been calculated. The mixed core configurations are made by replacement of the low enriched uranium control fuel elements with highly enriched uranium control fuel elements in the reference core. The MTR_PC package, a nuclear reactor analysis tool, is used to perform the analysis. Simulations were carried out to compute the effective delayed neutron fraction and prompt neutron lifetime. Calculation of kinetic parameters is necessary for reactivity and power excursion transient analysis. The results of this research show that the effective delayed neutron fraction decreases and the prompt neutron lifetime increases with fuel burn-up. Also, by increasing the number of highly enriched uranium control fuel elements in the reference core, the prompt neutron lifetime increases, but the effective delayed neutron fraction does not show any considerable change. PMID:24976672

  16. Oxygen Measurements in Liposome Encapsulated Hemoglobin

    NASA Astrophysics Data System (ADS)

    Phiri, Joshua Benjamin

    Liposome encapsulated hemoglobins (LEHs) are of current interest as blood substitutes. An analytical methodology for rapid non-invasive measurements of oxygen in artificial oxygen carriers is examined. High resolution optical absorption spectra are calculated by means of a one-dimensional diffusion approximation. The encapsulated hemoglobin is prepared from fresh defibrinated bovine blood. Liposomes are prepared from hydrogenated soy phosphatidylcholine (HSPC), cholesterol and dicetylphosphate using a bath sonication method. An integrating sphere spectrophotometer is employed for diffuse optics measurements. Data are collected using an automated data acquisition system employing lock-in amplifiers. The concentrations of hemoglobin derivatives are evaluated from the corresponding extinction coefficients using the numerical technique of singular value decomposition, and verification of the results is done using Monte Carlo simulations. In situ measurements are required for the determination of hemoglobin derivatives because most encapsulation methods invariably lead to the formation of methemoglobin, a nonfunctional form of hemoglobin. The methods employed in this work lead to high resolution absorption spectra of oxyhemoglobin and other derivatives in red blood cells and liposome encapsulated hemoglobin (LEH). The analysis using the singular value decomposition method offers a quantitative means of calculating the fractions of oxyhemoglobin and other hemoglobin derivatives in LEH samples. The analytical methods developed in this work will become even more useful when production of LEH as a blood substitute is scaled up to large volumes.
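
    The singular-value-decomposition step described above amounts to a linear least-squares unmixing of the measured spectrum against the extinction coefficients of the hemoglobin derivatives. The sketch below uses invented extinction coefficients and a synthetic mixture; numpy's lstsq solves the system through the SVD.

```python
import numpy as np

wavelengths_nm = [500.0, 540.0, 560.0, 576.0, 600.0]
E = np.array([                 # columns: HbO2, deoxy-Hb, metHb (illustrative values)
    [ 4.5,  5.0,  9.0],
    [14.5, 11.0,  6.5],
    [ 8.5, 13.5,  4.0],
    [15.5, 10.0,  3.5],
    [ 1.0,  3.0,  3.0],
])
measured = E @ np.array([0.6, 0.3, 0.1])          # synthetic mixed-absorbance spectrum

conc, *_ = np.linalg.lstsq(E, measured, rcond=None)
fractions = conc / conc.sum()
print(dict(zip(["HbO2", "Hb", "metHb"], fractions.round(3))))   # recovers 0.6/0.3/0.1
```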

  17. Computational prediction of Mg-isotope fractionation between aqueous [Mg(OH2)6]2+ and brucite

    NASA Astrophysics Data System (ADS)

    Colla, Christopher A.; Casey, William H.; Ohlin, C. André

    2018-04-01

    The fractionation factor in the magnesium-isotope fractionation between aqueous solutions of magnesium and brucite changes sign with increasing temperature, as uncovered by recent experiments. To understand this behavior, the Reduced Partition Function Ratios and isotopic fractionation factors (Δ26/24Mg_brucite-Mg(aq)) are calculated using molecular models of aqueous [Mg(OH2)6]2+ and the mineral brucite at increasing levels of density functional theory. The calculations were carried out on the [Mg(OH2)6]2+·12H2O cluster, along with different Pauling-bond-strength-conserving models of the mineral lattice of brucite. Three conclusions were reached: (i) all levels of theory overestimate bond distances in the aqua ion complex relative to Tutton's salts; (ii) the calculations predict that brucite at 298.15 K is always enriched in the heavy isotope, in contrast with experimental observations; (iii) the temperature dependencies of Wimpenny et al. (2014) and Li et al. (2014) could only be achieved by fixing the bond distances in the [Mg(OH2)6]2+·12H2O cluster to values close to those observed in crystals that trap the hydrated ion.

  18. Time-domain comparisons of power law attenuation in causal and noncausal time-fractional wave equations

    PubMed Central

    Zhao, Xiaofeng; McGough, Robert J.

    2016-01-01

    The attenuation of ultrasound propagating in human tissue follows a power law with respect to frequency that is modeled by several different causal and noncausal fractional partial differential equations. To demonstrate some of the similarities and differences that are observed in three related time-fractional partial differential equations, time-domain Green's functions are calculated numerically for the power law wave equation, the Szabo wave equation, and for the Caputo wave equation. These Green's functions are evaluated for water with a power law exponent of y = 2, breast with a power law exponent of y = 1.5, and liver with a power law exponent of y = 1.139. Simulation results show that the noncausal features of the numerically calculated time-domain response are only evident very close to the source and that these causal and noncausal time-domain Green's functions converge to the same result away from the source. When noncausal time-domain Green's functions are convolved with a short pulse, no evidence of noncausal behavior remains in the time-domain, which suggests that these causal and noncausal time-fractional models are equally effective for these numerical calculations. PMID:27250193
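
    The attenuation law underlying all three wave equations above is the frequency power law α(f) = α0·|f|^y. A small sketch, with placeholder α0 values and the exponents quoted in the abstract:

```python
import numpy as np

def attenuation(f_hz, alpha0, y):
    """Power-law attenuation coefficient alpha(f) = alpha0 * |f|**y."""
    return alpha0 * np.abs(f_hz) ** y

f = 3.0e6                                            # 3 MHz
for tissue, (alpha0, y) in {"water": (2.5e-14, 2.0),     # alpha0 values are placeholders
                            "breast": (8.0e-11, 1.5),
                            "liver": (4.0e-9, 1.139)}.items():
    print(f"{tissue:7s} alpha = {attenuation(f, alpha0, y):.3f} Np/m")
```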

  19. Safety analysts training

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bolton, P.

    The purpose of this task was to support ESH-3 in providing Airborne Release Fraction and Respirable Fraction training to safety analysts at LANL who perform accident analysis, hazard analysis, safety analysis, and/or risk assessments at nuclear facilities. The task included preparation of materials for and the conduct of two 3-day training courses covering the following topics: safety analysis process; calculation model; aerosol physics concepts for safety analysis; and overview of empirically derived airborne release fractions and respirable fractions.

  20. Copula based prediction models: an application to an aortic regurgitation study

    PubMed Central

    Kumar, Pranesh; Shoukri, Mohamed M

    2007-01-01

    Background: An important issue in prediction modeling of multivariate data is the measure of dependence structure. The use of Pearson's correlation as a dependence measure has several pitfalls and hence application of regression prediction models based on this correlation may not be an appropriate methodology. As an alternative, a copula based methodology for prediction modeling and an algorithm to simulate data are proposed. Methods: The method consists of introducing copulas as an alternative to the correlation coefficient commonly used as a measure of dependence. An algorithm based on the marginal distributions of random variables is applied to construct the Archimedean copulas. Monte Carlo simulations are carried out to replicate datasets, estimate prediction model parameters and validate them using Lin's concordance measure. Results: We have carried out a correlation-based regression analysis on data from 20 patients aged 17–82 years on pre-operative and post-operative ejection fractions after surgery and estimated the prediction model: Post-operative ejection fraction = -0.0658 + 0.8403 (Pre-operative ejection fraction); p = 0.0008; 95% confidence interval of the slope coefficient (0.3998, 1.2808). From the exploratory data analysis, it is noted that both the pre-operative and post-operative ejection fraction measurements have slight departures from symmetry and are skewed to the left. It is also noted that the measurements tend to be widely spread and have shorter tails compared with a normal distribution. Therefore predictions made from the correlation-based model corresponding to the pre-operative ejection fraction measurements in the lower range may not be accurate. Further, it is found that the best-approximating marginal distributions of pre-operative and post-operative ejection fractions (using q-q plots) are gamma distributions. The copula-based prediction model is estimated as: Post-operative ejection fraction = -0.0933 + 0.8907 × (Pre-operative ejection fraction); p = 0.00008; 95% confidence interval for the slope coefficient (0.4810, 1.3003). The predicted post-operative ejection fractions from the two models differ considerably in the lower range of pre-operative ejection measurements, and the prediction errors of the copula model are smaller. To validate the copula methodology we have re-sampled with replacement fifty independent bootstrap samples and have estimated concordance statistics of 0.7722 (p = 0.0224) for the copula model and 0.7237 (p = 0.0604) for the correlation model. The predicted and observed measurements are concordant for both models. The estimates of accuracy components are 0.9233 and 0.8654 for the copula and correlation models, respectively. Conclusion: Copula-based prediction modeling is demonstrated to be an appropriate alternative to conventional correlation-based prediction modeling since correlation-based prediction models are not appropriate for modeling the dependence in populations with asymmetrical tails. The proposed copula-based prediction model has been validated using the independent bootstrap samples. PMID:17573974
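
    The validation statistic quoted above, Lin's concordance, can be computed directly from predicted and observed values. A minimal sketch using the copula-model regression coefficients reported in the abstract and invented patient values:

```python
import numpy as np

def lins_ccc(x, y):
    """Lin's concordance correlation coefficient between two measurement series."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    sxy = np.cov(x, y, bias=True)[0, 1]
    return 2.0 * sxy / (x.var() + y.var() + (x.mean() - y.mean()) ** 2)

pre = np.array([0.35, 0.45, 0.55, 0.60, 0.70])            # hypothetical pre-operative EF
post_observed = np.array([0.22, 0.31, 0.41, 0.43, 0.54])  # hypothetical post-operative EF
post_predicted = -0.0933 + 0.8907 * pre                   # copula-model equation from the study

print(round(lins_ccc(post_predicted, post_observed), 3))
```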

  1. A computational method for estimating the dosimetric effect of intra-fraction motion on step-and-shoot IMRT and compensator plans

    NASA Astrophysics Data System (ADS)

    Waghorn, Ben J.; Shah, Amish P.; Ngwa, Wilfred; Meeks, Sanford L.; Moore, Joseph A.; Siebers, Jeffrey V.; Langen, Katja M.

    2010-07-01

    Intra-fraction organ motion during intensity-modulated radiation therapy (IMRT) treatment can cause differences between the planned and the delivered dose distribution. To investigate the extent of these dosimetric changes, a computational model was developed and validated. The computational method allows for calculation of the rigid motion perturbed three-dimensional dose distribution in the CT volume and therefore a dose volume histogram-based assessment of the dosimetric impact of intra-fraction motion on a rigidly moving body. The method was developed and validated for both step-and-shoot IMRT and solid compensator IMRT treatment plans. For each segment (or beam), fluence maps were exported from the treatment planning system. Fluence maps were shifted according to the target position deduced from a motion track. These shifted, motion-encoded fluence maps were then re-imported into the treatment planning system and were used to calculate the motion-encoded dose distribution. To validate the accuracy of the motion-encoded dose distribution the treatment plan was delivered to a moving cylindrical phantom using a programmed four-dimensional motion phantom. Extended dose response (EDR-2) film was used to measure a planar dose distribution for comparison with the calculated motion-encoded distribution using a gamma index analysis (3% dose difference, 3 mm distance-to-agreement). A series of motion tracks incorporating both inter-beam step-function shifts and continuous sinusoidal motion were tested. The method was shown to accurately predict the film's dose distribution for all of the tested motion tracks, both for the step-and-shoot IMRT and compensator plans. The average gamma analysis pass rate for the measured dose distribution with respect to the calculated motion-encoded distribution was 98.3 ± 0.7%. For static delivery the average film-to-calculation pass rate was 98.7 ± 0.2%. In summary, a computational technique has been developed to calculate the dosimetric effect of intra-fraction motion. This technique has the potential to evaluate a given plan's sensitivity to anticipated organ motion. With knowledge of the organ's motion it can also be used as a tool to assess the impact of measured intra-fraction motion after dose delivery.
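
    The gamma analysis used above to compare the film measurement with the motion-encoded calculation combines a dose-difference criterion with a distance-to-agreement criterion. A simplified one-dimensional sketch (global 3%/3 mm, shared grid, synthetic profiles), not the clinical analysis software:

```python
import numpy as np

def gamma_pass_rate(dose_ref, dose_eval, x_mm, dose_tol=0.03, dta_mm=3.0):
    """Percentage of reference points with gamma <= 1 (global normalisation)."""
    norm = dose_tol * dose_ref.max()
    gammas = []
    for x_r, d_r in zip(x_mm, dose_ref):
        dist = (x_mm - x_r) / dta_mm                 # distance-to-agreement term
        diff = (dose_eval - d_r) / norm              # dose-difference term
        gammas.append(np.sqrt(dist**2 + diff**2).min())
    return 100.0 * np.mean(np.array(gammas) <= 1.0)

x = np.linspace(-50.0, 50.0, 201)
reference = np.exp(-x**2 / (2 * 15.0**2))                  # synthetic measured profile
evaluated = 1.01 * np.exp(-(x - 1.0)**2 / (2 * 15.0**2))   # slightly shifted calculation
print(f"{gamma_pass_rate(reference, evaluated, x):.1f} % pass rate")
```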

  2. Identification of Error Types in Preservice Teachers' Attempts to Create Fraction Story Problems for Specified Operations

    ERIC Educational Resources Information Center

    McAllister, Cheryl J.; Beaver, Cheryl

    2012-01-01

    The purpose of this research was to determine if recognizable error types exist in the work of preservice teachers required to create story problems for specific fraction operations. Students were given a particular single-operation fraction expression and asked to do the calculation and then create a story problem that would require the use of…

  3. Defining Product Intake Fraction to Quantify and Compare Exposure to Consumer Products.

    PubMed

    Jolliet, Olivier; Ernstoff, Alexi S; Csiszar, Susan A; Fantke, Peter

    2015-08-04

    There is a growing consciousness that exposure studies need to better cover near-field exposure associated with product use. To consistently and quantitatively compare human exposure to chemicals in consumer products, we introduce the concept of product intake fraction, defined as the fraction of a chemical within a product that is eventually taken in by the human population. This metric enables consistent comparison of exposures during consumer product use for different product-chemical combinations, exposure durations, exposure routes and pathways, and for other life cycle stages. We present example applications of the product intake fraction concept, for two chemicals in two personal care products and two chemicals encapsulated in two articles, showing how intakes of these chemicals can primarily occur during product use. We demonstrate the utility of the product intake fraction and its application modalities within life cycle assessment and risk assessment contexts. The product intake fraction helps to provide a clear interface between the life cycle inventory and impact assessment phases, to identify best-suited sentinel products and to calculate overall exposure to chemicals in consumer products, or back-calculate maximum allowable concentrations of substances inside products.

  4. Comparison of clinical semi-quantitative assessment of muscle fat infiltration with quantitative assessment using chemical shift-based water/fat separation in MR studies of the calf of post-menopausal women

    PubMed Central

    Nardo, Lorenzo; Karampinos, Dimitrios C.; Joseph, Gabby B.; Yap, Samuel P.; Baum, Thomas; Krug, Roland; Majumdar, Sharmila; Link, Thomas M.

    2013-01-01

    Objective The goal of this study was to compare the semi-quantitative Goutallier classification for fat infiltration with quantitative fat-fraction derived from a magnetic resonance imaging (MRI) chemical shift-based water/fat separation technique. Methods Sixty-two women (age 61±6 years), 27 of whom had diabetes, underwent MRI of the calf using a T1-weighted fast spin-echo sequence and a six-echo spoiled gradient-echo sequence at 3 T. Water/fat images and fat fraction maps were reconstructed using the IDEAL algorithm with T2* correction and a multi-peak model for the fat spectrum. Two radiologists scored fat infiltration on the T1-weighted images using the Goutallier classification in six muscle compartments. Spearman correlations between the Goutallier grades and the fat fraction were calculated; in addition, intra-observer and inter-observer agreement were calculated. Results A significant correlation between the clinical grading and the fat fraction values was found for all muscle compartments (P<0.0001, R values ranging from 0.79 to 0.88). Goutallier grades 0–4 had a fat fraction ranging from 3.5 to 19%. Intra-observer and inter-observer agreement values of 0.83 and 0.81 were calculated for the semi-quantitative grading. Conclusion Semi-quantitative grading of intramuscular fat and quantitative fat fraction were significantly correlated and both techniques had excellent reproducibility. However, the clinical grading was found to overestimate muscle fat. PMID:22411305

  5. Fractional order PIλ controller synthesis for steam turbine speed governing systems.

    PubMed

    Chen, Kai; Tang, Rongnian; Li, Chuang; Lu, Junguo

    2018-06-01

    The current state of the art in fractional-order stability theory offers little connection between time-domain analysis and frequency-domain synthesis. Existing tuning methodologies for fractional-order PIλDμ controllers do not always satisfy the given gain crossover frequency and phase margin simultaneously. To overcome the drawbacks of existing fractional-order controller synthesis, the synthesis of an optimal fractional-order PIλ controller for higher-order processes is proposed. According to the specified phase margin, the corresponding upper boundary of the gain crossover frequency and the stability surface in parameter space are obtained. Sweeping the order parameter over λ∈(0,2), the complete set of stabilizing controllers that guarantee both pre-specified frequency-domain characteristics can be collected. The optimal fractional-order PIλ controller is then applied to the speed governing systems of steam turbine generation units. Numerical simulation and hardware-in-the-loop simulation demonstrate the effectiveness and satisfactory closed-loop performance of the obtained fractional-order PIλ controller. Copyright © 2018 ISA. Published by Elsevier Ltd. All rights reserved.
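
    A minimal sketch of the frequency-domain quantities involved: a fractional-order PIλ controller C(s) = Kp + Ki/s^λ evaluated on the imaginary axis together with an assumed first-order-plus-dead-time plant, from which the gain-crossover frequency and phase margin are read off. The gains, order λ and plant are illustrative, not the tuned values from the paper.

```python
import numpy as np

def open_loop(w, Kp=2.0, Ki=1.5, lam=0.8, K=1.0, T=2.0, L=0.3):
    """C(s)*G(s) on s = jw for a fractional PI^lambda controller and a FOPDT plant."""
    s = 1j * w
    controller = Kp + Ki / s**lam                  # s**lam via the principal branch
    plant = K * np.exp(-L * s) / (T * s + 1.0)
    return controller * plant

w = np.logspace(-2, 2, 2000)
mag = np.abs(open_loop(w))
wc = w[np.argmin(np.abs(mag - 1.0))]               # gain-crossover frequency
pm = 180.0 + np.degrees(np.angle(open_loop(wc)))   # phase margin in degrees
print(f"w_c = {wc:.2f} rad/s, phase margin = {pm:.1f} deg")
```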

  6. Flow interaction with a flexible viscoelastic sheet

    NASA Astrophysics Data System (ADS)

    Shoele, Kourosh

    2017-11-01

    Many new engineered materials and almost all soft biological tissues are made up of heterogeneous multi-scale components with complex viscoelastic behavior. This implies that their macro constitutive relations cannot be modeled sufficiently with a typical integer-order viscoelastic relation and a more general model is required. Here, we study the flow-induced vibration of a viscoelastic sheet where a generalized fractional constitutive model is employed to represent the relation between the bending stress and the temporal response of the structure. A new method is proposed for the calculation of the convolution integral inside the fractal model and its computational benefits will be discussed. Using a coupled fluid-structure interaction (FSI) methodology based on the immersed boundary technique, dynamic fluttering modes of the structure as a result of the fluid force will be presented and the role of fractal viscoelasticity on the dynamics of the structure will be shown. Finally, it will be argued how the stress relaxation modifies the flow-induced oscillatory responses of this benchmark problem.
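
    The convolution integral in a fractional viscoelastic law is commonly discretised with Grünwald-Letnikov weights; the study proposes its own method, but the standard discretisation below illustrates what such a calculation involves. The signal and order are illustrative.

```python
import numpy as np

def gl_fractional_derivative(f, h, alpha):
    """Grunwald-Letnikov approximation of the order-alpha derivative of samples f."""
    n = len(f)
    w = np.empty(n)
    w[0] = 1.0
    for k in range(1, n):                        # recursive binomial weights
        w[k] = w[k - 1] * (1.0 - (alpha + 1.0) / k)
    out = np.empty(n)
    for i in range(n):
        out[i] = np.dot(w[:i + 1], f[i::-1]) / h**alpha
    return out

t = np.linspace(0.0, 2.0, 401)
d_half = gl_fractional_derivative(t, t[1] - t[0], 0.5)
print(d_half[-1])     # D^0.5 of f(t)=t at t=2 is 2*sqrt(2/pi) ~ 1.596
```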

  7. Proton exchange membrane fuel cell diagnosis by spectral characterization of the electrochemical noise

    NASA Astrophysics Data System (ADS)

    Maizia, R.; Dib, A.; Thomas, A.; Martemianov, S.

    2017-02-01

    Electrochemical noise analysis (ENA) has been performed for the diagnosis of a proton-exchange membrane fuel cell (PEMFC) under various operating conditions. Its interest lies in the possibility of non-invasive on-line diagnosis of a commercial fuel cell. A spectral analysis methodology has been developed and an evaluation of the stationarity of the signal has been proposed. It has been revealed that the spectral signature of the fuel cell is a linear slope with a fractional power dependence 1/f^α, where α = 2 for different relative humidities and current densities. Experimental results reveal that the electrochemical noise is sensitive to water management, especially under dry conditions. At RH_H2 = 20% and RH_air = 20%, spectral analysis shows a three-slope signature in the spectrum in the low-frequency range (f < 100 Hz). This result indicates that the power spectral density, calculated via FFT, can be used for the detection of an incorrect fuel cell water balance.
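
    The 1/f^α signature described above can be extracted by estimating the power spectral density (for example with Welch's method) and fitting the slope on a log-log scale. The synthetic Brownian-noise record below simply produces a spectrum with α close to 2 for illustration; it is not fuel-cell data.

```python
import numpy as np
from scipy.signal import welch

fs = 1000.0                                        # sampling frequency (Hz)
rng = np.random.default_rng(0)
signal = np.cumsum(rng.standard_normal(60_000))    # Brownian noise: PSD ~ 1/f**2

f, pxx = welch(signal, fs=fs, nperseg=4096)        # Welch PSD estimate
band = (f > 0.5) & (f < 100.0)                     # frequency band used for the fit
slope, _ = np.polyfit(np.log10(f[band]), np.log10(pxx[band]), 1)
print(f"estimated alpha ~ {-slope:.2f}")           # expected to be close to 2
```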

  8. Explorative solid-phase extraction (E-SPE) for accelerated microbial natural product discovery, dereplication, and purification.

    PubMed

    Månsson, Maria; Phipps, Richard K; Gram, Lone; Munro, Murray H G; Larsen, Thomas O; Nielsen, Kristian F

    2010-06-25

    Microbial natural products (NPs) cover a high chemical diversity, and in consequence extracts from microorganisms are often complex to analyze and purify. A distribution analysis of calculated pK(a) values from the 34390 records in Antibase2008 revealed that within pH 2-11, 44% of all included compounds had an acidic functionality, 17% a basic functionality, and 9% both. This showed a great potential for using ion-exchange chromatography as an integral part of the separation procedure, orthogonal to the classic reversed-phase strategy. Thus, we investigated the use of an "explorative solid-phase extraction" (E-SPE) protocol using SAX, Oasis MAX, SCX, and LH-20 columns for targeted exploitation of chemical functionalities. E-SPE provides a minimal number of fractions (15) for chemical and biological analyses and lends itself to development into a preparative-scale methodology. Overall, this allows fast extract prioritization, easier dereplication, mapping of biological activities, and formulation of a purification strategy.

  9. Some Dynamical Features of Molecular Fragmentation by Electrons and Swift Ions

    NASA Astrophysics Data System (ADS)

    Montenegro, E. C.; Sigaud, L.; Wolff, W.; Luna, H.; Natalia, Ferreira

    To date, the large majority of studies on molecular fragmentation by swift charged particles have been carried out using simple molecules, for which reliable Potential Energy Curves are available to interpret the measured fragmentation yields. For complex molecules the scenario is quite different and such guidance is not available, obscuring even a simple organization of the data which are currently obtained for a large variety of molecules of biological or technological interest. In this work we show that a general and relatively simple methodology can be used to obtain a broader picture of the fragmentation pattern of an arbitrary molecule. The electronic ionization or excitation cross section of a given molecular orbital, which is the first part of the fragmentation process, can be well scaled by a simple and general procedure at high projectile velocities. The fragmentation fractions arising from each molecular orbital can then be achieved by matching the calculated ionization with the measured fragmentation cross sections. Examples for Oxygen, Chlorodifluoromethane and Pyrimidine molecules are presented.

  10. Validation of model predictions of pore-scale fluid distributions during two-phase flow

    NASA Astrophysics Data System (ADS)

    Bultreys, Tom; Lin, Qingyang; Gao, Ying; Raeini, Ali Q.; AlRatrout, Ahmed; Bijeljic, Branko; Blunt, Martin J.

    2018-05-01

    Pore-scale two-phase flow modeling is an important technology to study a rock's relative permeability behavior. To investigate if these models are predictive, the calculated pore-scale fluid distributions which determine the relative permeability need to be validated. In this work, we introduce a methodology to quantitatively compare models to experimental fluid distributions in flow experiments visualized with microcomputed tomography. First, we analyzed five repeated drainage-imbibition experiments on a single sample. In these experiments, the exact fluid distributions were not fully repeatable on a pore-by-pore basis, while the global properties of the fluid distribution were. Then two fractional flow experiments were used to validate a quasistatic pore network model. The model correctly predicted the fluid present in more than 75% of pores and throats in drainage and imbibition. To quantify what this means for the relevant global properties of the fluid distribution, we compare the main flow paths and the connectivity across the different pore sizes in the modeled and experimental fluid distributions. These essential topology characteristics matched well for drainage simulations, but not for imbibition. This suggests that the pore-filling rules in the network model we used need to be improved to make reliable predictions of imbibition. The presented analysis illustrates the potential of our methodology to systematically and robustly test two-phase flow models to aid in model development and calibration.

  11. Incidence of Changes in Respiration-Induced Tumor Motion and Its Relationship With Respiratory Surrogates During Individual Treatment Fractions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Malinowski, Kathleen; Department of Radiation Oncology, University of Maryland School of Medicine, Baltimore, MD; McAvoy, Thomas J.

    2012-04-01

    Purpose: To determine how frequently (1) tumor motion and (2) the spatial relationship between tumor and respiratory surrogate markers change during a treatment fraction in lung and pancreas cancer patients. Methods and Materials: A Cyberknife Synchrony system radiographically localized the tumor and simultaneously tracked three respiratory surrogate markers fixed to a form-fitting vest. Data in 55 lung and 29 pancreas fractions were divided into successive 10-min blocks. Mean tumor positions and tumor position distributions were compared across 10-min blocks of data. Treatment margins were calculated from both 10 and 30 min of data. Partial least squares (PLS) regression models of tumor positions as a function of external surrogate marker positions were created from the first 10 min of data in each fraction; the incidence of significant PLS model degradation was used to assess changes in the spatial relationship between tumors and surrogate markers. Results: The absolute change in mean tumor position from first to third 10-min blocks was >5 mm in 13% and 7% of lung and pancreas cases, respectively. Superior-inferior and medial-lateral differences in mean tumor position were significantly associated with the lobe of lung. In 61% and 54% of lung and pancreas fractions, respectively, margins calculated from 30 min of data were larger than margins calculated from 10 min of data. The change in treatment margin magnitude for superior-inferior motion was >1 mm in 42% of lung and 45% of pancreas fractions. Significantly increasing tumor position prediction model error (mean ± standard deviation rates of change of 1.6 ± 2.5 mm per 10 min) over 30 min indicated tumor-surrogate relationship changes in 63% of fractions. Conclusions: Both tumor motion and the relationship between tumor and respiratory surrogate displacements change in most treatment fractions for patient in-room time of 30 min.

  12. Incidence of changes in respiration-induced tumor motion and its relationship with respiratory surrogates during individual treatment fractions.

    PubMed

    Malinowski, Kathleen; McAvoy, Thomas J; George, Rohini; Dietrich, Sonja; D'Souza, Warren D

    2012-04-01

    To determine how frequently (1) tumor motion and (2) the spatial relationship between tumor and respiratory surrogate markers change during a treatment fraction in lung and pancreas cancer patients. A Cyberknife Synchrony system radiographically localized the tumor and simultaneously tracked three respiratory surrogate markers fixed to a form-fitting vest. Data in 55 lung and 29 pancreas fractions were divided into successive 10-min blocks. Mean tumor positions and tumor position distributions were compared across 10-min blocks of data. Treatment margins were calculated from both 10 and 30 min of data. Partial least squares (PLS) regression models of tumor positions as a function of external surrogate marker positions were created from the first 10 min of data in each fraction; the incidence of significant PLS model degradation was used to assess changes in the spatial relationship between tumors and surrogate markers. The absolute change in mean tumor position from first to third 10-min blocks was >5 mm in 13% and 7% of lung and pancreas cases, respectively. Superior-inferior and medial-lateral differences in mean tumor position were significantly associated with the lobe of lung. In 61% and 54% of lung and pancreas fractions, respectively, margins calculated from 30 min of data were larger than margins calculated from 10 min of data. The change in treatment margin magnitude for superior-inferior motion was >1 mm in 42% of lung and 45% of pancreas fractions. Significantly increasing tumor position prediction model error (mean ± standard deviation rates of change of 1.6 ± 2.5 mm per 10 min) over 30 min indicated tumor-surrogate relationship changes in 63% of fractions. Both tumor motion and the relationship between tumor and respiratory surrogate displacements change in most treatment fractions for patient in-room time of 30 min. Copyright © 2012. Published by Elsevier Inc.
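
    The surrogate-to-tumor mapping described in the two records above is a partial least squares regression of tumor position on the external marker positions. A minimal sketch with synthetic motion traces standing in for the radiographic and surrogate data (scikit-learn's PLSRegression is used here as a convenient implementation, not the software from the study):

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression

rng = np.random.default_rng(1)
t = np.linspace(0.0, 600.0, 1200)                  # ~10 min of samples
# three surrogate markers: phase-shifted, noisy breathing traces
surrogates = np.column_stack([np.sin(2 * np.pi * t / 4.0 + p) for p in (0.0, 0.3, 0.6)])
surrogates += 0.05 * rng.standard_normal(surrogates.shape)
# superior-inferior tumor motion (mm), correlated with the surrogates
tumor_si = 6.0 * np.sin(2 * np.pi * t / 4.0 + 0.2) + 0.3 * rng.standard_normal(t.size)

model = PLSRegression(n_components=2).fit(surrogates, tumor_si)
predicted = model.predict(surrogates).ravel()
print(f"training RMSE: {np.sqrt(np.mean((predicted - tumor_si) ** 2)):.2f} mm")
```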

  13. Model-Based Radiation Dose Correction for Yttrium-90 Microsphere Treatment of Liver Tumors With Central Necrosis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Liu, Ching-Sheng; Department of Nuclear Medicine, Taipei Veterans General Hospital, Taipei, Taiwan; Lin, Ko-Han

    Purpose: The objectives of this study were to model and calculate the absorbed fraction φ of energy emitted from yttrium-90 (90Y) microsphere treatment of necrotic liver tumors. Methods and Materials: The tumor necrosis model was proposed for the calculation of φ over the spherical shell region. Two approaches, the semianalytic method and the probabilistic method, were adopted. In the former method, the range-energy relationship and the sampling of electron paths were applied to calculate the energy deposition within the target region, using the straight-ahead and continuous-slowing-down approximation (CSDA) method. In the latter method, the Monte Carlo PENELOPE code was used to verify results from the first method. Results: The fraction of energy, φ, absorbed from 90Y by a 1-cm-thick tumor shell from the microsphere distribution, calculated by CSDA with the complete beta spectrum, was 0.832 ± 0.001 and 0.833 ± 0.001 for smaller (r_T = 5 cm) and larger (r_T = 10 cm) tumors, where r_T and r_N denote the radii of the tumor and the necrotic core. The fraction absorbed depended mainly on the thickness of the tumor necrosis configuration, rather than on tumor necrosis size. The maximal absorbed fraction φ that occurred in tumors without central necrosis for each size of tumor was different: 0.950 ± 0.000 and 0.975 ± 0.000 for smaller (r_T = 5 cm) and larger (r_T = 10 cm) tumors, respectively (p < 0.0001). Conclusions: The tumor necrosis model was developed for dose calculation of 90Y microsphere treatment of hepatic tumors with central necrosis. With this model, important information is provided regarding the absorbed fraction applicable to clinical 90Y microsphere treatment.

  14. Identifying Glacial Meltwater in the Amundsen Sea, Antarctica

    NASA Astrophysics Data System (ADS)

    Biddle, L. C.; Heywood, K. J.; Jenkins, A.; Kaiser, J.

    2016-02-01

    Pine Island Glacier, located in the Amundsen Sea, is losing mass rapidly due to relatively warm ocean waters melting its ice shelf from below. The resulting increase in meltwater production may be the root of the freshening in the Ross Sea over the last 30 years. Tracing the meltwater travelling away from the ice sheets is important in order to identify the regions most affected by the increased input of this water type. We use water mass characteristics (temperature, salinity, O2 concentration) derived from 105 CTD casts during the Ocean2ice cruise on RRS James Clark Ross in January-March 2014 to calculate meltwater fractions north of Pine Island Glacier. The data show maximum meltwater fractions at the ice front of up to 2.4 % and a plume of meltwater travelling away from the ice front along the 1027.7 kg m-3 isopycnal. We investigate the reliability of these results and attach uncertainties to the measurements made to ascertain the most reliable method of meltwater calculation in the Amundsen Sea. Processes such as atmospheric interaction and biological activity also affect the calculated apparent meltwater fractions. We analyse their effects on the reliability of the calculated meltwater fractions across the region using a bulk mixed layer model based on the one-dimensional Price-Weller-Pinkel model (Price et al., 1986). The model includes sea ice, dissolved oxygen concentrations and a simple respiration model, forced by NCEP climatology and an initial linear mixing profile between Winter Water (WW) and Circumpolar Deep Water (CDW). The model mimics the seasonal cycle of mixed layer warming and freshening and simulates how increases in sea ice formation and the influx of slightly cooler Lower CDW impact on the apparent meltwater fractions. These processes could result in biased meltwater signatures across the eastern Amundsen Sea.
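
    A common way to compute such meltwater fractions from temperature, salinity and O2 is a linear endmember mixing calculation: each observed property is written as a fraction-weighted sum over Circumpolar Deep Water, Winter Water and glacial meltwater endmembers, with mass conservation as an extra equation, and the system is solved by least squares. The endmember values and the sample below are illustrative approximations, not those used in the study.

```python
import numpy as np

endmembers = np.array([
    #  CDW     WW     meltwater
    [  1.0,   -1.8,   -90.8],    # potential temperature (degC; meltwater value is effective)
    [ 34.62,  34.00,    0.0],    # salinity
    [178.0,  325.0,  1100.0],    # dissolved O2 (umol/kg)
    [  1.0,    1.0,     1.0],    # mass conservation
])

sample = np.array([-0.832, 34.10, 243.0, 1.0])       # one hypothetical CTD observation

fractions, *_ = np.linalg.lstsq(endmembers, sample, rcond=None)
print(dict(zip(["CDW", "WW", "meltwater"], (100.0 * fractions).round(2))))  # percent
```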

  15. Identifying glacial meltwater in the Amundsen Sea, Antarctica

    NASA Astrophysics Data System (ADS)

    Biddle, Louise; Heywood, Karen; Jenkins, Adrian; Kaiser, Jan

    2016-04-01

    Pine Island Glacier, located in the Amundsen Sea, is losing mass rapidly due to relatively warm ocean waters melting its ice shelf from below. The resulting increase in meltwater production may be the root of the freshening in the Ross Sea over the last 30 years. Tracing the meltwater travelling away from the ice sheets is important in order to identify the regions most affected by the increased input of this water type. We use water mass characteristics (temperature, salinity, O2 concentration) derived from 105 CTD casts during the Ocean2ice cruise on RRS James Clark Ross in January-March 2014 to calculate meltwater fractions north of Pine Island Glacier. The data show maximum meltwater fractions at the ice front of up to 2.4 % and a plume of meltwater travelling away from the ice front along the 1027.7 kg m-3 isopycnal. We investigate the reliability of these results and attach uncertainties to the measurements made to ascertain the most reliable method of meltwater calculation in the Amundsen Sea. Processes such as atmospheric interaction and biological activity also affect the calculated apparent meltwater fractions. We analyse their effects on the reliability of the calculated meltwater fractions across the region using a bulk mixed layer model based on the one-dimensional Price-Weller-Pinkel model (1986). The model includes sea ice, dissolved oxygen concentrations and a simple respiration model, forced by NCEP climatology and an initial linear mixing profile between Winter Water (WW) and Circumpolar Deep Water (CDW). The model mimics the seasonal cycle of mixed layer warming and freshening and simulates how increases in sea ice formation and the influx of slightly cooler Lower CDW impact on the apparent meltwater fractions. These processes could result in biased meltwater signatures across the eastern Amundsen Sea.

  16. Quantitative Analysis of First-Pass Contrast-Enhanced Myocardial Perfusion Multidetector CT Using a Patlak Plot Method and Extraction Fraction Correction During Adenosine Stress

    NASA Astrophysics Data System (ADS)

    Ichihara, Takashi; George, Richard T.; Silva, Caterina; Lima, Joao A. C.; Lardo, Albert C.

    2011-02-01

    The purpose of this study was to develop a quantitative method for myocardial blood flow (MBF) measurement that can be used to derive accurate myocardial perfusion measurements from dynamic multidetector computed tomography (MDCT) images by using a compartment model for calculating the first-order transfer constant (K1) with correction for the capillary transit extraction fraction (E). Six canine models of left anterior descending (LAD) artery stenosis were prepared and underwent first-pass contrast-enhanced MDCT perfusion imaging during adenosine infusion (0.14-0.21 mg/kg/min). K1, which is the first-order transfer constant from left ventricular (LV) blood to myocardium, was measured using the Patlak plot method applied to time-attenuation curve data of the LV blood pool and myocardium. The results were compared against microsphere MBF measurements, and the extraction fraction of contrast agent was calculated. K1 is related to the regional MBF as K1 = E·F, with E = 1 − exp(−PS/F), where PS is the permeability-surface area product and F is the myocardial flow. Based on the above relationship, a look-up table from K1 to MBF can be generated and Patlak plot-derived K1 values can be converted to the calculated MBF. The calculated MBF and microsphere MBF showed a strong linear association. The extraction fraction in dogs as a function of flow (F) was E = 1 − exp(−(0.2532·F + 0.7871)/F). Regional MBF can be measured accurately using the Patlak plot method based on a compartment model and look-up table with extraction fraction correction from K1 to MBF.
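
    The extraction-fraction correction above defines a monotonic relation K1(MBF) = E·MBF with E = 1 − exp(−(0.2532·MBF + 0.7871)/MBF), so the K1-to-MBF look-up table can be built and inverted numerically. A minimal sketch (units assumed to be mL/min/g):

```python
import numpy as np

flow = np.linspace(0.1, 6.0, 2000)                              # candidate MBF values
extraction = 1.0 - np.exp(-(0.2532 * flow + 0.7871) / flow)     # E(F) relation from the study
k1_table = extraction * flow                                     # K1 as a function of MBF

def mbf_from_k1(k1):
    """Invert the monotonic K1(MBF) relation by interpolation (the look-up table)."""
    return np.interp(k1, k1_table, flow)

print(mbf_from_k1(0.9))    # Patlak-derived K1 of 0.9 -> extraction-corrected MBF (~1.8)
```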

  17. 77 FR 60040 - Wage Methodology for the Temporary Non-Agricultural Employment H-2B Program; Delay of Effective Date

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-10-02

    ... Wage Rule revised the methodology by which the Department calculates the prevailing wages to be paid to... the Department calculates the prevailing wages to be paid to H-2B workers and United States (U.S... effect, it will supersede and make null the prevailing wage provisions at 20 CFR 655.10(b) of the...

  18. The influence of voxel size on atom probe tomography data.

    PubMed

    Torres, K L; Daniil, M; Willard, M A; Thompson, G B

    2011-05-01

    A methodology for determining the optimal voxel size for phase thresholding in nanostructured materials was developed using an atom simulator and a model system of a fixed two-phase composition and volume fraction. The voxel size range was banded by the atom count within each voxel. Some voxel edge lengths were found to be too large, resulting in an averaging of compositional fluctuations; others were too small with concomitant decreases in the signal-to-noise ratio for phase identification. The simulated methodology was then applied to the more complex experimentally determined data set collected from a (Co(0.95)Fe(0.05))(88)Zr(6)Hf(1)B(4)Cu(1) two-phase nanocomposite alloy to validate the approach. In this alloy, Zr and Hf segregated to an intergranular amorphous phase while Fe preferentially segregated to a crystalline phase during the isothermal annealing step that promoted primary crystallization. The atom probe data analysis of the volume fraction was compared to transmission electron microscopy (TEM) dark-field imaging analysis and a lever rule analysis of the volume fraction within the amorphous and crystalline phases of the ribbon. Copyright © 2011 Elsevier B.V. All rights reserved.

  19. Preparation of reactive oxygen scavenging peptides from tilapia (Oreochromis niloticus) skin gelatin: optimization using response surface methodology.

    PubMed

    Zhuang, Yongliang; Sun, Liping

    2011-04-01

    Gelatin extracted from tilapia skin was hydrolyzed with Properase E. Response surface methodology (RSM) was applied to optimize the hydrolysis condition (temperature [T], enzyme-to-substrate ratio [E/S], pH and reaction time [t]), to obtain the hydrolysate with the highest hydroxyl radical (•OH) scavenging activity. The optimum conditions obtained were T of 44.2 °C, E/S of 2.2%, pH of 9.2, and t of 3.4 h. The predicted •OH scavenging activity of the hydrolysate under the optimum conditions was 60.7%, and the actually experimental scavenging activity was 60.8%. The hydrolysate was fractionated by ultrafiltration, and 4 fractions were collected. The fraction TSGH4 (MW<2000 Da) showed the strongest •OH scavenging activity with the highest yield. Furthermore, reactive oxygen species (ROS) scavenging activities of TSGH4 with different concentrations were investigated in 5 model systems, including superoxide anion radical (•O2), •OH, hydrogen peroxide (H2O2), peroxynitrite (ONOO-), and nitric oxide (NO•), compared with reduced glutathione (GSH). The results showed that TSGH4 significantly scavenged these ROS, and could be used as a functional ingredient in medicine and food industries.

  20. Observation of variations in the composition of sea ice in the Greenland MIZ during early summer 1983 with the Nimbus-7 SMMR. [Marginal Ice Zone (MIZ); Scanning Multichannel Microwave Radiometer (SMMR)]

    NASA Technical Reports Server (NTRS)

    Gloersen, P.; Campbell, W. J.

    1984-01-01

    Data acquired with the Scanning Multichannel Microwave Radiometer (SMMR) on board the Nimbus-7 Satellite for a six-week period in Fram Strait were analyzed with a procedure for calculating sea ice concentration, multiyear fraction, and ice temperature. Calculations were compared with independent observations made on the surface and from aircraft to check the validity of the calculations based on SMMR data. The calculation of multiyear fraction, which was known to be invalid near the melting point of sea ice, is discussed. The indication of multiyear ice is found to disappear a number of times, presumably corresponding to freeze/thaw cycles which occurred in this time period.

  1. The application of rational approximation in the calculation of a temperature field with a non-linear surface heat-transfer coefficient during quenching for 42CrMo steel cylinder

    NASA Astrophysics Data System (ADS)

    Cheng, Heming; Huang, Xieqing; Fan, Jiang; Wang, Honggang

    1999-10-01

    The calculation of the temperature field has a great influence upon the analysis of thermal stresses and strains during quenching. In this paper, a 42CrMo steel cylinder was used as an example for investigation. From the TTT diagram of the 42CrMo steel, the CCT diagram was simulated by mathematical transformation, and the volume fraction of phase constituents was calculated. The thermal physical properties were treated as functions of temperature and the volume fraction of phase constituents. The rational approximation was applied to the finite element method. The temperature field with phase transformation and non-linear surface heat-transfer coefficients was calculated using this technique, which can effectively avoid oscillation in the numerical solution for a small time step. The experimental temperature field results coincide with the numerical solutions.

  2. Full cost accounting in the analysis of separated waste collection efficiency: A methodological proposal.

    PubMed

    D'Onza, Giuseppe; Greco, Giulio; Allegrini, Marco

    2016-02-01

    Recycling implies additional costs for separated municipal solid waste (MSW) collection. The aim of the present study is to propose and implement a management tool - the full cost accounting (FCA) method - to calculate the full collection costs of different types of waste. Our analysis aims for a better understanding of the difficulties of putting FCA into practice in the MSW sector. We propose an FCA methodology that uses standard costs and actual quantities to calculate the collection costs of separated and undifferentiated waste. The methodology allows cost-efficiency analysis and benchmarking, overcoming problems related to firm-specific accounting choices, earnings management policies and purchasing policies; the resulting variance analysis can be used to identify the causes of off-standard performance and guide managers to deploy resources more efficiently. The methodology can also be implemented by companies lacking a sophisticated management accounting system. Copyright © 2015 Elsevier Ltd. All rights reserved.
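    As a toy illustration of the standard-cost approach described above (standard unit rates applied to actual collected quantities), the following sketch uses entirely hypothetical streams and figures; it is not data from the study.

```python
# Hypothetical standard unit costs (EUR/tonne) and actual collected quantities (tonnes)
standard_cost = {"paper": 95.0, "glass": 60.0, "organic": 110.0, "residual": 80.0}
collected_t = {"paper": 1200.0, "glass": 800.0, "organic": 1500.0, "residual": 4000.0}

# Full collection cost per stream: standard rate x actual quantity
full_cost = {w: standard_cost[w] * collected_t[w] for w in standard_cost}
total_cost = sum(full_cost.values())

# Share of the total collection cost attributable to the separated streams
separated_share = 1.0 - full_cost["residual"] / total_cost
print(full_cost, round(total_cost), round(separated_share, 3))
```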

  3. Evaluation of fraction of absorbed photosynthetically active radiation products for different canopy radiation transfer regimes: methodology and results using Joint Research Center products derived from SeaWiFS against ground-based estimations.

    Treesearch

    Nadine Gobron; Bernard Pinty; Ophélie Aussedat; Jing M. Chen; Warren B. Cohen; Rasmus Fensholt; Valery Gond; Karl Fred Huemmrich; Thomas Lavergne; Frédéric Méline; Jeffrey L. Privette; Inge Sandholt; Malcolm Taberner; David P. Turner; Michael M. Verstraete; Jean-Luc Widlowski

    2006-01-01

    This paper discusses the quality and the accuracy of the Joint Research Center (JRC) fraction of absorbed photosynthetically active radiation (FAPAR) products generated from an analysis of Sea-viewing Wide Field-of-view Sensor (SeaWiFS) data. The FAPAR value acts as an indicator of the presence and state of the vegetation and it can be estimated from remote sensing...

  4. Performance Measurement and Analysis of Certain Search Algorithms

    DTIC Science & Technology

    1979-05-01

    methodology that combines experiment and analysis in complementary and highly specialized and formalized roles, and that the richness of the domains makes it ... it is difficult to determine what fraction of the observed differences between the two sets is due to bias in sample set 1, and what fraction simply...given by its characteristic KMIN and KMAX functions. We posit a formal model of "knowledge" itself in which there are at least as many distinct "states

  5. Fission products and nuclear fuel behaviour under severe accident conditions part 3: Speciation of fission products in the VERDON-1 sample

    NASA Astrophysics Data System (ADS)

    Le Gall, C.; Geiger, E.; Gallais-During, A.; Pontillon, Y.; Lamontagne, J.; Hanus, E.; Ducros, G.

    2017-11-01

    Qualitative and quantitative analyses on the VERDON-1 sample made it possible to obtain valuable information on fission product behaviour in the fuel during the test. A promising methodology based on the quantitative results of post-test characterisations has been implemented to assess the release fraction of non γ-emitter fission products. The order of magnitude of the estimated release fractions for each fission product was consistent with their class of volatility.

  6. A Methodology for Loading the Advanced Test Reactor Driver Core for Experiment Analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cowherd, Wilson M.; Nielsen, Joseph W.; Choe, Dong O.

    In support of experiments in the ATR, a new methodology was devised for loading the ATR Driver Core. This methodology will replace the existing methodology used by the INL Neutronic Analysis group to analyze experiments. Studied in this paper was the as-run analysis for ATR Cycle 152B, specifically comparing measured lobe powers and eigenvalue calculations.

  7. Evaluating fractionated space systems - Status

    NASA Astrophysics Data System (ADS)

    Cornford, S.; Jenkins, S.; Wall, S.; Cole, B.; Bairstow, B.; Rouquette, N.; Dubos, G.; Ryan, T.; Zarifian, P.; Boutwell, J.

    DARPA has funded a number of teams to further refine its Fractionated Spacecraft vision. Several teams, including this team led by JPL, have been tasked to develop a tool for evaluating the business case for a fractionated system architecture. This evaluation is meant to establish under what conditions and constraints the fractionated architecture makes more sense (in a cost/benefit sense) than the traditional monolithic paradigm. Our approach to this evaluation is to generate and evaluate a variety of trade space options. These options include various sets of stimuli, various degrees of fractionation and various subsystem element properties. The stimuli include many not normally modeled, such as technology obsolescence, funding profile changes and changes in mission objectives during the mission itself. The degrees of fractionation enable various traditional subsystem elements to be distributed across different free flyers which then act in concert as needed. This will enable key technologies to be updated as need dictates and availability allows. We have described our approach in a previous IEEE Aerospace conference paper but will briefly summarize it here. Our approach to generating the business case evaluation is to explicitly model both the implementation and operation phases of the life cycle of a fractionated constellation. A variety of models are integrated into the Phoenix ModelCenter framework and are used to generate various intermediate data that are aggregated into the Present Strategic Value (PSV). The PSV is essentially the value (including the value of the embedded real options) minus the cost. These PSVs are calculated for a variety of configurations and scenarios, including variations of various stimuli or uncertainties (e.g. supply chain delays, launch vehicle failures and orbital debris events). There are various decision options (e.g. delay, accelerate, cancel) which can now be exercised for each stimulus. We can compute the PSV for the various combinations and populate a tradespace. We have developed tooling that allows models to be automatically created and executed, allowing us to explore large numbers of options with no human intervention. The methodology, the models and the process by which they are integrated were a key subset of the previous paper. We will present the results of the business case analyses for a variety of configurations and scenarios, present the populated tradespace, show the GUI we have developed to facilitate the use of the tool and discuss the implications of both the results and our work to date. We will also discuss future work and possible approaches for that work.

  8. Fractional order uncertainty estimator based hierarchical sliding mode design for a class of fractional order non-holonomic chained system.

    PubMed

    Deepika; Kaur, Sandeep; Narayan, Shiv

    2018-06-01

    This paper proposes a novel fractional order sliding mode control approach to address the issues of stabilization as well as tracking of an N-dimensional extended chained form of fractional order non-holonomic system. Firstly, the hierarchical fractional order terminal sliding manifolds are selected to procure the desired objectives in finite time. Then, a sliding mode control law is formulated which provides robustness against various system uncertainties or external disturbances. In addition, a novel fractional order uncertainty estimator is deduced mathematically to estimate and mitigate the effects of uncertainties, which also excludes the requirement of their upper bounds. Due to the omission of discontinuous control action, the proposed algorithm ensures a chatter-free control input. Moreover, the finite time stability of the closed loop system has been proved analytically through well known Mittag-Leffler and Fractional Lyapunov theorems. Finally, the proposed methodology is validated with MATLAB simulations on two examples including an application of fractional order non-holonomic wheeled mobile robot and its performances are also compared with the existing control approach. Copyright © 2018 ISA. Published by Elsevier Ltd. All rights reserved.

  9. On energetic prerequisites of attracting electrons

    NASA Astrophysics Data System (ADS)

    Sundholm, Dage

    2014-06-01

    The internal reorganization energy and the zero-point vibrational energy (ZPE) of fractionally charged molecules embedded in molecular materials are discussed. The theory for isolated open quantum systems is taken as the starting point. It is shown that for isolated molecules the internal reorganization-energy function and its slope, i.e., the chemical potential of an open molecular system are monotonically decreasing functions with respect to increasing amount of negative excess charge (q) in the range of q = [0, 1]. Calculations of the ZPE for fractionally charged molecules show that the ZPE may have a minimum for fractional occupation. The calculations show that the internal reorganization energy and changes in the ZPE are of the same order of magnitude with different behavior as a function of the excess charge. The sum of the contributions might favor molecules with fractional occupation of the molecular units and partial delocalization of the excess electrons in solid-state materials also when considering Coulomb repulsion between the excess electrons. The fractional electrons are then coherently distributed on many molecules of the solid-state material forming a condensate of attracting electrons, which is crucial for the superconducting state.

  10. On energetic prerequisites of attracting electrons.

    PubMed

    Sundholm, Dage

    2014-06-21

    The internal reorganization energy and the zero-point vibrational energy (ZPE) of fractionally charged molecules embedded in molecular materials are discussed. The theory for isolated open quantum systems is taken as the starting point. It is shown that for isolated molecules the internal reorganization-energy function and its slope, i.e., the chemical potential of an open molecular system are monotonically decreasing functions with respect to increasing amount of negative excess charge (q) in the range of q = [0, 1]. Calculations of the ZPE for fractionally charged molecules show that the ZPE may have a minimum for fractional occupation. The calculations show that the internal reorganization energy and changes in the ZPE are of the same order of magnitude with different behavior as a function of the excess charge. The sum of the contributions might favor molecules with fractional occupation of the molecular units and partial delocalization of the excess electrons in solid-state materials also when considering Coulomb repulsion between the excess electrons. The fractional electrons are then coherently distributed on many molecules of the solid-state material forming a condensate of attracting electrons, which is crucial for the superconducting state.

  11. An AIS-based approach to calculate atmospheric emissions from the UK fishing fleet

    NASA Astrophysics Data System (ADS)

    Coello, Jonathan; Williams, Ian; Hudson, Dominic A.; Kemp, Simon

    2015-08-01

    The fishing industry is heavily reliant on the use of fossil fuel and emits large quantities of greenhouse gases and other atmospheric pollutants. Methods used to calculate fishing vessel emissions inventories have traditionally utilised estimates of fuel efficiency per unit of catch. These methods have weaknesses because they do not easily allow temporal and geographical allocation of emissions. A large proportion of fishing and other small commercial vessels are also omitted from global shipping emissions inventories such as the International Maritime Organisation's Greenhouse Gas Studies. This paper demonstrates an activity-based methodology for the production of temporally- and spatially-resolved emissions inventories using data produced by Automatic Identification Systems (AIS). The methodology addresses the issue of how to use AIS data for fleets where not all vessels use AIS technology and how to assign engine load when vessels are towing trawling or dredging gear. The results of this are compared to a fuel-based methodology using publicly available European Commission fisheries data on fuel efficiency and annual catch. The results show relatively good agreement between the two methodologies, with an estimate of 295.7 kilotons of fuel used and 914.4 kilotons of carbon dioxide emitted between May 2012 and May 2013 using the activity-based methodology. Different methods of calculating speed using AIS data are also compared. The results indicate that using the speed data contained directly in the AIS data is preferable to calculating speed from the distance and time interval between consecutive AIS data points.
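    A hedged sketch of the activity-based calculation outlined above: per-interval emissions as installed power × engine load × hours × emission factor, with a cubic speed law while free-steaming and a fixed assumed load while towing gear. The vessel parameters, the towing load of 0.75 and the CO2 emission factor below are illustrative assumptions, not values from the paper.

```python
def load_from_speed(speed_kn, design_speed_kn, towing=False, towing_load=0.75):
    """Propulsion engine load: cubic speed law while free-steaming, fixed assumed load while towing."""
    if towing:
        return towing_load
    return min((speed_kn / design_speed_kn) ** 3, 1.0)

def interval_emissions_g(power_kw, load, hours, ef_g_per_kwh):
    """Pollutant mass (grams) emitted during one AIS reporting interval."""
    return power_kw * load * hours * ef_g_per_kwh

# Hypothetical trawler: 750 kW main engine, CO2 factor ~620 g/kWh,
# one 6-minute interval (0.1 h) towing at 3.5 knots
co2_g = interval_emissions_g(750.0, load_from_speed(3.5, 10.5, towing=True), 0.1, 620.0)
print(f"{co2_g / 1000:.1f} kg CO2")
```

    Summing such intervals over a year, per vessel and per grid cell, yields the temporally and spatially resolved inventory the abstract refers to.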

  12. Thermal control of low-pressure fractionation processes. [in basaltic magma solidification

    NASA Technical Reports Server (NTRS)

    Usselman, T. M.; Hodge, D. S.

    1978-01-01

    Thermal models detailing the solidification paths for shallow basaltic magma chambers (both open and closed systems) were calculated using finite-difference techniques. The total solidification times for closed chambers are comparable to previously published calculations; however, the temperature-time paths are not. These paths depend on the phase relations and the crystallinity of the system, because both affect the manner in which the latent heat of crystallization is distributed. In open systems, where a chamber would be periodically replenished with additional parental liquid, calculations indicate a strong possibility that a steady-state temperature interval is achieved near a major phase boundary. In these cases it is straightforward to analyze fractionation models of basaltic liquid evolution and their corresponding cumulate sequences. This steady thermal fractionating state can be invoked to explain the eruption of large amounts of basalt of similar composition over long time periods from the same volcanic center, and some rhythmically layered basic cumulate sequences.

  13. Polonium assimilation and retention in mule deer and pronghorn antelope

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sejkora, K.J.

    Excretion kinetics and tissue distribution of polonium-210 in mule deer and pronghorn were studied. Each animal in a captive herd of 7 mule deer and 2 pronghorn received an intraruminal injection of 4.4 μCi of polonium chloride. Feces and urine were collected periodically over a 43-day period and the daily excretion rate for each pathway was regressed as a function of time. Assimilation fractions of 0.40 and 0.51 were calculated for mule deer (n=2) and 0.60 for a pronghorn. Body burden retention functions were calculated from integrated excretion rate functions. Polonium burdens in muscle, liver, and kidney were calculated as a fraction of body burden from serially-sacrificed animals. Background tissue burdens in mule deer were comparable to those of other ruminants reported in the literature. Hypothetical cases were assumed which combined feeding rate of mule deer, forage concentrations of polonium, retention function, tissue burden fraction, and human intake to estimate human radiation dose. 26 references.

  14. A geometric model for evaluating the effects of inter-fraction rectal motion during prostate radiotherapy

    NASA Astrophysics Data System (ADS)

    Pavel-Mititean, Luciana M.; Rowbottom, Carl G.; Hector, Charlotte L.; Partridge, Mike; Bortfeld, Thomas; Schlegel, Wolfgang

    2004-06-01

    A geometric model is presented which allows calculation of the dosimetric consequences of rectal motion in prostate radiotherapy. Variations in the position of the rectum are measured by repeat CT scanning during the courses of treatment of five patients. Dose distributions are calculated by applying the same conformal treatment plan to each imaged fraction and rectal dose-surface histograms produced. The 2D model allows isotropic expansion and contraction in the plane of each CT slice. By summing the dose to specific volume elements tracked by the model, composite dose distributions are produced that explicitly include measured inter-fraction motion for each patient. These are then used to estimate effective dose-surface histograms (DSHs) for the entire treatment. Results are presented showing the magnitudes of the measured target and rectal motion and showing the effects of this motion on the integral dose to the rectum. The possibility of using such information to calculate normal tissue complication probabilities (NTCP) is demonstrated and discussed.

  15. Method and apparatus for probing relative volume fractions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jandrasits, W.G.; Kikta, T.J.

    1996-12-31

    A relative volume fraction probe, particularly for use in a multiphase fluid system, includes two parallel conductive paths defining between them a sample zone within the system. A generating unit generates time-varying electrical signals which are inserted into one of the two parallel conductive paths. A time domain reflectometer receives the time-varying electrical signals returned by the second of the two parallel conductive paths and, responsive thereto, outputs a curve of impedance versus distance. An analysis unit then calculates the area under the curve, subtracts the calculated area from an area produced when the sample zone consists entirely of material of a first fluid phase, and divides this calculated difference by the difference between an area produced when the sample zone consists entirely of material of the first fluid phase and an area produced when the sample zone consists entirely of material of a second fluid phase. The result is the volume fraction.
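    The area-ratio calculation described above can be written directly from the abstract. The sketch below assumes the impedance-versus-distance trace is available as sampled arrays and that the two reference areas come from calibration traces with the sample zone filled entirely by one phase or the other; the numerical values are invented for illustration.

```python
import numpy as np

def trace_area(impedance, distance):
    """Trapezoidal area under an impedance-versus-distance trace."""
    z, d = np.asarray(impedance, float), np.asarray(distance, float)
    return float(np.sum(0.5 * (z[1:] + z[:-1]) * np.diff(d)))

def volume_fraction(impedance, distance, area_phase1_only, area_phase2_only):
    """(A_phase1 - A_measured) / (A_phase1 - A_phase2), as described in the abstract."""
    a_measured = trace_area(impedance, distance)
    return (area_phase1_only - a_measured) / (area_phase1_only - area_phase2_only)

# Toy example: a trace halfway between the two calibration traces gives ~0.5
d = np.linspace(0.0, 1.0, 101)
a1 = trace_area(np.full_like(d, 100.0), d)   # zone entirely phase 1
a2 = trace_area(np.full_like(d, 50.0), d)    # zone entirely phase 2
print(volume_fraction(np.full_like(d, 75.0), d, a1, a2))
```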

  16. A variational Bayes discrete mixture test for rare variant association

    PubMed Central

    Logsdon, Benjamin A.; Dai, James Y.; Auer, Paul L.; Johnsen, Jill M.; Ganesh, Santhi K.; Smith, Nicholas L.; Wilson, James G.; Tracy, Russell P.; Lange, Leslie A.; Jiao, Shuo; Rich, Stephen S.; Lettre, Guillaume; Carlson, Christopher S.; Jackson, Rebecca D.; O’Donnell, Christopher J.; Wurfel, Mark M.; Nickerson, Deborah A.; Tang, Hua; Reiner, Alexander P.; Kooperberg, Charles

    2014-01-01

    Recently, many statistical methods have been proposed to test for associations between rare genetic variants and complex traits. Most of these methods test for association by aggregating genetic variations within a predefined region, such as a gene. Although there is evidence that “aggregate” tests are more powerful than the single marker test, these tests generally ignore neutral variants and therefore are unable to identify specific variants driving the association with phenotype. We propose a novel aggregate rare-variant test that explicitly models a fraction of variants as neutral, tests associations at the gene-level, and infers the rare-variants driving the association. Simulations show that in the practical scenario where there are many variants within a given region of the genome with only a fraction causal our approach has greater power compared to other popular tests such as the Sequence Kernel Association Test (SKAT), the Weighted Sum Statistic (WSS), and the collapsing method of Morris and Zeggini (MZ). Our algorithm leverages a fast variational Bayes approximate inference methodology to scale to exome-wide analyses, a significant computational advantage over exact inference model selection methodologies. To demonstrate the efficacy of our methodology we test for associations between von Willebrand Factor (VWF) levels and VWF missense rare-variants imputed from the National Heart, Lung, and Blood Institute’s Exome Sequencing project into 2,487 African Americans within the VWF gene. Our method suggests that a relatively small fraction (~10%) of the imputed rare missense variants within VWF are strongly associated with lower VWF levels in African Americans. PMID:24482836

  17. A variational Bayes discrete mixture test for rare variant association.

    PubMed

    Logsdon, Benjamin A; Dai, James Y; Auer, Paul L; Johnsen, Jill M; Ganesh, Santhi K; Smith, Nicholas L; Wilson, James G; Tracy, Russell P; Lange, Leslie A; Jiao, Shuo; Rich, Stephen S; Lettre, Guillaume; Carlson, Christopher S; Jackson, Rebecca D; O'Donnell, Christopher J; Wurfel, Mark M; Nickerson, Deborah A; Tang, Hua; Reiner, Alexander P; Kooperberg, Charles

    2014-01-01

    Recently, many statistical methods have been proposed to test for associations between rare genetic variants and complex traits. Most of these methods test for association by aggregating genetic variations within a predefined region, such as a gene. Although there is evidence that "aggregate" tests are more powerful than the single marker test, these tests generally ignore neutral variants and therefore are unable to identify specific variants driving the association with phenotype. We propose a novel aggregate rare-variant test that explicitly models a fraction of variants as neutral, tests associations at the gene-level, and infers the rare-variants driving the association. Simulations show that in the practical scenario where there are many variants within a given region of the genome with only a fraction causal our approach has greater power compared to other popular tests such as the Sequence Kernel Association Test (SKAT), the Weighted Sum Statistic (WSS), and the collapsing method of Morris and Zeggini (MZ). Our algorithm leverages a fast variational Bayes approximate inference methodology to scale to exome-wide analyses, a significant computational advantage over exact inference model selection methodologies. To demonstrate the efficacy of our methodology we test for associations between von Willebrand Factor (VWF) levels and VWF missense rare-variants imputed from the National Heart, Lung, and Blood Institute's Exome Sequencing project into 2,487 African Americans within the VWF gene. Our method suggests that a relatively small fraction (~10%) of the imputed rare missense variants within VWF are strongly associated with lower VWF levels in African Americans.

  18. Couple of the Variational Iteration Method and Fractional-Order Legendre Functions Method for Fractional Differential Equations

    PubMed Central

    Song, Junqiang; Leng, Hongze; Lu, Fengshun

    2014-01-01

    We present a new numerical method to get the approximate solutions of fractional differential equations. A new operational matrix of integration for fractional-order Legendre functions (FLFs) is first derived. Then a modified variational iteration formula which can avoid “noise terms” is constructed. Finally a numerical method based on variational iteration method (VIM) and FLFs is developed for fractional differential equations (FDEs). Block-pulse functions (BPFs) are used to calculate the FLFs coefficient matrices of the nonlinear terms. Five examples are discussed to demonstrate the validity and applicability of the technique. PMID:24511303

  19. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Goyal, M; Shobhit University, Meerut, Uttar Pradesh; Manjhi, J

    Purpose: This study evaluated dosimetric parameters for actual treatment plans versus decay-corrected treatment plans for cervical HDR brachytherapy. Methods: 125 plans of 25 patients, who received 5 fractions of HDR brachytherapy, were evaluated in this study. Dose was prescribed to point A (ICRU-38), and the high-risk clinical tumor volume (HR-CTV) and organs at risk (OAR) were retrospectively delineated on the original CT images by the treating physician. The first HDR plan was considered the reference plan, and decay correction was applied to calculate the treatment time for subsequent fractions and, retrospectively, to determine point A, HR-CTV D90, and rectum and bladder doses. Results: The differences between mean point A reference doses and the point A doses of the plans computed using decay times were found to be 1.05%±0.74% (−2.26% to 3.26%) for the second fraction; −0.25%±0.84% (−3.03% to 3.29%) for the third fraction; 0.04%±0.70% (−2.68% to 2.56%) for the fourth fraction and 0.30%±0.81% (−3.93% to 2.67%) for the fifth fraction. The overall mean point A dose difference, for all fractions, was 0.29%±0.38% (within ±5%). Mean rectum and bladder dose differences were calculated to be −3.46%±0.12% and −2.47%±0.09% for points, respectively, and −1.72%±0.09% and −0.96%±0.06% for D2cc, respectively. The HR-CTV D90 mean dose difference was found to be −1.67%±0.11%. There was no statistically significant difference between the reference planned point A doses and those calculated using decay time for the subsequent fractions (p<0.05). Conclusion: This study reveals that a decay-corrected treatment will provide comparable dosimetric results and can be utilized for subsequent fractions of cervical HDR brachytherapy instead of actual treatment planning. This approach will increase efficiency, decrease workload, and reduce patient observation time between applicator insertion and treatment delivery. This would be particularly useful for institutions with limited resources or large patient populations with limited access to care.
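    A minimal sketch of the decay correction implied above: the reference dwell time is scaled by the source decay between fractions. The abstract does not name the source, so the default half-life below assumes a typical Ir-192 HDR source; treatment times and intervals are illustrative.

```python
import math

def decay_corrected_time(t_ref_s, elapsed_days, half_life_days=73.83):
    """Treatment time for a later fraction, scaled for source decay since the reference plan.

    half_life_days defaults to Ir-192 (~73.83 d), an assumption: the abstract does not
    state which HDR source was used.
    """
    return t_ref_s * math.exp(math.log(2.0) * elapsed_days / half_life_days)

# e.g. a second fraction delivered 7 days after the reference plan
print(round(decay_corrected_time(300.0, 7.0), 1))  # ~320.4 s
```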

  20. Determination of the steam volume fraction in the event of loss of cooling of the spent fuel storage pool

    NASA Astrophysics Data System (ADS)

    Sledkov, R. M.; Galkin, I. Yu.; Stepanov, O. E.; Strebnev, N. A.

    2017-01-01

    When one solves engineering problems related to the cooling of fuel assemblies (FAs) in a spent fuel storage pool (SFSP) and the assessment of nuclear safety of FA storage in an SFSP in the initial event of loss of SFSP cooling, it is essential to determine the coolant density and, consequently, steam volume fractions φ in bundles of fuel elements at a pressure of 0.1-0.5 MPa. Such formulas for calculating φ that remain valid in a wide range of operating parameters and geometric shapes of channels and take the conditions of loss of SFSP cooling into account are currently almost lacking. The results of systematization and analysis of the available formulas for φ are reported in the present study. The calculated values were compared with the experimental data obtained in the process of simulating the conditions of FA cooling in an SFSP in the event of loss of its cooling. Six formulas for calculating the steam volume fraction, which were used in this comparison, were chosen from a total of 11 considered relations. As a result, the formulas producing the most accurate values of φ in the conditions of loss of SFSP cooling were selected. In addition, a relation that allows one to perform more accurate calculations of steam volume fractions in the conditions of loss of SFSP cooling was derived based on the Fedorov formula in the two-group approximation.
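    For orientation only, the simplest textbook relation of this kind is the homogeneous (no-slip) model sketched below; it is not one of the eleven relations evaluated in the study, and the saturation densities used in the example are approximate values at 0.1 MPa.

```python
def void_fraction_homogeneous(quality, rho_liquid, rho_vapor):
    """Homogeneous (no-slip) steam volume fraction from flow quality x:
    phi = 1 / (1 + ((1 - x) / x) * (rho_vapor / rho_liquid))."""
    x = quality
    return 1.0 / (1.0 + ((1.0 - x) / x) * (rho_vapor / rho_liquid))

# Approximate saturation densities at 0.1 MPa: rho_l ~ 958 kg/m3, rho_g ~ 0.6 kg/m3
print(round(void_fraction_homogeneous(0.01, 958.0, 0.6), 3))  # ~0.94 even at x = 1%
```

    The large vapor-to-liquid density ratio at low pressure is what makes φ so sensitive at small steam qualities, which is why formula selection matters in the loss-of-cooling regime discussed above.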

  1. Improving Calculation Accuracies of Accumulation-Mode Fractions Based on Spectral of Aerosol Optical Depths

    NASA Astrophysics Data System (ADS)

    Ying, Zhang; Zhengqiang, Li; Yan, Wang

    2014-03-01

    Anthropogenic aerosols are released into the atmosphere, where they scatter and absorb incoming solar radiation, thus exerting a direct radiative forcing on the climate system. Anthropogenic Aerosol Optical Depth (AOD) calculations are important in the research of climate change. Accumulation-Mode Fractions (AMFs), an anthropogenic aerosol parameter defined as the fraction of AOD due to particulates with diameters smaller than 1 μm relative to total particulates, can be calculated by an AOD spectral deconvolution algorithm, and the anthropogenic AODs are then obtained using the AMFs. In this study, we present a parameterization method coupled with an AOD spectral deconvolution algorithm to calculate AMFs in Beijing over 2011. All data are derived from the AErosol RObotic NETwork (AERONET) website. The parameterization method is used to improve the accuracy of AMFs compared with the constant-truncation-radius method. We find a good correlation using the parameterization method, with a squared correlation coefficient of 0.96 and a mean deviation in AMFs of 0.028. The parameterization method can also effectively resolve the AMF underestimate in winter. It is suggested that variations of the Ångström exponent in the coarse mode have significant impacts on AMF inversions.

  2. 76 FR 72134 - Annual Charges for Use of Government Lands

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-11-22

    ... revise the methodology used to compute these annual charges. Under the proposed rule, the Commission would create a fee schedule based on the U.S. Bureau of Land Management's (BLM) methodology for calculating rental rates for linear rights of way. This methodology includes a land value per acre, an...

  3. Comparing the Energy Content of Batteries, Fuels, and Materials

    ERIC Educational Resources Information Center

    Balsara, Nitash P.; Newman, John

    2013-01-01

    A methodology for calculating the theoretical and practical specific energies of rechargeable batteries, fuels, and materials is presented. The methodology enables comparison of the energy content of diverse systems such as the lithium-ion battery, hydrocarbons, and ammonia. The methodology is relevant for evaluating the possibility of using…

  4. Recent advances in jointed quantum mechanics and molecular mechanics calculations of biological macromolecules: schemes and applications coupled to ab initio calculations.

    PubMed

    Hagiwara, Yohsuke; Tateno, Masaru

    2010-10-20

    We review the recent research on the functional mechanisms of biological macromolecules using theoretical methodologies coupled to ab initio quantum mechanical (QM) treatments of reaction centers in proteins and nucleic acids. Since in most cases such biological molecules are large, the computational costs of performing ab initio calculations for the entire structures are prohibitive. Instead, simulations that are jointed with molecular mechanics (MM) calculations are crucial to evaluate the long-range electrostatic interactions, which significantly affect the electronic structures of biological macromolecules. Thus, we focus our attention on the methodologies/schemes and applications of jointed QM/MM calculations, and discuss the critical issues to be elucidated in biological macromolecular systems. © 2010 IOP Publishing Ltd

  5. Calculation of spontaneous emission from a V-type three-level atom in photonic crystals using fractional calculus

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Huang, Chih-Hsien; Hsieh, Wen-Feng; Institute of Electro-Optical Science and Engineering, National Cheng Kung University, 1 Dahsueh Rd., Tainan 701, Taiwan

    2011-07-15

    Fractional time derivative, an abstract mathematical operator of fractional calculus, is used to describe the real optical system of a V-type three-level atom embedded in a photonic crystal. A fractional kinetic equation governing the dynamics of the spontaneous emission from this optical system is obtained as a fractional Langevin equation. Solving this fractional kinetic equation by fractional calculus leads to the analytical solutions expressed in terms of fractional exponential functions. The accuracy of the obtained solutions is verified through reducing the system into the special cases whose results are consistent with the experimental observation. With accurate physical results and avoiding the complex integration for solving this optical system, we propose fractional calculus with fractional time derivative as a better mathematical method to study spontaneous emission dynamics from the optical system with non-Markovian dynamics.

  6. Optically stimulated luminescence dating of sediments

    NASA Astrophysics Data System (ADS)

    Troja, S. O.; Amore, C.; Barbagallo, G.; Burrafato, G.; Forzese, R.; Geremia, F.; Gueli, A. M.; Marzo, F.; Pirnaci, D.; Russo, M.; Turrisi, E.

    2000-04-01

    Optically stimulated luminescence (OSL) dating methodology was applied to the coarse grain fraction (100-500 μm) of quartz crystals (green light stimulated luminescence, GLSL) and feldspar crystals (infrared stimulated luminescence, IRSL) taken from sections at different depths of cores bored in various coastal lagoons (Longarini, Cuba, Bruno) on the south-east coast of Sicily. The results obtained give a sequence of congruent relative ages and maximum absolute ages compatible with the sedimentary structure, thus confirming the excellent potential of the methodology.

  7. A new approach to assessing the water footprint of wine: an Italian case study.

    PubMed

    Lamastra, Lucrezia; Suciu, Nicoleta Alina; Novelli, Elisa; Trevisan, Marco

    2014-08-15

    Agriculture is the largest freshwater consumer, accounting for 70% of the world's water withdrawal. Water footprints (WFs) are being increasingly used to indicate the impacts of water use by production systems. A new methodology to assess the WF of wine was developed in the framework of the V.I.V.A. project (Valutazione Impatto Viticoltura sull'Ambiente), launched by the Italian Ministry for the Environment in 2011 to improve the Italian wine sector's sustainability. The new methodology enables different vineyards from the same winery to be compared. This was achieved by calculating the gray water footprint following the Tier III approach proposed by Hoekstra et al. (2011). The impact of water use during the life cycle of grape-wine production was assessed for six different wines from the same winery in Sicily, Italy, using both the newly developed methodology (V.I.V.A.) and the classical methodology proposed by the Water Footprint Network (WFN). In all cases green water was the largest contributor to WF, but the new methodology also detected differences between vineyards of the same winery. Furthermore, the V.I.V.A. methodology assesses water body contamination by pesticide application, whereas the WFN methodology considers only fertilization; this explains the higher WF of vineyard 4 calculated by V.I.V.A. compared with the WF calculated with the WFN methodology. Comparing the WF of the six different wines, the factors that most greatly influenced the results obtained in this study were: distance from the water body, fertilization rate, and the amount and eco-toxicological behavior of the active ingredients used. Copyright © 2014 Elsevier B.V. All rights reserved.

  8. Effects of disease severity distribution on the performance of quantitative diagnostic methods and proposal of a novel 'V-plot' methodology to display accuracy values.

    PubMed

    Petraco, Ricardo; Dehbi, Hakim-Moulay; Howard, James P; Shun-Shin, Matthew J; Sen, Sayan; Nijjer, Sukhjinder S; Mayet, Jamil; Davies, Justin E; Francis, Darrel P

    2018-01-01

    Diagnostic accuracy is widely accepted by researchers and clinicians as an optimal expression of a test's performance. The aim of this study was to evaluate the effects of disease severity distribution on values of diagnostic accuracy as well as propose a sample-independent methodology to calculate and display the accuracy of diagnostic tests. We evaluated the diagnostic relationship between two hypothetical methods to measure serum cholesterol (Chol_rapid and Chol_gold) by generating samples with statistical software and (1) keeping the numerical relationship between methods unchanged and (2) changing the distribution of cholesterol values. Metrics of categorical agreement were calculated (accuracy, sensitivity and specificity). Finally, a novel methodology to display and calculate accuracy values was presented (the V-plot of accuracies). No single value of diagnostic accuracy can be used to describe the relationship between tests, as accuracy is a metric heavily affected by the underlying sample distribution. Our novel proposed methodology, the V-plot of accuracies, can be used as a sample-independent measure of a test's performance against a reference gold standard.

  9. Dynamic approximate entropy electroanatomic maps detect rotors in a simulated atrial fibrillation model.

    PubMed

    Ugarte, Juan P; Orozco-Duque, Andrés; Tobón, Catalina; Kremen, Vaclav; Novak, Daniel; Saiz, Javier; Oesterlein, Tobias; Schmitt, Clauss; Luik, Armin; Bustamante, John

    2014-01-01

    There is evidence that rotors could be drivers that maintain atrial fibrillation. Complex fractionated atrial electrograms have been located in rotor tip areas. However, the concept of electrogram fractionation, defined using time intervals, is still controversial as a tool for locating target sites for ablation. We hypothesize that the fractionation phenomenon is better described using non-linear dynamic measures, such as approximate entropy, and that this tool could be used for locating the rotor tip. The aim of this work has been to determine the relationship between approximate entropy and fractionated electrograms, and to develop a new tool for rotor mapping based on fractionation levels. Two episodes of chronic atrial fibrillation were simulated in a 3D human atrial model, in which rotors were observed. Dynamic approximate entropy maps were calculated using unipolar electrogram signals generated over the whole surface of the 3D atrial model. In addition, we optimized the approximate entropy calculation using two real multi-center databases of fractionated electrogram signals, labeled in 4 levels of fractionation. We found that the values of approximate entropy and the levels of fractionation are positively correlated. This allows the dynamic approximate entropy maps to localize the tips from stable and meandering rotors. Furthermore, we assessed the optimized approximate entropy using bipolar electrograms generated over a vicinity enclosing a rotor, achieving rotor detection. Our results suggest that high approximate entropy values are able to detect a high level of fractionation and to locate rotor tips in simulated atrial fibrillation episodes. We suggest that dynamic approximate entropy maps could become a tool for atrial fibrillation rotor mapping.
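    A compact sketch of the approximate-entropy calculation underlying such maps, following Pincus' standard definition (template length m, tolerance r as a fraction of the signal standard deviation). The parameter values and the electrogram-specific optimization reported in the paper are not reproduced here; the example signal is purely illustrative.

```python
import numpy as np

def approximate_entropy(signal, m=2, r_factor=0.2):
    """Approximate entropy ApEn(m, r) of a 1-D signal, with r = r_factor * std(signal)."""
    x = np.asarray(signal, dtype=float)
    n = len(x)
    r = r_factor * np.std(x)

    def phi(mm):
        # Overlapping template vectors of length mm
        emb = np.array([x[i:i + mm] for i in range(n - mm + 1)])
        # Chebyshev distance between every pair of templates
        dist = np.max(np.abs(emb[:, None, :] - emb[None, :, :]), axis=2)
        # C_i: fraction of templates within tolerance r (self-matches included)
        c = np.mean(dist <= r, axis=1)
        return np.mean(np.log(c))

    return phi(m) - phi(m + 1)

# A regular signal has low ApEn; added noise (a crude proxy for fractionation) raises it
t = np.linspace(0.0, 20.0 * np.pi, 500)
rng = np.random.default_rng(0)
print(approximate_entropy(np.sin(t)), approximate_entropy(np.sin(t) + 0.5 * rng.standard_normal(500)))
```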

  10. Hypofractionation Results in Reduced Tumor Cell Kill Compared to Conventional Fractionation for Tumors With Regions of Hypoxia

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Carlson, David J., E-mail: david.j.carlson@yale.ed; Yale University School of Medicine, Department of Therapeutic Radiology, New Haven, CT; Keall, Paul J.

    2011-03-15

    Purpose: Tumor hypoxia has been observed in many human cancers and is associated with treatment failure in radiation therapy. The purpose of this study is to quantify the effect of different radiation fractionation schemes on tumor cell killing, assuming a realistic distribution of tumor oxygenation. Methods and Materials: A probability density function for the partial pressure of oxygen in a tumor cell population is quantified as a function of radial distance from the capillary wall. Corresponding hypoxia reduction factors for cell killing are determined. The surviving fraction of a tumor consisting of maximally resistant cells, cells at intermediate levels of hypoxia, and normoxic cells is calculated as a function of dose per fraction for an equivalent tumor biological effective dose under normoxic conditions. Results: Increasing hypoxia as a function of distance from blood vessels results in a decrease in tumor cell killing for a typical radiotherapy fractionation scheme by a factor of 10^5 over a distance of 130 μm. For head-and-neck cancer and prostate cancer, the fraction of tumor clonogens killed over a full treatment course decreases by up to a factor of ~10^3 as the dose per fraction is increased from 2 to 24 Gy and from 2 to 18 Gy, respectively. Conclusions: Hypofractionation of a radiotherapy regimen can result in a significant decrease in tumor cell killing compared to standard fractionation as a result of tumor hypoxia. There is a potential for large errors when calculating alternate fractionations using formalisms that do not account for tumor hypoxia.
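    The mechanism can be illustrated with a hedged linear-quadratic sketch in which the effective dose is divided by a hypoxia reduction factor (HRF); the alpha/beta values and the single HRF below are illustrative assumptions, not the oxygenation distributions used in the paper.

```python
import math

def surviving_fraction(dose_per_fx, n_fx, alpha=0.3, beta=0.03, hrf=1.0):
    """Linear-quadratic surviving fraction over a course of n fractions.

    hrf divides the effective dose (1 for well-oxygenated cells, up to ~3 for
    maximally resistant hypoxic cells); alpha (Gy^-1) and beta (Gy^-2) are
    illustrative values only.
    """
    d = dose_per_fx / hrf
    return math.exp(-(alpha * d + beta * d * d)) ** n_fx

# Conventional 2 Gy x 35 versus hypofractionated 10 Gy x 5, oxic vs. hypoxic cells
for d, n in [(2.0, 35), (10.0, 5)]:
    print(d, n, f"oxic {surviving_fraction(d, n):.2e}", f"hypoxic {surviving_fraction(d, n, hrf=2.5):.2e}")
```

    Even in this toy comparison, the hypoxic subpopulation dominates overall survival and is spared relatively more as the dose per fraction increases, which is the qualitative effect quantified in the abstract.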

  11. Application of a prospective model for calculating worker exposure due to the air pathway for operations in a laboratory.

    PubMed

    Grimbergen, T W M; Wiegman, M M

    2007-01-01

    In order to arrive at recommendations for guidelines on maximum allowable quantities of radioactive material in laboratories, a proposed mathematical model was used for the calculation of transfer fractions for the air pathway. A set of incident scenarios was defined, including spilling, leakage and failure of the fume hood. For these 'common incidents', dose constraints of 1 mSv and 0.1 mSv are proposed in case the operations are being performed in a controlled area and supervised area, respectively. In addition, a dose constraint of 1 microSv is proposed for each operation under regular working conditions. Combining these dose constraints and the transfer fractions calculated with the proposed model, maximum allowable quantities were calculated for different laboratory operations and situations. Provided that the calculated transfer fractions can be experimentally validated and the dose constraints are acceptable, it can be concluded from the results that the dose constraint for incidents is the most restrictive one. For non-volatile materials this approach leads to quantities much larger than commonly accepted. In those cases, the results of the calculations in this study suggest that limitation of the quantity of radioactive material, which can be handled safely, should be based on other considerations than the inhalation risks. Examples of such considerations might be the level of external exposure, uncontrolled spread of radioactive material by surface contamination, emissions in the environment and severe accidents like fire.

  12. Comparison of clinical semi-quantitative assessment of muscle fat infiltration with quantitative assessment using chemical shift-based water/fat separation in MR studies of the calf of post-menopausal women.

    PubMed

    Alizai, Hamza; Nardo, Lorenzo; Karampinos, Dimitrios C; Joseph, Gabby B; Yap, Samuel P; Baum, Thomas; Krug, Roland; Majumdar, Sharmila; Link, Thomas M

    2012-07-01

    The goal of this study was to compare the semi-quantitative Goutallier classification for fat infiltration with quantitative fat-fraction derived from a magnetic resonance imaging (MRI) chemical shift-based water/fat separation technique. Sixty-two women (age 61 ± 6 years), 27 of whom had diabetes, underwent MRI of the calf using a T1-weighted fast spin-echo sequence and a six-echo spoiled gradient-echo sequence at 3 T. Water/fat images and fat fraction maps were reconstructed using the IDEAL algorithm with T2* correction and a multi-peak model for the fat spectrum. Two radiologists scored fat infiltration on the T1-weighted images using the Goutallier classification in six muscle compartments. Spearman correlations between the Goutallier grades and the fat fraction were calculated; in addition, intra-observer and inter-observer agreement were calculated. A significant correlation between the clinical grading and the fat fraction values was found for all muscle compartments (P < 0.0001, R values ranging from 0.79 to 0.88). Goutallier grades 0-4 had a fat fraction ranging from 3.5 to 19%. Intra-observer and inter-observer agreement values of 0.83 and 0.81 were calculated for the semi-quantitative grading. Semi-quantitative grading of intramuscular fat and quantitative fat fraction were significantly correlated and both techniques had excellent reproducibility. However, the clinical grading was found to overestimate muscle fat. Fat infiltration of muscle commonly occurs in many metabolic and neuromuscular diseases. • Image-based semi-quantitative classifications for assessing fat infiltration are not well validated. • Quantitative MRI techniques provide an accurate assessment of muscle fat.

  13. An Evaluation of Aircraft Emissions Inventory Methodology by Comparisons with Reported Airline Data

    NASA Technical Reports Server (NTRS)

    Daggett, D. L.; Sutkus, D. J.; DuBois, D. P.; Baughcum, S. L.

    1999-01-01

    This report provides results of work done to evaluate the calculation methodology used in generating aircraft emissions inventories. Results from the inventory calculation methodology are compared to actual fuel consumption data. Results are also presented that show the sensitivity of calculated emissions to aircraft payload factors. Departures made, ground track miles flown and total fuel consumed by selected air carriers were compared between U.S. Department of Transportation (DOT) Form 41 data reported for 1992 and the results of simplified aircraft emissions inventory calculations. These comparisons provide an indication of the magnitude of error that may be present in aircraft emissions inventories. To determine some of the factors responsible for the errors quantified in the DOT Form 41 analysis, a comparative study of in-flight fuel flow data for a specific operator's 747-400 fleet was conducted. Fuel consumption differences between the studied aircraft and the inventory calculation results may be attributable to several factors, among them longer flight times, greater actual aircraft weight and performance deterioration effects for the in-service aircraft. Results of a parametric study on the variation in fuel use and NOx emissions as a function of aircraft payload for different aircraft types are also presented.

  14. Fundamental studies in isotope chemistry. Progress report, 1 August 1982-1 August 1983

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bigeleisen, J.

    1983-01-01

    Interest in a search for superheavy elements present in nature as a remnant of the big bang or through continuous production by cosmic rays has prompted us to study the isotope chemistry of superheavy elements. Calculations of the fractionation factors of superheavy isotopes of masses 10, 100, 1000, and infinity, in the form of isotopes of hydrogen, carbon, selenium and uranium, against the light naturally occurring isotope of the element show that the superheavy isotope, even of infinite mass, will not be sufficiently fractionated in single-stage natural processes to obscure its chemistry. Calculations have been made of the elementary separation factors of superheavy isotopes of carbon and oxygen by fractional distillation of CO at 80 K. The fractionation factors are discussed in terms of a model for liquid CO in good agreement with experimental data on 13C16O and 12C18O. Calculations for very heavy isotopic forms of CO reveal for the first time the coupling effect between translation and internal vibration in the liquid. It is shown that a low-temperature distillation plant, such as the Los Alamos COLA plant, has a significant potential for enrichment of superheavy isotopes of carbon. The maximum enrichment factor is 10^55.

  15. A Methodology for Calculating EGS Electricity Generation Potential Based on the Gringarten Model for Heat Extraction From Fractured Rock

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Augustine, Chad

    Existing methodologies for estimating the electricity generation potential of Enhanced Geothermal Systems (EGS) assume thermal recovery factors of 5% or less, resulting in relatively low volumetric electricity generation potentials for EGS reservoirs. This study proposes and develops a methodology for calculating EGS electricity generation potential based on the Gringarten conceptual model and analytical solution for heat extraction from fractured rock. The electricity generation potential of a cubic kilometer of rock as a function of temperature is calculated assuming limits on the allowed produced water temperature decline and reservoir lifetime based on surface power plant constraints. The resulting estimates of EGS electricity generation potential can be one to nearly two orders of magnitude larger than those from existing methodologies. The flow per unit fracture surface area from the Gringarten solution is found to be a key term in describing the conceptual reservoir behavior. The methodology can be applied to aid in the design of EGS reservoirs by giving minimum reservoir volume, fracture spacing, number of fractures, and flow requirements for a target reservoir power output. Limitations of the idealized model compared to actual reservoir performance and the implications on reservoir design are discussed.

  16. Development of 3D pseudo pin-by-pin calculation methodology in ANC

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhang, B.; Mayhue, L.; Huria, H.

    2012-07-01

    Advanced core and fuel assembly designs have been developed to improve operational flexibility and economic performance and to further enhance the safety features of nuclear power plants. The simulation of these new designs, along with strongly heterogeneous fuel loading, has brought new challenges to the reactor physics methodologies currently employed in the industrial codes for core analyses. Control rod insertion during normal operation is one operational feature of the AP1000(R) plant, Westinghouse's next-generation Pressurized Water Reactor (PWR) design. This design improves operational flexibility and efficiency but significantly challenges the conventional reactor physics methods, especially in pin power calculations. The mixed loading of fuel assemblies with significantly different neutron spectra causes a strong interaction between different fuel assembly types that is not fully captured by the current core design codes. To overcome the weaknesses of the conventional methods, Westinghouse has developed a state-of-the-art 3D Pin-by-Pin Calculation Methodology (P3C) and successfully implemented it in the Westinghouse core design code ANC. The new methodology has been qualified and licensed for pin power prediction. The 3D P3C methodology along with its application and validation will be discussed in the paper. (authors)

  17. Methodological Reporting of Randomized Trials in Five Leading Chinese Nursing Journals

    PubMed Central

    Shi, Chunhu; Tian, Jinhui; Ren, Dan; Wei, Hongli; Zhang, Lihuan; Wang, Quan; Yang, Kehu

    2014-01-01

    Background Randomized controlled trials (RCTs) are not always well reported, especially in terms of their methodological descriptions. This study aimed to investigate the adherence of methodological reporting complying with CONSORT and explore associated trial level variables in the Chinese nursing care field. Methods In June 2012, we identified RCTs published in five leading Chinese nursing journals and included trials with details of randomized methods. The quality of methodological reporting was measured through the methods section of the CONSORT checklist and the overall CONSORT methodological items score was calculated and expressed as a percentage. Meanwhile, we hypothesized that some general and methodological characteristics were associated with reporting quality and conducted a regression with these data to explore the correlation. The descriptive and regression statistics were calculated via SPSS 13.0. Results In total, 680 RCTs were included. The overall CONSORT methodological items score was 6.34±0.97 (Mean ± SD). No RCT reported descriptions and changes in “trial design,” changes in “outcomes” and “implementation,” or descriptions of the similarity of interventions for “blinding.” Poor reporting was found in detailing the “settings of participants” (13.1%), “type of randomization sequence generation” (1.8%), calculation methods of “sample size” (0.4%), explanation of any interim analyses and stopping guidelines for “sample size” (0.3%), “allocation concealment mechanism” (0.3%), additional analyses in “statistical methods” (2.1%), and targeted subjects and methods of “blinding” (5.9%). More than 50% of trials described randomization sequence generation, the eligibility criteria of “participants,” “interventions,” and definitions of the “outcomes” and “statistical methods.” The regression analysis found that publication year and ITT analysis were weakly associated with CONSORT score. Conclusions The completeness of methodological reporting of RCTs in the Chinese nursing care field is poor, especially with regard to the reporting of trial design, changes in outcomes, sample size calculation, allocation concealment, blinding, and statistical methods. PMID:25415382

  18. Radiative lifetimes, branching fractions, and oscillator strengths of some levels in Be I

    NASA Astrophysics Data System (ADS)

    Wang, Xinghao; Quinet, Pascal; Li, Qiu; Yu, Qi; Li, Yongfan; Wang, Qian; Gong, Yimin; Dai, Zhenwen

    2018-06-01

    Radiative lifetimes of five levels in Be I lying in the energy range 64,506.45-71,160.52 cm-1 were measured by the time-resolved laser-induced fluorescence technique. These new data, together with previously measured radiative lifetimes and two reliable calculated lifetimes, were combined with branching fractions obtained from pseudo-relativistic Hartree-Fock calculations to deduce semi-empirical transition probabilities and oscillator strengths for 90 Be I spectral lines involving upper levels ranging from 42,565.35 to 72,251.27 cm-1.

  19. Continuum model for hydrogen pickup in zirconium alloys of LWR fuel cladding

    NASA Astrophysics Data System (ADS)

    Wang, Xing; Zheng, Ming-Jie; Szlufarska, Izabela; Morgan, Dane

    2017-04-01

    A continuum model for calculating the time-dependent hydrogen pickup fractions in various Zirconium alloys under steam and pressured water oxidation has been developed in this study. Using only one fitting parameter, the effective hydrogen gas partial pressure at the oxide surface, a qualitative agreement is obtained between the predicted and previously measured hydrogen pickup fractions. The calculation results therefore demonstrate that H diffusion through the dense oxide layer plays an important role in the hydrogen pickup process. The limitations and possible improvement of the model are also discussed.

  20. Experimental Determination of Carbon Isotope Fractionation in C-O-H-Fluids and the Carbonate-melt - Graphite System at High Temperatures

    NASA Astrophysics Data System (ADS)

    Kueter, N.; Schmidt, M. W.; Lilley, M. D.; Bernasconi, S. M.

    2017-12-01

    The understanding of deep-earth carbon fluxes depends greatly on the investigation of carbon isotope systematics in C-O-H fluids and carbon minerals such as graphite and diamond (C0). The isotope fractionation factors between the different C-phases and species (e.g. in a fluid) thus govern the observed isotope fractionation patterns. C-isotope fractionation factors relevant for high temperatures are mainly derived from theoretical calculations [e.g. 1,2,3] and, with few exceptions, lack experimental determination [e.g. 4]. Hundreds of our own experiments aimed at equilibrating elemental carbon (C0, graphite/diamond) with C-O-H fluids demonstrate that kinetics dominates: no system remains closed for H on the time scales and at the temperatures that would allow graphite to equilibrate. To overcome this problem, we performed two studies to determine the C-isotope fractionation in 1) the CO2-CO-CH4 system and 2) the carbonate-melt - graphite system. Equilibrium C-isotope fractionation factors were obtained for CO2 - CO and CH4 - CO pairs (600 - 1200°C) and graphite - Na2CO3/CaCO3 melt (900 - 1500°C). Combined with the already available fractionation data for the CaCO3-CO2 pair (400-950°C) from Chacko et al. [4], we determined experimentally based C-isotope fractionation factors for the C0 - CH4 and CO2 - C0 pairs via 1) Δ13C(CO2-graphite) = Δ13C(CO2-carbonate) + Δ13C(carbonate-graphite) and 2) Δ13C(graphite-CH4) = Δ13C(CO2-CH4) - Δ13C(CO2-graphite). The calculated fractionation factors relevant for mantle temperatures (1100 - 1500°C) suggest C-isotope partitioning in the CO2 - C0 pair on the order of 4.2 to 2.4‰, about 2‰ less than predicted by theoretically derived factors [3]. In contrast, our calculations suggest fractionation of about 1.4 to 1.1‰ for the C0 - CH4 pair, about 1‰ higher than expected from theory [3]. [1] Richet et al. (1977) Ann. Rev. Earth Planet. Sci.; [2] Polyakov & Kharlashina (1995) GCA; [3] Bottinga (1969) GCA; [4] Chacko et al. (2001) Rev Mineral Geochem
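
    As a worked illustration of the two additive relations quoted above (using placeholder per-mil values, not the authors' measured data), the derived pairs follow directly:

      # Illustrative sketch of combining per-mil fractionation factors as in the abstract.
      # The numeric inputs are placeholders, not measured values from the study.
      def derived_pairs(d_co2_carbonate, d_carbonate_graphite, d_co2_ch4):
          d_co2_graphite = d_co2_carbonate + d_carbonate_graphite      # relation 1
          d_graphite_ch4 = d_co2_ch4 - d_co2_graphite                  # relation 2
          return d_co2_graphite, d_graphite_ch4

      print(derived_pairs(2.0, 1.5, 4.5))   # -> (3.5, 1.0) with placeholder inputs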

  1. Optical band gap in a cholesteric elastomer doped by metallic nanospheres

    NASA Astrophysics Data System (ADS)

    Hernández, Julio C.; Reyes, J. Adrián

    2017-12-01

    We analyzed the optical band gaps for axially propagating electromagnetic waves in a metal-doped cholesteric elastomer. The composite medium is made of metallic (silver) nanospheres randomly dispersed in a cholesteric elastomer liquid crystal whose dielectric properties can be represented by a resonant effective uniaxial tensor. We found that the band-gap properties of the periodic system depend greatly on the volume fraction of nanoparticles in the cholesteric elastomer. In particular, we observed a displacement of the reflection band for quite small volume fractions, whereas for larger values of this fraction a secondary band appears in the higher-frequency region. We also calculated the transmittance and reflectance spectra for our system. These calculations verify the mentioned band structure and provide additional information about the polarization features of the radiation.

  2. Bridges in complex networks

    NASA Astrophysics Data System (ADS)

    Wu, Ang-Kun; Tian, Liang; Liu, Yang-Yu

    2018-01-01

    A bridge in a graph is an edge whose removal disconnects the graph and increases the number of connected components. We calculate the fraction of bridges in a wide range of real-world networks and their randomized counterparts. We find that real networks typically have more bridges than their completely randomized counterparts, but they have a fraction of bridges that is very similar to their degree-preserving randomizations. We define an edge centrality measure, called bridgeness, to quantify the importance of a bridge in damaging a network. We find that certain real networks have a very large average and variance of bridgeness compared to their degree-preserving randomizations and other real networks. Finally, we offer an analytical framework to calculate the bridge fraction and the average and variance of bridgeness for uncorrelated random networks with arbitrary degree distributions.
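
    For orientation only, the bridge fraction described here can be computed for any graph with standard tools; the sketch below uses networkx and a stock example graph rather than the networks analyzed in the paper.

      # Minimal sketch: fraction of bridges in a graph (networkx); the example graph
      # is a stand-in, not one of the paper's real-world networks.
      import networkx as nx

      G = nx.karate_club_graph()
      n_bridges = sum(1 for _ in nx.bridges(G))        # edges whose removal adds a component
      print(n_bridges / G.number_of_edges())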

  3. Evaluation of stratospheric age of air from CF4, C2F6, C3F8, CHF3, HFC-125, HFC-227ea and SF6; implications for the calculations of halocarbon lifetimes, fractional release factors and ozone depletion potentials

    NASA Astrophysics Data System (ADS)

    Leedham Elvidge, Emma; Bönisch, Harald; Brenninkmeijer, Carl A. M.; Engel, Andreas; Fraser, Paul J.; Gallacher, Eileen; Langenfelds, Ray; Mühle, Jens; Oram, David E.; Ray, Eric A.; Ridley, Anna R.; Röckmann, Thomas; Sturges, William T.; Weiss, Ray F.; Laube, Johannes C.

    2018-03-01

    In a changing climate, potential stratospheric circulation changes require long-term monitoring. Stratospheric trace gas measurements are often used as a proxy for stratospheric circulation changes via the mean age of air values derived from them. In this study, we investigated five potential age of air tracers - the perfluorocarbons CF4, C2F6 and C3F8 and the hydrofluorocarbons CHF3 (HFC-23) and HFC-125 - and compare them to the traditional tracer SF6 and a (relatively) shorter-lived species, HFC-227ea. A detailed uncertainty analysis was performed on mean ages derived from these new tracers to allow us to confidently compare their efficacy as age tracers to the existing tracer, SF6. Our results showed that uncertainties associated with the mean age derived from these new age tracers are similar to those derived from SF6, suggesting that these alternative compounds are suitable in this respect for use as age tracers. Independent verification of the suitability of these age tracers is provided by a comparison between samples analysed at the University of East Anglia and the Scripps Institution of Oceanography. All five tracers give younger mean ages than SF6, a discrepancy that increases with increasing mean age. Our findings qualitatively support recent work that suggests that the stratospheric lifetime of SF6 is significantly less than the previous estimate of 3200 years. The impact of these younger mean ages on three policy-relevant parameters - stratospheric lifetimes, fractional release factors (FRFs) and ozone depletion potentials - is investigated in combination with a recently improved methodology to calculate FRFs. Updates to previous estimations for these parameters are provided.

  4. Multidimensional Approach for Tsunami Vulnerability Assessment: Framing the Territorial Impacts in Two Municipalities in Portugal.

    PubMed

    Tavares, Alexandre Oliveira; Barros, José Leandro; Santos, Angela

    2017-04-01

    This study presents a new multidimensional methodology for tsunami vulnerability assessment that combines the morphological, structural, social, and tax component of vulnerability. This new approach can be distinguished from previous methodologies that focused primarily on the evaluation of potentially affected buildings and did not use tsunami numerical modeling. The methodology was applied to the Figueira da Foz and Vila do Bispo municipalities in Portugal. For each area, the potential tsunami-inundated areas were calculated considering the 1755 Lisbon tsunami, which is the greatest disaster caused by natural hazards that ever occurred in Portugal. Furthermore, the four components of the vulnerability were calculated to obtain a composite vulnerability index. This methodology enables us to differentiate the two areas in their vulnerability, highlighting the characteristics of the territory components. This methodology can be a starting point for the creation of a local assessment framework at the municipal scale related to tsunami risk. In addition, the methodology is an important support for the different local stakeholders. © 2016 Society for Risk Analysis.
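
    As a hedged sketch of how a composite index over the four named components might be formed (the study's actual normalization and weighting scheme is not reproduced here):

      # Hypothetical weighted composite of normalized component scores in [0, 1];
      # the weights and values are placeholders, not the study's scheme.
      def composite_index(morphological, structural, social, tax,
                          weights=(0.25, 0.25, 0.25, 0.25)):
          return sum(w * c for w, c in zip(weights, (morphological, structural, social, tax)))

      print(composite_index(0.6, 0.4, 0.7, 0.3))   # -> 0.5 with placeholder inputs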

  5. Some interesting examples of binormal degeneracy and analysis using a contaminated binormal ROC model

    NASA Astrophysics Data System (ADS)

    Berbaum, Kevin S.; Dorfman, Donald D.

    2001-06-01

    Receiver operating characteristic (ROC) data with false positive fractions of zero are often difficult to fit with standard ROC methodology, and are sometimes discarded. Some extreme examples of such data were analyzed. A new ROC model is proposed that assumes that for a proportion of abnormalities, no signal information is captured and that those abnormalities have the same distribution as noise along the latent decision axis. Rating reports of fracture for single view ankle radiographs were also analyzed with the binormal ROC model and two proper ROC models. The conventional models gave ROC area close to one, implying a true positive fraction close to one. The data contained no such fractions. When all false positive fractions were zero, conventional ROC areas gave little or no hint of unmistakable differences in true positive fractions. In contrast, the new model can fit ROC data in which some or all of the ROC points have false positive fractions of zero and true positive fractions less than one without concluding perfect performance. These data challenge the validity and robustness of conventional ROC models, but the contaminated binormal model accounts for these data. This research has been published for a different audience.

  6. Adaptative synchronization in multi-output fractional-order complex dynamical networks and secure communications

    NASA Astrophysics Data System (ADS)

    Mata-Machuca, Juan L.; Aguilar-López, Ricardo

    2018-01-01

    This work deals with the adaptative synchronization of complex dynamical networks with fractional-order nodes and its application in secure communications employing chaotic parameter modulation. The complex network is composed of multiple fractional-order systems with mismatch parameters and the coupling functions are given to realize the network synchronization. We introduce a fractional algebraic synchronizability condition (FASC) and a fractional algebraic identifiability condition (FAIC) which are used to know if the synchronization and parameters estimation problems can be solved. To overcome these problems, an adaptative synchronization methodology is designed; the strategy consists in proposing multiple receiver systems which tend to follow asymptotically the uncertain transmitters systems. The coupling functions and parameters of the receiver systems are adjusted continually according to a convenient sigmoid-like adaptative controller (SLAC), until the measurable output errors converge to zero, hence, synchronization between transmitter and receivers is achieved and message signals are recovered. Indeed, the stability analysis of the synchronization error is based on the fractional Lyapunov direct method. Finally, numerical results corroborate the satisfactory performance of the proposed scheme by means of the synchronization of a complex network consisting of several fractional-order unified chaotic systems.

  7. THEORETICAL AND EXPERIMENTAL ASPECTS OF ISOTOPIC FRACTIONATION.

    USGS Publications Warehouse

    O'Neil, James R.

    1986-01-01

    Essential to the interpretation of natural variations of light stable isotope ratios is knowledge of the magnitude and temperature dependence of isotopic fractionation factors between the common minerals and fluids. These fractionation factors are obtained in three ways: (1) semi-empirical calculations using spectroscopic data and the methods of statistical mechanics; (2) laboratory calibration studies; (3) measurements of natural samples whose formation conditions are well known or highly constrained. In this chapter methods (1) and (2) are evaluated, and a review is given of the present state of knowledge of the theory of isotopic fractionation and the factors that influence the isotopic properties of minerals.

  8. Enhanced thermoelectric response in the fractional quantum Hall effect

    NASA Astrophysics Data System (ADS)

    Roura-Bas, Pablo; Arrachea, Liliana; Fradkin, Eduardo

    2018-02-01

    We study the linear thermoelectric response of a quantum dot embedded in a constriction of a quantum Hall bar with fractional filling factors ν =1 /m within Laughlin series. We calculate the figure of merit Z T for the maximum efficiency at a fixed temperature difference. We find a significant enhancement of this quantity in the fractional filling in relation to the integer-filling case, which is a direct consequence of the fractionalization of the electron in the fractional quantum Hall state. We present simple theoretical expressions for the Onsager coefficients at low temperatures, which explicitly show that Z T and the Seebeck coefficient increase with m .

  9. PCB congener analysis with Hall electrolytic conductivity detection

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Edstrom, R.D.

    1989-01-01

    This work reports the development of an analytical methodology for the analysis of PCB congeners based on integrating relative retention data provided by other researchers. The retention data were transposed into a multiple retention marker system which provided good precision in the calculation of relative retention indices for PCB congener analysis. Analytical run times for the developed methodology were approximately one hour using a commercially available GC capillary column. A Tracor Model 700A Hall Electrolytic Conductivity Detector (HECD) was employed in the GC detection of Aroclor standards and environmental samples. Responses by the HECD provided good sensitivity and were reasonably predictable. Ten response factors were calculated based on the molar chlorine content of each homolog group. Homolog distributions were determined for Aroclors 1016, 1221, 1232, 1242, 1248, 1254, 1260, 1262 along with binary and ternary mixtures of the same. These distributions were compared with distributions reported by other researchers using electron capture detection as well as chemical ionization mass spectrometric methodologies. Homolog distributions acquired by the HECD methodology showed good correlation with the previously mentioned methodologies. The developed analytical methodology was used in the analysis of bluefish (Pomatomas saltatrix) and weakfish (Cynoscion regalis) collected from the York River, lower James River and lower Chesapeake Bay in Virginia. Total PCB concentrations were calculated and homolog distributions were constructed from the acquired data. Increases in total PCB concentrations were found in the analyzed fish samples during the fall of 1985 collected from the lower James River and lower Chesapeake Bay.

  10. Generalized hydrodynamic correlations and fractional memory functions

    NASA Astrophysics Data System (ADS)

    Rodríguez, Rosalio F.; Fujioka, Jorge

    2015-12-01

    A fractional generalized hydrodynamic (GH) model of the longitudinal velocity fluctuations correlation, and its associated memory function, for a complex fluid is analyzed. The adiabatic elimination of fast variables introduces memory effects in the transport equations, and the dynamic of the fluctuations is described by a generalized Langevin equation with long-range noise correlations. These features motivate the introduction of Caputo time fractional derivatives and allows us to calculate analytic expressions for the fractional longitudinal velocity correlation function and its associated memory function. Our analysis eliminates a spurious constant term in the non-fractional memory function found in the non-fractional description. It also produces a significantly slower power-law decay of the memory function in the GH regime that reduces to the well-known exponential decay in the non-fractional Navier-Stokes limit.

  11. Standard model predictions for B→Kℓ(+)ℓ- with form factors from lattice QCD.

    PubMed

    Bouchard, Chris; Lepage, G Peter; Monahan, Christopher; Na, Heechang; Shigemitsu, Junko

    2013-10-18

    We calculate, for the first time using unquenched lattice QCD form factors, the standard model differential branching fractions dB/dq^2(B→Kℓ^+ℓ^-) for ℓ = e, μ, τ and compare with experimental measurements by Belle, BABAR, CDF, and LHCb. We report on B(B→Kℓ^+ℓ^-) in the q^2 bins used by experiment and predict B(B→Kτ^+τ^-) = (1.41±0.15)×10^-7. We also calculate the ratio of branching fractions R_e^μ = 1.00029(69) and predict R_ℓ^τ = 1.176(40), for ℓ = e, μ. Finally, we calculate the "flat term" in the angular distribution of the differential decay rate, F_H^(e,μ,τ), in experimentally motivated q^2 bins.

  12. Evaluation and mitigation of the interplay effects of intensity modulated proton therapy for lung cancer in a clinical setting.

    PubMed

    Kardar, Laleh; Li, Yupeng; Li, Xiaoqiang; Li, Heng; Cao, Wenhua; Chang, Joe Y; Liao, Li; Zhu, Ronald X; Sahoo, Narayan; Gillin, Michael; Liao, Zhongxing; Komaki, Ritsuko; Cox, James D; Lim, Gino; Zhang, Xiaodong

    2014-01-01

    The primary aim of this study was to evaluate the impact of the interplay effects of intensity modulated proton therapy (IMPT) plans for lung cancer in the clinical setting. The secondary aim was to explore the technique of isolayered rescanning to mitigate these interplay effects. A single-fraction 4-dimensional (4D) dynamic dose without considering rescanning (1FX dynamic dose) was used as a metric to determine the magnitude of dosimetric degradation caused by 4D interplay effects. The 1FX dynamic dose was calculated by simulating the machine delivery processes of proton spot scanning on a moving patient, described by 4D computed tomography during IMPT delivery. The dose contributed from an individual spot was fully calculated on the respiratory phase that corresponded to the life span of that spot, and the final dose was accumulated to a reference computed tomography phase by use of deformable image registration. The 1FX dynamic dose was compared with the 4D composite dose. Seven patients with various tumor volumes and motions were selected for study. The clinical target volume (CTV) prescription coverage for the 7 patients was 95.04%, 95.38%, 95.39%, 95.24%, 95.65%, 95.90%, and 95.53% when calculated with the 4D composite dose and 89.30%, 94.70%, 85.47%, 94.09%, 79.69%, 91.20%, and 94.19% when calculated with the 1FX dynamic dose. For these 7 patients, the CTV coverage calculated by use of a single-fraction dynamic dose was 95.52%, 95.32%, 96.36%, 95.28%, 94.32%, 95.53%, and 95.78%, with a maximum monitor unit limit value of 0.005. In other words, by increasing the number of delivered spots in each fraction, the degradation of CTV coverage improved up to 14.6%. A single-fraction 4D dynamic dose without rescanning was validated as a surrogate to evaluate the interplay effects of IMPT for lung cancer in the clinical setting. The interplay effects potentially can be mitigated by increasing the amount of isolayered rescanning in each fraction delivery.

  13. Evaluation of structure from motion for soil microtopography measurement

    USDA-ARS?s Scientific Manuscript database

    Recent developments in low cost structure from motion (SFM) technologies offer new opportunities for geoscientists to acquire high resolution soil microtopography data at a fraction of the cost of conventional techniques. However, these new methodologies often lack easily accessible error metrics an...

  14. A partially coupled, fraction-by-fraction modelling approach to the subsurface migration of gasoline spills

    NASA Astrophysics Data System (ADS)

    Fagerlund, F.; Niemi, A.

    2007-01-01

    The subsurface spreading behaviour of gasoline, as well as several other common soil- and groundwater pollutants (e.g. diesel, creosote), is complicated by the fact that it is a mixture of hundreds of different constituents, behaving differently with respect to e.g. dissolution, volatilisation, adsorption and biodegradation. Especially for scenarios where the non-aqueous phase liquid (NAPL) phase is highly mobile, such as for sudden spills in connection with accidents, it is necessary to simultaneously analyse the migration of the NAPL and its individual components in order to assess risks and environmental impacts. Although a few fully coupled, multi-phase, multi-constituent models exist, such models are highly complex and may be time consuming to use. A new, somewhat simplified methodology for modelling the subsurface migration of gasoline while taking its multi-constituent nature into account is therefore introduced here. Constituents with similar properties are grouped together into eight fractions. The migration of each fraction in the aqueous and gaseous phases as well as adsorption is modelled separately using a single-constituent multi-phase flow model, while the movement of the free-phase gasoline is essentially the same for all fractions. The modelling is done stepwise to allow updating of the free-phase gasoline composition at certain time intervals. The output is the concentration of the eight different fractions in the aqueous, gaseous, free gasoline and solid phases with time. The approach is evaluated by comparing it to a fully coupled multi-phase, multi-constituent numerical simulator in the modelling of a typical accident-type spill scenario, based on a tanker accident in northern Sweden. Here the PCFF method produces results similar to those of the more sophisticated, fully coupled model. The benefit of the method is that it is easy to use and can be applied to any single-constituent multi-phase numerical simulator, which in turn may have different strengths in incorporating various processes. The results demonstrate that the different fractions have significantly different migration behaviours and although the methodology involves some simplifications, it is a considerable improvement compared to modelling the gasoline constituents completely individually or as one single mixture.

  15. The co-evolution of total density profiles and central dark matter fractions in simulated early-type galaxies

    NASA Astrophysics Data System (ADS)

    Remus, Rhea-Silvia; Dolag, Klaus; Naab, Thorsten; Burkert, Andreas; Hirschmann, Michaela; Hoffmann, Tadziu L.; Johansson, Peter H.

    2017-01-01

    We present evidence from cosmological hydrodynamical simulations for a co-evolution of the slope of the total (dark and stellar) mass density profile, γtot, and the dark matter fraction within the half-mass radius, fDM, in early-type galaxies. The relation can be described as γtot = A fDM + B for all systems at all redshifts. The trend is set by the decreasing importance of gas dissipation towards lower redshifts and for more massive systems. Early-type galaxies are smaller, more concentrated, have lower fDM and steeper γtot at high redshifts and at lower masses for a given redshift; fDM and γtot are good indicators for growth by `dry' merging. The values for A and B change distinctively for different feedback models, and this relation can be used as a test for such models. A similar correlation exists between γtot and the stellar mass surface density Σ*. A model with weak stellar feedback and feedback from black holes is in best agreement with observations. All simulations, independent of the assumed feedback model, predict steeper γtot and lower fDM at higher redshifts. While the latter is in agreement with the observed trends, the former is in conflict with lensing observations, which indicate constant or decreasing γtot. This discrepancy is shown to be artificial: the observed trends can be reproduced from the simulations using observational methodology to calculate the total density slopes.

  16. New methodologies for calculation of flight parameters on reduced scale wings models in wind tunnel

    NASA Astrophysics Data System (ADS)

    Ben Mosbah, Abdallah

    In order to improve the quality of wind tunnel tests, and of the tools used to perform aerodynamic tests on aircraft wings in the wind tunnel, new methodologies were developed and tested on rigid and flexible wing models. The flexible wing concept consists of replacing a portion (lower and/or upper) of the skin with a flexible portion whose shape can be changed using an actuation system installed inside the wing. The main purpose of this concept is to improve the aerodynamic performance of the aircraft, and especially to reduce its fuel consumption. Numerical and experimental analyses were conducted to develop and test the methodologies proposed in this thesis. To control the flow inside the test section of the Price-Paidoussis wind tunnel of LARCASE, numerical and experimental analyses were performed. Computational fluid dynamics calculations were made in order to obtain a database used to develop a new hybrid methodology for wind tunnel calibration. This approach allows the flow in the test section of the Price-Paidoussis wind tunnel to be controlled. For the fast determination of aerodynamic parameters, new hybrid methodologies were proposed. These methodologies were used to control flight parameters through the calculation of the drag, lift and pitching moment coefficients and of the pressure distribution around an airfoil. These aerodynamic coefficients were calculated from known airflow conditions such as the angle of attack and the Mach and Reynolds numbers. In order to modify the shape of the wing skin, electric actuators were installed inside the wing to obtain the desired shape. These deformations provide optimal profiles for different flight conditions in order to reduce fuel consumption. A controller based on neural networks was implemented to obtain the desired actuator displacements. A metaheuristic algorithm was used in hybridization with neural network and support vector machine approaches; their combination was optimized, and very good results were obtained in reduced computing time. The obtained results were validated against numerical data from the XFoil code and from the Fluent code. The results obtained using the methodologies presented in this thesis were also validated with experimental data obtained in the subsonic Price-Paidoussis blow-down wind tunnel.

  17. Odor-active constituents of Cedrus atlantica wood essential oil.

    PubMed

    Uehara, Ayaka; Tommis, Basma; Belhassen, Emilie; Satrani, Badr; Ghanmi, Mohamed; Baldovini, Nicolas

    2017-12-01

    The main odorant constituents of Cedrus atlantica essential oil were characterized by GC-Olfactometry (GC-O), using the Aroma Extract Dilution Analysis (AEDA) methodology with 12 panelists. The two most potent odor-active constituents were vestitenone and 4-acetyl-1-methylcyclohexene. The identification of the odorants was realized by a detailed fractionation of the essential oil by liquid-liquid basic extraction, distillation and column chromatography, followed by the GC-MS and GC-O analyses of some fractions, and the synthesis of some non-commercial reference constituents. Copyright © 2017 Elsevier Ltd. All rights reserved.

  18. Re-irradiation stereotactic body radiotherapy for spinal metastases: a multi-institutional outcome analysis.

    PubMed

    Hashmi, Ahmed; Guckenberger, Matthias; Kersh, Ron; Gerszten, Peter C; Mantel, Frederick; Grills, Inga S; Flickinger, John C; Shin, John H; Fahim, Daniel K; Winey, Brian; Oh, Kevin; John Cho, B C; Létourneau, Daniel; Sheehan, Jason; Sahgal, Arjun

    2016-11-01

    OBJECTIVE This study is a multi-institutional pooled analysis specific to imaging-based local control of spinal metastases in patients previously treated with conventional external beam radiation therapy (cEBRT) and then treated with re-irradiation stereotactic body radiotherapy (SBRT) to the spine as salvage therapy, the largest such study to date. METHODS The authors reviewed cases involving 215 patients with 247 spinal target volumes treated at 7 institutions. Overall survival was calculated on a patient basis, while local control was calculated based on the spinal target volume treated, both using the Kaplan-Meier method. Local control was defined as imaging-based progression within the SBRT target volume. Equivalent dose in 2-Gy fractions (EQD2) was calculated for the cEBRT and SBRT course using an α/β of 10 for tumor and 2 for both spinal cord and cauda equina. RESULTS The median total dose/number of fractions of the initial cEBRT was 30 Gy/10. The median SBRT total dose and number of fractions were 18 Gy and 1, respectively. Sixty percent of spinal target volumes were treated with single-fraction SBRT (median, 16.6 Gy and EQD2/10 = 36.8 Gy), and 40% with multiple-fraction SBRT (median 24 Gy in 3 fractions, EQD2/10 = 36 Gy). The median time interval from cEBRT to re-irradiation SBRT was 13.5 months, and the median duration of patient follow-up was 8.1 months. Kaplan-Meier estimates of 6- and 12-month overall survival rates were 64% and 48%, respectively; 13% of patients suffered a local failure, and the 6- and 12-month local control rates were 93% and 83%, respectively. Multivariate analysis identified Karnofsky Performance Status (KPS) < 70 as a significant prognostic factor for worse overall survival, and single-fraction SBRT as a significant predictive factor for better local control. There were no cases of radiation myelopathy, and the vertebral compression fracture rate was 4.5%. CONCLUSIONS Re-irradiation spine SBRT is effective in yielding imaging-based local control with a clinically acceptable safety profile. A randomized trial would be required to determine the optimal fractionation.
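
    For reference, the EQD2 values quoted above follow the standard linear-quadratic conversion EQD2 = D·(d + α/β)/(2 + α/β), where D is the total dose and d the dose per fraction; the minimal sketch below reproduces the two SBRT medians quoted in the abstract.

      # Standard linear-quadratic EQD2 conversion (alpha/beta = 10 for tumor, per the abstract).
      def eqd2(total_dose_gy, n_fractions, alpha_beta=10.0):
          d = total_dose_gy / n_fractions            # dose per fraction
          return total_dose_gy * (d + alpha_beta) / (2.0 + alpha_beta)

      print(round(eqd2(16.6, 1), 1))   # single-fraction SBRT median -> 36.8 Gy
      print(round(eqd2(24.0, 3), 1))   # 24 Gy in 3 fractions        -> 36.0 Gy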

  19. Dixon quantitative chemical shift MRI for bone marrow evaluation in the lumbar spine: a reproducibility study in healthy volunteers.

    PubMed

    Maas, M; Akkerman, E M; Venema, H W; Stoker, J; Den Heeten, G J

    2001-01-01

    The purpose of this work was to explore the reproducibility of fat-fraction measurements using Dixon quantitative chemical shift imaging (QCSI) in the lumbar spine (L3, L4, and L5) of healthy volunteers. Sixteen healthy volunteers were examined at 1.5 T two times to obtain a repeated measurement in the same slice and a third time in three parallel slices. Single slice, two point Dixon SE (TR/TE 2,500/22.3) sequences were used, from which fat-fraction images were calculated. The fat-fraction results are presented as averages over regions of interest, which were derived from the contours of the vertebrae. Reproducibility measures related to repeated measurements on different days, slice position, and contour drawing were calculated. The mean fat fraction was 0.37 (SD 0.08). The SD due to repeated measurement was small (sigmaR = 0.013-0.032), almost all of which can be explained by slice-(re)-positioning errors. When used to evaluate the same person longitudinally in time, Dixon QCSI fat-fraction measurement has an excellent reproducibility. It is a powerful noninvasive tool in the evaluation of bone marrow composition.
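
    As a hedged illustration of the underlying arithmetic (the study's exact reconstruction and region-of-interest handling are not reproduced here), a two-point Dixon fat fraction is commonly formed from in-phase and opposed-phase images as follows:

      import numpy as np

      # Sketch of two-point Dixon fat-fraction arithmetic; ip and op are in-phase and
      # opposed-phase signal images, and the values below are placeholders.
      def dixon_fat_fraction(ip, op):
          fat = (ip - op) / 2.0
          water = (ip + op) / 2.0
          return fat / (fat + water + 1e-12)     # epsilon guards against division by zero

      ip = np.array([200.0, 180.0, 220.0])
      op = np.array([60.0, 70.0, 90.0])
      print(dixon_fat_fraction(ip, op))          # per-voxel fat fractions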

  20. On energetic prerequisites of attracting electrons

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sundholm, Dage

    The internal reorganization energy and the zero-point vibrational energy (ZPE) of fractionally charged molecules embedded in molecular materials are discussed. The theory for isolated open quantum systems is taken as the starting point. It is shown that for isolated molecules the internal reorganization-energy function and its slope, i.e., the chemical potential of an open molecular system, are monotonically decreasing functions with respect to increasing amount of negative excess charge (q) in the range of q = [0, 1]. Calculations of the ZPE for fractionally charged molecules show that the ZPE may have a minimum for fractional occupation. The calculations show that the internal reorganization energy and changes in the ZPE are of the same order of magnitude with different behavior as a function of the excess charge. The sum of the contributions might favor molecules with fractional occupation of the molecular units and partial delocalization of the excess electrons in solid-state materials, also when considering Coulomb repulsion between the excess electrons. The fractional electrons are then coherently distributed on many molecules of the solid-state material, forming a condensate of attracting electrons, which is crucial for the superconducting state.

  1. Determining the charged fractions of 218Po and 214Pb using an environmental gamma-ray and Rn detector.

    PubMed

    Maiello, M L; Harley, N H

    1989-07-01

    The rate at which 218Po and 214Pb atoms were collected electrostatically inside an environmental gamma-ray and 222Rn detector (EGARD) was measured. These measurements were used to directly infer the charged fraction of 218Po and to calculate the charged fraction of 214Pb. Thirty-two percent of the 218Po was collected electrostatically using approximately -1500 V on a 2.54 cm diameter Mylar-covered disc inside a vented Al EGARD of 1 L volume. About 91% of the 214Pb is collected electrostatically under the same conditions. The measurements were performed in a calibrated 222Rn test chamber at the Environmental Measurements Laboratory (EML) using the Thomas alpha-counting method with 222Rn concentrations averaging about 4300 Bq m-3. The atomic collection rates were used with other measured quantities to calculate the thermoluminescent dosimeter (TLD) signal acquired from EGARD for exposure to 1 Bq m-3 of 222Rn. The calculations account for 222Rn progeny collection using a Teflon electret and for alpha and beta detection using TLDs inside EGARD. The measured quantities include the energies of 218Po and 214Po alpha-particles degraded by passage through the 25 micron thick electret. The TLD responses to these alpha-particles, and to beta-particles with an average energy approaching that obtained from the combined spectra of 214Pb and 214Bi, were also measured. The calculated calibration factor is within 30% of the value obtained by exposing EGARD to a known concentration of 222Rn. This result supports our charged fraction estimates for 218Po and 214Pb.

  2. Mittag-Leffler synchronization of delayed fractional-order bidirectional associative memory neural networks with discontinuous activations: state feedback control and impulsive control schemes.

    PubMed

    Ding, Xiaoshuai; Cao, Jinde; Zhao, Xuan; Alsaadi, Fuad E

    2017-08-01

    This paper is concerned with the drive-response synchronization for a class of fractional-order bidirectional associative memory neural networks with time delays, as well as in the presence of discontinuous activation functions. The global existence of solution under the framework of Filippov for such networks is firstly obtained based on the fixed-point theorem for condensing map. Then the state feedback and impulsive controllers are, respectively, designed to ensure the Mittag-Leffler synchronization of these neural networks and two new synchronization criteria are obtained, which are expressed in terms of a fractional comparison principle and Razumikhin techniques. Numerical simulations are presented to validate the proposed methodologies.

  3. Teaching Mathematics with Technology: Numerical Relationships.

    ERIC Educational Resources Information Center

    Bright, George W.

    1989-01-01

    Developing numerical relationships with calculators is emphasized. Calculators furnish some needed support for students as they investigate the value of fractions as the numerators or denominators change. An example with Logo programing for computers is also included. (MNS)

  4. Simplified Numerical Description of SPT Operations

    NASA Technical Reports Server (NTRS)

    Manzella, David H.

    1995-01-01

    A simplified numerical model of the plasma discharge within the SPT-100 stationary plasma thruster was developed to aid in understanding thruster operation. A one dimensional description was used. Non-axial velocities were neglected except for the azimuthal electron velocity. A nominal operating condition of 4.5 mg/s of xenon anode flow was considered with 4.5 Amperes of discharge current, and a peak radial magnetic field strength of 130 Gauss. For these conditions, the calculated results indicated ionization fractions of 0.99 near the thruster exit with a potential drop across the discharge of approximately 250 Volts. Peak calculated electron temperatures were found to be sensitive to the choice of total ionization cross section for ionization of atomic xenon by electron bombardment and ranged from 51 eV to 60 eV. The calculated ionization fraction, potential drop, and electron number density agree favorably with previous experiments. Calculated electron temperatures are higher than previously measured.

  5. Analytical Solutions, Moments, and Their Asymptotic Behaviors for the Time-Space Fractional Cable Equation

    NASA Astrophysics Data System (ADS)

    Li, Can; Deng, Wei-Hua

    2014-07-01

    Following the fractional cable equation established in the letter [B.I. Henry, T.A.M. Langlands, and S.L. Wearne, Phys. Rev. Lett. 100 (2008) 128103], we present the time-space fractional cable equation which describes the anomalous transport of electrodiffusion in nerve cells. The derivation is based on the generalized fractional Ohm's law; and the temporal memory effects and spatial-nonlocality are involved in the time-space fractional model. With the help of integral transform method we derive the analytical solutions expressed by the Green's function; the corresponding fractional moments are calculated; and their asymptotic behaviors are discussed. In addition, the explicit solutions of the considered model with two different external current injections are also presented.

  6. Analysis of White Matter Damage in Patients with Multiple Sclerosis via a Novel In Vivo MR Method for Measuring Myelin, Axons, and G-Ratio.

    PubMed

    Hagiwara, A; Hori, M; Yokoyama, K; Nakazawa, M; Ueda, R; Horita, M; Andica, C; Abe, O; Aoki, S

    2017-10-01

    Myelin and axon volume fractions can now be estimated via MR imaging in vivo, as can the g-ratio, which equals the ratio of the inner to the outer diameter of a nerve fiber. The purpose of this study was to evaluate WM damage in patients with MS via this novel MR imaging technique. Twenty patients with relapsing-remitting MS with a combined total of 149 chronic plaques were analyzed. Myelin volume fraction was calculated based on simultaneous tissue relaxometry. Intracellular and CSF compartment volume fractions were quantified via neurite orientation dispersion and density imaging. Axon volume fraction and g-ratio were calculated by combining these measurements. Myelin and axon volume fractions and g-ratio were measured in plaques, periplaque WM, and normal-appearing WM. All metrics differed significantly across the 3 groups ( P < .001, except P = .027 for g-ratio between periplaque WM and normal-appearing WM). Those in plaques differed most from those in normal-appearing WM. The percentage changes in plaque and periplaque WM metrics relative to normal-appearing WM were significantly larger in absolute value for myelin volume fraction than for axon volume fraction and g-ratio ( P < .001, except P = .033 in periplaque WM relative to normal-appearing WM for comparison between myelin and axon volume fraction). In this in vivo MR imaging study, the myelin of WM was more damaged than axons in plaques and periplaque WM of patients with MS. Myelin and axon volume fractions and g-ratio may potentially be useful for evaluating WM damage in patients with MS. © 2017 by American Journal of Neuroradiology.
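
    For orientation, the aggregate MRI g-ratio is commonly computed from the myelin volume fraction (MVF) and axon volume fraction (AVF) as g = sqrt(AVF/(AVF + MVF)); whether the authors use exactly this formulation is not stated in the abstract, so the sketch below is only indicative.

      import math

      # Common aggregate g-ratio formula from MRI-derived volume fractions;
      # the MVF/AVF values below are placeholders.
      def g_ratio(mvf, avf):
          return math.sqrt(avf / (avf + mvf))

      print(round(g_ratio(mvf=0.30, avf=0.40), 3))   # ~0.756 for these placeholder fractions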

  7. Dynamic Approximate Entropy Electroanatomic Maps Detect Rotors in a Simulated Atrial Fibrillation Model

    PubMed Central

    Ugarte, Juan P.; Orozco-Duque, Andrés; Tobón, Catalina; Kremen, Vaclav; Novak, Daniel; Saiz, Javier; Oesterlein, Tobias; Schmitt, Clauss; Luik, Armin; Bustamante, John

    2014-01-01

    There is evidence that rotors could be drivers that maintain atrial fibrillation. Complex fractionated atrial electrograms have been located in rotor tip areas. However, the concept of electrogram fractionation, defined using time intervals, is still controversial as a tool for locating target sites for ablation. We hypothesize that the fractionation phenomenon is better described using non-linear dynamic measures, such as approximate entropy, and that this tool could be used for locating the rotor tip. The aim of this work has been to determine the relationship between approximate entropy and fractionated electrograms, and to develop a new tool for rotor mapping based on fractionation levels. Two episodes of chronic atrial fibrillation were simulated in a 3D human atrial model, in which rotors were observed. Dynamic approximate entropy maps were calculated using unipolar electrogram signals generated over the whole surface of the 3D atrial model. In addition, we optimized the approximate entropy calculation using two real multi-center databases of fractionated electrogram signals, labeled in 4 levels of fractionation. We found that the values of approximate entropy and the levels of fractionation are positively correlated. This allows the dynamic approximate entropy maps to localize the tips from stable and meandering rotors. Furthermore, we assessed the optimized approximate entropy using bipolar electrograms generated over a vicinity enclosing a rotor, achieving rotor detection. Our results suggest that high approximate entropy values are able to detect a high level of fractionation and to locate rotor tips in simulated atrial fibrillation episodes. We suggest that dynamic approximate entropy maps could become a tool for atrial fibrillation rotor mapping. PMID:25489858
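
    For readers unfamiliar with the measure, a minimal approximate-entropy implementation (standard Pincus definition; the parameter choices below are generic defaults, not necessarily those optimized in the study) looks like this:

      import numpy as np

      def approximate_entropy(x, m=2, r=None):
          # ApEn(m, r): regularity statistic; lower values indicate more regular signals.
          x = np.asarray(x, dtype=float)
          n = len(x)
          if r is None:
              r = 0.2 * np.std(x)                       # common default tolerance

          def phi(mm):
              emb = np.array([x[i:i + mm] for i in range(n - mm + 1)])
              dist = np.max(np.abs(emb[:, None, :] - emb[None, :, :]), axis=2)
              c = np.mean(dist <= r, axis=1)            # fraction of templates within r
              return np.mean(np.log(c))

          return phi(m) - phi(m + 1)

      t = np.linspace(0, 10, 400)
      print(approximate_entropy(np.sin(2 * np.pi * t)))                        # low: regular
      print(approximate_entropy(np.random.default_rng(0).normal(size=400)))    # higher: irregular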

  8. Modeling electro-magneto-hydrodynamic thermo-fluidic transport of biofluids with new trend of fractional derivative without singular kernel

    NASA Astrophysics Data System (ADS)

    Abdulhameed, M.; Vieru, D.; Roslan, R.

    2017-10-01

    This paper investigates the electro-magneto-hydrodynamic flow of non-Newtonian biofluids, with heat transfer, through a cylindrical microchannel. The fluid is acted on by an arbitrary time-dependent pressure gradient, an external electric field and an external magnetic field. The governing equations are formulated as fractional partial differential equations based on the Caputo-Fabrizio time-fractional derivative without singular kernel. The usefulness of fractional calculus for studying fluid flows and heat and mass transfer phenomena has been demonstrated, and several experimental measurements have led to the conclusion that, for such problems, models described by fractional differential equations are more suitable. The most common time-fractional derivative used in continuum mechanics is the Caputo derivative. However, two disadvantages appear when this derivative is used: first, the kernel of its definition is a singular function and, second, the analytical solutions of the problem are expressed through generalized functions (Mittag-Leffler, Lorenzo-Hartley, Robotnov, etc.) which, generally, are not well suited to numerical calculation. The new Caputo-Fabrizio time-fractional derivative, without singular kernel, is more suitable for solving various theoretical and practical problems that involve fractional differential equations. Using the Caputo-Fabrizio derivative, calculations are simpler and the obtained solutions are expressed by elementary functions. Analytical solutions for the biofluid velocity and thermal transport are obtained by means of the Laplace and finite Hankel transforms. The influence of the fractional parameter, the Eckert number and the Joule heating parameter on the biofluid velocity and thermal transport is analyzed numerically and presented graphically. This can be important in biochip technology, making this analysis technique effective for handling nanovolume bioliquid samples in the microfluidic devices used for biological analysis and medical diagnosis.
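
    For reference (this is the definition commonly attributed to Caputo and Fabrizio; the abstract itself does not reproduce it), the time-fractional derivative without singular kernel for order 0 < α < 1 can be written as:

      % Caputo-Fabrizio time-fractional derivative (M(alpha) is a normalization
      % function with M(0) = M(1) = 1); stated here for reference only.
      \[
        {}^{CF}\!D_t^{\alpha} f(t)
          = \frac{M(\alpha)}{1-\alpha}
            \int_0^{t} f'(\tau)\,
            \exp\!\left[-\frac{\alpha\,(t-\tau)}{1-\alpha}\right]\mathrm{d}\tau .
      \]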

  9. allantools: Allan deviation calculation

    NASA Astrophysics Data System (ADS)

    Wallin, Anders E. E.; Price, Danny C.; Carson, Cantwell G.; Meynadier, Frédéric

    2018-04-01

    allantools calculates Allan deviation and related time & frequency statistics. The library is written in Python and has a GPL v3+ license. It takes as input evenly spaced observations of either fractional frequency or phase in seconds. Deviations are calculated for given tau values in seconds. Several noise generators for creating synthetic datasets are also included.
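
    A minimal usage sketch is shown below; the oadev call follows the library's documented interface, but argument defaults may differ between versions, and the white-noise input is a synthetic stand-in for real frequency data.

      import numpy as np
      import allantools

      # Synthetic fractional-frequency data sampled at 1 Hz (stand-in for measurements).
      y = np.random.default_rng(0).normal(0.0, 1e-12, size=10000)

      # Overlapping Allan deviation at octave-spaced tau values.
      taus, adev, adev_err, n = allantools.oadev(y, rate=1.0, data_type="freq", taus="octave")
      for t, a in zip(taus, adev):
          print(f"tau = {t:8.1f} s   OADEV = {a:.3e}")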

  10. Methodology of strength calculation under alternating stresses using the diagram of limiting amplitudes

    NASA Astrophysics Data System (ADS)

    Konovodov, V. V.; Valentov, A. V.; Kukhar, I. S.; Retyunskiy, O. Yu; Baraksanov, A. S.

    2016-08-01

    The work proposes an algorithm for strength calculation under alternating stresses using the developed methodology for constructing the diagram of limiting stresses. The overall safety factor is defined by the suggested formula. In the great majority of cases, strength calculations of components operating under alternating stresses are carried out as verification calculations. This is primarily because the overall fatigue strength reduction factor (Kσg or Kτg) can only be chosen approximately at the design stage, when the engineer has only an approximate idea of the component's size and shape.
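
    The abstract does not reproduce the suggested formula; for orientation, safety factors under alternating loading are classically built from component factors of the form shown below (this is the textbook expression, not necessarily the one proposed in the paper):

      % Classical safety factor for alternating normal stress (textbook form, shown
      % for orientation only): sigma_{-1} is the endurance limit, sigma_a and sigma_m
      % the stress amplitude and mean stress, K_sigma the fatigue strength reduction
      % factor, and psi_sigma the mean-stress sensitivity coefficient.
      \[
        n_{\sigma} = \frac{\sigma_{-1}}{K_{\sigma}\,\sigma_a + \psi_{\sigma}\,\sigma_m}
      \]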

  11. Methodology for calculating power consumption of planetary mixers

    NASA Astrophysics Data System (ADS)

    Antsiferov, S. I.; Voronov, V. P.; Evtushenko, E. I.; Yakovlev, E. A.

    2018-03-01

    The paper presents the methodology and equations for calculating the power consumption necessary to overcome the resistance of a dry mixture caused by the movement of cylindrical rods in the body of a planetary mixer, as well as the calculation of the power consumed by idling mixers of this type. The equations take into account the size and physico-mechanical properties of mixing material, the size and shape of the mixer's working elements and the kinematics of its movement. The dependence of the power consumption on the angle of rotation in the plane perpendicular to the axis of rotation of the working member is presented.

  12. Equilibrium lithium isotope fractionation in Li-rich minerals

    NASA Astrophysics Data System (ADS)

    Liu, S.; Li, Y.; Liu, J.

    2017-12-01

    Lithium is the lightest alkali metal and exhibits only the +1 valence state in minerals. It is widely distributed on the Earth and usually substitutes for Mg in silicate minerals. Li has two stable isotopes, 6Li and 7Li, with relative abundances of 7.52% and 92.48%, respectively. The large mass difference between 6Li and 7Li can induce significant isotope fractionation in minerals. Li isotopes can therefore provide an important geochemical tracer for mantle processes. However, the fractionation factors for Li in most minerals remain poorly known, which makes the geochemical implications of Li isotope fractionation in minerals difficult to assess. Here, we use vibrational frequencies obtained by first-principles methods based on density-functional theory to calculate the Li isotope fractionation parameters for amblygonite (LiAlPO4F), bikitaite (LiSi2AlO7H2), eucryptite (LiAlSiO4), lithiophilite (LiMnPO4), lithiophosphate (Li3PO4), montebrasite (LiAlPO5H), and spodumene (LiAlSi2O6) in the temperature range of 0-1200 °C. For forsterite (Mg2SiO4) and diopside (CaMgSi2O6), in which Li takes the place of Mg, the equilibrium Li isotope fractionation between them was also studied. Our preliminary calculations show that the coordination number of Li appears to play an important role in controlling Li isotope fractionation in these minerals, and that the concentration of Li in forsterite and diopside appears to have a large effect on their Li isotope fractionation factors.
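
    The abstract does not spell out how fractionation parameters follow from the computed frequencies; in studies of this kind they are usually obtained from the Bigeleisen-Mayer (Urey) reduced partition function ratio, quoted here for orientation (primes denote the heavy isotopologue):

      % Reduced partition function ratio (per site) in the harmonic approximation;
      % the fractionation between phases A and B is then
      % 1000 ln(alpha_{A-B}) = 1000 [ln(beta_A) - ln(beta_B)]. Shown for orientation only.
      \[
        \beta = \prod_i \frac{u_i'}{u_i}\,
                \frac{e^{-u_i'/2}}{1 - e^{-u_i'}}\,
                \frac{1 - e^{-u_i}}{e^{-u_i/2}},
        \qquad u_i = \frac{h c\,\omega_i}{k_B T}
      \]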

  13. Evaluation and Validation of the Messinger Freezing Fraction

    NASA Technical Reports Server (NTRS)

    Anderson, David N.; Tsao, Jen-Ching

    2005-01-01

    One of the most important non-dimensional parameters used in ice-accretion modeling and scaling studies is the freezing fraction defined by the heat-balance analysis of Messinger. For fifty years this parameter has been used to indicate how rapidly freezing takes place when super-cooled water strikes a solid body. The value ranges from 0 (no freezing) to 1 (water freezes immediately on impact), and the magnitude has been shown to play a major role in determining the physical appearance of the accreted ice. Because of its importance to ice shape, this parameter and the physics underlying the expressions used to calculate it have been questioned from time to time. Until now, there has been no strong evidence either validating or casting doubt on the current expressions. This paper presents experimental measurements of the leading-edge thickness of a number of ice shapes for a variety of test conditions with nominal freezing fractions from 0.3 to 1.0. From these thickness measurements, experimental freezing fractions were calculated and compared with values found from the Messinger analysis as applied by Ruff. Within the experimental uncertainty of measuring the leading-edge thickness, agreement of the experimental and analytical freezing fraction was very good. It is also shown that values of analytical freezing fraction were entirely consistent with observed ice shapes at and near rime conditions: At an analytical freezing fraction of unity, experimental ice shapes displayed the classic rime shape, while for conditions producing analytical freezing fractions slightly lower than unity, glaze features started to appear.

  14. The Sunyaev-Zel'dovich Effect in Abell 370

    NASA Technical Reports Server (NTRS)

    Grego, Laura; Carlstrom, John E.; Joy, Marshall K.; Reese, Erik D.; Holder, Gilbert P.; Patel, Sandeep; Holzapfel, William L.; Cooray, Asantha K.

    1999-01-01

    We present interferometric measurements of the Sunyaev-Zel'dovich (SZ) effect towards the galaxy cluster Abell 370. These measurements, which directly probe the pressure of the cluster's gas, show that the gas is strongly aspherical, in agreement with the morphology revealed by X-ray and gravitational lensing observations. We calculate the cluster's gas mass fraction by comparing the gas mass derived from the SZ measurements to the lensing-derived gravitational mass near the critical lensing radius. We also calculate the gas mass fraction from the SZ data by deriving the total mass under the assumption that the gas is in hydrostatic equilibrium (HSE). We test the assumptions in the HSE method by comparing the total cluster mass implied by the two methods. The Hubble constant derived for this cluster, when the known systematic uncertainties are included, has a very wide range of values and therefore does not provide additional constraints on the validity of the assumptions. We examine carefully the possible systematic errors in the gas fraction measurement. The gas fraction is a lower limit to the cluster's baryon fraction, and so we compare the gas mass fraction, calibrated by numerical simulations to approximately the virial radius, to measurements of the global mass fraction of baryonic matter, ΩB/Ωmatter. Our lower limit to the cluster baryon fraction is fB = (0.043 ± 0.014)/h100. From this, we derive an upper limit to the universal matter density, Ωmatter ≤ 0.72/h100, and a likely value of Ωmatter = (0.44 +0.15/-0.12)/h100.

  15. A single field of view method for retrieving tropospheric temperature profiles from cloud-contaminated radiance data

    NASA Technical Reports Server (NTRS)

    Hodges, D. B.

    1976-01-01

    An iterative method is presented to retrieve single field of view (FOV) tropospheric temperature profiles directly from cloud-contaminated radiance data. A well-defined temperature profile may be calculated from the radiative transfer equation (RTE) for a partly cloudy atmosphere when the average fractional cloud amount and cloud-top height for the FOV are known. A cloud model is formulated to calculate the fractional cloud amount from an estimated cloud-top height. The method is then examined through use of simulated radiance data calculated through vertical integration of the RTE for a partly cloudy atmosphere using known values of cloud-top height(s) and fractional cloud amount(s). Temperature profiles are retrieved from the simulated data assuming various errors in the cloud parameters. Temperature profiles are retrieved from NOAA-4 satellite-measured radiance data obtained over an area dominated by an active cold front and with considerable cloud cover and compared with radiosonde data. The effects of using various guessed profiles and the number of iterations are considered.

  16. Calculation of three-dimensional (3-D) internal flow by means of the velocity-vorticity formulation on a staggered grid

    NASA Technical Reports Server (NTRS)

    Stremel, Paul M.

    1995-01-01

    A method has been developed to accurately compute the viscous flow in three-dimensional (3-D) enclosures. This method is the 3-D extension of a two-dimensional (2-D) method developed for the calculation of flow over airfoils. The 2-D method has been tested extensively and has been shown to accurately reproduce experimental results. As in the 2-D method, the 3-D method provides for the non-iterative solution of the incompressible Navier-Stokes equations by means of a fully coupled implicit technique. The solution is calculated on a body-fitted computational mesh incorporating a staggered grid methodology. In the staggered grid method, the three components of vorticity are defined at the centers of the computational cell sides, while the velocity components are defined as normal vectors at the centers of the computational cell faces. The staggered grid orientation provides for the accurate definition of the vorticity components at the vorticity locations, the divergence of vorticity at the mesh cell nodes, and the conservation of mass at the mesh cell centers. The solution is obtained by utilizing a fractional step solution technique in the three coordinate directions. The boundary conditions for the vorticity and velocity are calculated implicitly as part of the solution. The method provides for the non-iterative solution of the flow field and satisfies the conservation of mass and divergence of vorticity to machine zero at each time step. To test the method, simple driven cavity flows have been computed. The driven cavity flow is defined as the flow in an enclosure driven by a moving plate at the top of the enclosure. To demonstrate the ability of the method to predict the flow in arbitrary cavities, results will be shown for both cubic and curved cavities.

  17. Repeating Decimals: An Alternative Teaching Approach

    ERIC Educational Resources Information Center

    Appova, Aina K.

    2017-01-01

    To help middle school students make better sense of decimals and fraction, the author and an eighth-grade math teacher worked on a 90-minute lesson that focused on representing repeating decimals as fractions. They embedded experimentations and explorations using technology and calculators to help promote students' intuitive and conceptual…

  18. Theoretical estimation of equilibrium sulfur isotope fractionations among aqueous sulfite species: Implications for isotope models of microbial sulfate reduction

    NASA Astrophysics Data System (ADS)

    Eldridge, D. L.; Farquhar, J.; Guo, W.

    2015-12-01

    Sulfite (sensu lato), an intermediate in a variety of sulfur redox processes, plays a particularly important role in microbial sulfate reduction. It exists intracellularly as multiple species between sets of enzymatic reactions that transform sulfate to sulfide, with the exact speciation depending on pH, T, and ionic strength. However, the complex speciation of sulfite is ignored in current isotope partitioning models of microbial sulfate reduction and simplified solely to the pyramidal SO32- (sulfite sensu stricto), due to a lack of appropriate constraints. We theoretically estimated the equilibrium sulfur isotope fractionations (33S/32S, 34S/32S, 36S/32S) among all documented sulfite species in aqueous solution, including sulfite (SO32-), bisulfite isomers and dimers ((HS)O3-, (HO)SO2-, S2O52-), and SO2(aq), through first-principles quantum mechanical calculations. The calculations were performed at the B3LYP/6-31+G(d,p) level using cluster models with 30-40 water molecules surrounding the solute. Our calculated equilibrium fractionation factors compare well with the available experimental constraints and suggest that the minor and often-ignored tetrahedral (HS)O3- isomer of bisulfite strongly influences isotope partitioning behavior in the sulfite system under most environmentally relevant conditions, particularly the fractionation magnitudes and their unusual temperature dependence. For example, we predict that sulfur isotope fractionation between sulfite and bulk bisulfite in solution should have an apparent inverse temperature dependence due to the influence of (HS)O3- and its increased stability at higher temperatures. Our findings highlight the need to appropriately account for the speciation/isomerization of sulfur species in sulfur isotope studies. We will also present similar calculation results for other aqueous sulfur compounds (e.g., H2S/HS-, SO42-, S2O32-, S3O62-, and the poorly documented SO22- species), and discuss the implications of our results for microbial sulfate reduction models and other sulfur redox processes in nature.

  19. Development and application of a hybrid transport methodology for active interrogation systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Royston, K.; Walters, W.; Haghighat, A.

    A hybrid Monte Carlo and deterministic methodology has been developed for application to active interrogation systems. The methodology consists of four steps: i) neutron flux distribution due to neutron source transport and subcritical multiplication; ii) generation of gamma source distribution from (n, γ) interactions; iii) determination of gamma current at a detector window; iv) detection of gammas by the detector. This paper discusses the theory and results of the first three steps for the case of a cargo container with a sphere of HEU in third-density water cargo. To complete the first step, a response-function formulation has been developed to calculate the subcritical multiplication and neutron flux distribution. Response coefficients are pre-calculated using the MCNP5 Monte Carlo code. The second step uses the calculated neutron flux distribution and Bugle-96 (n, γ) cross sections to find the resulting gamma source distribution. In the third step the gamma source distribution is coupled with a pre-calculated adjoint function to determine the gamma current at a detector window. The AIMS (Active Interrogation for Monitoring Special-Nuclear-Materials) software has been written to output the gamma current for a source-detector assembly scanning across a cargo container using the pre-calculated values and taking significantly less time than a reference MCNP5 calculation. (authors)
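
    As background to step i), subcritical multiplication follows the familiar geometric-series picture: a source S in a medium with effective multiplication factor k < 1 sustains S/(1 - k) neutrons per generation chain. The sketch below shows only that textbook relation, not the authors' response-coefficient formulation; all values are illustrative.

        # Textbook subcritical multiplication: S * (1 + k + k^2 + ...) = S / (1 - k) for k < 1.
        def multiplied_source(S, k, generations=None):
            if generations is None:
                return S / (1.0 - k)                        # closed form
            return S * sum(k ** n for n in range(generations + 1))

        S, k = 1.0e6, 0.8                                    # illustrative source rate and k (assumed)
        print(multiplied_source(S, k))                       # 5000000.0
        print(multiplied_source(S, k, generations=50))       # approaches the closed form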

  20. Indirect calculation of monoclonal antibodies in nanoparticles using the radiolabeling process with technetium 99 metastable as primary factor: Alternative methodology for the entrapment efficiency.

    PubMed

    Helal-Neto, Edward; Cabezas, Santiago Sánchez; Sancenón, Félix; Martínez-Máñez, Ramón; Santos-Oliveira, Ralph

    2018-05-10

    The use of monoclonal antibodies (Mab) in current medicine is increasing. Antibody-drug conjugates (ADCs) represent an increasingly important modality for treating several types of cancer. In this area, the use of Mab associated with nanoparticles is a valuable strategy. However, the methodology used to calculate Mab entrapment efficiency and content is extremely expensive. In this study we developed and tested a novel, very simple, one-step methodology to calculate monoclonal antibody entrapment in mesoporous silica (with magnetic core) nanoparticles using the radiolabeling process as the primary methodology. The magnetic core mesoporous silica nanoparticles were successfully developed and characterised. The PXRD analysis at high angles confirmed the presence of magnetic cores in the structures, and transmission electron microscopy allowed the structure size to be determined (58.9 ± 8.1 nm). From the isotherm curve, a specific surface area of 872 m²/g was estimated along with a pore volume of 0.85 cm³/g and an average pore diameter of 3.15 nm. The radiolabeling process used for the indirect determination performed well. Trastuzumab was successfully labeled (>97%) with Tc-99m, generating a clear suspension. Besides, almost all the Tc-99m used (labeling the trastuzumab) remained trapped on the surface of the mesoporous silica for a period as long as 8 h. The indirect methodology demonstrated a high entrapment of Tc-99m-trastuzumab on the magnetic core mesoporous silica surface. The results confirm the potential of the indirect entrapment efficiency methodology using the radiolabeling process as a one-step, easy and cheap methodology. Copyright © 2018 Elsevier B.V. All rights reserved.
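
    The arithmetic behind the indirect estimate is simply the fraction of the Tc-99m-labelled antibody activity that stays with the nanoparticles after separation. A minimal sketch with made-up numbers (not data from the paper):

        def entrapment_efficiency_percent(activity_bound, activity_total):
            """Percent of radiolabelled Mab activity retained on the nanoparticle surface."""
            return 100.0 * activity_bound / activity_total

        # Illustrative counts (e.g., counts per minute) for bound and total activity:
        print(entrapment_efficiency_percent(activity_bound=9.2e5, activity_total=1.0e6))  # 92.0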

  1. Introducing a new bond reactivity index: Philicities for natural bond orbitals.

    PubMed

    Sánchez-Márquez, Jesús; Zorrilla, David; García, Víctor; Fernández, Manuel

    2017-12-22

    In the present work, a new methodology for obtaining reactivity indices (philicities) is proposed. This is based on reactivity functions such as the Fukui function or the dual descriptor, and makes it possible to project the information from reactivity functions onto molecular orbitals, instead of onto the atoms of the molecule (atomic reactivity indices). The methodology focuses on the molecules' natural bond orbitals (bond reactivity indices) because these orbitals have the advantage of being localized, allowing the reaction site of an electrophile or nucleophile to be determined within a very precise molecular region. This methodology provides a "philicity" index for every NBO, and a representative set of molecules has been used to test the new definition. A new methodology has also been developed to compare the "finite difference" and the "frontier molecular orbital" approximations. To facilitate their use, the proposed methodology as well as the possibility of calculating the new indices have been implemented in a new version of UCA-FUKUI software. In addition, condensation schemes based on atomic populations of the "atoms in molecules" theory, the Hirshfeld population analysis, the approximation of Mulliken (with a minimal basis set) and electrostatic potential-derived charges have also been implemented, including the calculation of "bond reactivity indices" defined in previous studies. Graphical abstract: A new methodology for obtaining bond reactivity indices (philicities) is proposed and makes it possible to project the information from reactivity functions onto molecular orbitals. The proposed methodology as well as the possibility of calculating the new indices have been implemented in a new version of UCA-FUKUI software. In addition, this version can use new atomic condensation schemes and new "utilities" have also been included in this second version.
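
    For context, the finite-difference quantities such indices are usually built from are condensed Fukui functions computed from atomic charges of the N-1, N and N+1 electron systems; the dual descriptor is their difference. The sketch below shows only that standard condensed-to-atom arithmetic with illustrative charges, not the paper's NBO projection.

        def fukui_indices(q_cation, q_neutral, q_anion):
            """Condensed Fukui functions and dual descriptor from atomic charges."""
            f_plus = q_neutral - q_anion     # response to adding an electron
            f_minus = q_cation - q_neutral   # response to removing an electron
            dual = f_plus - f_minus          # > 0: electrophilic site; < 0: nucleophilic site
            return f_plus, f_minus, dual

        # Illustrative charges on one atom in the N-1, N and N+1 electron systems:
        f_p, f_m, d = fukui_indices(q_cation=0.35, q_neutral=0.10, q_anion=-0.20)
        print(round(f_p, 2), round(f_m, 2), round(d, 2))   # 0.3 0.25 0.05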

  2. Systematic review finds major deficiencies in sample size methodology and reporting for stepped-wedge cluster randomised trials

    PubMed Central

    Martin, James; Taljaard, Monica; Girling, Alan; Hemming, Karla

    2016-01-01

    Background Stepped-wedge cluster randomised trials (SW-CRT) are increasingly being used in health policy and services research, but unless they are conducted and reported to the highest methodological standards, they are unlikely to be useful to decision-makers. Sample size calculations for these designs require allowance for clustering, time effects and repeated measures. Methods We carried out a methodological review of SW-CRTs up to October 2014. We assessed adherence to reporting each of the 9 sample size calculation items recommended in the 2012 extension of the CONSORT statement to cluster trials. Results We identified 32 completed trials and 28 independent protocols published between 1987 and 2014. Of these, 45 (75%) reported a sample size calculation, with a median of 5.0 (IQR 2.5–6.0) of the 9 CONSORT items reported. Of those that reported a sample size calculation, the majority, 33 (73%), allowed for clustering, but just 15 (33%) allowed for time effects. There was a small increase in the proportions reporting a sample size calculation (from 64% before to 84% after publication of the CONSORT extension, p=0.07). The type of design (cohort or cross-sectional) was not reported clearly in the majority of studies, but cohort designs seemed to be most prevalent. Sample size calculations in cohort designs were particularly poor with only 3 out of 24 (13%) of these studies allowing for repeated measures. Discussion The quality of reporting of sample size items in stepped-wedge trials is suboptimal. There is an urgent need for dissemination of the appropriate guidelines for reporting and methodological development to match the proliferation of the use of this design in practice. Time effects and repeated measures should be considered in all SW-CRT power calculations, and there should be clarity in reporting trials as cohort or cross-sectional designs. PMID:26846897
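
    The clustering allowance the review checks for is, in its simplest CONSORT form, the design-effect inflation of an individually randomised sample size; stepped-wedge designs additionally require time effects and repeated measures (for example via the Hussey and Hughes model), which this sketch deliberately omits. Input values are illustrative.

        def inflated_sample_size(n_individual, cluster_size, icc):
            """Inflate an individually randomised sample size by the design effect 1 + (m - 1) * ICC."""
            design_effect = 1.0 + (cluster_size - 1.0) * icc
            return n_individual * design_effect, design_effect

        # e.g. 400 participants needed under individual randomisation, 20 per cluster, ICC = 0.05:
        n_total, de = inflated_sample_size(n_individual=400, cluster_size=20, icc=0.05)
        print(round(n_total), round(de, 2))   # 780 participants, design effect 1.95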

  3. Removal of Antibiotics in Biological Wastewater Treatment Systems-A Critical Assessment Using the Activated Sludge Modeling Framework for Xenobiotics (ASM-X).

    PubMed

    Polesel, Fabio; Andersen, Henrik R; Trapp, Stefan; Plósz, Benedek Gy

    2016-10-04

    Many scientific studies present removal efficiencies for pharmaceuticals in laboratory-, pilot-, and full-scale wastewater treatment plants, based on observations that may be impacted by theoretical and methodological approaches used. In this Critical Review, we evaluated factors influencing observed removal efficiencies of three antibiotics (sulfamethoxazole, ciprofloxacin, tetracycline) in pilot- and full-scale biological treatment systems. Factors assessed include (i) retransformation to parent pharmaceuticals from e.g., conjugated metabolites and analogues, (ii) solid retention time (SRT), (iii) fractions sorbed onto solids, and (iv) dynamics in influent and effluent loading. A recently developed methodology was used, relying on the comparison of removal efficiency predictions (obtained with the Activated Sludge Model for Xenobiotics (ASM-X)) with representative measured data from literature. By applying this methodology, we demonstrated that (a) the elimination of sulfamethoxazole may be significantly underestimated when not considering retransformation from conjugated metabolites, depending on the type (urban or hospital) and size of upstream catchments; (b) operation at extended SRT may enhance antibiotic removal, as shown for sulfamethoxazole; (c) not accounting for fractions sorbed in influent and effluent solids may cause slight underestimation of ciprofloxacin removal efficiency. Using tetracycline as example substance, we ultimately evaluated implications of effluent dynamics and retransformation on environmental exposure and risk prediction.

  4. On-line hyphenation of centrifugal partition chromatography and high pressure liquid chromatography for the fractionation of flavonoids from Hippophaë rhamnoides L. berries.

    PubMed

    Michel, Thomas; Destandau, Emilie; Elfakir, Claire

    2011-09-09

    Centrifugal Partition Chromatography (CPC), a liquid-liquid preparative chromatography using two immiscible solvent systems, benefits from numerous advantages for the separation or purification of synthetic or natural products. This study presents the on-line hyphenation of CPC-Evaporative Light Scattering Detector (CPC-ELSD) with High Performance Liquid Chromatography-UV (HPLC-UV) for the fractionation of flavonols from a solvent-free microwave extract of sea buckthorn (Hippophaë rhamnoides L., Elaeagnaceae) berries. An Arizona G system was used for the fractionation of flavonoids by CPC and a fused core Halo C18 column allowed the on-line analyses of collected fractions by HPLC. The on-line CPC/HPLC procedure allowed the simultaneous fractionation step at preparative scale combined with the HPLC analyses which provide direct fingerprint of collected fractions. Thus the crude extract was simplified and immediate information on the composition of fractions could be obtained. Furthermore, this methodology reduced the time of post-fractionation steps and facilitated identification of main molecules by Mass Spectrometry (MS). Rutin, isorhamnetin-3-O-rutinoside, isorhamnetin-3-O-glucoside, quercetin-3-O-glucoside, isorhamnetin-rhamnoside, quercetin and isorhamnetin were identified. CPC-ELSD/HPLC-UV could be considered as a high-throughput technique for the guided fractionation of bioactive natural products from complex crude extracts. Copyright © 2011 Elsevier B.V. All rights reserved.

  5. Renal blood flow using arterial spin labelling MRI and calculated filtration fraction in healthy adult kidney donors pre-nephrectomy and post-nephrectomy.

    PubMed

    Cutajar, Marica; Hilton, Rachel; Olsburgh, Jonathon; Marks, Stephen D; Thomas, David L; Banks, Tina; Clark, Christopher A; Gordon, Isky

    2015-08-01

    Renal plasma flow (RPF) (derived from renal blood flow, RBF) and glomerular filtration rate (GFR) allow the determination of the filtration fraction (FF), which may have a role as a non-invasive renal biomarker. This is a hypothesis-generating pilot study assessing the effect of nephrectomy on renal function in healthy kidney donors. Eight living kidney donors underwent arterial spin labelling (ASL) magnetic resonance imaging (MRI) and GFR measurement prior to and 1 year after nephrectomy. Chromium-51 labelled ethylenediamine tetraacetic acid ((51)Cr-EDTA) with multi-blood sampling was undertaken and GFR calculated. The RBF and GFR obtained were used to calculate FF. All donors showed an increase in single kidney GFR of 24 - 75 %, and all but two showed an increase in FF (-7 to +52 %) after nephrectomy. The increase in RBF, and hence RPF, post-nephrectomy was not as great as the increase in GFR in seven out of eight donors. As with any pilot study, the small number of donors and their relatively narrow age range are potential limiting factors. The ability to measure RBF, and hence RPF, non-invasively, coupled with GFR measurement, allows calculation of FF, a biomarker that might provide a sensitive indicator of loss of renal reserve in potential donors. • Non-invasive MRI measured renal blood flow and calculated renal plasma flow. • Effect of nephrectomy on blood flow and filtration in donors is presented. • Calculated filtration fraction may be a useful new kidney biomarker.
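
    The filtration-fraction arithmetic described above combines the ASL-derived blood flow, the haematocrit and the 51Cr-EDTA GFR. A minimal sketch with invented single-kidney values (not patient data from the study):

        def filtration_fraction(rbf_ml_min, hematocrit, gfr_ml_min):
            """FF = GFR / RPF, with renal plasma flow RPF = RBF * (1 - haematocrit)."""
            rpf = rbf_ml_min * (1.0 - hematocrit)
            return gfr_ml_min / rpf

        # e.g. RBF 600 ml/min, haematocrit 0.40, GFR 60 ml/min  ->  FF ~ 0.17
        print(round(filtration_fraction(rbf_ml_min=600.0, hematocrit=0.40, gfr_ml_min=60.0), 3))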

  6. The effects of intra-fraction organ motion on the delivery of intensity-modulated field with a multileaf collimator.

    PubMed

    Chui, Chen-Shou; Yorke, Ellen; Hong, Linda

    2003-07-01

    Intensity-modulated radiation therapy can be conveniently delivered with a multileaf collimator. With this method, the entire field is not delivered at once, but rather it is composed of many subfields defined by the leaf positions as a function of beam on time. At any given instant, only these subfields are delivered. During treatment, if the organ moves, part of the volume may move in or out of these subfields. Due to this interplay between organ motion and leaf motion the delivered dose may be different from what was planned. In this work, we present a method that calculates the effects of organ motion on delivered dose. The direction of organ motion may be parallel or perpendicular to the leaf motion, and the effect can be calculated for a single fraction or for multiple fractions. Three breast patients and four lung patients were included in this study, with the amplitude of the organ motion varying from +/- 3.5 mm to +/- 10 mm, and the period varying from 4 to 8 seconds. Calculations were made for these patients with and without organ motion, and results were examined in terms of isodose distribution and dose volume histograms. Each calculation was repeated ten times in order to estimate the statistical uncertainties. For selected patients, calculations were also made with a conventional treatment technique. The effects of organ motion on conventional techniques were compared relative to those on IMRT techniques. For breast treatment, the effect of organ motion primarily broadened the penumbra at the posterior field edge. The dose in the rest of the treatment volume was not significantly affected. For lung treatment, the effect also broadened the penumbra and degraded the coverage of the planning target volume (PTV). However, the coverage of the clinical target volume (CTV) was not much affected, provided the PTV margin was adequate. The same effects were observed for both IMRT and conventional treatment techniques. For the IMRT technique, the standard deviations of ten samples of a 30-fraction calculation were very small for all patients, implying that over a typical treatment course of 30 fractions, the delivered dose was very close to the expected value. Hence, under typical clinical conditions, the effect of organ motion on delivered dose can be calculated without considering the interplay between the organ motion and the leaf motion. It can be calculated as the average of the static (motion-free) dose distribution weighted by the probability distribution of organ motion. Since the effects of organ motion on dose were comparable for both IMRT and conventional techniques, the PTV margin should remain the same for both techniques.
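
    The closing statement lends itself to a small numerical sketch: blurring a static dose profile with the probability distribution of displacements broadens the penumbra while leaving the central dose essentially unchanged. The 1-D profile and Gaussian motion below are illustrative choices, not the authors' patient data.

        import numpy as np

        x = np.linspace(-50.0, 50.0, 501)                      # position, mm
        static_dose = np.where(np.abs(x) <= 30.0, 1.0, 0.0)    # idealised field with edges at +/-30 mm

        shifts = np.linspace(-10.0, 10.0, 41)                  # sampled organ displacements, mm
        weights = np.exp(-0.5 * (shifts / 5.0) ** 2)           # Gaussian motion distribution, sigma = 5 mm
        weights /= weights.sum()

        # Motion-averaged dose = displacement-weighted average of the shifted static dose.
        moving_dose = sum(w * np.interp(x - s, x, static_dose) for w, s in zip(weights, shifts))

        print(round(float(moving_dose[250]), 3))   # centre of the field (x = 0): still ~1.0
        print(round(float(moving_dose[400]), 3))   # field edge (x = 30 mm): blurred to ~0.52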

  7. The Gini coefficient: a methodological pilot study to assess fetal brain development employing postmortem diffusion MRI.

    PubMed

    Viehweger, Adrian; Riffert, Till; Dhital, Bibek; Knösche, Thomas R; Anwander, Alfred; Stepan, Holger; Sorge, Ina; Hirsch, Wolfgang

    2014-10-01

    Diffusion-weighted imaging (DWI) is important in the assessment of fetal brain development. However, it is clinically challenging and time-consuming to prepare neuromorphological examinations to assess real brain age and to detect abnormalities. The aim was to demonstrate that the Gini coefficient can be a simple, intuitive parameter for modelling fetal brain development. Postmortem fetal specimens (n = 28) were evaluated by diffusion-weighted imaging (DWI) on a 3-T MRI scanner using 60 directions, 0.7-mm isotropic voxels and b-values of 0, 150, 1,600 s/mm(2). Constrained spherical deconvolution (CSD) was used as the local diffusion model. Fractional anisotropy (FA), apparent diffusion coefficient (ADC) and complexity (CX) maps were generated. CX was defined as a novel diffusion metric. On the basis of those three parameters, the Gini coefficient was calculated. Study of fetal brain development in postmortem specimens was feasible using DWI. The Gini coefficient could be calculated for the combination of the three diffusion parameters. This multidimensional Gini coefficient correlated well with age (Adjusted R(2) = 0.59) between the ages of 17 and 26 gestational weeks. We propose a new method that uses an economics concept, the Gini coefficient, to describe the whole brain with one simple and intuitive measure, which can be used to assess the brain's developmental state.
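
    The underlying statistic is the ordinary Gini coefficient applied to voxel-wise diffusion values; how the authors combine FA, ADC and CX into a single multidimensional coefficient is described in the article, so the sketch below shows only the basic one-dimensional definition.

        import numpy as np

        def gini(values):
            """Gini coefficient of a non-negative sample (0 = perfect equality)."""
            v = np.sort(np.asarray(values, dtype=float))
            n = v.size
            cum = np.cumsum(v)
            # Equivalent to the mean absolute difference divided by twice the mean.
            return (n + 1 - 2 * np.sum(cum) / cum[-1]) / n

        print(gini([1.0, 1.0, 1.0, 1.0]))    # 0.0   -- identical values
        print(gini([0.0, 0.0, 0.0, 10.0]))   # 0.75  -- highly unequal values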

  8. RADIOCARBON MEASUREMENTS ON PM 2.5 AMBIENT AEROSOL FROM NASHVILLE, TN

    EPA Science Inventory

    Radiocarbon (Carbon-14) measurements provide an estimate of the fraction of carbon in a sample that is biogenic. The methodology has been extensively used in past wintertime studies to quantify the contribution of wood smoke to ambient aerosol. In summertime such measurements...

  9. RADIOCARBON MEASUREMENTS ON PM-2.5 AMBIENT AEROSOL

    EPA Science Inventory

    Radiocarbon (14C) measurements provide an estimate of the fraction of carbon in a sample that is biogenic. The methodology has been extensively used in past wintertime studies to quantify the contribution of wood smoke to ambient aerosol. In summertime such measurements can p...

  10. Design of broadband time-domain impedance boundary conditions using the oscillatory-diffusive representation of acoustical models.

    PubMed

    Monteghetti, Florian; Matignon, Denis; Piot, Estelle; Pascal, Lucas

    2016-09-01

    A methodology to design broadband time-domain impedance boundary conditions (TDIBCs) from the analysis of acoustical models is presented. The derived TDIBCs are recast exclusively as first-order differential equations, well-suited for high-order numerical simulations. Broadband approximations are yielded from an elementary linear least squares optimization that is, for most models, independent of the absorbing material geometry. This methodology relies on a mathematical technique referred to as the oscillatory-diffusive (or poles and cuts) representation, and is applied to a wide range of acoustical models, drawn from duct acoustics and outdoor sound propagation, which covers perforates, semi-infinite ground layers, as well as cavities filled with a porous medium. It is shown that each of these impedance models leads to a different TDIBC. Comparison with existing numerical models, such as multi-pole or extended Helmholtz resonator, provides insights into their suitability. Additionally, the broadly-applicable fractional polynomial impedance models are analyzed using fractional calculus.

  11. Enantiomer fractions of polychlorinated biphenyls in three selected Standard Reference Materials.

    PubMed

    Morrissey, Joshua A; Bleackley, Derek S; Warner, Nicholas A; Wong, Charles S

    2007-01-01

    The enantiomer composition of six chiral polychlorinated biphenyls (PCBs) was measured in three different certified Standard Reference Materials (SRMs) from the US National Institute of Standards and Technology (NIST): SRM 1946 (Lake Superior fish tissue), SRM 1939a (PCB Congeners in Hudson River Sediment), and SRM 2978 (organic contaminants in mussel tissue--Raritan Bay, New Jersey) to aid in quality assurance/quality control methodologies in the study of chiral pollutants in sediments and biota. Enantiomer fractions (EFs) of PCBs 91, 95, 136, 149, 174, and 183 were measured using a suite of chiral columns by gas chromatography/mass spectrometry. Concentrations of target analytes were in agreement with certified values. Target analyte EFs in reference materials were measured precisely (<2% relative standard deviation), indicating the utility of SRMs in quality assurance/control methodologies for analyses of chiral compounds in environmental samples. Measured EFs were also in agreement with previously published analyses of similar samples, indicating that similar enantioselective processes were taking place in these environmental matrices.
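
    For reference, the enantiomer fraction reported in such work is simply one enantiomer's signal over the sum of both (conventionally the (+)- or first-eluting enantiomer), so a racemic signature gives EF = 0.5. Peak areas below are invented.

        def enantiomer_fraction(area_first, area_second):
            """EF = A1 / (A1 + A2); 0.5 indicates a racemic composition."""
            return area_first / (area_first + area_second)

        print(round(enantiomer_fraction(1.00e5, 1.00e5), 3))  # 0.5    racemic
        print(round(enantiomer_fraction(1.30e5, 1.00e5), 3))  # 0.565  enantioselectively altered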

  12. Methodology for worker neutron exposure evaluation in the PDCF facility design.

    PubMed

    Scherpelz, R I; Traub, R J; Pryor, K H

    2004-01-01

    A project headed by Washington Group International is meant to design the Pit Disassembly and Conversion Facility (PDCF) to convert the plutonium pits from excessed nuclear weapons into plutonium oxide for ultimate disposition. Battelle staff are performing the shielding calculations that will determine appropriate shielding so that the facility workers will not exceed target exposure levels. The target exposure levels for workers in the facility are 5 mSv y(-1) for the whole body and 100 mSv y(-1) for the extremity, which presents a significant challenge to the designers of a facility that will process tons of radioactive material. The design effort depended on shielding calculations to determine appropriate thickness and composition for glove box walls, and concrete wall thicknesses for storage vaults. Pacific Northwest National Laboratory (PNNL) staff used ORIGEN-S and SOURCES to generate gamma and neutron source terms, and the Monte Carlo neutron-photon transport code MCNP-4C to calculate the radiation transport in the facility. The shielding calculations were performed by a team of four scientists, so it was necessary to develop a consistent methodology. There was also a requirement for the study to be cost-effective, so efficient methods of evaluation were required. The calculations were subject to rigorous scrutiny by internal and external reviewers, so acceptability was a major feature of the methodology. Some of the issues addressed in the development of the methodology included selecting appropriate dose factors, developing a method for handling extremity doses, adopting an efficient method for evaluating effective dose equivalent in a non-uniform radiation field, modelling the reinforcing steel in concrete, and modularising the geometry descriptions for efficiency. The relative importance of the neutron dose equivalent compared with the gamma dose equivalent varied substantially depending on the specific shielding conditions and lessons were learned from this effect. This paper addresses these issues and the resulting methodology.

  13. UW Inventory of Freight Emissions (WIFE3) heavy duty diesel vehicle web calculator methodology.

    DOT National Transportation Integrated Search

    2013-09-01

    This document serves as an overview and technical documentation for the University of Wisconsin Inventory of : Freight Emissions (WIFE3) calculator. The WIFE3 web calculator rapidly estimates future heavy duty diesel : vehicle (HDDV) roadway emission...

  14. TU-AB-303-06: Does Online Adaptive Radiation Therapy Mean Zero Margin for Intermediate-Risk Prostate Cancer? An Intra-Fractional Seminal Vesicles Motion Analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sheng, Y; Li, T; Lee, W

    Purpose: To provide a benchmark for seminal vesicles (SVs) margin selection to account for intra-fractional motion; and to investigate the effectiveness of two motion surrogates in predicting intra-fractional SV underdosage. Methods: 9 prostate SBRT patients were studied; each had five pairs of pre-treatment and post-treatment cone-beam CTs (CBCTs). Each pair of CBCTs was registered based on fiducial markers in the prostate. To provide “ground truth” for coverage evaluation, all pre-treatment SVs were expanded with isotropic margins of 1,2,3,5 and 8mm, and their overlap with post-treatment SVs was used to quantify intra-fractional coverage. Two commonly used motion surrogates, the center-of-mass (COM) and the border of contour (the most distal points in SI/AP/LR directions) were evaluated using Receiver-Operating Characteristic (ROC) analyses for predicting SV underdosage due to intra-fractional motion. The action threshold for determining underdosage for each surrogate was calculated by selecting the optimal balance between sensitivity and specificity. For comparison, the margin for each surrogate was also calculated based on the traditional margin recipe. Results: 90% post-treatment SV coverage can be achieved in 47%, 82%, 91%, 98% and 98% of fractions for 1,2,3,5 and 8mm margins. A 3mm margin ensured the 90% intra-fractional SV coverage in 90% of fractions when the prostate was aligned. The ROC analysis indicated the AUC for COM and border were 0.88 and 0.72. The underdosage threshold was 2.9mm for COM and 4.1mm for border. The van Herk margin recipe recommended 0.5, 0 and 1.8mm margin in LR, AP and SI direction based on COM and for border, the corresponding margin was 2.1, 4.5 and 3mm. Conclusion: A 3mm isotropic margin is the minimum required to mitigate the intra-fractional SV motion when the prostate is aligned. ROC analysis reveals that both COM and border are acceptable predictors for SV underdosage with 2.9mm and 4.1mm action thresholds. Traditional margin calculation is less reliable for this application. This work is partially supported by a master research grant from Varian Medical Systems.
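
    The "traditional margin recipe" referred to is commonly the van Herk formula, margin = 2.5 Σ + 0.7 σ, with Σ the systematic-error SD and σ the random-error SD of the residual motion. A one-line sketch with illustrative inputs (not the study's data):

        def van_herk_margin_mm(systematic_sd_mm, random_sd_mm):
            """PTV margin (mm) from the van Herk recipe: 2.5 * Sigma + 0.7 * sigma."""
            return 2.5 * systematic_sd_mm + 0.7 * random_sd_mm

        print(van_herk_margin_mm(systematic_sd_mm=1.0, random_sd_mm=2.0))  # 3.9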

  15. Effects of disease severity distribution on the performance of quantitative diagnostic methods and proposal of a novel ‘V-plot’ methodology to display accuracy values

    PubMed Central

    Dehbi, Hakim-Moulay; Howard, James P; Shun-Shin, Matthew J; Sen, Sayan; Nijjer, Sukhjinder S; Mayet, Jamil; Davies, Justin E; Francis, Darrel P

    2018-01-01

    Background Diagnostic accuracy is widely accepted by researchers and clinicians as an optimal expression of a test’s performance. The aim of this study was to evaluate the effects of disease severity distribution on values of diagnostic accuracy as well as propose a sample-independent methodology to calculate and display accuracy of diagnostic tests. Methods and findings We evaluated the diagnostic relationship between two hypothetical methods to measure serum cholesterol (Chol_rapid and Chol_gold) by generating samples with statistical software and (1) keeping the numerical relationship between methods unchanged and (2) changing the distribution of cholesterol values. Metrics of categorical agreement were calculated (accuracy, sensitivity and specificity). Finally, a novel methodology to display and calculate accuracy values was presented (the V-plot of accuracies). Conclusion No single value of diagnostic accuracy can be used to describe the relationship between tests, as accuracy is a metric heavily affected by the underlying sample distribution. Our novel proposed methodology, the V-plot of accuracies, can be used as a sample-independent measure of a test performance against a reference gold standard. PMID:29387424
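
    The categorical agreement metrics whose sample-dependence the study demonstrates are computed from a 2x2 table of the candidate test against the gold standard; the counts below are invented, purely to show the arithmetic.

        def agreement_metrics(tp, fp, tn, fn):
            """Accuracy, sensitivity and specificity from a 2x2 agreement table."""
            accuracy = (tp + tn) / (tp + fp + tn + fn)
            sensitivity = tp / (tp + fn)
            specificity = tn / (tn + fp)
            return accuracy, sensitivity, specificity

        print(agreement_metrics(tp=80, fp=10, tn=90, fn=20))  # (0.85, 0.8, 0.9)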

  16. Weighted Association Rule Mining for Item Groups with Different Properties and Risk Assessment for Networked Systems

    NASA Astrophysics Data System (ADS)

    Kim, Jungja; Ceong, Heetaek; Won, Yonggwan

    In market-basket analysis, weighted association rule (WAR) discovery can mine the rules that include more beneficial information by reflecting item importance for special products. In the point-of-sale database, each transaction is composed of items with similar properties, and item weights are pre-defined and fixed by a factor such as the profit. However, when items are divided into more than one group and the item importance must be measured independently for each group, traditional weighted association rule discovery cannot be used. To solve this problem, we propose a new weighted association rule mining methodology. The items are first divided into subgroups according to their properties, and the item importance, i.e. item weight, is defined or calculated only with the items included in the subgroup. Then, the transaction weight is measured by appropriately summing the item weights from each subgroup, and the weighted support is computed as the total weight of the transactions that contain the candidate items relative to the weight of all transactions. As an example, our proposed methodology is applied to assess the vulnerability to threats of computer systems that provide networked services. Our algorithm provides both quantitative risk-level values and qualitative risk rules for the security assessment of networked computer systems using WAR discovery. Also, it can be widely used for new applications with many data sets in which the data items are distinctly separated.
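
    A minimal sketch of that weighted-support computation (data structures and weights are invented; the subgroup-based weight derivation itself is the paper's contribution and is not reproduced here):

        def weighted_support(transactions, weights, itemset):
            """Weight of transactions containing the itemset, divided by the total weight."""
            itemset = set(itemset)
            hit = sum(w for t, w in zip(transactions, weights) if itemset <= t)
            return hit / sum(weights)

        tx = [{"a", "b"}, {"a", "c"}, {"b", "c"}, {"a", "b", "c"}]
        wt = [0.9, 0.4, 0.2, 0.5]                              # illustrative transaction weights
        print(round(weighted_support(tx, wt, {"a", "b"}), 3))  # 0.7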

  17. Toward a Galactic Distribution of Planets. I. Methodology and Planet Sensitivities of the 2015 High-cadence Spitzer Microlens Sample

    NASA Astrophysics Data System (ADS)

    Zhu, Wei; Udalski, A.; Calchi Novati, S.; Chung, S.-J.; Jung, Y. K.; Ryu, Y.-H.; Shin, I.-G.; Gould, A.; Lee, C.-U.; Albrow, M. D.; Yee, J. C.; Han, C.; Hwang, K.-H.; Cha, S.-M.; Kim, D.-J.; Kim, H.-W.; Kim, S.-L.; Kim, Y.-H.; Lee, Y.; Park, B.-G.; Pogge, R. W.; KMTNet Collaboration; Poleski, R.; Mróz, P.; Pietrukowicz, P.; Skowron, J.; Szymański, M. K.; KozLowski, S.; Ulaczyk, K.; Pawlak, M.; OGLE Collaboration; Beichman, C.; Bryden, G.; Carey, S.; Fausnaugh, M.; Gaudi, B. S.; Henderson, C. B.; Shvartzvald, Y.; Wibking, B.; Spitzer Team

    2017-11-01

    We analyze an ensemble of microlensing events from the 2015 Spitzer microlensing campaign, all of which were densely monitored by ground-based high-cadence survey teams. The simultaneous observations from Spitzer and the ground yield measurements of the microlensing parallax vector π_E, from which compact constraints on the microlens properties are derived, including ≲25% uncertainties on the lens mass and distance. With the current sample, we demonstrate that the majority of microlenses are indeed in the mass range of M dwarfs. The planet sensitivities of all 41 events in the sample are calculated, from which we provide constraints on the planet distribution function. In particular, assuming a planet distribution function that is uniform in log q, where q is the planet-to-star mass ratio, we find a 95% upper limit on the fraction of stars that host typical microlensing planets of 49%, which is consistent with previous studies. Based on this planet-free sample, we develop the methodology to statistically study the Galactic distribution of planets using microlensing parallax measurements. Under the assumption that the planet distributions are the same in the bulge as in the disk, we predict that ∼1/3 of all planet detections from the microlensing campaigns with Spitzer should be in the bulge. This prediction will be tested with a much larger sample, and deviations from it can be used to constrain the abundance of planets in the bulge relative to the disk.

  18. 75 FR 39093 - Proposed Confidentiality Determinations for Data Required Under the Mandatory Greenhouse Gas...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-07-07

    ... information that is sensitive or proprietary, such as detailed process designs or site plans. Because the... Inputs to Emission Equations X Calculation Methodology and Methodological Tier X Data Elements Reported...

  19. Are artificial opals non-close-packed fcc structures?

    NASA Astrophysics Data System (ADS)

    García-Santamaría, F.; Braun, P. V.

    2007-06-01

    The authors report a simple experimental method to accurately measure the volume fraction of artificial opals. The results are modeled using several methods, and they find that some of the most common methods yield very inaccurate results. Both finite size and substrate effects play an important role in calculations of the volume fraction. The experimental results show that the interstitial pore volume is 4%-15% larger than expected for close-packed structures. Consequently, calculations performed in previous work relating the amount of material synthesized in the opal interstices with the optical properties may need revision, especially in the case of high refractive index materials.
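
    The reference point for such measurements is the ideal close-packed value: an fcc opal of touching spheres fills pi/(3*sqrt(2)), about 74.05 % of space, leaving about 25.95 % interstitial volume, so a pore volume 4-15 % larger than close-packed corresponds to roughly 27-30 % porosity.

        import math

        phi_fcc = math.pi / (3.0 * math.sqrt(2.0))    # sphere volume fraction of an ideal fcc packing
        pore_fcc = 1.0 - phi_fcc                      # interstitial (pore) volume fraction
        print(round(phi_fcc, 4), round(pore_fcc, 4))  # 0.7405 0.2595

        for excess in (0.04, 0.15):                   # pore volume 4 % and 15 % larger than ideal
            print(round(pore_fcc * (1.0 + excess), 4))    # ~0.2699 and ~0.2984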

  20. Effects of Solid Solution Strengthening Elements Mo, Re, Ru, and W on Transition Temperatures in Nickel-Based Superalloys with High γ'-Volume Fraction: Comparison of Experiment and CALPHAD Calculations

    NASA Astrophysics Data System (ADS)

    Ritter, Nils C.; Sowa, Roman; Schauer, Jan C.; Gruber, Daniel; Goehler, Thomas; Rettig, Ralf; Povoden-Karadeniz, Erwin; Koerner, Carolin; Singer, Robert F.

    2018-06-01

    We prepared 41 different superalloy compositions by an arc melting, casting, and heat treatment process. Alloy solid solution strengthening elements were added in graded amounts, and we measured the solidus, liquidus, and γ'-solvus temperatures of the samples by DSC. The γ'-phase fraction increased as the W, Mo, and Re contents were increased, and W showed the most pronounced effect. Ru decreased the γ'-phase fraction. Melting temperatures (i.e., solidus and liquidus) were increased by addition of Re, W, and Ru (the effect increased in that order). Addition of Mo decreased the melting temperature. W was effective as a strengthening element because it acted as a solid solution strengthener and increased the fraction of fine γ'-precipitates, thus improving precipitation strengthening. Experimentally determined values were compared with calculated values based on the CALPHAD software tools Thermo-Calc (databases: TTNI8 and TCNI6) and MatCalc (database ME-NI). The ME-NI database, which was specially adapted to the present investigation, showed good agreement. TTNI8 also showed good results. The TCNI6 database is suitable for computational design of complex nickel-based superalloys. However, a large deviation remained between the experiment results and calculations based on this database. It also erroneously predicted γ'-phase separations and failed to describe the Ru-effect on transition temperatures.

  1. Tunneling time in space fractional quantum mechanics

    NASA Astrophysics Data System (ADS)

    Hasan, Mohammad; Mandal, Bhabani Prasad

    2018-02-01

    We calculate the time taken by a wave packet to travel through a classically forbidden region of space in space fractional quantum mechanics. We obtain a closed-form expression for the tunneling time through a rectangular barrier by the stationary phase method. We show that the tunneling time depends upon the width b of the barrier for b → ∞ and therefore the Hartman effect does not exist in space fractional quantum mechanics. Interestingly, we find that the tunneling time decreases monotonically with increasing b. The tunneling time is smaller in space fractional quantum mechanics as compared to the case of standard quantum mechanics. We recover the Hartman effect of standard quantum mechanics as a special case of space fractional quantum mechanics.

  2. Statistical mechanics based on fractional classical and quantum mechanics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Korichi, Z.; Meftah, M. T., E-mail: mewalid@yahoo.com

    2014-03-15

    The purpose of this work is to study some problems in statistical mechanics based on the fractional classical and quantum mechanics. At first stage we have presented the thermodynamical properties of the classical ideal gas and the system of N classical oscillators. In both cases, the Hamiltonian contains fractional exponents of the phase space (position and momentum). At the second stage, in the context of the fractional quantum mechanics, we have calculated the thermodynamical properties for the black body radiation, studied the Bose-Einstein statistics with the related problem of the condensation and the Fermi-Dirac statistics.

  3. Phytochemistry of Cimicifugic Acids and Associated Bases in Cimicifuga racemosa Root Extracts

    PubMed Central

    Gödecke, Tanja; Nikolic, Dejan; Lankin, David C.; Chen, Shao-Nong; Powell, Sharla L.; Dietz, Birgit; Bolton, Judy L.; Van Breemen, Richard B.; Farnsworth, Norman R.; Pauli, Guido F.

    2009-01-01

    Introduction Earlier studies reported serotonergic activity for cimicifugic acids (CA) isolated from Cimicifuga racemosa. The discovery of strongly basic alkaloids, cimipronidines, from the active extract partition and evaluation of previously employed work-up procedures has led to the hypothesis of strong acid/base association in the extract. Objective Re-isolation of the CAs was desired to permit further detailed studies. Based on the acid/base association hypothesis, a new separation scheme of the active partition was required, which separates acids from associated bases. Methodology A new 5-HT7 bioassay guided work-up procedure was developed that concentrates activity into one partition. The latter was subjected to a new 2-step centrifugal partitioning chromatography (CPC) method, which applies pH zone refinement gradient (pHZR CPC) to dissociate the acid/base complexes. The resulting CA fraction was subjected to a second CPC step. Fractions and compounds were monitored by 1H NMR using a structure based spin-pattern analysis facilitating dereplication of the known acids. Bioassay results were obtained for the pHZR CPC fractions and for purified CAs. Results A new CA was characterized. While none of the pure CAs was active, the serotonergic activity was concentrated in a single pHZR CPC fraction, which was subsequently shown to contain low levels of the potent 5-HT7 ligand, Nω–methylserotonin. Conclusion This study shows that CAs are not responsible for serotonergic activity in black cohosh. New phytochemical methodology (pHZR CPC) and a sensitive dereplication method (LC-MS) led to the identification of Nω–methylserotonin as serotonergic active principle. PMID:19140115

  4. The Momentum Distribution of Liquid ⁴He

    DOE PAGES

    Prisk, T. R.; Bryan, M. S.; Sokol, P. E.; ...

    2017-07-24

    We report a high-resolution neutron Compton scattering study of liquid ⁴He under milli-Kelvin temperature control. To interpret the scattering data, we performed Quantum Monte Carlo calculations of the atomic momentum distribution and final state effects for the conditions of temperature and density considered in the experiment. There is excellent agreement between the observed scattering and ab initio calculations of its lineshape at all temperatures. We also used model fit functions to obtain from the scattering data empirical estimates of the average atomic kinetic energy and Bose condensate fraction. These quantities are also in excellent agreement with ab initio calculations. We conclude that contemporary Quantum Monte Carlo methods can furnish accurate predictions for the properties of Bose liquids, including the condensate fraction, close to the superfluid transition temperature.

  5. A comparative review of nurse turnover rates and costs across countries.

    PubMed

    Duffield, Christine M; Roche, Michael A; Homer, Caroline; Buchan, James; Dimitrelis, Sofia

    2014-12-01

    To compare nurse turnover rates and costs from four studies in four countries (US, Canada, Australia, New Zealand) that have used the same costing methodology, the original Nursing Turnover Cost Calculation Methodology. Measuring and comparing the costs and rates of turnover is difficult because of differences in definitions and methodologies. Comparative review. Searches were carried out within CINAHL, Business Source Complete and Medline for studies that used the original Nursing Turnover Cost Calculation Methodology and reported on both costs and rates of nurse turnover, published in or before 2014. A comparative review of turnover data was conducted using four studies that employed the original Nursing Turnover Cost Calculation Methodology. Costing data items were converted to percentages, while total turnover costs were converted to US 2014 dollars and adjusted according to inflation rates, to permit cross-country comparisons. Despite using the same methodology, Australia reported significantly higher turnover costs ($48,790) due to higher termination (~50% of indirect costs) and temporary replacement costs (~90% of direct costs). Costs were almost 50% lower in the US ($20,561), Canada ($26,652) and New Zealand ($23,711). Turnover rates also varied significantly across countries with the highest rate reported in New Zealand (44·3%) followed by the US (26·8%), Canada (19·9%) and Australia (15·1%). A significant proportion of turnover costs are attributed to temporary replacement, highlighting the importance of nurse retention. The authors suggest a minimum dataset is also required to eliminate potential variability across countries, states, hospitals and departments. © 2014 John Wiley & Sons Ltd.

  6. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Heaney, Mike

    Statistically designed experiments can save researchers time and money by reducing the number of necessary experimental trials, while resulting in more conclusive experimental results. Surprisingly, many researchers are still not aware of this efficient and effective experimental methodology. As reported in a 2013 article from Chemical & Engineering News, there has been a resurgence of this methodology in recent years (http://cen.acs.org/articles/91/i13/Design-Experiments-Makes-Comeback.html?h=2027056365). This presentation will provide a brief introduction to statistically designed experiments. The main advantages will be reviewed along with some basic concepts such as factorial and fractional factorial designs. The recommended sequential approach to experiments will be introduced and finally a case study will be presented to demonstrate this methodology.

  7. Toward a simple, repeatable, non-destructive approach to measuring stable-isotope ratios of water within tree stems

    NASA Astrophysics Data System (ADS)

    Raulerson, S.; Volkmann, T.; Pangle, L. A.

    2017-12-01

    Traditional methodologies for measuring ratios of stable isotopes within the xylem water of trees involve destructive coring of the stem. A recent approach involves permanently installed probes within the stem, and an on-site assembly of pumps, switching valves, gas lines, and climate-controlled structure for field deployment of a laser spectrometer. The former method limits the possible temporal resolution of sampling, and sample size, while the latter may not be feasible for many research groups. We present results from initial laboratory efforts towards developing a non-destructive, temporally-resolved technique for measuring stable isotope ratios within the xylem flow of trees. Researchers have used direct liquid-vapor equilibration as a method to measure isotope ratios of the water in soil pores. Typically, this is done by placing soil samples in a fixed container, and allowing the liquid water within the soil to come into isotopic equilibrium with the headspace of the container. Water can also be removed via cryogenic distillation or azeotropic distillation, with the resulting liquid tested for isotope ratios. Alternatively, the isotope ratios of the water vapor can be directly measured using a laser-based water vapor isotope analyzer. Well-established fractionation factors and the isotope ratios in the vapor phase are then used to calculate the isotope ratios in the liquid phase. We propose a setup which would install a single, removable chamber onto a tree, where vapor samples could non-destructively and repeatedly be taken. These vapor samples will be injected into a laser-based isotope analyzer by a recirculating gas conveyance system. A major part of what is presented here is in the procedure of taking vapor samples at 100% relative humidity, appropriately diluting them with completely dry N2 calibration gas, and injecting them into the gas conveyance system without inducing fractionation in the process. This methodology will be helpful in making temporally resolved measurements of the stable isotopes in xylem water, using a setup that can be easily repeated by other research groups. The method is anticipated to find broad application in ecohydrological analyses, and in tracer studies aimed at quantifying age distributions of soil water extracted by plant roots.
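
    The key conversion in the direct liquid-vapour equilibration approach mentioned above is from the measured vapour isotope ratio back to the liquid value through an equilibrium fractionation factor, alpha = (1000 + delta_liquid)/(1000 + delta_vapour). The alpha used below (~1.0093 for 18O near 25 °C) is an approximate textbook value and the delta value is invented; a field application would use temperature-dependent factors.

        def liquid_from_vapour(delta_vapour_permil, alpha):
            """Liquid-phase delta value in equilibrium with a measured vapour delta value."""
            return alpha * (1000.0 + delta_vapour_permil) - 1000.0

        print(round(liquid_from_vapour(delta_vapour_permil=-14.0, alpha=1.0093), 2))  # -4.83 permil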

  8. ORGANIC COMPOUNDS IN SURFACE SEDIMENTS AND OYSTER TISSUES FROM THE CHESAPEAKE BAY. APPENDICES

    EPA Science Inventory

    Detailed in the first part of this report is a development and discussion of the methodology used to extract and analyze sediment and oyster tissue samples from Chesapeake Bay for organic compounds. The method includes extraction, fractionation, and subsequent analysis using glas...

  9. EVALUATING METRICS FOR GREEN CHEMISTRIES: INFORMATION AND CALCULATION NEEDS

    EPA Science Inventory

    Research within the U.S. EPA's National Risk Management Research Laboratory is developing a methodology for the evaluation of green chemistries. This methodology called GREENSCOPE (Gauging Reaction Effectiveness for the ENvironmental Sustainability of Chemistries with a multi-Ob...

  10. Highway User Benefit Analysis System Research Project #128

    DOT National Transportation Integrated Search

    2000-10-01

    In this research, a methodology for estimating road user costs of various competing alternatives was developed. Also, software was developed to calculate the road user cost, perform economic analysis and update cost tables. The methodology is based o...

  11. 40 CFR 63.824 - Standards: Publication rotogravure printing.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ....000 For the purposes of this calculation, the mass fraction of organic HAP present in the recovered volatile matter is assumed to be equal to the mass fraction of organic HAP present in the volatile matter... section: (i) Perform a liquid-liquid material balance for each month as follows: (A) Measure the mass of...

  12. 40 CFR 63.824 - Standards: Publication rotogravure printing.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ....000 For the purposes of this calculation, the mass fraction of organic HAP present in the recovered volatile matter is assumed to be equal to the mass fraction of organic HAP present in the volatile matter... section: (i) Perform a liquid-liquid material balance for each month as follows: (A) Measure the mass of...

  13. JONAH algorithms: C-2 the ratio option

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rego, J.

    1979-02-01

    Information concerning input is given first. Then formulas are given for calculation of atoms/millimeter, fissions, kiloton yield, R-value, atoms/fission, fissions/fission, bomb fraction, fissions/atoms, atoms, atoms/atoms, fissions/atoms, atom ratio, total atoms formed, and thermonuclear bomb fraction. Some of the terminology used is elucidated in an appendix. (RWR)

  14. The effect of clouds on the earth's radiation budget

    NASA Technical Reports Server (NTRS)

    Ziskin, Daniel; Strobel, Darrell F.

    1991-01-01

    The radiative fluxes from the Earth Radiation Budget Experiment (ERBE) and the cloud properties from the International Satellite Cloud Climatology Project (ISCCP) over Indonesia for the months of June and July of 1985 and 1986 were analyzed to determine the cloud sensitivity coefficients. The method involved a linear least squares regression between co-incident flux and cloud coverage measurements. The calculated slope is identified as the cloud sensitivity. It was found that the correlations between the total cloud fraction and radiation parameters were modest. However, correlations between cloud fraction and IR flux were improved by separating clouds by height. Likewise, correlations between the visible flux and cloud fractions were improved by distinguishing clouds based on optical depth. Calculating correlations between the net fluxes and either height or optical depth segregated cloud fractions were somewhat improved. When clouds were classified in terms of their height and optical depth, correlations among all the radiation components were improved. Mean cloud sensitivities based on the regression of radiative fluxes against height and optical depth separated cloud types are presented. Results are compared to a one-dimensional radiation model with a simple cloud parameterization scheme.
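
    The "cloud sensitivity" coefficient described above is simply the slope of a linear least-squares fit of flux against cloud fraction for co-incident measurements. The numbers below are synthetic and only show the regression step.

        import numpy as np

        cloud_fraction = np.array([0.1, 0.3, 0.4, 0.6, 0.8])
        lw_flux = np.array([285.0, 272.0, 266.0, 255.0, 240.0])   # outgoing LW flux, W m^-2 (synthetic)

        slope, intercept = np.polyfit(cloud_fraction, lw_flux, 1)
        print(round(slope, 1), round(intercept, 1))   # about -63.1 W m^-2 per unit cloud fraction, 291.4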

  15. Error Patterns with Fraction Calculations at Fourth Grade as a Function of Students' Mathematics Achievement Status.

    PubMed

    Schumacher, Robin F; Malone, Amelia S

    2017-09-01

    The goal of the present study was to describe fraction-calculation errors among 4th-grade students and determine whether error patterns differed as a function of problem type (addition vs. subtraction; like vs. unlike denominators), orientation (horizontal vs. vertical), or mathematics-achievement status (low- vs. average- vs. high-achieving). We specifically addressed whether mathematics-achievement status was related to students' tendency to operate with whole number bias. We extended this focus by comparing low-performing students' errors in two instructional settings that focused on two different types of fraction understandings: core instruction that focused on part-whole understanding vs. small-group tutoring that focused on magnitude understanding. Results showed students across the sample were more likely to operate with whole number bias on problems with unlike denominators. Students with low or average achievement (who only participated in core instruction) were more likely to operate with whole number bias than students with low achievement who participated in small-group tutoring. We suggest instruction should emphasize magnitude understanding to sufficiently increase fraction understanding for all students in the upper elementary grades.

  16. Method for analyzing E x B probe spectra from Hall thruster plumes.

    PubMed

    Shastry, Rohit; Hofer, Richard R; Reid, Bryan M; Gallimore, Alec D

    2009-06-01

    Various methods for accurately determining ion species' current fractions using E × B probes in Hall thruster plumes are investigated. The effects of peak broadening and charge exchange on the calculated values of current fractions are quantified in order to determine the importance of accounting for them in the analysis. It is shown that both peak broadening and charge exchange have a significant effect on the calculated current fractions over a variety of operating conditions, especially at operating pressures exceeding 10(-5) torr. However, these effects can be accounted for using a simple approximation for the velocity distribution function and a one-dimensional charge exchange correction model. In order to keep plume attenuation from charge exchange below 30%, it is recommended that pz ≤ 2, where p is the measured facility pressure in units of 10(-5) torr and z is the distance from the thruster exit plane to the probe inlet in meters. The spatial variation of the current fractions in the plume of a Hall thruster and the error induced from taking a single-point measurement are also briefly discussed.

  17. Potential utilization of the absolute point cumulative semivariogram technique for the evaluation of distribution coefficient.

    PubMed

    Külahci, Fatih; Sen, Zekâi

    2009-09-15

    The classical solid/liquid distribution coefficient, K(d), for radionuclides in water-sediment systems is dependent on many parameters such as flow, geology, pH, acidity, alkalinity, total hardness, radioactivity concentration, etc. in a region. Consideration of all these effects requires a regional analysis with an effective methodology, which in this paper is based on the cumulative semivariogram concept. Although classical K(d) calculations are point-based and cannot represent regional patterns, in this paper a regional calculation methodology is suggested through the use of the Absolute Point Cumulative SemiVariogram (APCSV) technique. The application of the methodology is presented for (137)Cs and (90)Sr measurements at a set of points in the Keban Dam reservoir, Turkey.

  18. How recalibration method, pricing, and coding affect DRG weights

    PubMed Central

    Carter, Grace M.; Rogowski, Jeannette A.

    1992-01-01

    We compared diagnosis-related group (DRG) weights calculated using the hospital-specific relative-value (HSRV) methodology with those calculated using the standard methodology for each year from 1985 through 1989 and analyzed differences between the two methods in detail for 1989. We provide evidence suggesting that classification error and subsidies of higher weighted cases by lower weighted cases caused compression in the weights used for payment as late as the fifth year of the prospective payment system. However, later weights calculated by the standard method are not compressed because a statistical correlation between high markups and high case-mix indexes offsets the cross-subsidization. HSRV weights from the same files are compressed because this methodology is more sensitive to cross-subsidies. However, both sets of weights produce equally good estimates of hospital-level costs net of those expenses that are paid by outlier payments. The greater compression of the HSRV weights is counterbalanced by the fact that more high-weight cases qualify as outliers. PMID:10127456

  19. Estimating radiotherapy demands in South East Asia countries in 2025 and 2035 using evidence-based optimal radiotherapy fractions.

    PubMed

    Yahya, Noorazrul; Roslan, Nurhaziqah

    2018-01-08

    As about 50% of cancer patients may require radiotherapy, demand for radiotherapy as a main cancer treatment is likely to rise with rising cancer incidence. This study aims to quantify the radiotherapy demand in countries in Southeast Asia (SEA) in 2025 and 2035 using evidence-based optimal radiotherapy fractions. SEA country-specific cancer incidence by tumor site for 2015, 2025 and 2035 was extracted from the GLOBOCAN database. We utilized the optimal radiotherapy utilization rate model by Wong et al. (2016) to calculate the optimal number of fractions for all tumor sites in each SEA country. The available machines (LINAC & Co-60) were extracted from the IAEA's Directory of Radiotherapy Centres (DIRAC) from which the number of available fractions was calculated. The incidence of cancers in SEA countries is expected to be 1.1 mil cases (2025) and 1.4 mil (2035) compared to 0.9 mil (2015). The number of radiotherapy fractions needed in 2025 and 2035 are 11.1 and 14.1 mil, respectively, compared to 7.6 mil in 2015. In 2015, the radiotherapy fulfillment rate (RFR; required fractions/available fractions) varied between countries, with Brunei, Singapore and Malaysia the highest (RFR > 1.0 - available fractions > required fractions), whereas Cambodia, Indonesia, Laos, Myanmar, Philippines, Timor-Leste and Vietnam have RFR < 0.5. RFR is correlated to GDP per capita (ρ = 0.73, P = 0.01). To allow RFR ≥1 in 2025 and 2035, another 866 and 1177 machines are required, respectively. The numbers are lower if longer running hours are implemented. With the optimal number of radiotherapy fractions, the number of machines required can be estimated, which will guide machine acquisition in SEA countries. RFR is low, with access varying according to economic status. © 2018 John Wiley & Sons Australia, Ltd.

  20. Response functions for computing absorbed dose to skeletal tissues from photon irradiation—an update

    NASA Astrophysics Data System (ADS)

    Johnson, Perry B.; Bahadori, Amir A.; Eckerman, Keith F.; Lee, Choonsik; Bolch, Wesley E.

    2011-04-01

    A comprehensive set of photon fluence-to-dose response functions (DRFs) is presented for two radiosensitive skeletal tissues—active and total shallow marrow—within 15 and 32 bone sites, respectively, of the ICRP reference adult male. The functions were developed using fractional skeletal masses and associated electron-absorbed fractions as reported for the UF hybrid adult male phantom, which in turn is based upon micro-CT images of trabecular spongiosa taken from a 40 year male cadaver. The new DRFs expand upon both the original set of seven functions produced in 1985, and a 2007 update calculated under the assumption of secondary electron escape from spongiosa. In this study, it is assumed that photon irradiation of the skeleton will yield charged particle equilibrium across all spongiosa regions at energies exceeding 200 keV. Kerma coefficients for active marrow, inactive marrow, trabecular bone and spongiosa at higher energies are calculated using the DRF algorithm setting the electron-absorbed fraction for self-irradiation to unity. By comparing kerma coefficients and DRF functions, dose enhancement factors and mass energy-absorption coefficient (MEAC) ratios for active marrow to spongiosa were derived. These MEAC ratios compared well with those provided by the NIST Physical Reference Data Library (mean difference of 0.8%), and the dose enhancement factors for active marrow compared favorably with values calculated in the well-known study published by King and Spiers (1985 Br. J. Radiol. 58 345-56) (mean absolute difference of 1.9 percentage points). Additionally, dose enhancement factors for active marrow were shown to correlate well with the shallow marrow volume fraction (R2 = 0.91). Dose enhancement factors for the total shallow marrow were also calculated for 32 bone sites representing the first such derivation for this target tissue.

  1. Fundamental studies on kinetic isotope effect (KIE) of hydrogen isotope fractionation in natural gas systems

    USGS Publications Warehouse

    Ni, Y.; Ma, Q.; Ellis, G.S.; Dai, J.; Katz, B.; Zhang, S.; Tang, Y.

    2011-01-01

Based on quantum chemistry calculations for normal octane homolytic cracking, a kinetic hydrogen isotope fractionation model for methane, ethane, and propane formation is proposed. The activation energy differences between D-substituted and non-substituted methane, ethane, and propane are 318.6, 281.7, and 280.2 cal/mol, respectively. In order to determine the effect of the entropy contribution for hydrogen isotopic substitution, a transition state for ethane bond rupture was determined based on density functional theory (DFT) calculations. The kinetic isotope effect (KIE) associated with bond rupture in D- and H-substituted ethane results in a frequency factor ratio of 1.07. Based on the proposed mathematical model of hydrogen isotope fractionation, one can potentially quantify natural gas thermal maturity from measured hydrogen isotope values. Calculated gas maturity values determined by the proposed mathematical model using δD values in ethane from several basins in the world are in close agreement with similar predictions based on the δ13C composition of ethane. However, gas maturity values calculated from field data of methane and propane using both hydrogen and carbon kinetic isotopic models do not agree as closely. It is possible that δD values in methane may be affected by microbial mixing and that propane values might be more susceptible to hydrogen exchange with water or to analytical errors. Although the model used in this study is quite preliminary, the results demonstrate that kinetic isotope fractionation effects in hydrogen may be useful in quantitative models of natural gas generation, and that δD values in ethane might be more suitable for modeling than comparable values in methane and propane. © 2011 Elsevier Ltd.
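
    The size of the kinetic isotope effect implied by these activation-energy differences can be illustrated with a plain Arrhenius ratio, k_H/k_D = (A_H/A_D) exp(ΔEa/RT) with ΔEa = Ea(D) - Ea(H). The sketch below uses the ΔEa values quoted in the abstract; the temperature is arbitrary, and applying the 1.07 frequency-factor ratio as A_H/A_D is an assumption about its direction.

        # Illustrative sketch: kinetic isotope effect k_H/k_D from an Arrhenius ratio.
        # delta_Ea values are those quoted in the abstract; the temperature is arbitrary,
        # and using the 1.07 ethane frequency-factor ratio as A_H/A_D is an assumption.
        import math

        R = 1.987  # cal/(mol K)

        def kie(delta_ea_cal_per_mol, freq_factor_ratio=1.0, temperature_k=450.0):
            """k_H/k_D = (A_H/A_D) * exp(delta_Ea/(R*T)), with delta_Ea = Ea(D) - Ea(H)."""
            return freq_factor_ratio * math.exp(delta_ea_cal_per_mol / (R * temperature_k))

        for gas, d_ea in [("methane", 318.6), ("ethane", 281.7), ("propane", 280.2)]:
            ratio = 1.07 if gas == "ethane" else 1.0
            print(gas, round(kie(d_ea, ratio), 3))   # heavier isotopologue reacts more slowly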

  2. Response functions for computing absorbed dose to skeletal tissues from photon irradiation--an update.

    PubMed

    Johnson, Perry B; Bahadori, Amir A; Eckerman, Keith F; Lee, Choonsik; Bolch, Wesley E

    2011-04-21

    A comprehensive set of photon fluence-to-dose response functions (DRFs) is presented for two radiosensitive skeletal tissues-active and total shallow marrow-within 15 and 32 bone sites, respectively, of the ICRP reference adult male. The functions were developed using fractional skeletal masses and associated electron-absorbed fractions as reported for the UF hybrid adult male phantom, which in turn is based upon micro-CT images of trabecular spongiosa taken from a 40 year male cadaver. The new DRFs expand upon both the original set of seven functions produced in 1985, and a 2007 update calculated under the assumption of secondary electron escape from spongiosa. In this study, it is assumed that photon irradiation of the skeleton will yield charged particle equilibrium across all spongiosa regions at energies exceeding 200 keV. Kerma coefficients for active marrow, inactive marrow, trabecular bone and spongiosa at higher energies are calculated using the DRF algorithm setting the electron-absorbed fraction for self-irradiation to unity. By comparing kerma coefficients and DRF functions, dose enhancement factors and mass energy-absorption coefficient (MEAC) ratios for active marrow to spongiosa were derived. These MEAC ratios compared well with those provided by the NIST Physical Reference Data Library (mean difference of 0.8%), and the dose enhancement factors for active marrow compared favorably with values calculated in the well-known study published by King and Spiers (1985 Br. J. Radiol. 58 345-56) (mean absolute difference of 1.9 percentage points). Additionally, dose enhancement factors for active marrow were shown to correlate well with the shallow marrow volume fraction (R(2) = 0.91). Dose enhancement factors for the total shallow marrow were also calculated for 32 bone sites representing the first such derivation for this target tissue.

  3. RESPONSE FUNCTIONS FOR COMPUTING ABSORBED DOSE TO SKELETAL TISSUES FROM PHOTON IRRADIATION – AN UPDATE

    PubMed Central

    Johnson, Perry; Bahadori, Amir; Eckerman, Keith; Lee, Choonsik; Bolch, Wesley E.

    2014-01-01

A comprehensive set of photon fluence-to-dose response functions (DRFs) is presented for two radiosensitive skeletal tissues – active and total shallow marrow – within 15 and 32 bone sites, respectively, of the ICRP reference adult male. The functions were developed using fractional skeletal masses and associated electron absorbed fractions as reported for the UF hybrid adult male phantom, which in turn is based upon microCT images of trabecular spongiosa taken from a 40-year male cadaver. The new DRFs expand upon both the original set of seven functions produced in 1985 and a 2007 update calculated under the assumption of secondary electron escape from spongiosa. In the present study, it is assumed that photon irradiation of the skeleton will yield charged particle equilibrium across all spongiosa regions at energies exceeding 200 keV. Kerma factors for active marrow, inactive marrow, trabecular bone, and spongiosa at higher energies are calculated using the DRF algorithm setting the electron absorbed fraction for self-irradiation to unity. By comparing kerma factors and DRF functions, dose enhancement factors and mass energy-absorption coefficient (MEAC) ratios for active marrow to spongiosa were derived. These MEAC ratios compared well with those provided by the NIST Physical Reference Data Library (mean difference of 0.8%), and the dose enhancement factors for active marrow compared favorably with values calculated in the well-known study published by King and Spiers (1985) (mean absolute difference of 1.9 percentage points). Additionally, dose enhancement factors for active marrow were shown to correlate well with the shallow marrow volume fraction (R2 = 0.91). Dose enhancement factors for the total shallow marrow were also calculated for 32 bone sites. PMID:21427484

  4. A biomimetic approach to the detection and identification of estrogen receptor agonists in surface waters using semipermeable membrane devices (SPMDs) and bioassay-directed chemical analysis.

    PubMed

    Rastall, Andrew C; Getting, Dominic; Goddard, Jon; Roberts, David R; Erdinger, Lothar

    2006-07-01

Some anthropogenic pollutants possess the capacity to disrupt endogenous control of developmental and reproductive processes in aquatic biota by activating estrogen receptors. Many anthropogenic estrogen receptor agonists (ERAs) are hydrophobic and will therefore readily partition into the abiotic organic carbon phases present in natural waters. This partitioning process effectively reduces the proportion of ERAs readily available for bioconcentration by aquatic biota. Results from some studies have suggested that for many aquatic species, bioconcentration of the freely-dissolved fraction may be the principal route of uptake for hydrophobic pollutants with logarithm n-octanol/water partition coefficient (log Kow) values less than approximately 6.0, which includes the majority of known anthropogenic ERAs. The detection and identification of freely-dissolved readily bioconcentratable ERAs is therefore an important aspect of exposure and risk assessment. However, most studies use conventional techniques to sample total ERA concentrations and in doing so frequently fail to account for bioconcentration of the freely-dissolved fraction. The aim of the current study was to couple the biomimetic sampling properties of semipermeable membrane devices (SPMDs) to a bioassay-directed chemical analysis (BDCA) scheme for the detection and identification of readily bioconcentratable ERAs in surface waters. SPMDs were constructed and deployed at a number of sites in Germany and the UK. Following the dialytic recovery of target compounds and size exclusion chromatographic cleanup, SPMD samples were fractionated using a reverse-phase HPLC method calibrated to provide an estimation of target analyte log Kow. A portion of each HPLC fraction was then subjected to the yeast estrogen screen (YES) to determine estrogenic potential. Results were plotted in the form of 'estrograms' which displayed profiles of estrogenic potential as a function of HPLC retention time (i.e. hydrophobicity) for each of the samples. Where significant activity was elicited in the YES, the remaining portion of the respective active fraction was subjected to GC-MS analysis in an attempt to identify the ERAs present. Estrograms from each of the field samples showed that readily bioconcentratable ERAs were present at each of the sampling sites. Estimated log Kow values for the various active fractions ranged from 1.92 to 8.63. For some samples, estrogenic potential was associated with a relatively narrow range of log Kow values whilst in others estrogenic potential was more widely distributed across the respective estrograms. ERAs identified in active fractions included some benzophenones, various nonylphenol isomers, benzyl butyl phthalate, dehydroabietic acid, sitosterol, 3-(4-methylbenzylidene)camphor (4-MBC) and 6-acetyl-1,1,2,4,4,7-hexamethyltetralin (AHTN). Other tentatively identified compounds which may have contributed to the observed YES activity included various polycyclic aromatic hydrocarbons (PAHs) and their alkylated derivatives, methylated benzylphenols, various alkyl-phenols and dialkylphenols. However, potential ERAs present in some active fractions remain unidentified. Our results show that SPMD-YES-based BDCA can be used to detect and identify readily bioconcentratable ERAs in surface waters. As such, this biomimetic approach can be employed as an alternative to conventional methodologies to provide investigators with a more environmentally relevant insight into the distribution and identity of ERAs in surface waters.
The use of alternative bioassays also has the potential to expand SPMD-based BDCA to include a wide range of toxicological endpoints. Improvements to the analytical methodology used to identify ERAs or other target compounds in active fractions in the current study could greatly enhance the applicability of the methodology to risk assessment and monitoring programmes.

  5. Combination of COFRADIC and high temperature-extended column length conventional liquid chromatography: a very efficient way to tackle complex protein samples, such as serum.

    PubMed

    Sandra, Koen; Verleysen, Katleen; Labeur, Christine; Vanneste, Lies; D'Hondt, Filip; Thomas, Grégoire; Kas, Koen; Gevaert, Kris; Vandekerckhove, Joël; Sandra, Pat

    2007-03-01

The previously reported COmbined FRActional DIagonal Chromatography (COFRADIC) methodology, in which a subset of peptides representative for their parent proteins are sorted, is particularly powerful for whole proteome analysis. This peptide-centric technology is built around diagonal chromatography, where peptide separations are crucial. This paper presents high efficiency peptide separations, in which four 250 x 2.1 mm, 5 µm Zorbax 300SB-C18 columns (total length 1 m) were coupled at an operating temperature of 60 °C using a dedicated LC oven and conventional LC equipment. The high efficiency separations were combined with the COFRADIC procedure. This extremely powerful combination resulted, for the analysis of serum, in an increase in the uniquely identified peptide sequences by a factor of 2.6, compared to the COFRADIC procedure on a 25 cm column. This is a reflection of the increased peak capacity obtained on the 1 m column, which was calculated to be a factor 2.7 higher than on the 25 cm column. Besides more efficient sorting, less ion suppression was noticed.

  6. Comparison of ash behavior of different fuels in fluidised bed combustion using advanced fuel analysis and global equilibrium calculations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zevenhoven-Onderwater, M.; Blomquist, J.P.; Skrifvars, B.J.

    1999-07-01

The behavior of different ashes is predicted by means of a combination of an advanced fuel analysis and global equilibrium calculations. In order to cover a broad spectrum of fuels, a coal, a peat, a forest residue and Salix (i.e. willow) are studied. The latter was taken with and without soil contamination, i.e. with a high and a low content of silica, respectively. It is shown that mineral matter in fossil and biomass fuels can be present in the matrix of the fuel itself or as included minerals. Using an advanced fuel analysis, i.e. a fractionation method, this mineral content can be divided into four fractions. The first fraction mainly contains those metal ions that can be leached out of the fuel by water, chiefly alkali sulfates, carbonates and chlorides. The second fraction consists mainly of those ions that are leached out by ammonium acetate and are connected to the organic matrix. The third fraction contains the metals leached out by hydrochloric acid, i.e. earth alkali carbonates and sulfates as well as pyrites. The rest fraction contains those minerals that are not leached out by any of the above-mentioned solvents, such as silicates. A global equilibrium analysis is used to predict the thermal and chemical behavior of the combined first and second fractions and of the combined third and rest fractions under pressurized and/or atmospheric combustion conditions. Results of both the fuel analysis and the global equilibrium analysis are discussed, and practical implications for combustion processes are pointed out.

  7. Characterisation of the biochemical methane potential (BMP) of individual material fractions in Danish source-separated organic household waste.

    PubMed

    Naroznova, Irina; Møller, Jacob; Scheutz, Charlotte

    2016-04-01

    This study is dedicated to characterising the chemical composition and biochemical methane potential (BMP) of individual material fractions in untreated Danish source-separated organic household waste (SSOHW). First, data on SSOHW in different countries, available in the literature, were evaluated and then, secondly, laboratory analyses for eight organic material fractions comprising Danish SSOHW were conducted. No data were found in the literature that fully covered the objectives of the present study. Based on laboratory analyses, all fractions were assigned according to their specific properties in relation to BMP, protein content, lipids, lignocellulose biofibres and easily degradable carbohydrates (carbohydrates other than lignocellulose biofibres). The three components in lignocellulose biofibres, i.e. lignin, cellulose and hemicellulose, were differentiated, and theoretical BMP (TBMP) and material degradability (BMP from laboratory incubation tests divided by TBMP) were expressed. Moreover, the degradability of lignocellulose biofibres (the share of volatile lignocellulose biofibre solids degraded in laboratory incubation tests) was calculated. Finally, BMP for average SSOHW composition in Denmark (untreated) was calculated, and the BMP contribution of the individual material fractions was then evaluated. Material fractions of the two general waste types, defined as "food waste" and "fibre-rich waste," were found to be anaerobically degradable with considerable BMP. Material degradability of material fractions such as vegetation waste, moulded fibres, animal straw, dirty paper and dirty cardboard, however, was constrained by lignin content. BMP for overall SSOHW (untreated) was 404 mL CH4 per g VS, which might increase if the relative content of material fractions, such as animal and vegetable food waste, kitchen tissue and dirty paper in the waste, becomes larger. Copyright © 2016 Elsevier Ltd. All rights reserved.
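
    The overall BMP reported for the average SSOHW composition is, in essence, a volatile-solids-weighted average of the fraction-specific potentials. The short Python sketch below shows that weighting; the fraction shares and BMP values are invented placeholders, not the study's measurements.

        # Illustrative sketch: BMP of a waste mix as a VS-weighted average of its material
        # fractions. Fraction shares and BMP values are invented placeholders.
        fractions = {
            # name: (share of total volatile solids, BMP in mL CH4 per g VS)
            "animal/vegetable food waste": (0.30, 500.0),
            "vegetation waste":            (0.35, 400.0),
            "kitchen tissue":              (0.15, 380.0),
            "dirty paper":                 (0.20, 300.0),
        }

        total_share = sum(share for share, _ in fractions.values())
        assert abs(total_share - 1.0) < 1e-9, "VS shares must sum to 1"

        overall_bmp = sum(share * bmp for share, bmp in fractions.values())
        print(f"overall BMP = {overall_bmp:.0f} mL CH4 per g VS")   # 407 with these placeholders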

  8. Efficient free energy calculations of quantum systems through computer simulations

    NASA Astrophysics Data System (ADS)

    Antonelli, Alex; Ramirez, Rafael; Herrero, Carlos; Hernandez, Eduardo

    2009-03-01

In general, the classical limit is assumed in computer simulation calculations of free energy. This approximation, however, is not justifiable for a class of systems in which quantum contributions to the free energy cannot be neglected. The inclusion of quantum effects is important for the determination of reliable phase diagrams of these systems. In this work, we present a new methodology to compute the free energy of many-body quantum systems [1]. This methodology results from the combination of the path integral formulation of statistical mechanics and efficient non-equilibrium methods to estimate free energy, namely, the adiabatic switching and reversible scaling methods. A quantum Einstein crystal is used as a model to show the accuracy and reliability of the methodology. The new method is applied to the calculation of solid-liquid coexistence properties of neon. Our findings indicate that quantum contributions to properties such as melting point, latent heat of fusion, entropy of fusion, and slope of the melting line can be up to 10% of the values calculated using the classical approximation. [1] R. M. Ramirez, C. P. Herrero, A. Antonelli, and E. R. Hernández, Journal of Chemical Physics 129, 064110 (2008)

  9. Isotopic abundances of Hg in mercury stars inferred from the Hg II line at 3984 A

    NASA Technical Reports Server (NTRS)

    White, R. E.; Vaughan, A. H., Jr.; Preston, G. W.; Swings, J. P.

    1976-01-01

Wavelengths of the Hg II absorption feature at 3984 A in 30 Hg stars are distributed uniformly from the value for the terrestrial mix to a value that corresponds to nearly pure Hg-204. The wavelengths are correlated loosely with effective temperatures inferred from Q(UBV). Relative isotopic abundances derived from partially resolved profiles of the 3984-A line in iota CrB, chi Lup, and HR 4072 suggest that mass-dependent fractionation has occurred in all three stars. It is supposed that such fractionation occurs in all Hg stars, and a scheme is proposed whereby isotopic compositions can be inferred from a comparison of stellar wavelengths and equivalent widths with those calculated for a family of fractionated isotopic mixes. Theoretical profiles calculated for the derived isotopic composition agree well with high-resolution interferometric profiles obtained for three of the stars.

  10. Fractional time-dependent apparent viscosity model for semisolid foodstuffs

    NASA Astrophysics Data System (ADS)

    Yang, Xu; Chen, Wen; Sun, HongGuang

    2017-10-01

The difficulty in describing thixotropic behaviors in semisolid foodstuffs lies in the time-dependent nature of apparent viscosity under constant shear rate. In this study, we propose a novel theoretical model based on a fractional derivative to address this demand from industry. The present model adopts the critical parameter of the fractional derivative order α to describe the corresponding time-dependent thixotropic behavior. More interestingly, the parameter α provides a quantitative insight into discriminating foodstuffs. Through re-examination of three groups of experimental data (tehineh, balangu, and natillas), the proposed methodology is validated and shows good applicability and efficiency. The results show that the present fractional apparent viscosity model performs successfully for the tested foodstuffs in the shear rate range of 50-150 s^-1. The fractional order α decreases with increasing temperature at low temperatures (below 50 °C) but increases with growing shear rate, while the ideal initial viscosity k decreases with increasing temperature, shear rate, and ingredient content. It is observed that the magnitude of α is capable of characterizing the thixotropy of semisolid foodstuffs.

  11. Purification and fractionation of membranes for proteomic analyses.

    PubMed

    Marmagne, Anne; Salvi, Daniel; Rolland, Norbert; Ephritikhine, Geneviève; Joyard, Jacques; Barbier-Brygoo, Hélène

    2006-01-01

    Proteomics is a very powerful approach to link the information contained in sequenced genomes, such as Arabidopsis, to the functional knowledge provided by studies of plant cell compartments. However, membrane proteomics remains a challenge. One way to bring into view the complex mixture of proteins present in a membrane is to develop proteomic analyses based on (1) the use of highly purified membrane fractions and (2) fractionation of membrane proteins to retrieve as many proteins as possible (from the most to the less hydrophobic ones). To illustrate such strategies, we choose two types of membranes, the plasma membrane and the chloroplast envelope membranes. Both types of membranes can be prepared in a reasonable degree of purity from different types of tissues: the plasma membrane from cultured cells and the chloroplast envelope membrane from whole plants. This article is restricted to the description of methods for the preparation of highly purified and characterized plant membrane fractions and the subsequent fractionation of these membrane proteins according to simple physicochemical criteria (i.e., chloroform/methanol extraction, alkaline or saline treatments) for further analyses using modern proteomic methodologies.

  12. Chaotic vibrations of the duffing system with fractional damping

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Syta, Arkadiusz; Litak, Grzegorz; Lenci, Stefano

    2014-03-15

We examined the Duffing system with a fractional damping term. Calculating the basins of attraction, we demonstrate a broad spectrum of non-linear behaviour connected with sensitivity to the initial conditions and chaos. To quantify the dynamical response of the system, we propose the statistical 0-1 test as well as the maximal Lyapunov exponent; the application of the latter encounters a few difficulties because of the memory effect due to the fractional derivative. The results are confirmed by bifurcation diagrams, phase portraits, and Poincaré sections.
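
    For readers who want to reproduce this kind of dynamics numerically, the fractional damping term can be discretized with a Grünwald-Letnikov sum, which also makes the memory effect mentioned above explicit. The sketch below integrates one common form of the fractionally damped Duffing equation with an explicit Euler step; the equation form, parameter values, and scheme are illustrative assumptions, not necessarily those used in the paper.

        # Illustrative sketch: Duffing oscillator with a fractional damping term,
        #   x'' + gamma * D^alpha x - x + x^3 = A * cos(omega * t),
        # integrated with a Grunwald-Letnikov (GL) approximation of D^alpha x.
        # Equation form, scheme, and parameter values are assumptions for illustration.
        import math

        def gl_weights(alpha, n):
            """GL weights: w_0 = 1, w_j = w_{j-1} * (1 - (alpha + 1)/j)."""
            w = [1.0]
            for j in range(1, n + 1):
                w.append(w[-1] * (1.0 - (alpha + 1.0) / j))
            return w

        def simulate(alpha=0.9, gamma=0.5, amp=0.35, omega=1.0, h=0.02, steps=4000):
            w = gl_weights(alpha, steps)
            xs, x, v = [0.1], 0.1, 0.0          # small initial displacement, zero velocity
            for n in range(steps):
                # GL estimate of D^alpha x at step n from the whole history (memory effect)
                d_alpha = sum(w[j] * xs[n - j] for j in range(n + 1)) / h**alpha
                a = amp * math.cos(omega * n * h) + x - x**3 - gamma * d_alpha
                v += h * a                      # explicit Euler update
                x += h * v
                xs.append(x)
            return xs

        trajectory = simulate()
        print(trajectory[-5:])                  # tail of the displacement history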

  13. A review of the current state-of-the-art methodology for handling bias and uncertainty in performing criticality safety evaluations. Final report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Disney, R.K.

    1994-10-01

The methodology for handling bias and uncertainty when calculational methods are used in criticality safety evaluations (CSEs) is a rapidly evolving technology. The changes in the methodology are driven by a number of factors. One factor responsible for changes in the methodology for handling bias and uncertainty in CSEs within the overview of the US Department of Energy (DOE) is a shift in the overview function from a "site" perception to a more uniform or "national" perception. Other causes for change or improvement in the methodology for handling calculational bias and uncertainty are: (1) an increased demand for benchmark criticals data to expand the area (range) of applicability of existing data, (2) a demand for new data to supplement existing benchmark criticals data, (3) the increased reliance on (or need for) computational benchmarks which supplement (or replace) experimental measurements in critical assemblies, and (4) an increased demand for benchmark data applicable to the expanded range of conditions and configurations encountered in DOE site restoration and remediation.

  14. The Long Exercise Test in Periodic Paralysis: A Bayesian Analysis.

    PubMed

    Simmons, Daniel B; Lanning, Julie; Cleland, James C; Puwanant, Araya; Twydell, Paul T; Griggs, Robert C; Tawil, Rabi; Logigian, Eric L

    2018-05-12

    The long exercise test (LET) is used to assess the diagnosis of periodic paralysis (PP), but LET methodology and normal "cut-off" values vary. To determine optimal LET methodology and cut-offs, we reviewed LET data (abductor digiti minimi (ADM) motor response amplitude, area) from 55 PP patients (32 genetically definite) and 125 controls. Receiver operating characteristic (ROC) curves were constructed and area-under-the-curve (AUC) calculated to compare 1) peak-to-nadir versus baseline-to-nadir methodologies, and 2) amplitude versus area decrements. Using Bayesian principles, optimal "cut-off" decrements that achieved 95% post-test probability of PP were calculated for various pre-test probabilities (PreTPs). AUC was highest for peak-to-nadir methodology and equal for amplitude and area decrements. For PreTP ≤50%, optimal decrement cut-offs (peak-to-nadir) were >40% (amplitude) or >50% (area). For confirmation of PP, our data endorse the diagnostic utility of peak-to-nadir LET methodology using 40% amplitude or 50% area decrement cut-offs for PreTPs ≤50%. This article is protected by copyright. All rights reserved. © 2018 Wiley Periodicals, Inc.
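
    The Bayesian step described here, converting a pre-test probability and a cut-off's operating characteristics into a post-test probability, is compact enough to spell out. In the sketch below the sensitivity and specificity are placeholder values, not the operating points reported by the study.

        # Illustrative sketch: post-test probability of periodic paralysis after a positive
        # long exercise test, via the positive likelihood ratio. Sensitivity and specificity
        # are placeholder values, not the study's reported operating points.
        def post_test_probability(pre_test_p, sensitivity, specificity):
            """P(disease | positive test) from pre-test odds and LR+."""
            lr_positive = sensitivity / (1.0 - specificity)
            pre_odds = pre_test_p / (1.0 - pre_test_p)
            post_odds = pre_odds * lr_positive
            return post_odds / (1.0 + post_odds)

        # e.g. 50% pre-test probability, hypothetical cut-off with 60% sensitivity, 98% specificity
        print(round(post_test_probability(0.50, 0.60, 0.98), 3))   # ~0.968, i.e. above 95%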

  15. Quantifying uncertainty in stable isotope mixing models

    DOE PAGES

    Davis, Paul; Syme, James; Heikoop, Jeffrey; ...

    2015-05-19

Mixing models are powerful tools for identifying biogeochemical sources and determining mixing fractions in a sample. However, identification of actual source contributors is often not simple, and source compositions typically vary or even overlap, significantly increasing model uncertainty in calculated mixing fractions. This study compares three probabilistic methods: SIAR [Parnell et al., 2010], a pure Monte Carlo technique (PMC), and the Stable Isotope Reference Source (SIRS) mixing model, a new technique that estimates mixing in systems with more than three sources and/or uncertain source compositions. In this paper, we use nitrate stable isotope examples (δ15N and δ18O), but all methods tested are applicable to other tracers. In Phase I of a three-phase blind test, we compared methods for a set of six-source nitrate problems. PMC was unable to find solutions for two of the target water samples. The Bayesian method, SIAR, experienced anchoring problems, and SIRS calculated mixing fractions that most closely approximated the known mixing fractions. For that reason, SIRS was the only approach used in the next phase of testing. In Phase II, the problem was broadened so that any subset of the six sources could be a possible solution to the mixing problem. Results showed a high rate of Type I errors, where solutions included sources that were not contributing to the sample. In Phase III, eliminating some sources based on assumed site knowledge and assumed nitrate concentrations substantially reduced mixing fraction uncertainties and lowered the Type I error rate. These results demonstrate that valuable insights into stable isotope mixing problems result from probabilistic mixing model approaches like SIRS. The results also emphasize the importance of identifying a minimal set of potential sources and quantifying uncertainties in source isotopic composition, as well as demonstrating the value of additional information in reducing the uncertainty in calculated mixing fractions.
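
    For orientation, the deterministic core of such a mixing problem (finding fractions that reproduce the measured tracer values and sum to one) is a small linear system; the probabilistic methods compared in the study add source-composition uncertainty on top of this. The δ values in the sketch below are invented for illustration.

        # Illustrative sketch: deterministic three-source, two-tracer mixing calculation.
        # Source and sample delta values are invented for illustration.
        import numpy as np

        # Columns = sources; rows = (delta15N, delta18O, mass-balance constraint).
        sources = np.array([
            [ 5.0, 10.0, 20.0],   # delta15N of sources 1..3
            [ 2.0, 15.0,  5.0],   # delta18O of sources 1..3
            [ 1.0,  1.0,  1.0],   # fractions must sum to 1
        ])
        sample = np.array([9.5, 7.1, 1.0])     # measured delta15N, delta18O, and the constraint

        fractions = np.linalg.solve(sources, sample)
        print(fractions, fractions.sum())      # mixing fractions of each source; they sum to 1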

  16. Potential of Extracted Locusta Migratoria Protein Fractions as Value-Added Ingredients.

    PubMed

    Clarkson, Claudia; Mirosa, Miranda; Birch, John

    2018-02-09

Although locusts can be sustainably produced and are nutrient rich, the thought of eating them can be hard to swallow for many consumers. This paper aims to investigate the nutritional composition of Locusta migratoria, including the properties of extracted locust protein, contributing to limited literature and product development opportunities for industry. Locusts sourced from Dunedin, New Zealand, contained a high amount of protein (50.79% dry weight) and fat (34.93%), the latter containing a high amount of omega-3 (15.64%) and giving a desirably low omega-3/omega-6 ratio of 0.57. Three protein fractions, namely an insoluble locust fraction, a soluble locust fraction, and a supernatant fraction, were recovered following an alkali isoelectric precipitation methodology. Initially, proteins were solubilised at pH 10 and then precipitated out at the isoelectric point (pH 4). All fractions had significantly higher protein contents compared with the whole locust. The insoluble protein fraction represented 37.76% of the dry weight of protein recovered and was much lighter in colour and greener compared to the other fractions. It also had the highest water and oil holding capacities, of 5.17 mL/g and 7.31 mL/g, possibly due to larger particle size. The high supernatant yield (56.60%) and low soluble protein yield (9.83%) were unexpected and could be a result of the experimental pH conditions chosen.

  17. Potential of Extracted Locusta Migratoria Protein Fractions as Value-Added Ingredients

    PubMed Central

    Birch, John

    2018-01-01

Although locusts can be sustainably produced and are nutrient rich, the thought of eating them can be hard to swallow for many consumers. This paper aims to investigate the nutritional composition of Locusta migratoria, including the properties of extracted locust protein, contributing to limited literature and product development opportunities for industry. Locusts sourced from Dunedin, New Zealand, contained a high amount of protein (50.79% dry weight) and fat (34.93%), the latter containing a high amount of omega-3 (15.64%) and giving a desirably low omega-3/omega-6 ratio of 0.57. Three protein fractions, namely an insoluble locust fraction, a soluble locust fraction, and a supernatant fraction, were recovered following an alkali isoelectric precipitation methodology. Initially, proteins were solubilised at pH 10 and then precipitated out at the isoelectric point (pH 4). All fractions had significantly higher protein contents compared with the whole locust. The insoluble protein fraction represented 37.76% of the dry weight of protein recovered and was much lighter in colour and greener compared to the other fractions. It also had the highest water and oil holding capacities, of 5.17 mL/g and 7.31 mL/g, possibly due to larger particle size. The high supernatant yield (56.60%) and low soluble protein yield (9.83%) were unexpected and could be a result of the experimental pH conditions chosen. PMID:29425143

  18. CASCADE IMPACTOR DATA REDUCTION WITH SR-52 AND TI-59 PROGRAMMABLE CALCULATORS

    EPA Science Inventory

    The report provides useful tools for obtaining particle size distributions and graded penetration data from cascade impactor measurements. The programs calculate impactor aerodynamic cut points, total mass collected by the impactor, cumulative mass fraction less than for each sta...
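
    The central data-reduction step such calculator programs perform (converting per-stage masses into cumulative mass fractions smaller than each stage cut point) is sketched below; the cut points and stage masses are invented example values.

        # Illustrative sketch: cascade impactor data reduction - cumulative mass fraction
        # smaller than each stage cut point. Cut points (um) and masses (mg) are invented.
        stages = [          # (aerodynamic cut point in micrometres, stage mass in mg), coarse -> fine
            (10.0, 1.2),
            (5.0, 2.4),
            (2.5, 3.1),
            (1.0, 1.8),
            (0.5, 0.9),
        ]
        backup_filter_mass = 0.6                     # particles finer than the last cut point

        total_mass = sum(m for _, m in stages) + backup_filter_mass
        mass_below = backup_filter_mass
        for cut_point, mass in reversed(stages):     # accumulate from the finest stage upwards
            print(f"< {cut_point:5.1f} um : {mass_below / total_mass:.2%}")
            mass_below += mass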

  19. [Practice evolution of hypofractionation in breast radiation therapy and medical impact].

    PubMed

    Dupin, C; Vilotte, F; Lagarde, P; Petit, A; Breton-Callu, C

    2016-06-01

Whole breast irradiation after conservative surgery is the standard treatment for invasive breast cancer. Randomized studies indicate that hypofractionation can be equivalent for selected patients. This study focuses on the evolution of fractionation practice in a single centre and analyses the economic impact of the change in practice. All prescriptions for invasive breast cancer between January 2010 and June 2014 were analyzed. Female patients aged 60 years or older with pN0 disease were considered for the economic study. Patients included in clinical trials and patients with high-grade tumours were excluded from the hypofractionation practice study, because the physician could not choose the fractionation. We used data from the public health insurance system to calculate the cost per fraction and the transportation cost. A total of 2031 patients were treated; 399 were eligible for the economic study (20%) and 282 for the practice study (14%). Treatment with 25 fractions decreased from 90% to 16% in the first half of 2014. Meanwhile, treatment with 15 or 16 fractions increased from 6% in 2010 to 68% in the first half of 2014. Hypofractionated treatment consisted entirely of 42.5 Gy in 16 fractions in 2010 and entirely of 40 Gy in 15 fractions in 2014, in line with the long-term follow-up publication of the START trials. Treatment with five fractions remained stable at around 7% (4 to 16%), reserved for patients over 80 years (P<0.0001). Based on data from 3451 fractions in 2013, the transport cost was calculated at 62 € per fraction, in addition to a 170.77 € reimbursement per fraction, giving a total cost per fraction of 232.77 €. Practice change led to an increase in hypofractionation in recent years. Hypofractionation may currently be prescribed for about 20% of patients. This practice evolution is beneficial for patients and the public health system. Copyright © 2016 Société française de radiothérapie oncologique (SFRO). Published by Elsevier SAS. All rights reserved.
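
    The economic side of the comparison reduces to per-fraction arithmetic using the 232.77 € figure derived in the abstract (62 € transport plus 170.77 € reimbursement); the per-patient comparison of a 25-fraction and a 15-fraction schedule below simply applies that figure.

        # Illustrative sketch: cost impact of hypofractionation, using the per-fraction cost
        # reported in the abstract (transport + reimbursement).
        TRANSPORT_PER_FRACTION = 62.00        # EUR
        REIMBURSEMENT_PER_FRACTION = 170.77   # EUR
        COST_PER_FRACTION = TRANSPORT_PER_FRACTION + REIMBURSEMENT_PER_FRACTION   # 232.77 EUR

        def course_cost(n_fractions):
            return n_fractions * COST_PER_FRACTION

        standard = course_cost(25)            # conventional schedule, 25 fractions
        hypo = course_cost(15)                # hypofractionated schedule, 15 fractions
        print(standard, hypo, standard - hypo)   # 5819.25, 3491.55, saving of 2327.70 per patient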

  20. A dual mode breath sampler for the collection of the end-tidal and dead space fractions.

    PubMed

    Salvo, P; Ferrari, C; Persia, R; Ghimenti, S; Lomonaco, T; Bellagambi, F; Di Francesco, F

    2015-06-01

    This work presents a breath sampler prototype automatically collecting end-tidal (single and multiple breaths) or dead space air fractions (multiple breaths). This result is achieved by real time measurements of the CO2 partial pressure and airflow during the expiratory and inspiratory phases. Suitable algorithms, used to control a solenoid valve, guarantee that a Nalophan(®) bag is filled with the selected breath fraction even if the subject under test hyperventilates. The breath sampler has low pressure drop (<0.5 kPa) and uses inert or disposable components to avoid bacteriological risk for the patients and contamination of the breath samples. A fully customisable software interface allows a real time control of the hardware and software status. The performances of the breath sampler were evaluated by comparing (a) the CO2 partial pressure calculated during the sampling with the CO2 pressure measured off-line within the Nalophan(®) bag; (b) the concentrations of four selected volatile organic compounds in dead space, end-tidal and mixed breath fractions. Results showed negligible deviations between calculated and off-line CO2 pressure values and the distributions of the selected compounds into dead space, end-tidal and mixed breath fractions were in agreement with their chemical-physical properties. Copyright © 2015. Published by Elsevier Ltd.

  1. Development of performance measurement for freight transportation.

    DOT National Transportation Integrated Search

    2014-09-01

In this project, the researchers built a set of performance measures that are unified, user-oriented, scalable, systematic, effective, and calculable for intermodal freight management, and developed methodologies to calculate and use the measures. ...

  2. On extending Kohn-Sham density functionals to systems with fractional number of electrons.

    PubMed

    Li, Chen; Lu, Jianfeng; Yang, Weitao

    2017-06-07

    We analyze four ways of formulating the Kohn-Sham (KS) density functionals with a fractional number of electrons, through extending the constrained search space from the Kohn-Sham and the generalized Kohn-Sham (GKS) non-interacting v-representable density domain for integer systems to four different sets of densities for fractional systems. In particular, these density sets are (I) ensemble interacting N-representable densities, (II) ensemble non-interacting N-representable densities, (III) non-interacting densities by the Janak construction, and (IV) non-interacting densities whose composing orbitals satisfy the Aufbau occupation principle. By proving the equivalence of the underlying first order reduced density matrices associated with these densities, we show that sets (I), (II), and (III) are equivalent, and all reduce to the Janak construction. Moreover, for functionals with the ensemble v-representable assumption at the minimizer, (III) reduces to (IV) and thus justifies the previous use of the Aufbau protocol within the (G)KS framework in the study of the ground state of fractional electron systems, as defined in the grand canonical ensemble at zero temperature. By further analyzing the Aufbau solution for different density functional approximations (DFAs) in the (G)KS scheme, we rigorously prove that there can be one and only one fractional occupation for the Hartree Fock functional, while there can be multiple fractional occupations for general DFAs in the presence of degeneracy. This has been confirmed by numerical calculations using the local density approximation as a representative of general DFAs. This work thus clarifies important issues on density functional theory calculations for fractional electron systems.

  3. USE OF BIOASSAY-DIRECTED CHEMICAL ANALYSIS FOR IDENTIFYING MUTAGENIC COMPOUNDS IN URBAN AIR AND COMBUSTION EMISSIONS

    EPA Science Inventory

    Bioassay-directed chemical analysis fractionation has been used for 30 years to identify mutagenic classes of compounds in complex mixtures. Most studies have used the Salmonella (Ames) mutagenicity assay, and we have recently applied this methodology to two standard reference sa...

  4. PERFORMANCE AND SENSITIVITY ANALYSIS OF THE USEPA WINS FRACTIONATOR FOR THE PM 2.5 FEDERAL REFERENCE METHOD

    EPA Science Inventory

In response to growing health concerns related to atmospheric fine particles, EPA promulgated in 1997 a new particulate matter standard accompanied by new sampling methodology. Based on a review of pertinent literature, a new metric (PM2.5) was adopted and its measurement method...

  5. A PILOT STUDY OF THE INFLUENCE OF RESIDENTIAL HAC DUTY CYCLE ON INDOOR AIR QUALITY

    EPA Science Inventory

    A simple methodology was developed to collect measurements of duty cycle, the fraction of time the heating and air conditioning (HAC) system was operating inside residences. The primary purpose of the measurements was to assess whether the HAC duty cycle was related to reducti...

  6. MULTI-SITE PERFORMANCE EVALUATIONS OF CANDIDATE METHODOLOGIES FOR DETERMINING COARSE PARTICULATE MATTER (PMC) CONCENTRATIONS

    EPA Science Inventory

    Comprehensive field studies were conducted to evaluate the performance of sampling methods for measuring the coarse fraction of PM10 in ambient air. Five separate sampling approaches were evaluated at each of three sampling sites. As the primary basis of comparison, a discret...

  7. MULTI-SITE EVALUATIONS OF CANDIDATE METHODOLOGIES FOR DETERMINING COARSE PARTICULATE MATTER (PMC) CONCENTRATIONS

    EPA Science Inventory

    Comprehensive field studies were conducted to evaluate the performance of sampling methods for measuring the coarse fraction of PM10 in ambient air. Five separate sampling approaches were evaluated at each of three sampling sites. As the primary basis of comparison, a discret...

  8. MULTI-SITE EVALUATIONS OF CANDIDATE METHODOLOGIES FOR DETERMINING COARSE PARTICULATE MATTER (PMC) CONCENTRATIONS

    EPA Science Inventory

    Comprehensive field studies were conducted to evaluate the performance of sampling methods for measuring the coarse fraction of PM10 in ambient air. Five separate sampling approaches were evaluated at each of three sampling sites. As the primary basis of comparison, a discrete ...

  9. Improved model reduction and tuning of fractional-order PI(λ)D(μ) controllers for analytical rule extraction with genetic programming.

    PubMed

    Das, Saptarshi; Pan, Indranil; Das, Shantanu; Gupta, Amitava

    2012-03-01

    Genetic algorithm (GA) has been used in this study for a new approach of suboptimal model reduction in the Nyquist plane and optimal time domain tuning of proportional-integral-derivative (PID) and fractional-order (FO) PI(λ)D(μ) controllers. Simulation studies show that the new Nyquist-based model reduction technique outperforms the conventional H(2)-norm-based reduced parameter modeling technique. With the tuned controller parameters and reduced-order model parameter dataset, optimum tuning rules have been developed with a test-bench of higher-order processes via genetic programming (GP). The GP performs a symbolic regression on the reduced process parameters to evolve a tuning rule which provides the best analytical expression to map the data. The tuning rules are developed for a minimum time domain integral performance index described by a weighted sum of error index and controller effort. From the reported Pareto optimal front of the GP-based optimal rule extraction technique, a trade-off can be made between the complexity of the tuning formulae and the control performance. The efficacy of the single-gene and multi-gene GP-based tuning rules has been compared with the original GA-based control performance for the PID and PI(λ)D(μ) controllers, handling four different classes of representative higher-order processes. These rules are very useful for process control engineers, as they inherit the power of the GA-based tuning methodology, but can be easily calculated without the requirement for running the computationally intensive GA every time. Three-dimensional plots of the required variation in PID/fractional-order PID (FOPID) controller parameters with reduced process parameters have been shown as a guideline for the operator. Parametric robustness of the reported GP-based tuning rules has also been shown with credible simulation examples. Copyright © 2011 ISA. Published by Elsevier Ltd. All rights reserved.
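
    The controller being tuned has a compact frequency-domain form, C(jω) = Kp + Ki (jω)^(-λ) + Kd (jω)^(μ). The sketch below evaluates that response for an arbitrary set of gains and fractional orders; the numbers are illustrative and are not outputs of the paper's GP-derived tuning rules.

        # Illustrative sketch: frequency response of a fractional-order PI^lambda D^mu
        # controller, C(jw) = Kp + Ki*(jw)**(-lam) + Kd*(jw)**mu. Gains and orders are
        # arbitrary illustrative values, not the paper's tuning-rule outputs.
        import numpy as np

        def fopid_response(omega, kp, ki, kd, lam, mu):
            s = 1j * np.asarray(omega, dtype=complex)
            return kp + ki * s**(-lam) + kd * s**mu

        omega = np.logspace(-2, 2, 5)          # rad/s
        c = fopid_response(omega, kp=2.0, ki=1.5, kd=0.8, lam=0.9, mu=0.7)
        for w, val in zip(omega, c):
            print(f"w = {w:8.3f}  |C| = {abs(val):8.3f}  phase = {np.degrees(np.angle(val)):7.2f} deg")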

  10. Methodology for estimating soil carbon for the forest carbon budget model of the United States, 2001

    Treesearch

    L. S. Heath; R. A. Birdsey; D. W. Williams

    2002-01-01

    The largest carbon (C) pool in United States forests is the soil C pool. We present methodology and soil C pool estimates used in the FORCARB model, which estimates and projects forest carbon budgets for the United States. The methodology balances knowledge, uncertainties, and ease of use. The estimates are calculated using the USDA Natural Resources Conservation...

  11. Background for Joint Systems Aspects of AIR 6000

    DTIC Science & Technology

    2000-04-01

Checkland’s Soft Systems Methodology [7, 8, 9]. The analytical techniques that are proposed for joint systems work are based on calculating probability... [fragmentary snippet text: glossary entries (SLMP Structural Life Management Plan; SOW Stand-Off Weapon; SSM Soft Systems Methodology; UAV Uninhabited Aerial...) and partial references: Soft Systems Methodology in Action, John Wiley & Sons, Chichester, 1990; [10] Pearl, Judea, Probabilistic Reasoning in Intelligent Systems: Networks of Plausible...]

  12. 40 CFR 98.273 - Calculating GHG emissions.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... fossil fuels and combustion of biomass in spent liquor solids. (1) Calculate fossil fuel-based CO2 emissions from direct measurement of fossil fuels consumed and default emissions factors according to the Tier 1 methodology for stationary combustion sources in § 98.33(a)(1). (2) Calculate fossil fuel-based...

  13. Risk-Based High-Throughput Chemical Screening and Prioritization using Exposure Models and in Vitro Bioactivity Assays.

    PubMed

    Shin, Hyeong-Moo; Ernstoff, Alexi; Arnot, Jon A; Wetmore, Barbara A; Csiszar, Susan A; Fantke, Peter; Zhang, Xianming; McKone, Thomas E; Jolliet, Olivier; Bennett, Deborah H

    2015-06-02

    We present a risk-based high-throughput screening (HTS) method to identify chemicals for potential health concerns or for which additional information is needed. The method is applied to 180 organic chemicals as a case study. We first obtain information on how the chemical is used and identify relevant use scenarios (e.g., dermal application, indoor emissions). For each chemical and use scenario, exposure models are then used to calculate a chemical intake fraction, or a product intake fraction, accounting for chemical properties and the exposed population. We then combine these intake fractions with use scenario-specific estimates of chemical quantity to calculate daily intake rates (iR; mg/kg/day). These intake rates are compared to oral equivalent doses (OED; mg/kg/day), calculated from a suite of ToxCast in vitro bioactivity assays using in vitro-to-in vivo extrapolation and reverse dosimetry. Bioactivity quotients (BQs) are calculated as iR/OED to obtain estimates of potential impact associated with each relevant use scenario. Of the 180 chemicals considered, 38 had maximum iRs exceeding minimum OEDs (i.e., BQs > 1). For most of these compounds, exposures are associated with direct intake, food/oral contact, or dermal exposure. The method provides high-throughput estimates of exposure and important input for decision makers to identify chemicals of concern for further evaluation with additional information or more refined models.
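
    Once intake rates and oral equivalent doses are in hand, the screening step itself is a one-line comparison per chemical, as the sketch below shows; the chemical names and numbers are invented placeholders, not results from the study.

        # Illustrative sketch: risk-based screening via bioactivity quotients (BQ = iR/OED).
        # Chemical names and numbers are invented placeholders, not results from the study.
        chemicals = {
            # name: (maximum daily intake rate iR, minimum oral equivalent dose OED), mg/kg/day
            "chemical_A": (0.020, 0.005),
            "chemical_B": (0.001, 0.500),
            "chemical_C": (0.300, 0.250),
        }

        for name, (intake_rate, oed) in chemicals.items():
            bq = intake_rate / oed
            flag = "potential concern (BQ > 1)" if bq > 1.0 else "lower priority"
            print(f"{name}: BQ = {bq:.2f} -> {flag}")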

  14. Trends in long-period seismicity related to magmatic fluid compositions

    USGS Publications Warehouse

    Morrissey, M.M.; Chouet, B.A.

    2001-01-01

Sound speeds and densities are calculated for three different types of fluids: a gas-gas mixture, an ash-gas mixture, and a bubbly liquid. These fluid properties are used to calculate the impedance contrast (Z) and crack stiffness (C) in the fluid-driven crack model (Chouet: J. Geophys. Res., 91 (1986) 13,967; 101 (1988) 4375; A seismic model for the source of long-period events and harmonic tremor. In: Gasparini, P., Scarpa, R., Aki, K. (Eds.), Volcanic Seismology, IAVCEI Proceedings in Volcanology, Springer, Berlin, 3133). The fluid-driven crack model describes the far-field spectra of long-period (LP) events as modes of resonance of the crack. Results from our calculations demonstrate that ash-laden gas mixtures have fluid-to-solid density ratios comparable to, and fluid-to-solid velocity ratios lower than, bubbly liquids (gas-volume fractions [...]); a 20% gas-volume fraction yields values of Qr-1 similar to those for a rectangular crack. As with gas-gas and ash-gas mixtures, an increase in mass fraction narrows the bandwidth of the dominant mode and shifts the spectra to lower frequencies. Including energy losses due to dissipative processes in a bubbly liquid increases attenuation. Attenuation may also be higher in ash-gas mixtures and foams if the effects of momentum and mass transfer between the phases were considered in the calculations. © 2001 Elsevier Science B.V. All rights reserved.

  15. DOE Office of Scientific and Technical Information (OSTI.GOV)

Shin, Hyeong-Moo; Ernstoff, Alexi; Arnot, Jon A.

We present a risk-based high-throughput screening (HTS) method to identify chemicals for potential health concerns or for which additional information is needed. The method is applied to 180 organic chemicals as a case study. We first obtain information on how the chemical is used and identify relevant use scenarios (e.g., dermal application, indoor emissions). For each chemical and use scenario, exposure models are then used to calculate a chemical intake fraction, or a product intake fraction, accounting for chemical properties and the exposed population. We then combine these intake fractions with use scenario-specific estimates of chemical quantity to calculate daily intake rates (iR; mg/kg/day). These intake rates are compared to oral equivalent doses (OED; mg/kg/day), calculated from a suite of ToxCast in vitro bioactivity assays using in vitro-to-in vivo extrapolation and reverse dosimetry. Bioactivity quotients (BQs) are calculated as iR/OED to obtain estimates of potential impact associated with each relevant use scenario. Of the 180 chemicals considered, 38 had maximum iRs exceeding minimum OEDs (i.e., BQs > 1). For most of these compounds, exposures are associated with direct intake, food/oral contact, or dermal exposure. The method provides high-throughput estimates of exposure and important input for decision makers to identify chemicals of concern for further evaluation with additional information or more refined models.

  16. The Effect of Size Fraction in Analyses of Benthic Foraminifera Assemblages: A Case Study Comparing Assemblages from the >125 μm and >150 μm Size Fractions

    NASA Astrophysics Data System (ADS)

    Weinkauf, Manuel F. G.; Milker, Yvonne

    2018-05-01

    Benthic Foraminifera assemblages are employed for past environmental reconstructions, as well as for biomonitoring studies in recent environments. Despite their established status for such applications, and existing protocols for sample treatment, not all studies using benthic Foraminifera employ the same methodology. For instance, there is no broad practical consensus whether to use the >125 µm or >150 µm size fraction for benthic foraminiferal assemblage analyses. Here, we use early Pleistocene material from the Pefka E section on the Island of Rhodes (Greece), which has been counted in both size fractions, to investigate whether a 25 µm difference in the counted fraction is already sufficient to have an impact on ecological studies. We analysed the influence of the difference in size fraction on studies of biodiversity as well as multivariate assemblage analyses of the sample material. We found that for both types of studies, the general trends remain the same regardless of the chosen size fraction, but in detail significant differences emerge which are not consistently distributed between samples. Studies which require a high degree of precision can thus not compare results from analyses that used different size fractions, and the inconsistent distribution of differences makes it impossible to develop corrections for this issue. We therefore advocate the consistent use of the >125 µm size fraction for benthic foraminiferal studies in the future.

  17. Methodology for extracting local constants from petroleum cracking flows

    DOEpatents

    Chang, Shen-Lin; Lottes, Steven A.; Zhou, Chenn Q.

    2000-01-01

A methodology provides for the extraction of local chemical kinetic model constants for use in a reacting flow computational fluid dynamics (CFD) computer code with chemical kinetic computations to optimize the operating conditions or design of the system, including retrofit design improvements to existing systems. The coupled CFD and kinetic computer code are used in combination with data obtained from a matrix of experimental tests to extract the kinetic constants. Local fluid dynamic effects are implicitly included in the extracted local kinetic constants for each particular application system to which the methodology is applied. The extracted local kinetic model constants work well over a fairly broad range of operating conditions for specific and complex reaction sets in specific and complex reactor systems. While disclosed in terms of use in a Fluid Catalytic Cracking (FCC) riser, the inventive methodology has application in virtually any reaction set to extract constants for any particular application and reaction set formulation. The methodology includes the steps of: (1) selecting the test data sets for various conditions; (2) establishing the general trend of the parametric effect on the measured product yields; (3) calculating product yields for the selected test conditions using coupled computational fluid dynamics and chemical kinetics; (4) adjusting the local kinetic constants to match calculated product yields with experimental data; and (5) validating the determined set of local kinetic constants by comparing the calculated results with experimental data from additional test runs at different operating conditions.
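
    Step (4) above, adjusting the local kinetic constants until calculated product yields match the experimental matrix, is a parameter-estimation loop wrapped around the coupled CFD/kinetics code. The sketch below reproduces that loop in miniature, with a toy first-order yield expression standing in for the full simulation; the model form, data values, and use of scipy's least_squares are illustrative assumptions.

        # Illustrative sketch: extracting kinetic constants by least-squares fitting of
        # calculated product yields to experimental data. A toy first-order yield model
        # stands in for the coupled CFD/kinetics simulation; the data are invented.
        import numpy as np
        from scipy.optimize import least_squares

        # Experimental matrix: (residence time s, temperature K) -> measured product yield.
        conditions = np.array([[2.0, 780.0], [3.0, 780.0], [2.0, 810.0], [3.0, 810.0]])
        measured_yield = np.array([0.31, 0.42, 0.45, 0.58])

        def calculated_yield(params, conds):
            """Toy reactor model: yield = 1 - exp(-k0 * exp(-Ea/(R*T)) * tau)."""
            k0, ea = params
            tau, temp = conds[:, 0], conds[:, 1]
            rate = k0 * np.exp(-ea / (8.314 * temp))
            return 1.0 - np.exp(-rate * tau)

        def residuals(params):
            return calculated_yield(params, conditions) - measured_yield

        fit = least_squares(residuals, x0=[1.0e6, 1.0e5], bounds=([0.0, 0.0], [np.inf, np.inf]))
        k0_fit, ea_fit = fit.x
        print(k0_fit, ea_fit)   # constants that best reproduce the measured yields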

  18. Abort Trigger False Positive and False Negative Analysis Methodology for Threshold-Based Abort Detection

    NASA Technical Reports Server (NTRS)

    Melcher, Kevin J.; Cruz, Jose A.; Johnson Stephen B.; Lo, Yunnhon

    2015-01-01

    This paper describes a quantitative methodology for bounding the false positive (FP) and false negative (FN) probabilities associated with a human-rated launch vehicle abort trigger (AT) that includes sensor data qualification (SDQ). In this context, an AT is a hardware and software mechanism designed to detect the existence of a specific abort condition. Also, SDQ is an algorithmic approach used to identify sensor data suspected of being corrupt so that suspect data does not adversely affect an AT's detection capability. The FP and FN methodologies presented here were developed to support estimation of the probabilities of loss of crew and loss of mission for the Space Launch System (SLS) which is being developed by the National Aeronautics and Space Administration (NASA). The paper provides a brief overview of system health management as being an extension of control theory; and describes how ATs and the calculation of FP and FN probabilities relate to this theory. The discussion leads to a detailed presentation of the FP and FN methodology and an example showing how the FP and FN calculations are performed. This detailed presentation includes a methodology for calculating the change in FP and FN probabilities that result from including SDQ in the AT architecture. To avoid proprietary and sensitive data issues, the example incorporates a mixture of open literature and fictitious reliability data. Results presented in the paper demonstrate the effectiveness of the approach in providing quantitative estimates that bound the probability of a FP or FN abort determination.
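
    For a single threshold-based abort trigger whose sensor noise can be treated as Gaussian, the FP and FN probabilities follow directly from the normal distribution, which is the kind of bounding calculation described above. The means, noise level, and threshold in the sketch are assumed illustrative numbers, not SLS abort-trigger parameters.

        # Illustrative sketch: FP/FN probabilities for a threshold-based abort trigger,
        # assuming Gaussian sensor noise. All numbers are assumed for illustration only.
        import math

        def normal_cdf(x, mu, sigma):
            return 0.5 * (1.0 + math.erf((x - mu) / (sigma * math.sqrt(2.0))))

        def fp_fn(threshold, nominal_mean, abort_mean, sigma):
            """FP: a nominal reading drifts above threshold; FN: a true abort condition stays below it."""
            p_false_positive = 1.0 - normal_cdf(threshold, nominal_mean, sigma)
            p_false_negative = normal_cdf(threshold, abort_mean, sigma)
            return p_false_positive, p_false_negative

        # e.g. a chamber-pressure-like parameter: nominal 100, abort condition 130, noise sigma 5
        fp, fn = fp_fn(threshold=115.0, nominal_mean=100.0, abort_mean=130.0, sigma=5.0)
        print(fp, fn)   # both ~1.3e-3 for a threshold midway between the two means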

  19. Adaptability of laser diffraction measurement technique in soil physics methodology

    NASA Astrophysics Data System (ADS)

    Barna, Gyöngyi; Szabó, József; Rajkai, Kálmán; Bakacsi, Zsófia; Koós, Sándor; László, Péter; Hauk, Gabriella; Makó, András

    2016-04-01

There are intentions all around the world to harmonize soil particle size distribution (PSD) data obtained by laser diffractometer measurements (LDM) with those of the sedimentation techniques (pipette or hydrometer methods). Unfortunately, depending on the applied methodology (e.g. type of pre-treatment, kind of dispersant, etc.), the PSDs of the sedimentation methods (which follow different standards) are dissimilar and can hardly be harmonized with each other, either. A need therefore arose to build up a database containing PSD values measured by the pipette method according to the Hungarian standard (MSZ-08. 0205: 1978) and by LDM according to a widespread and widely used procedure. In our current publication the first results of statistical analysis of the new and growing PSD database are presented: 204 soil samples measured with the pipette method and LDM (Malvern Mastersizer 2000, HydroG dispersion unit) were compared. Applying the usual size limits to the LDM data, the clay fraction was strongly underestimated and the silt fraction overestimated compared to the pipette method. Consequently, soil texture classes determined from the LDM measurements differ significantly from those of the pipette method. In line with previous surveys, and in order to optimize the correspondence between the two datasets, the clay/silt boundary for the LDM data was changed. With the modified size limits, the clay and silt fractions from the LDM agreed more closely with the pipette method. Extending the upper size limit of the clay fraction from 0.002 to 0.0066 mm, and correspondingly shifting the lower size limit of the silt fraction, makes the pipette method and the LDM more readily comparable. With the modified limit, higher correlations were also found between clay content and water vapour adsorption and specific surface area. Texture classes were also less dissimilar. The difference between the results of the two kinds of PSD measurement methods could be further reduced by taking into account other routinely analyzed soil parameters (e.g. pH(H2O), organic carbon and calcium carbonate content).

  20. Modeling vehicle operating speed on urban roads in Montreal: a panel mixed ordered probit fractional split model.

    PubMed

    Eluru, Naveen; Chakour, Vincent; Chamberlain, Morgan; Miranda-Moreno, Luis F

    2013-10-01

    Vehicle operating speed measured on roadways is a critical component for a host of analysis in the transportation field including transportation safety, traffic flow modeling, roadway geometric design, vehicle emissions modeling, and road user route decisions. The current research effort contributes to the literature on examining vehicle speed on urban roads methodologically and substantively. In terms of methodology, we formulate a new econometric model framework for examining speed profiles. The proposed model is an ordered response formulation of a fractional split model. The ordered nature of the speed variable allows us to propose an ordered variant of the fractional split model in the literature. The proposed formulation allows us to model the proportion of vehicles traveling in each speed interval for the entire segment of roadway. We extend the model to allow the influence of exogenous variables to vary across the population. Further, we develop a panel mixed version of the fractional split model to account for the influence of site-specific unobserved effects. The paper contributes substantively by estimating the proposed model using a unique dataset from Montreal consisting of weekly speed data (collected in hourly intervals) for about 50 local roads and 70 arterial roads. We estimate separate models for local roads and arterial roads. The model estimation exercise considers a whole host of variables including geometric design attributes, roadway attributes, traffic characteristics and environmental factors. The model results highlight the role of various street characteristics including number of lanes, presence of parking, presence of sidewalks, vertical grade, and bicycle route on vehicle speed proportions. The results also highlight the presence of site-specific unobserved effects influencing the speed distribution. The parameters from the modeling exercise are validated using a hold-out sample not considered for model estimation. The results indicate that the proposed panel mixed ordered probit fractional split model offers promise for modeling such proportional ordinal variables. Copyright © 2013 Elsevier Ltd. All rights reserved.
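
    As a sketch of the core idea (hypothetical threshold and propensity values; this is only the ordered-probit proportion step, not the full panel mixed fractional split estimator), the model predicts the share of vehicles in each ordered speed interval from standard normal probabilities:

        import numpy as np
        from scipy.stats import norm

        def ordered_probit_proportions(xb, thresholds):
            """Expected proportion of vehicles in each ordered speed interval.
            thresholds must be increasing; intervals are (-inf, t1], (t1, t2], ..., (tK, inf)."""
            t = np.concatenate(([-np.inf], thresholds, [np.inf]))
            cdf = norm.cdf(t - xb)
            return np.diff(cdf)

        # Hypothetical: propensity xb from segment attributes, four speed intervals
        props = ordered_probit_proportions(xb=0.3, thresholds=[-0.5, 0.4, 1.2])
        print(props, props.sum())   # proportions sum to 1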

  1. Dynamics analysis of fractional order Yu-Wang system

    NASA Astrophysics Data System (ADS)

    Bhalekar, Sachin

    2013-10-01

    A fractional order version of the dynamical system introduced by Yu and Wang (Engineering, Technology & Applied Science Research, 2, (2012) 209-215) is discussed in this article. The basic dynamical properties of the system are studied. A minimum effective dimension of 0.942329 for the existence of chaos in the proposed system is obtained from the analytical result. For chaos detection, we have calculated maximum Lyapunov exponents for various values of the fractional order. The feedback control method is then used to control chaos in the system. Further, the system is synchronized with itself and with the fractional order financial system using the active control technique. The modified Adams-Bashforth-Moulton algorithm is used for numerical simulations.
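
    A minimal sketch of the fractional Adams-Bashforth-Moulton predictor-corrector (the Diethelm-Ford-Freed scheme) may help illustrate the numerical machinery; it is applied here to a simple linear Caputo test equation rather than the Yu-Wang system itself, and the step count and order are illustrative:

        import numpy as np
        from math import gamma

        def fode_abm(f, y0, alpha, T, N):
            """Adams-Bashforth-Moulton predictor-corrector (Diethelm-Ford-Freed) for the
            Caputo fractional ODE D^alpha y = f(t, y), 0 < alpha <= 1, on [0, T] with N steps."""
            h = T / N
            t = np.linspace(0.0, T, N + 1)
            y = np.zeros((N + 1, np.size(y0)))
            y[0] = y0
            fhist = np.zeros_like(y)
            fhist[0] = f(t[0], y[0])
            for n in range(N):
                j = np.arange(n + 1)
                # predictor weights b_{j,n+1}
                b = (n + 1 - j) ** alpha - (n - j) ** alpha
                yp = y[0] + h**alpha / gamma(alpha + 1) * (b[:, None] * fhist[:n + 1]).sum(axis=0)
                # corrector weights a_{j,n+1}
                a = (n - j + 2) ** (alpha + 1) + (n - j) ** (alpha + 1) - 2 * (n - j + 1) ** (alpha + 1)
                a[0] = n ** (alpha + 1) - (n - alpha) * (n + 1) ** alpha
                y[n + 1] = y[0] + h**alpha / gamma(alpha + 2) * (
                    f(t[n + 1], yp) + (a[:, None] * fhist[:n + 1]).sum(axis=0))
                fhist[n + 1] = f(t[n + 1], y[n + 1])
            return t, y

        # Example: the linear test problem D^0.9 y = -y, y(0) = 1 (not the Yu-Wang system itself).
        t, y = fode_abm(lambda t, y: -y, y0=[1.0], alpha=0.9, T=5.0, N=500)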

  2. Dark Soliton Solutions of Space-Time Fractional Sharma-Tasso-Olver and Potential Kadomtsev-Petviashvili Equations

    NASA Astrophysics Data System (ADS)

    Guner, Ozkan; Korkmaz, Alper; Bekir, Ahmet

    2017-02-01

    Dark soliton solutions for the space-time fractional Sharma-Tasso-Olver and space-time fractional potential Kadomtsev-Petviashvili equations are determined by using the properties of the modified Riemann-Liouville derivative and the fractional complex transform. After reducing both equations to nonlinear ODEs with constant coefficients, the tanh ansatz is substituted into the resultant nonlinear ODEs. The coefficients of the solutions in the ansatz are calculated by algebraic computer computations. Two different solutions are obtained for the Sharma-Tasso-Olver equation, while only one solution is obtained for the potential Kadomtsev-Petviashvili equation. The solution profiles are demonstrated in 3D plots in finite domains of time and space.

  3. Bose Condensation at He-4 Interfaces

    NASA Technical Reports Server (NTRS)

    Draeger, E. W.; Ceperley, D. M.

    2003-01-01

    Path Integral Monte Carlo was used to calculate the Bose-Einstein condensate fraction at the surface of a helium film at T = 0.77 K, as a function of density. Moving from the center of the slab to the surface, the condensate fraction was found to initially increase with decreasing density to a maximum value of 0.9, before decreasing. Long wavelength density correlations were observed in the static structure factor at the surface of the slab. A surface dispersion relation was calculated from imaginary-time density-density correlations. Similar calculations of the superfluid density throughout He-4 droplets doped with linear impurities (HCN)n are presented. After deriving a local estimator for the superfluid density distribution, we find a decreased superfluid response in the first solvation layer. This effective normal fluid exhibits temperature dependence similar to that of a two-dimensional helium system.

  4. Predicting performance of polymer-bonded Terfenol-D composites under different magnetic fields

    NASA Astrophysics Data System (ADS)

    Guan, Xinchun; Dong, Xufeng; Ou, Jinping

    2009-09-01

    Considering the demagnetization effect, a model to calculate the magnetostriction of a single particle under the applied field is first created. Based on the Eshelby equivalent inclusion and Mori-Tanaka methods, an approach to calculate the average magnetostriction of the composites under any applied field, as well as at saturation, is developed by treating the particle magnetostriction as an eigenstrain. The results calculated with this approach indicate that the saturation magnetostriction of magnetostrictive composites increases with increasing particle aspect ratio and particle volume fraction, and with decreasing Young's modulus of the matrix. The influence of the applied field on the magnetostriction of the composites becomes more significant at larger particle volume fractions or aspect ratios. Experiments were performed to verify the effectiveness of the model; the results indicate that the model can only provide approximate results.

  5. San Luis Basin Sustainability Metrics Project: A Methodology for Evaluating Regional Sustainability

    EPA Science Inventory

    Although there are several scientifically-based sustainability metrics, many are data intensive, difficult to calculate, and fail to capture all aspects of a system. To address these issues, we produced a scientifically-defensible, but straightforward and inexpensive, methodolog...

  6. SU-E-T-603: Analysis of Optical Tracked Head Inter-Fraction Movements Within Masks to Access Intracranial Immobilization Techniques in Proton Therapy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hsi, W; Zeidan, O

    2014-06-01

    Purpose: We present a quantitative methodology utilizing an optical tracking system for monitoring head inter-fraction movements within brain masks to assess the effectiveness of two intracranial immobilization techniques. Methods and Materials: A 3-point-tracking method was developed to measure the mask location for a treatment field at each fraction. The measured displacement of the mask location relative to its location at the first fraction is equivalent to the head movement within the mask. Head movements for each treatment field were measured over about 10 fractions per patient for seven patients; five were treated supine and two prone. The Q-fix Base-of-Skull head frame was used in the supine position, while the CIVCO uni-frame baseplate was used in the prone position. Displacements of the recorded couch position of each field post imaging at each fraction were extracted for those seven patients. The standard deviation (S.D.) of head movements and couch displacements was scored for statistical analysis. Results: The accuracy of the 3PtTrack method was within 1.0 mm by phantom measurements. Patterns of head movement and couch displacement were similar for patients treated either supine or prone. In the superior-inferior direction, the mean values of the scored standard deviations over the seven patients were 1.6 mm and 3.4 mm for head movement and couch displacement, respectively. This indicates that head movement, combined with a loose fixation between the mask and head frame, results in large couch displacements for each patient, and also in large variation between patients. However, head movement is the main cause of couch displacement, with similar magnitudes of around 1.0 mm in the anterior-posterior and lateral directions. Conclusions: An optical-tracking methodology that independently quantifies head movements could improve immobilization devices by correctly acting on the causes of head motion within the mask. Confidence in the quality of intracranial immobilization techniques could be gained more efficiently by eliminating the need for frequent imaging.

  7. Combination of Thin Lenses--A Computer Oriented Method.

    ERIC Educational Resources Information Center

    Flerackers, E. L. M.; And Others

    1984-01-01

    Suggests a method for treating geometric optics using a microcomputer to do the calculations of image formation. Calculations are based on the connection between the composition of lenses and the mathematics of fractional linear equations. The logic of the analysis and an example problem are included. (JM)
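
    The connection can be illustrated with a short sketch (assuming the usual ray-transfer-matrix formalism and a sign convention in which real object and image distances are positive; the focal lengths and spacings are hypothetical): composing thin-lens and gap matrices yields a system matrix whose imaging relation s' = -(A s + B)/(C s + D) is exactly a fractional linear map.

        import numpy as np

        def thin_lens(f):   return np.array([[1.0, 0.0], [-1.0 / f, 1.0]])
        def gap(d):         return np.array([[1.0, d], [0.0, 1.0]])

        def image_distance(system, s):
            """Image distance s' for object distance s, from the system ABCD matrix.
            The imaging condition gives s' = -(A*s + B)/(C*s + D): a fractional linear map."""
            (A, B), (C, D) = system
            return -(A * s + B) / (C * s + D)

        # Two thin lenses (f = 10 cm and 20 cm) separated by 5 cm; matrices compose right-to-left.
        system = thin_lens(20.0) @ gap(5.0) @ thin_lens(10.0)
        print(image_distance(system, s=30.0))

        # Sanity check: a single f = 10 cm lens with the object at 30 cm images at 15 cm.
        print(image_distance(thin_lens(10.0), s=30.0))   # 15.0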

  8. Characterizing property distributions of polymeric nanogels by size-exclusion chromatography.

    PubMed

    Mourey, Thomas H; Leon, Jeffrey W; Bennett, James R; Bryan, Trevor G; Slater, Lisa A; Balke, Stephen T

    2007-03-30

    Nanogels are highly branched, swellable polymer structures with average diameters between 1 and 100 nm. Size-exclusion chromatography (SEC) fractionates materials in this size range, and it is commonly used to measure nanogel molar mass distributions. For many nanogel applications, it may be more important to calculate the particle size distribution from the SEC data than it is to calculate the molar mass distribution. Other useful nanogel property distributions include particle shape, area, and volume, as well as polymer volume fraction per particle. All can be obtained from multi-detector SEC data with proper calibration and data analysis methods. This work develops the basic equations for calculating several of these differential and cumulative property distributions and applies them to SEC data from the analysis of polymeric nanogels. The methods are analogous to those used to calculate the more familiar SEC molar mass distributions. Calibration methods and characteristics of the distributions are discussed, and the effects of detector noise and mismatched concentration and molar mass sensitive detector signals are examined.

  9. Neutron monitoring systems including gamma thermometers and methods of calibrating nuclear instruments using gamma thermometers

    DOEpatents

    Moen, Stephan Craig; Meyers, Craig Glenn; Petzen, John Alexander; Foard, Adam Muhling

    2012-08-07

    A method of calibrating a nuclear instrument using a gamma thermometer may include: measuring, in the instrument, local neutron flux; generating, from the instrument, a first signal proportional to the neutron flux; measuring, in the gamma thermometer, local gamma flux; generating, from the gamma thermometer, a second signal proportional to the gamma flux; compensating the second signal; and calibrating a gain of the instrument based on the compensated second signal. Compensating the second signal may include: calculating selected yield fractions for specific groups of delayed gamma sources; calculating time constants for the specific groups; calculating a third signal that corresponds to delayed local gamma flux based on the selected yield fractions and time constants; and calculating the compensated second signal by subtracting the third signal from the second signal. The specific groups may have decay time constants greater than 5×10^-1 seconds and less than 5×10^5 seconds.
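
    A minimal sketch of the compensation idea (one plausible discrete-time realization, not the patented algorithm; the yield fractions, time constants, and sampling interval are hypothetical) might look like:

        import numpy as np

        def compensate_gt(gt_signal, dt, yields, taus):
            """Subtract an estimated delayed-gamma component from a gamma thermometer signal.
            Each delayed group is modeled as a first-order lag (time constant tau) scaled by its
            yield fraction; this is only one plausible discrete-time realization."""
            states = np.zeros(len(taus))
            out = np.empty_like(gt_signal)
            for k, s in enumerate(gt_signal):
                states += dt / np.asarray(taus) * (s - states)   # update each group's lag
                delayed = np.dot(yields, states)
                out[k] = s - delayed                             # compensated (prompt-like) signal
            return out

        # Hypothetical groups: yield fractions and decay time constants (seconds)
        yields = [0.05, 0.08, 0.10]
        taus   = [2.0, 60.0, 3600.0]
        gt = np.ones(1000)   # a step in local power, sampled at dt = 1 s
        prompt_estimate = compensate_gt(gt, dt=1.0, yields=yields, taus=taus)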

  10. Modelling Equilibrium and Fractional Crystallization in the System MgO-FeO-CaO-Al2O3-SiO2

    NASA Technical Reports Server (NTRS)

    Herbert, F.

    1985-01-01

    A mathematical modelling technique for use in petrogenesis calculations in the system MgO-FeO-CaO-Al2O3-SiO2 is reported. Semiempirical phase boundary and elemental distribution information was combined with mass balance to compute approximate equilibrium crystallization paths for arbitrary system compositions. The calculation is applicable to a range of system compositions and fractionation calculations are possible. The goal of the calculation is the computation of the composition and quantity of each phase present as a function of the degree of solidification. The degree of solidification is parameterized by the heat released by the solidifying phases. The mathematical requirement for the solution of this problem is: (1) An equation constraining the composition of the magma for each solid phase in equilibrium with the liquidus phase, and (2) an equation for each solid phase and each component giving the distribution of that element between that phase and the magma.

  11. EFFECTS OF TEMPERATURE ON TRICHLOROETHYLENE DESORPTION FROM SILICA GEL AND NATURAL SEDIMENTS. 1. ISOTHERMS. (R822626)

    EPA Science Inventory

    Aqueous phase isotherms were calculated from vapor phase desorption isotherms measured at 15, 30, and 60 C for trichloroethylene on a silica gel, an aquifer sediment, a soil, a sand fraction, and a clay and silt fraction, all at...

  12. Concentrations and apparent digestibility of lignin and carbohydrate fractions in cell walls of whole-crop cereal silages

    USDA-ARS?s Scientific Manuscript database

    Whole-crop cereal silage (WCCS) of oats generally has lower fiber digestibility than WCCS of barley. When investigated more closely, the difference seems mainly to be in the digestibility of the hemicellulosic fraction (HC), where HC is calculated as neutral detergent fibre (NDF) – acid detergent fi...

  13. Resolving Cognitive Conflict in a Realistic Situation with Modeling Characteristics: Coping with a Changing Reference in Fractions

    ERIC Educational Resources Information Center

    Shahbari, Juhaina Awawdeh; Peled, Irit

    2015-01-01

    This study investigates the effect of using a realistic situation with modeling characteristics in creating and resolving a cognitive conflict to promote understanding of a changing reference in fraction calculations. The study was conducted among 96 seventh graders divided into 2 experimental groups and 1 control group. The experimental groups…

  14. TensorCalculator: exploring the evolution of mechanical stress in the CCMV capsid

    NASA Astrophysics Data System (ADS)

    Kononova, Olga; Maksudov, Farkhad; Marx, Kenneth A.; Barsegov, Valeri

    2018-01-01

    A new computational methodology for the accurate numerical calculation of the Cauchy stress tensor, stress invariants, principal stress components, von Mises and Tresca tensors is developed. The methodology is based on the atomic stress approach which permits the calculation of stress tensors, widely used in continuum mechanics modeling of materials properties, using the output from the MD simulations of discrete atomic and C_α -based coarse-grained structural models of biological particles. The methodology mapped into the software package TensorCalculator was successfully applied to the empty cowpea chlorotic mottle virus (CCMV) shell to explore the evolution of mechanical stress in this mechanically-tested specific example of a soft virus capsid. We found an inhomogeneous stress distribution in various portions of the CCMV structure and stress transfer from one portion of the virus structure to another, which also points to the importance of entropic effects, often ignored in finite element analysis and elastic network modeling. We formulate a criterion for elastic deformation using the first principal stress components. Furthermore, we show that von Mises and Tresca stress tensors can be used to predict the onset of a viral capsid’s mechanical failure, which leads to total structural collapse. TensorCalculator can be used to study stress evolution and dynamics of defects in viral capsids and other large-size protein assemblies.
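
    For readers unfamiliar with the stress measures involved, the following is a short sketch of how principal stresses and the von Mises and Tresca equivalent stresses follow from a symmetric Cauchy stress tensor (standard continuum-mechanics definitions, not TensorCalculator itself; the example tensor is hypothetical):

        import numpy as np

        def stress_measures(sigma):
            """Principal stresses, von Mises and Tresca equivalent stresses from a symmetric
            3x3 Cauchy stress tensor."""
            principal = np.sort(np.linalg.eigvalsh(sigma))[::-1]   # s1 >= s2 >= s3
            s1, s2, s3 = principal
            von_mises = np.sqrt(0.5 * ((s1 - s2)**2 + (s2 - s3)**2 + (s3 - s1)**2))
            tresca = s1 - s3                                       # twice the maximum shear stress
            return principal, von_mises, tresca

        # Hypothetical stress state (units arbitrary)
        sigma = np.array([[120.0, 30.0,  0.0],
                          [ 30.0, 80.0, 10.0],
                          [  0.0, 10.0, 40.0]])
        print(stress_measures(sigma))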

  15. Verification of dosimetric accuracy on the TrueBeam STx: rounded leaf effect of the high definition MLC.

    PubMed

    Kielar, Kayla N; Mok, Ed; Hsu, Annie; Wang, Lei; Luxton, Gary

    2012-10-01

    The dosimetric leaf gap (DLG) in the Varian Eclipse treatment planning system is determined during commissioning and is used to model the effect of the rounded leaf-end of the multileaf collimator (MLC). This parameter attempts to model the physical difference between the radiation and light field and account for inherent leakage between leaf tips. With the increased use of single fraction high dose treatments requiring larger monitor units comes an enhanced concern in the accuracy of leakage calculations, as it accounts for much of the patient dose. This study serves to verify the dosimetric accuracy of the algorithm used to model the rounded leaf effect for the TrueBeam STx, and describes a methodology for determining best-practice parameter values, given the novel capabilities of the linear accelerator such as flattening filter free (FFF) treatments and a high definition MLC (HDMLC). During commissioning, the nominal MLC position was verified and the DLG parameter was determined using MLC-defined field sizes and moving gap tests, as is common in clinical testing. Treatment plans were created, and the DLG was optimized to achieve less than 1% difference between measured and calculated dose. The DLG value found was tested on treatment plans for all energies (6 MV, 10 MV, 15 MV, 6 MV FFF, 10 MV FFF) and modalities (3D conventional, IMRT, conformal arc, VMAT) available on the TrueBeam STx. The DLG parameter found during the initial MLC testing did not match the leaf gap modeling parameter that provided the most accurate dose delivery in clinical treatment plans. Using the physical leaf gap size as the DLG for the HDMLC can lead to 5% differences in measured and calculated doses. Separate optimization of the DLG parameter using end-to-end tests must be performed to ensure dosimetric accuracy in the modeling of the rounded leaf ends for the Eclipse treatment planning system. The difference in leaf gap modeling versus physical leaf gap dimensions is more pronounced in the more recent versions of Eclipse for both the HDMLC and the Millennium MLC. Once properly commissioned and tested using a methodology based on treatment plan verification, Eclipse is able to accurately model radiation dose delivered for SBRT treatments using the TrueBeam STx.

  16. The costs of nurse turnover, part 2: application of the Nursing Turnover Cost Calculation Methodology.

    PubMed

    Jones, Cheryl Bland

    2005-01-01

    This is the second article in a 2-part series focusing on nurse turnover and its costs. Part 1 (December 2004) described nurse turnover costs within the context of human capital theory, and using human resource accounting methods, presented the updated Nursing Turnover Cost Calculation Methodology. Part 2 presents an application of this method in an acute care setting and the estimated costs of nurse turnover that were derived. Administrators and researchers can use these methods and cost information to build a business case for nurse retention.

  17. Cell survival fraction estimation based on the probability densities of domain and cell nucleus specific energies using improved microdosimetric kinetic models.

    PubMed

    Sato, Tatsuhiko; Furusawa, Yoshiya

    2012-10-01

    Estimation of the survival fractions of cells irradiated with various particles over a wide linear energy transfer (LET) range is of great importance in the treatment planning of charged-particle therapy. Two computational models were developed for estimating survival fractions based on the concept of the microdosimetric kinetic model. They were designated as the double-stochastic microdosimetric kinetic and stochastic microdosimetric kinetic models. The former model takes into account the stochastic natures of both domain and cell nucleus specific energies, whereas the latter model represents the stochastic nature of domain specific energy by its approximated mean value and variance to reduce the computational time. The probability densities of the domain and cell nucleus specific energies are the fundamental quantities for expressing survival fractions in these models. These densities are calculated using the microdosimetric and LET-estimator functions implemented in the Particle and Heavy Ion Transport code System (PHITS) in combination with the convolution or database method. Both the double-stochastic microdosimetric kinetic and stochastic microdosimetric kinetic models can reproduce the measured survival fractions for high-LET and high-dose irradiations, whereas a previously proposed microdosimetric kinetic model predicts lower values for these fractions, mainly due to intrinsic ignorance of the stochastic nature of cell nucleus specific energies in the calculation. The models we developed should contribute to a better understanding of the mechanism of cell inactivation, as well as improve the accuracy of treatment planning of charged-particle therapy.
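
    For orientation, the non-stochastic microdosimetric kinetic model that these models extend relates the survival fraction S to the absorbed dose D through the dose-mean specific energy per event in a domain, z̄_1D, with α0 and β the usual linear-quadratic coefficients. This textbook relation is shown only as background; it is not the double-stochastic or stochastic formulation developed in the paper, which replaces mean specific energies with their full probability densities:

        -\ln S(D) = \left(\alpha_0 + \beta\,\bar{z}_{1D}\right) D + \beta D^{2}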

  18. A comparison between the multimedia fate and exposure models CalTOX and uniform system for evaluation of substances adapted for life-cycle assessment based on the population intake fraction of toxic pollutants.

    PubMed

    Huijbregts, Mark A J; Geelen, Loes M J; Hertwich, Edgar G; McKone, Thomas E; van de Meent, Dik

    2005-02-01

    In life-cycle assessment (LCA) and comparative risk assessment, potential human exposure to toxic pollutants can be expressed as the population intake fraction (iF), which represents the fraction of the quantity emitted that enters the human population. To assess the influence of model differences on the calculation of the population iF, ingestion and inhalation iFs of 365 substances emitted to air, freshwater, and soil were calculated with two commonly applied multimedia fate and exposure models, CalTOX and the uniform system for evaluation of substances adapted for life-cycle assessment (USES-LCA). The model comparison showed that differences in the iFs due to model choices were lowest after emission to air and highest after emission to soil. Inhalation iFs were more sensitive to model differences than ingestion iFs. The choice of a continental seawater compartment, vertical stratification of the soil compartment, rain and no-rain scenarios, and drinking water purification mainly explains the relevant model differences found in population iFs. Furthermore, pH correction of chemical properties and aerosol-associated deposition on plants appeared to be important for dissociative organics and for metals emitted to air, respectively. Finally, it was found that quantitative structure-activity relationship estimates for superhydrophobics may introduce considerable uncertainty in the calculation of population intake fractions.
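
    A minimal sketch of the intake fraction definition at steady state (the intake rates, population size, and emission rate are hypothetical; the actual models resolve fate, transport, and exposure pathways in far more detail):

        def intake_fraction(intake_rates_kg_per_day, population, emission_rate_kg_per_day):
            """Population intake fraction: fraction of the emitted mass that is taken in by the
            population, summed over exposure pathways (e.g. inhalation and ingestion)."""
            total_intake = sum(rate * population for rate in intake_rates_kg_per_day)
            return total_intake / emission_rate_kg_per_day

        # Hypothetical steady-state example: 1 kg/day emitted; average per-person intakes
        iF = intake_fraction(intake_rates_kg_per_day=[2.0e-10, 5.0e-10],  # inhalation, ingestion
                             population=1.0e7,
                             emission_rate_kg_per_day=1.0)
        print(iF)   # ~7e-3, i.e. about 7 g taken in per kg emitted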

  19. Wiener-Hopf optimal control of a hydraulic canal prototype with fractional order dynamics.

    PubMed

    Feliu-Batlle, Vicente; Feliu-Talegón, Daniel; San-Millan, Andres; Rivas-Pérez, Raúl

    2017-06-26

    This article addresses the control of a laboratory hydraulic canal prototype that has fractional order dynamics and a time delay. Controlling this prototype is relevant since its dynamics closely resemble the dynamics of real main irrigation canals. Moreover, the dynamics of hydraulic canals vary largely when the operating regime changes, since they are strongly nonlinear systems. All this makes it difficult to design adequate controllers. The controller proposed in this article seeks a good time response to step commands. The design criterion for this controller is minimizing the integral performance index ISE. A new methodology to control fractional order processes with a time delay, based on Wiener-Hopf control and the Padé approximation of the time delay, is then developed. Moreover, in order to improve the robustness of the control system, a gain-scheduled fractional order controller is proposed. Experiments show the adequate performance of the proposed controller. Copyright © 2017 ISA. Published by Elsevier Ltd. All rights reserved.
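
    The Padé step can be illustrated with a small sketch (first-order diagonal Padé approximant of exp(-sT); the delay value is hypothetical and the Wiener-Hopf design itself is not reproduced):

        import numpy as np

        def pade_delay_1st(T):
            """First-order Pade approximation of exp(-s*T): (1 - sT/2) / (1 + sT/2).
            Returns numerator and denominator coefficients in descending powers of s."""
            return [-T / 2.0, 1.0], [T / 2.0, 1.0]

        # Compare the phase of the approximation with the exact delay at low frequency
        T = 2.0
        num, den = pade_delay_1st(T)
        w = np.logspace(-2, 0, 200) / T
        s = 1j * w
        approx = np.polyval(num, s) / np.polyval(den, s)
        exact = np.exp(-s * T)
        print(np.max(np.abs(np.angle(approx) - np.angle(exact))))   # small phase error below ~1/T rad/s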

  20. Correcting Velocity Dispersions of Dwarf Spheroidal Galaxies for Binary Orbital Motion

    NASA Astrophysics Data System (ADS)

    Minor, Quinn E.; Martinez, Greg; Bullock, James; Kaplinghat, Manoj; Trainor, Ryan

    2010-10-01

    We show that the measured velocity dispersions of dwarf spheroidal galaxies from about 4 to 10 km s-1 are unlikely to be inflated by more than 30% due to the orbital motion of binary stars and demonstrate that the intrinsic velocity dispersions can be determined to within a few percent accuracy using two-epoch observations with 1-2 yr as the optimal time interval. The crucial observable is the threshold fraction—the fraction of stars that show velocity changes larger than a given threshold between measurements. The threshold fraction is tightly correlated with the dispersion introduced by binaries, independent of the underlying binary fraction and distribution of orbital parameters. We outline a simple procedure to correct the velocity dispersion to within a few percent accuracy by using the threshold fraction and provide fitting functions for this method. We also develop a methodology for constraining properties of binary populations from both single- and two-epoch velocity measurements by including the binary velocity distribution in a Bayesian analysis.
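
    A minimal sketch of the threshold-fraction observable (the 3-sigma threshold choice and the input arrays are hypothetical; the paper's correction additionally relies on simulated binary populations to map this fraction to a dispersion correction):

        import numpy as np

        def threshold_fraction(v1, v2, e1, e2, n_sigma=3.0):
            """Fraction of stars whose velocity change between two epochs exceeds n_sigma
            times the combined measurement uncertainty."""
            dv = np.abs(np.asarray(v2) - np.asarray(v1))
            sigma = np.sqrt(np.asarray(e1)**2 + np.asarray(e2)**2)
            return np.mean(dv > n_sigma * sigma)

        # Hypothetical two-epoch radial velocities (km/s) and per-epoch uncertainties
        f = threshold_fraction(v1=[10.2, 11.5, 9.8, 10.9], v2=[10.4, 14.1, 9.7, 11.0],
                               e1=[0.3, 0.3, 0.3, 0.3], e2=[0.3, 0.3, 0.3, 0.3])
        print(f)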

  1. Sci-Thur PM – Colourful Interactions: Highlights 01: Design to delivery of spatially fractionated mini-beam canine radiotherapy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Alexander, Andrew; Crewson, Cody; Davis, William

    Spatial fractionation of radiation using arrays of narrow parallel micro-planar beams (less than 1 mm) is a relatively new concept with many unknowns, specifically within the underlying biology of cell death. A tungsten collimator has been designed to produce mini-beams with a Varian linear accelerator for translational animal research into the effectiveness of spatially fractionated mini-beam radiotherapy (MBRT). This work presents the treatment planning process and workflow for the application of MBRT treatments within a clinical study. For patient dose calculations, the MBRT collimator was incorporated into a Monte Carlo based treatment planning system called MMCTP. Treatment planning was split between Eclipse and MMCTP, as the field apertures were determined within Eclipse prior to being sent to MMCTP for dose calculations. The calculated plan was transferred back into Aria with updated MUs per field for patient treatment. Patients were positioned lying prone within a vac-lock bag, with a bite block and a thermoplastic mask to immobilize the head. Prior to treatment, a delivery verification plan was created within MMCTP. DQA output measurements of the treatment fields agreed with the calculated dose to within 1.5%. We have presented a workflow for MBRT treatments that includes the planning technique, dose calculation method, DQA process and data integration into a record and verify system. The clinical study following this workflow represents the first series of linac-based MBRT patients and, depending on the clinical outcome of the study, our technique could be applied to human MBRT treatments.

  2. An approach to the parametric design of ion thrusters

    NASA Technical Reports Server (NTRS)

    Wilbur, Paul J.; Beattie, John R.; Hyman, Jay, Jr.

    1988-01-01

    A methodology that can be used to determine which of several physical constraints can limit ion thruster power and thrust, under various design and operating conditions, is presented. The methodology is exercised to demonstrate typical limitations imposed by grid system span-to-gap ratio, intragrid electric field, discharge chamber power per unit beam area, screen grid lifetime, and accelerator grid lifetime constraints. Limitations on power and thrust for a thruster defined by typical discharge chamber and grid system parameters when it is operated at maximum thrust-to-power are discussed. It is pointed out that other operational objectives such as optimization of payload fraction or mission duration can be substituted for the thrust-to-power objective and that the methodology can be used as a tool for mission analysis.

  3. A Lagrangian parcel based mixing plane method for calculating water based mixed phase particle flows in turbo-machinery

    NASA Astrophysics Data System (ADS)

    Bidwell, Colin S.

    2015-05-01

    A method for calculating particle transport through turbo-machinery using the mixing plane analogy was developed and used to analyze the Energy Efficient Engine (E3). This method allows the prediction of the temperature and phase change of water based particles along their path, and of the impingement efficiency and particle impact property data on various components in the engine. This methodology was incorporated into the LEWICE3D V3.5 software. The method was used to predict particle transport in the low pressure compressor of the E3. The E3 was developed by NASA and GE in the early 1980s as a technology demonstrator and is representative of a modern high bypass turbofan engine. The flow field was calculated using the NASA Glenn ADPAC turbo-machinery flow solver. Computations were performed for a Mach 0.8 cruise condition at 11,887 m, assuming a standard warm day, for ice particle sizes of 5, 20 and 100 microns and a free stream particle concentration of . The impingement efficiency results showed that, as particle size increased, the average impingement efficiencies and scoop factors increased for the various components. The particle analysis also showed that the amount of mass entering the inner core decreased with increasing particle size because the larger particles were less able to negotiate the turn into the inner core due to their inertia. The particle phase change analysis showed that the larger particles warmed less as they were transported through the low pressure compressor. Only the smallest, 5 micron, particles were warmed enough to produce melting, with a maximum average melting fraction of 0.18. The results also showed an appreciable amount of particle sublimation and evaporation for the 5 micron particles entering the engine core (22.6%).

  4. Validation of the Oncentra Brachy Advanced Collapsed cone Engine for a commercial (192)Ir source using heterogeneous geometries.

    PubMed

    Ma, Yunzhi; Lacroix, Fréderic; Lavallée, Marie-Claude; Beaulieu, Luc

    2015-01-01

    To validate the Advanced Collapsed cone Engine (ACE) dose calculation engine of Oncentra Brachy (OcB) treatment planning system using an (192)Ir source. Two levels of validation were performed, conformant to the model-based dose calculation algorithm commissioning guidelines of American Association of Physicists in Medicine TG-186 report. Level 1 uses all-water phantoms, and the validation is against TG-43 methodology. Level 2 uses real-patient cases, and the validation is against Monte Carlo (MC) simulations. For each case, the ACE and TG-43 calculations were performed in the OcB treatment planning system. ALGEBRA MC system was used to perform MC simulations. In Level 1, the ray effect depends on both accuracy mode and the number of dwell positions. The volume fraction with dose error ≥2% quickly reduces from 23% (13%) for a single dwell to 3% (2%) for eight dwell positions in the standard (high) accuracy mode. In Level 2, the 10% and higher isodose lines were observed overlapping between ACE (both standard and high-resolution modes) and MC. Major clinical indices (V100, V150, V200, D90, D50, and D2cc) were investigated and validated by MC. For example, among the Level 2 cases, the maximum deviation in V100 of ACE from MC is 2.75% but up to ~10% for TG-43. Similarly, the maximum deviation in D90 is 0.14 Gy between ACE and MC but up to 0.24 Gy for TG-43. ACE demonstrated good agreement with MC in most clinically relevant regions in the cases tested. Departure from MC is significant for specific situations but limited to low-dose (<10% isodose) regions. Copyright © 2015 American Brachytherapy Society. Published by Elsevier Inc. All rights reserved.

  5. Stable Te isotope fractionation in tellurium-bearing minerals from precious metal hydrothermal ore deposits

    NASA Astrophysics Data System (ADS)

    Fornadel, Andrew P.; Spry, Paul G.; Haghnegahdar, Mojhgan A.; Schauble, Edwin A.; Jackson, Simon E.; Mills, Stuart J.

    2017-04-01

    The tellurium isotope compositions of naturally-occurring tellurides, native tellurium, and tellurites were measured by multicollector-inductively coupled plasma-mass spectrometry (MC-ICP-MS) and compared to theoretical values for equilibrium mass-dependent isotopic fractionation of representative Te-bearing species estimated with first-principles thermodynamic calculations. Calculated fractionation models suggest that 130/125Te fractionations as large as 4‰ occur at 100 °C between coexisting tellurates (Te(VI)) and tellurides (Te(-II)) or native tellurium (Te(0)), and that smaller, typically <1‰, fractionations occur between coexisting Te(-I) or Te(-II) (Au,Ag)Te2 minerals (i.e., calaverite, krennerite) and (Au,Ag)2Te minerals (i.e., petzite, hessite). In general, the heavy-to-light Te isotope ratio is predicted to be higher for more oxidized species and lower for reduced species. Tellurides in the system Au-Ag-Te and native tellurium analyzed in this study have values of δ130/125Te = -1.54‰ to 0.44‰ and δ130/125Te = -0.74‰ to 0.16‰, respectively, whereas those for tellurites (tellurite, paratellurite, emmonsite and poughite) range from δ130/125Te = -1.58‰ to 0.59‰. Thus, the isotopic compositions of both oxidized and reduced species are broadly coincident. Calculations of per mil isotopic variation per amu for each sample suggest that mass-dependent processes are responsible for fractionation. In one sample of coexisting primary native tellurium and secondary emmonsite, the δ130/125Te compositions were identical. The coincidence of δ130/125Te between all oxidized and reduced species in this study and the apparent lack of isotopic fractionation between native tellurium and emmonsite in one sample suggest that oxidation processes cause little to no fractionation. Because Te is predominantly transported as an oxidized aqueous phase or as a reduced vapor phase under hydrothermal conditions, either a reduction of oxidized Te in hydrothermal liquids or deposition of Te from a reduced vapor to a solid is necessary to form the common tellurides and native tellurium in ore-forming systems. Our data suggest that these sorts of reactions during mineralization may account for a ∼3‰ range of δ130/125Te values. Based on the data ranges for Te minerals from various ore deposits, the underlying geologic processes responsible for mineralization seem to have primary control on the magnitude of fractionation, with tellurides in epithermal gold deposits showing a narrower range of isotope values than those in orogenic gold and volcanogenic massive sulfide deposits.

  6. Equilibrium 2H/1H fractionation in organic molecules: III. Cyclic ketones and hydrocarbons

    NASA Astrophysics Data System (ADS)

    Wang, Ying; Sessions, Alex L.; Nielsen, Robert J.; Goddard, William A.

    2013-04-01

    Quantitative interpretation of stable hydrogen isotope ratios (2H/1H) in organic compounds is greatly aided by knowledge of the relevant equilibrium fractionation factors (ɛeq). Previous efforts have combined experimental measurements and hybrid Density Functional Theory (DFT) calculations to accurately predict equilibrium fractionations in linear (acyclic) organic molecules (Wang et al., 2009a,b), but the calibration produced by that study is not applicable to cyclic compounds. Here we report experimental measurements of equilibrium 2H/1H fractionation in six cyclic ketones, and use those data to evaluate DFT calculations of fractionation in diverse monocyclic and polycyclic compounds commonly found in sedimentary organic matter and petroleum. At 25, 50, and 75 °C, the experimentally measured ɛeq values for secondary and tertiary Hα in isotopic equilibrium with water are in the ranges of -130‰ to -150‰ and +10‰ to -40‰ respectively. Measured data are similar to DFT calculations of ɛeq for axial Hα but not equatorial Hα. In tertiary Cα positions with methyl substituents, this can be understood as a result of the methyl group forcing Hα atoms into a dominantly axial position. For secondary Cα positions containing both axial and equatorial Hα atoms, we propose that axial Hα exchanges with water significantly faster than the equatorial Hα does, due to the hyperconjugation-stabilized transition state. Interconversion of axial and equatorial positions via ring flipping is much faster than isotopic exchange at either position, and as a result the steady-state isotopic composition of both H's is strongly weighted toward that of axial Hα. Based on comparison with measured ɛeq values, a total uncertainty of 10-30‰ remains for theoretical ɛeq values. Using DFT, we systematically estimated the ɛeq values for individual H positions in various cyclic structures. By summing over all individual H positions, the molecular equilibrium fractionation was estimated to be -75‰ to -95‰ for steroids, -90‰ to -105‰ for hopanoids, and -65‰ to -100‰ for typical cycloparaffins between 0 and 100 °C relative to water. These are distinct from the typical biosynthetic fractionations of -150‰ to -300‰, but are similar to equilibrium fractionations for linear hydrocarbons (Wang et al., 2009b). Thus post-burial H exchange will generally remove the ˜50-100‰ biosynthetic fractionations between cyclic isoprenoid and n-alkyl lipid molecules, which can be used to evaluate the extent of H exchange in sedimentary organic matter and oils.

  7. Irregular-Mesh Terrain Analysis and Incident Solar Radiation for Continuous Hydrologic Modeling in Mountain Watersheds

    NASA Astrophysics Data System (ADS)

    Moreno, H. A.; Ogden, F. L.; Alvarez, L. V.

    2016-12-01

    This research work presents a methodology for estimating terrain slope, aspect (slope orientation) and total incoming solar radiation from Triangular Irregular Network (TIN) terrain models. The algorithm accounts for self shading and cast shadows, sky view fractions for diffuse radiation, remote albedo and atmospheric backscattering, by using a vectorial approach within a topocentric coordinate system and establishing geometric relations between groups of TIN elements and the sun position. A vector normal to the surface of each TIN element describes slope and aspect, while spherical trigonometry allows computing the unit vector defining the position of the sun at each hour and day of the year. Thus, a dot product determines the radiation flux at each TIN element. Cast shadows are computed by scanning the projection of groups of TIN elements in the direction of the closest perpendicular plane to the sun vector, only within the visible horizon range. Sky view fractions are computed by a simplified scanning algorithm from the highest to the lowest triangles along prescribed directions and visible distances, which is useful for determining diffuse radiation. Finally, remote albedo is computed from the complementary functions of the sky view fraction for prescribed albedo values of the surrounding terrain, only for significant angles above the horizon. The sensitivity of the different radiative components to seasonal changes in weather and surrounding albedo (snow) is tested in a mountainous watershed in Wyoming. This methodology represents an improvement on current algorithms for computing terrain and radiation values on triangular-based models in an accurate and efficient manner. All terrain-related features (e.g. slope, aspect, sky view fraction) can be pre-computed and stored for easy access in a subsequent, progressive-in-time numerical simulation.
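
    The dot-product step can be sketched as follows (a single facet and the direct-beam component only; the DNI value, sun position, and vertex coordinates are hypothetical, and the paper's cast-shadow and sky-view-fraction algorithms are not reproduced):

        import numpy as np

        def facet_direct_irradiance(vertices, sun_azimuth_deg, sun_elevation_deg,
                                    dni=1000.0, shaded=False):
            """Direct irradiance (W/m^2) on a triangular facet: DNI scaled by the dot product of
            the unit surface normal and the unit sun vector (topocentric: x=east, y=north, z=up)."""
            v0, v1, v2 = np.asarray(vertices, dtype=float)
            normal = np.cross(v1 - v0, v2 - v0)
            normal /= np.linalg.norm(normal)
            if normal[2] < 0:                      # make the normal point upward
                normal = -normal
            az, el = np.radians([sun_azimuth_deg, sun_elevation_deg])
            sun = np.array([np.cos(el) * np.sin(az), np.cos(el) * np.cos(az), np.sin(el)])
            if shaded or sun_elevation_deg <= 0:
                return 0.0
            return dni * max(0.0, float(normal @ sun))

        # One TIN element (metres), sun at 150 deg azimuth (from north) and 35 deg elevation
        tri = [(0, 0, 100), (10, 0, 102), (0, 10, 105)]
        print(facet_direct_irradiance(tri, 150.0, 35.0))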

  8. Calculation of Dynamic Loads Due to Random Vibration Environments in Rocket Engine Systems

    NASA Technical Reports Server (NTRS)

    Christensen, Eric R.; Brown, Andrew M.; Frady, Greg P.

    2007-01-01

    An important part of rocket engine design is the calculation of random dynamic loads resulting from internal engine "self-induced" sources. These loads are random in nature and can greatly influence the weight of many engine components. Several methodologies for calculating random loads are discussed and then compared to test results using a dynamic testbed consisting of a 60K thrust engine. The engine was tested in a free-free condition with known random force inputs from shakers attached to three locations near the main noise sources on the engine. Accelerations and strains were measured at several critical locations on the engines and then compared to the analytical results using two different random response methodologies.

  9. Relating the 2010 signalized intersection methodology to alternate approaches in the context of NYC conditions.

    DOT National Transportation Integrated Search

    2013-11-01

    The Highway Capacity Manual (HCM) has had a delay-based level of service methodology for signalized intersections since 1985. : The 2010 HCM has revised the method for calculating delay. This happened concurrent with such jurisdictions as NYC reviewi...

  10. Accurate prediction of retention in hydrophilic interaction chromatography by back calculation of high pressure liquid chromatography gradient profiles.

    PubMed

    Wang, Nu; Boswell, Paul G

    2017-10-20

    Gradient retention times are difficult to project from the underlying retention factor (k) vs. solvent composition (φ) relationships. A major reason for this difficulty is that gradients produced by HPLC pumps are imperfect - gradient delay, gradient dispersion, and solvent mis-proportioning are all difficult to account for in calculations. However, we recently showed that a gradient "back-calculation" methodology can measure these imperfections and take them into account. In RPLC, when the back-calculation methodology was used, the error in projected gradient retention times was as low as could be expected based on the repeatability of the k vs. φ relationships. HILIC, however, presents a new challenge: the selectivity of HILIC columns drifts strongly over time. Retention is repeatable over short periods, but selectivity frequently drifts over the course of weeks. In this study, we set out to understand whether the issue of selectivity drift can be avoided by doing our experiments quickly, and whether there are any other factors that make it difficult to predict gradient retention times from isocratic k vs. φ relationships when gradient imperfections are taken into account with the back-calculation methodology. While in past reports the accuracy of retention projections was >5%, the back-calculation methodology brought our error down to ∼1%. This result was 6-43 times more accurate than projections made using ideal gradients and 3-5 times more accurate than the same retention projections made using offset gradients (i.e., gradients that only took gradient delay into account). Still, the error remained higher in our HILIC projections than in RPLC. Based on the shape of the back-calculated gradients, we suspect the higher error is a result of prominent gradient distortion caused by strong, preferential water uptake from the mobile phase into the stationary phase during the gradient - a factor our model did not properly take into account. It appears that, at least with the stationary phase we used, gradient distortion is an important factor to take into account in retention projection in HILIC that is not usually important in RPLC. Copyright © 2017 Elsevier B.V. All rights reserved.
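
    For context, a retention projection of this kind can be sketched by integrating the fundamental equation of gradient elution with a measured (back-calculated) composition profile. The ln k vs. φ relationship and gradient program below are hypothetical, and gradient compression and intra-column distortion are ignored:

        import numpy as np

        def project_gradient_retention(ln_k_of_phi, phi_of_t, t0, dt=0.01, t_max=120.0):
            """Project a gradient retention time by integrating the fundamental equation of
            gradient elution, 1 = integral dt / (t0 * k(phi(t))), using the composition profile
            phi(t) experienced by the solute (e.g. a back-calculated gradient)."""
            integral, t = 0.0, 0.0
            while integral < 1.0 and t < t_max:
                k = np.exp(ln_k_of_phi(phi_of_t(t)))
                integral += dt / (t0 * k)
                t += dt
            return t + t0   # retention time = migration time through the column + dead time

        # Hypothetical solute and gradient (linear-solvent-strength-like behaviour)
        ln_k = lambda phi: np.log(50.0) - 8.0 * phi      # ln k vs. composition
        phi = lambda t: min(0.05 + 0.02 * t, 0.95)       # measured/back-calculated program
        print(project_gradient_retention(ln_k, phi, t0=1.0))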

  11. Assessing sources of airborne mineral dust and other aerosols, in Iraq

    NASA Astrophysics Data System (ADS)

    Engelbrecht, Johann P.; Jayanty, R. K. M.

    2013-06-01

    Most airborne particulate matter in Iraq comes from mineral dust sources. This paper describes the statistics and modeling of chemical results, specifically those from Teflon® filter samples collected at Tikrit, Balad, Taji, Baghdad, Tallil and Al Asad, in Iraq, in 2006/2007. Methodologies applied to the analytical results include calculation of correlation coefficients, Principal Components Analysis (PCA), and Positive Matrix Factorization (PMF) modeling. PCA provided a measure of the covariance within the data set, thereby identifying likely point sources and events. These include airborne mineral dusts of silicate and carbonate minerals, gypsum and salts, as well as anthropogenic sources of metallic fumes, possibly from battery smelting operations, and emissions of leaded gasoline vehicles. Five individual PMF factors (source categories) were modeled, four of which were assigned to components of geological dust, and the fifth to gasoline vehicle emissions together with battery smelting operations. The four modeled geological components, dust-siliceous, dust-calcic, dust-gypsum, and evaporite, occur in variable ratios for each site and size fraction (TSP, PM10, and PM2.5), and also vary by season. In general, Tikrit and Taji have the largest and Al Asad the smallest percentages of siliceous dust. In contrast, Al Asad has the largest proportion of gypsum, in part representing the gypsiferous soils in that region. Baghdad has the highest proportions of evaporite in both size fractions, ascribed to the highly salinized agricultural soils, following millennia of irrigation along the Tigris River valley. Although dust storms along the Tigris and Euphrates River valleys originate from distal sources, the mineralogy bears signatures of local soils and air pollutants.

  12. On the kinematics of a runaway Be star population

    NASA Astrophysics Data System (ADS)

    Boubert, D.; Evans, N. W.

    2018-07-01

    We explore the hypothesis that B-type emission-line stars (Be stars) have their origin in mass-transfer binaries by measuring the fraction of runaway Be stars. We assemble the largest-to-date catalogue of 632 Be stars with 6D kinematics, exploiting the precise astrometry of the Tycho-Gaia Astrometric Solution from the first Gaia data release. Using binary stellar evolution simulations, we make predictions for the runaway and equatorial rotation velocities of a runaway Be star population. Accounting for observational biases, we calculate that if all classical Be stars originated through mass transfer in binaries, then 17.5 per cent of the Be stars in our catalogue should be runaways. The remaining 82.5 per cent should be in binaries with subdwarfs, white dwarfs, or neutron stars, because those systems either remained bound post-supernova or avoided the supernova entirely. Using a Bayesian methodology, we compare the hypothesis that each Be star in our catalogue is a runaway to the null hypothesis that it is a member of the Milky Way disc. We find that 13.1^{+2.6}_{-2.4} per cent of the Be stars in our catalogue are runaways and identify a subset of 40 high-probability runaways. We argue that deficiencies in our understanding of binary stellar evolution, as well as the degeneracy between velocity dispersion and number of runaway stars, can explain the slightly lower runaway fraction. We thus conclude that all Be stars could be explained by an origin in mass-transfer binaries. This conclusion is testable with the second Gaia data release (DR2).

  14. USGS Methodology for Assessing Continuous Petroleum Resources

    USGS Publications Warehouse

    Charpentier, Ronald R.; Cook, Troy A.

    2011-01-01

    The U.S. Geological Survey (USGS) has developed a new quantitative methodology for assessing resources in continuous (unconventional) petroleum deposits. Continuous petroleum resources include shale gas, coalbed gas, and other oil and gas deposits in low-permeability ("tight") reservoirs. The methodology is based on an approach combining geologic understanding with well productivities. The methodology is probabilistic, with both input and output variables as probability distributions, and uses Monte Carlo simulation to calculate the estimates. The new methodology is an improvement of previous USGS methodologies in that it better accommodates the uncertainties in undrilled or minimally drilled deposits that must be assessed using analogs. The publication is a collection of PowerPoint slides with accompanying comments.

  15. Error Patterns in Portuguese Students' Addition and Subtraction Calculation Tasks: Implications for Teaching

    ERIC Educational Resources Information Center

    Watson, Silvana Maria R.; Lopes, João; Oliveira, Célia; Judge, Sharon

    2018-01-01

    Purpose: The purpose of this descriptive study is to investigate why some elementary children have difficulties mastering addition and subtraction calculation tasks. Design/methodology/approach: The researchers have examined error types in addition and subtraction calculation made by 697 Portuguese students in elementary grades. Each student…

  16. Measurements and Modeling of Soot Formation and Radiation in Microgravity Jet Diffusion Flames. Volume 4

    NASA Technical Reports Server (NTRS)

    Ku, Jerry C.; Tong, Li; Greenberg, Paul S.

    1996-01-01

    This is a computational and experimental study of soot formation and radiative heat transfer in jet diffusion flames under normal gravity (1-g) and microgravity (0-g) conditions. Instantaneous soot volume fraction maps are measured using a full-field imaging absorption technique developed by the authors. A compact, self-contained drop rig is used for microgravity experiments in the 2.2-second drop tower facility at NASA Lewis Research Center. On the modeling side, we have coupled flame structure and soot formation models with detailed radiation transfer calculations. Favre-averaged boundary layer equations with a k-ε-g turbulence model are used to predict the flow field, and a conserved scalar approach with an assumed beta-pdf is used to predict gaseous species mole fractions. Scalar transport equations are used to describe soot volume fraction and number density distributions, with formation and oxidation terms modeled by one-step rate equations and thermophoretic effects included. An energy equation is included to couple the flame structure and radiation analyses through iterations, neglecting turbulence-radiation interactions. The YIX solution for a finite cylindrical enclosure is used for radiative heat transfer calculations. The spectral absorption coefficient for soot aggregates is calculated from the Rayleigh solution using complex refractive index data from a Drude-Lorentz model. The exponential wide-band model is used to calculate the spectral absorption coefficients for H2O and CO2. It is shown that, when compared to results from true spectral integration, the Rosseland mean absorption coefficient can provide reasonably accurate predictions for the type of flames studied. The soot formation model proposed by Moss, Syed, and Stewart seems to produce better fits to experimental data and to be more physically sound than the simpler model by Khan et al. Predicted soot volume fraction and temperature results agree well with published data for normal gravity co-flow laminar flames and turbulent jet flames. Predicted soot volume fraction results also agree with our data for 1-g and 0-g laminar jet flames as well as 1-g turbulent jet flames.

  17. Evaluation and mitigation of the interplay effects for intensity modulated proton therapy for lung cancer in a clinical setting

    PubMed Central

    Kardar, Laleh; Li, Yupeng; Li, Xiaoqiang; Li, Heng; Cao, Wenhua; Chang, Joe Y.; Liao, Li; Zhu, Ronald X.; Sahoo, Narayan; Gillin, Michael; Liao, Zhongxing; Komaki, Ritsuko; Cox, James D.; Lim, Gino; Zhang, Xiaodong

    2015-01-01

    Purpose The primary aim of this study was to evaluate the impact of interplay effects for intensity-modulated proton therapy (IMPT) plans for lung cancer in the clinical setting. The secondary aim was to explore the technique of iso-layered re-scanning for mitigating these interplay effects. Methods and Materials Single-fraction 4D dynamic dose without considering re-scanning (1FX dynamic dose) was used as a metric to determine the magnitude of dosimetric degradation caused by 4D interplay effects. The 1FX dynamic dose was calculated by simulating the machine delivery process of proton spot scanning on the moving patient, described by 4D computed tomography (4DCT), during IMPT delivery. The dose contributed by an individual spot was fully calculated on the respiratory phase corresponding to the life span of that spot, and the final dose was accumulated to a reference CT phase by using deformable image registration. The 1FX dynamic dose was compared with the 4D composite dose. Seven patients with various tumor volumes and motions were selected. Results The CTV prescription coverages for the 7 patients were 95.04%, 95.38%, 95.39%, 95.24%, 95.65%, 95.90%, and 95.53% calculated with the 4D composite dose, and 89.30%, 94.70%, 85.47%, 94.09%, 79.69%, 91.20%, and 94.19% with the 1FX dynamic dose. With a maximum MU limit value of 0.005, the CTV coverages for the 7 patients, calculated using the single-fraction dynamic dose, were 95.52%, 95.32%, 96.36%, 95.28%, 94.32%, 95.53%, and 95.78%. In other words, by increasing the number of delivered spots in each fraction, the degradation of CTV coverage improved by up to 14.6%. Conclusions Single-fraction 4D dynamic dose without re-scanning was validated as a surrogate to evaluate the interplay effects for IMPT for lung cancer in the clinical setting. The interplay effects can potentially be mitigated by increasing the number of iso-layered re-scannings in each fraction delivery. PMID:25407877

  18. Uranium-lead isotope systematics of Mars inferred from the basaltic shergottite QUE 94201

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gaffney, A M; Borg, L E; Connelly, J N

    2006-12-22

    Uranium-lead ratios (commonly represented as 238U/204Pb = μ) calculated for the sources of martian basalts preserve a record of petrogenetic processes that operated during early planetary differentiation and formation of martian geochemical reservoirs. To better define the range of μ values represented by the source regions of martian basalts, we completed U-Pb elemental and isotopic analyses on whole rock, mineral and leachate fractions from the martian meteorite Queen Alexandra Range 94201 (QUE 94201). The whole rock and silicate mineral fractions have unradiogenic Pb isotopic compositions that define a narrow range (206Pb/204Pb = 11.16-11.61). In contrast, the Pb isotopic compositions of weak HCl leachates are more variable and radiogenic. The intersection of the QUE 94201 data array with terrestrial Pb in 206Pb/204Pb-207Pb/204Pb-208Pb/204Pb compositional space is consistent with varying amounts of terrestrial contamination in these fractions. We calculate that only 1-7% contamination is present in the purified silicate mineral and whole rock fractions, whereas the HCl leachates contain up to 86% terrestrial contamination. Despite the contamination, we are able to use the U-Pb data to determine the initial 206Pb/204Pb of QUE 94201 (11.086 ± 0.008) and calculate the μ value of the QUE 94201 mantle source to be 1.823 ± 0.008. This is the lowest μ value calculated for any martian basalt source and, when compared to the highest values determined for martian basalt sources, indicates that μ values in martian source reservoirs vary by at least 100%. The range of source μ values further indicates that the μ value of bulk silicate Mars is approximately three. The amount of variation in the μ values of the mantle sources (μ ≈ 2-4) is greater than can be explained by igneous processes involving silicate phases alone. We suggest the possibility that a small amount of sulfide crystallization may generate large extents of U-Pb fractionation during formation of the mantle sources of martian basalts.

  19. Calculating the habitable zones of multiple star systems with a new interactive Web site

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Müller, Tobias W. A.; Haghighipour, Nader

    We have developed a comprehensive methodology and an interactive Web site for calculating the habitable zone (HZ) of multiple star systems. Using the concept of spectral weight factor, as introduced in our previous studies of the calculations of HZ in and around binary star systems, we calculate the contribution of each star (based on its spectral energy distribution) to the total flux received at the top of the atmosphere of an Earth-like planet, and use the models of the HZ of the Sun to determine the boundaries of the HZ in multiple star systems. Our interactive Web site for carrying out these calculations is publicly available at http://astro.twam.info/hz. We discuss the details of our methodology and present its application to some of the multiple star systems detected by the Kepler space telescope. We also present the instructions for using our interactive Web site, and demonstrate its capabilities by calculating the HZ for two interesting analytical solutions of the three-body problem.
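    A minimal sketch of the kind of calculation such a tool performs, assuming a one-dimensional geometry, hypothetical single-star HZ flux limits (S_INNER, S_OUTER) and a made-up spectral weight factor; it is not the authors' implementation or their coefficients.

        import numpy as np

        # Illustrative single-star HZ flux limits in units of the solar constant at Earth
        # (values of this order appear in common HZ models; treated here as assumptions).
        S_INNER, S_OUTER = 1.1, 0.35

        def weighted_flux(r_planet, stars):
            """Total spectrally weighted flux (in solar constants) at distance r_planet (AU),
            for stars given as (luminosity L/L_sun, offset in AU, spectral weight)."""
            total = 0.0
            for L, offset, w in stars:
                d = max(abs(r_planet - offset), 1e-6)   # 1-D geometry for illustration
                total += w * L / d**2
            return total

        def hz_boundaries(stars, r_grid=np.linspace(0.1, 10, 5000)):
            """Scan radial distances and keep those where the weighted flux falls between
            the outer and inner HZ limits."""
            flux = np.array([weighted_flux(r, stars) for r in r_grid])
            in_hz = (flux <= S_INNER) & (flux >= S_OUTER)
            return r_grid[in_hz].min(), r_grid[in_hz].max()

        # Hypothetical binary: a Sun-like primary and a cooler secondary 0.5 AU away,
        # with an assumed spectral weight factor of 1.2 applied to the redder star.
        binary = [(1.0, 0.0, 1.0), (0.3, 0.5, 1.2)]
        print("approximate HZ: %.2f-%.2f AU" % hz_boundaries(binary))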

  20. The generalised isodamping approach for robust fractional PID controllers design

    NASA Astrophysics Data System (ADS)

    Beschi, M.; Padula, F.; Visioli, A.

    2017-06-01

    In this paper, we present a novel methodology to design fractional-order proportional-integral-derivative controllers. Based on the description of the controlled system by means of a family of linear models parameterised with respect to a free variable that describes the real process operating point, we design the controller by solving a constrained min-max optimisation problem where the maximum sensitivity has to be minimised. Among the imposed constraints, the most important one is the new generalised isodamping condition, which defines the invariance of the phase margin with respect to variations of the free parameter. It is also shown that the well-known classical isodamping condition is a special case of the new technique proposed in this paper. Simulation results show the effectiveness of the proposed technique and the superiority of the fractional-order controller compared to its integer counterpart.
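    A small sketch of evaluating the frequency response of a fractional-order PID (PI^λD^μ) controller and reading off a phase margin for a hypothetical first-order-plus-dead-time plant; the gains, fractional orders and plant parameters are illustrative assumptions, and the isodamping constraint itself (phase-margin invariance across operating points) is not solved for here.

        import numpy as np

        def fopid(w, Kp, Ki, Kd, lam, mu):
            """Frequency response of a fractional PID: C(jw) = Kp + Ki/(jw)^lam + Kd*(jw)^mu."""
            jw = 1j * w
            return Kp + Ki / jw**lam + Kd * jw**mu

        def plant(w, K=1.0, T=1.0, L=0.3):
            """Hypothetical first-order-plus-dead-time process K*exp(-L*s)/(T*s + 1)."""
            jw = 1j * w
            return K * np.exp(-L * jw) / (T * jw + 1)

        def phase_margin(Kp, Ki, Kd, lam, mu):
            """Crude crossover search: phase margin at the frequency where |loop gain| ~ 1."""
            w = np.logspace(-2, 2, 20000)
            loop = fopid(w, Kp, Ki, Kd, lam, mu) * plant(w)
            idx = np.argmin(np.abs(np.abs(loop) - 1.0))
            return 180.0 + np.degrees(np.angle(loop[idx]))

        # Illustrative tuning; an isodamping design would aim to keep this margin roughly
        # constant as the plant parameters drift with the operating point.
        print("phase margin: %.1f deg" % phase_margin(Kp=1.0, Ki=0.8, Kd=0.4, lam=0.9, mu=0.7))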

  1. Molluscicidal activity of Physalis angulata L. extracts and fractions on Biomphalaria tenagophila (d'Orbigny, 1835) under laboratory conditions.

    PubMed

    dos Santos, José Augusto A; Tomassini, Therezinha Coelho B; Xavier, Deise Cristina Drummond; Ribeiro, Ivone Maria; da Silva, Melissa Teixeira G; de Morais Filho, Zenildo Buarque

    2003-04-01

    The main objective of this research is to evaluate the molluscicidal activity of Physalis angulata L. on Biomphalaria tenagophila specimens under laboratory conditions. Extracts and fractions were supplied by the Laboratório de Química de Produtos Naturais, Farmanguinhos-Fiocruz. Experiments were performed according to the methodology described by the World Health Organization for molluscicide tests, using concentrations from 0.1 to 500 mg/l of the extracts, fractions and of a pool of physalins, modified steroids present in this species. The results show that the ethyl acetate and acetone extracts from the whole plant, the ethanolic extracts of the roots and the physalins pool from stems and leaves were active. Only the whole plant extracts were available in sufficient quantity for the determination of LD50 and LD90 values.

  2. A VaR Algorithm for Warrants Portfolio

    NASA Astrophysics Data System (ADS)

    Dai, Jun; Ni, Liyun; Wang, Xiangrong; Chen, Weizhong

    Based on the Gamma-Vega-Cornish-Fisher methodology, this paper proposes an algorithm for calculating VaR by adjusting the quantile at the given confidence level using the four moments (mean, variance, skewness and kurtosis) of the warrants portfolio return and estimating the portfolio variance by the EWMA methodology. The proposed algorithm also considers the attenuation of the effect of historical returns on the portfolio return of future days. An empirical study shows that, compared with the Gamma-Cornish-Fisher method and the standard normal method, the VaR calculated by the Gamma-Vega-Cornish-Fisher approach improves the effectiveness of forecasting portfolio risk by virtue of considering the Gamma risk and the Vega risk of the warrants. A significance test is conducted on the calculation results by employing the two-tailed test developed by Kupiec. Test results show that the calculated VaRs of the warrants portfolio all pass the significance test at the 5% significance level.
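    A compact sketch of the two textbook ingredients the abstract combines, a Cornish-Fisher adjusted quantile and a RiskMetrics-style EWMA variance, applied to simulated returns; the decay factor, confidence level and return series are assumptions, and the warrants-specific Gamma/Vega adjustments are not reproduced.

        import numpy as np
        from scipy.stats import norm, skew, kurtosis

        def ewma_variance(returns, lam=0.94):
            """RiskMetrics-style EWMA variance: s2_t = lam*s2_{t-1} + (1-lam)*r_{t-1}^2."""
            s2 = returns[0] ** 2
            for r in returns[1:]:
                s2 = lam * s2 + (1 - lam) * r ** 2
            return s2

        def cornish_fisher_var(returns, alpha=0.05, lam=0.94):
            """One-day VaR at confidence 1-alpha using a Cornish-Fisher adjusted quantile."""
            s = skew(returns)
            k = kurtosis(returns)                     # excess kurtosis
            z = norm.ppf(alpha)
            z_cf = (z + (z**2 - 1) * s / 6
                      + (z**3 - 3 * z) * k / 24
                      - (2 * z**3 - 5 * z) * s**2 / 36)
            sigma = np.sqrt(ewma_variance(returns, lam))
            return -(returns.mean() + z_cf * sigma)

        rng = np.random.default_rng(1)
        rets = rng.standard_t(df=5, size=750) * 0.01  # simulated fat-tailed daily returns
        print(f"95% one-day VaR: {100 * cornish_fisher_var(rets):.2f}%")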

  3. SU-F-J-187: The Statistical NTCP and TCP Models in the Proton Therapy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jang, S; Frometa, T; Pyakuryal, A

    2016-06-15

    Purpose: Statistical models (SM) are typically used as a subjective description of a population for which there is only limited sample data, especially in cases where the relationship between variables is known. Normal tissue complications and tumor control are frequently stochastic effects in radiotherapy (RT). Based on probabilistic treatments, new NTCP and TCP models have recently been formulated for RT. Investigating the particular requirements for their clinical use in proton therapy (PT) is the goal of this work. Methods: The SM can be used as phenomenological or mechanistic models. The former approach allows fitting real data and obtaining their parameters. In the latter, the parameters must be determined through acceptable estimations, measurements, and/or simulation experiments. Experimental methodologies for determining the parameters have been developed from the curves of the fraction of cells surviving proton irradiation in tumor and OAR, and precise RBE models are used to calculate the effective dose. Because executing these methodologies is costly, we have developed computer tools that enable simulation experiments to complement the limitations of the real ones. Results: The requirements for the use of the SM in PT, such as validation and improvement of the elaborated and existing methodologies for determining the SM parameters and effective dose, respectively, were determined. Conclusion: The SM realistically simulate the main processes in PT and can therefore be implemented in this therapy; they are simple, computable, and have other advantages over some current models. Some negative aspects were identified for some currently used probabilistic models in RT, such as the LKB NTCP model and others derived from logistic functions, which can be improved with the methods proposed in this study.

  4. Experimental methodology for assessing the environmental fate of organic chemicals in polymer matrices using column leaching studies and OECD 308 water/sediment systems: Application to tire and road wear particles.

    PubMed

    Unice, Kenneth M; Bare, Jennifer L; Kreider, Marisa L; Panko, Julie M

    2015-11-15

    Automobile tires require functional rubber additives including curing agents and antioxidants, which are potentially environmentally available from tire and road wear particles (TRWP) deposited in soil and sediment. A novel methodology was employed to evaluate the environmental fate of three commonly-used tire chemicals (N-cyclohexylbenzothiazole-2-sulfenamide (CBS), N-(1,3-dimethylbutyl)-N'-phenyl-1,4-phenylenediamine (6-PPD) and 1,3-diphenylguanidine (DPG)), using a road simulator, an artificial weathering chamber, column leaching tests, and OECD 308 sediment/water incubator studies. Environmental release factors were quantified for curing (f(C)), tire wear (f(W)), terrestrial weathering (f(S)), leaching from TRWP (f(L)), and environmental availability from TRWP (f(A)) by liquid chromatography-tandem mass spectrometry (LC/MS/MS) analyses. Cumulative fractions representing total environmental availability (F(T)) and release to water (F(R)) were calculated for the tire chemicals and 13 transformation products. F(T) for CBS, DPG and 6-PPD inclusive of transformation products for an accelerated terrestrial aging time in soil of 0.1 years was 0.08, 0.1, and 0.06, respectively (equivalent to 6 to 10% of formulated mass). In contrast, a wider range of 5.5×10(-4) (6-PPD) to 0.06 (CBS) was observed for F(R) at an accelerated age of 0.1 years, reflecting the importance of hydrophobicity and solubility for determining the release to the water phase. Significant differences (p<0.05) in the weathering factor, f(S), were observed when chemicals were categorized by boiling point or hydrolysis rate constant. A significant difference in the leaching factor, f(L), and environmental availability factor, f(A), was also observed when chemicals were categorized by log K(ow). Our methodology should be useful for lifecycle analysis of other functional polymer chemicals. Copyright © 2015 Elsevier B.V. All rights reserved.

  5. A modeling approach to account for toxicokinetic interactions in the calculation of biological hazard index for chemical mixtures.

    PubMed

    Haddad, S; Tardif, R; Viau, C; Krishnan, K

    1999-09-05

    The biological hazard index (BHI) is defined as the biological level tolerable for exposure to a mixture, and is calculated by an equation similar to the conventional hazard index. The BHI calculation, at the present time, is advocated for use in situations where toxicokinetic interactions do not occur among mixture constituents. The objective of this study was to develop an approach for calculating interactions-based BHI for chemical mixtures. The approach consisted of simulating the concentration of the exposure indicator in the biological matrix of choice (e.g. venous blood) for each component of the mixture to which workers are exposed and then comparing these to the established BEI values to calculate the BHI. The simulation of biomarker concentrations was performed using a physiologically-based toxicokinetic (PBTK) model which accounted for the mechanism of interactions among all mixture components (e.g. competitive inhibition). The usefulness of the present approach is illustrated by calculating BHI for varying ambient concentrations of a mixture of three chemicals (toluene (5-40 ppm), m-xylene (10-50 ppm), and ethylbenzene (10-50 ppm)). The results show that the interactions-based BHI can be greater or smaller than that calculated on the basis of the additivity principle, particularly at high exposure concentrations. At lower exposure concentrations (e.g. 20 ppm each of toluene, m-xylene and ethylbenzene), the BHI values obtained using the conventional methodology are similar to those obtained with the interactions-based methodology, confirming that the consequences of competitive inhibition are negligible at lower concentrations. The advantage of the PBTK model-based methodology developed in this study is that the concentrations of individual chemicals in mixtures that will not result in a significant increase in the BHI (i.e., BHI > 1) can be determined by iterative simulation.
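    A minimal sketch of the conventional additive BHI arithmetic; the biomarker concentrations and BEI-like reference values are hypothetical, and in the interactions-based variant the concentrations would instead come from a PBTK model of the mixture, which is not reproduced here.

        def biological_hazard_index(biomarker_conc, bei):
            """Conventional BHI: sum over mixture components of the predicted (or measured)
            biomarker concentration divided by its Biological Exposure Index."""
            return sum(biomarker_conc[c] / bei[c] for c in biomarker_conc)

        # Hypothetical venous-blood concentrations (mg/L) predicted for co-exposure, and
        # illustrative BEI-like reference values; both are assumptions for this sketch.
        conc = {"toluene": 0.4, "m-xylene": 1.2, "ethylbenzene": 0.9}
        bei  = {"toluene": 1.0, "m-xylene": 1.5, "ethylbenzene": 1.5}

        bhi = biological_hazard_index(conc, bei)
        print(f"BHI = {bhi:.2f} ->", "acceptable" if bhi <= 1 else "exceeds unity")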

  6. PWR Facility Dose Modeling Using MCNP5 and the CADIS/ADVANTG Variance-Reduction Methodology

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Blakeman, Edward D; Peplow, Douglas E.; Wagner, John C

    2007-09-01

    The feasibility of modeling a pressurized-water-reactor (PWR) facility and calculating dose rates at all locations within the containment and adjoining structures using MCNP5 with mesh tallies is presented. Calculations of dose rates resulting from neutron and photon sources from the reactor (operating and shut down for various periods) and the spent fuel pool, as well as for the photon source from the primary coolant loop, were all of interest. Identification of the PWR facility, development of the MCNP-based model and automation of the run process, calculation of the various sources, and development of methods for visually examining mesh tally files and extracting dose rates were all a significant part of the project. Advanced variance reduction, which was required because of the size of the model and the large amount of shielding, was performed via the CADIS/ADVANTG approach. This methodology uses an automatically generated three-dimensional discrete ordinates model to calculate adjoint fluxes from which MCNP weight windows and source bias parameters are generated. Investigative calculations were performed using a simple block model and a simplified full-scale model of the PWR containment, in which the adjoint source was placed in various regions. In general, it was shown that placement of the adjoint source on the periphery of the model provided adequate results for regions reasonably close to the source (e.g., within the containment structure for the reactor source). A modification to the CADIS/ADVANTG methodology was also studied in which a global adjoint source is weighted by the reciprocal of the dose response calculated by an earlier forward discrete ordinates calculation. This method showed improved results over those using the standard CADIS/ADVANTG approach, and its further investigation is recommended for future efforts.

  7. Proposed Methodology for Design of Carbon Fiber Reinforced Polymer Spike Anchors into Reinforced Concrete

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    MacFarlane, Eric Robert

    The included methodology, calculations, and drawings support design of Carbon Fiber Reinforced Polymer (CFRP) spike anchors for securing U-wrap CFRP onto reinforced concrete T-beams. This content pertains to an installation in one of Los Alamos National Laboratory’s facilities. The anchors are part of a seismic rehabilitation to the subject facility. The information contained here is for information purposes only. The reader is encouraged to verify all equations, details, and methodology prior to usage in future projects. However, development of the content contained here complied with Los Alamos National Laboratory’s NQA-1 quality assurance program for nuclear structures. Furthermore, the formulations and details came from the referenced published literature. This literature represents the current state of the art for FRP anchor design. Construction personnel tested the subject anchor design to the required demand level demonstrated in the calculation. The testing demonstrated the ability of the anchors noted to carry loads in excess of 15 kips in direct tension. The anchors were not tested to failure in part because of the hazards associated with testing large-capacity tensile systems to failure. The calculation, methodology, and drawing originator was Eric MacFarlane of Los Alamos National Laboratory’s (LANL) Office of Seismic Hazards and Risk Mitigation (OSHRM). The checker for all components was Mike Salmon of the LANL OSHRM. The independent reviewers of all components were Insung Kim and Loring Wyllie of Degenkolb Engineers. Note that Insung Kim contributed to the initial formulations in the calculations that pertained directly to his Doctoral research.

  8. Measuring fraction of intercepted photosynthetically active radiation with a ceptometer: the importance of adopting a universal methodological approach

    USDA-ARS?s Scientific Manuscript database

    It is desirable to be able to predict above ground biomass production indirectly, without extensive sampling or destructive harvesting. Leaf area index (LAI) is the amount of leaf surface area per ground area and is an important parameter in ecophysiology. As LAI increases, the photosynthetically ...

  9. A methodology to estimate vehicle miles traveled (VMT) fractions as an input to the mobile emission model.

    DOT National Transportation Integrated Search

    2006-01-01

    Air quality has been an issue of growing importance to the transportation sector since the enactment of the Clean Air Act Amendments of 1990 and the Transportation Equity Act for the 21st Century in 1998. According to these acts, states and local gov...

  10. A PILOT STUDY OF THE INFLUENCE OF RESIDENTIAL HAC DUTY CYCLE ON INDOOR AIR QUALITY (AE)

    EPA Science Inventory

    A simple methodology was developed to collect measurements of duty cycle, the fraction of time the heating and air conditioning (HAC) system was operating, inside residences. The primary purpose of the measurements was to assess whether the HAC duty cycle was related to reductio...

  11. Reaction Order Ambiguity in Integrated Rate Plots

    ERIC Educational Resources Information Center

    Lee, Joe

    2008-01-01

    Integrated rate plots are frequently used in reaction kinetics to determine orders of reactions. It is often emphasised, when using this methodology in practice, that it is necessary to monitor the reaction to a substantial fraction of completion for these plots to yield unambiguous orders. The present article gives a theoretical and statistical…

  12. Parameter Estimation of Fractional-Order Chaotic Systems by Using Quantum Parallel Particle Swarm Optimization Algorithm

    PubMed Central

    Huang, Yu; Guo, Feng; Li, Yongling; Liu, Yufeng

    2015-01-01

    Parameter estimation for fractional-order chaotic systems is an important issue in fractional-order chaotic control and synchronization and could be essentially formulated as a multidimensional optimization problem. A novel algorithm called quantum parallel particle swarm optimization (QPPSO) is proposed to solve the parameter estimation for fractional-order chaotic systems. The parallel characteristic of quantum computing is used in QPPSO. This characteristic exponentially increases the amount of computation performed in each generation. The behavior of particles in quantum space is restrained by the quantum evolution equation, which consists of the current rotation angle, individual optimal quantum rotation angle, and global optimal quantum rotation angle. Numerical simulation based on several typical fractional-order systems and comparisons with some typical existing algorithms show the effectiveness and efficiency of the proposed algorithm. PMID:25603158

  13. Strategies for the extraction and analysis of non-extractable polyphenols from plants.

    PubMed

    Domínguez-Rodríguez, Gloria; Marina, María Luisa; Plaza, Merichel

    2017-09-08

    The majority of studies based on phenolic compounds from plants are focused on the extractable fraction derived from an aqueous or aqueous-organic extraction. However, an important fraction of polyphenols is ignored because they remain retained in the extraction residue. These are the so-called non-extractable polyphenols (NEPs), which are high molecular weight polymeric polyphenols or individual low molecular weight phenolics associated with macromolecules. The scarce information available about NEPs shows that these compounds possess interesting biological activities. That is why interest in the study of these compounds has been increasing in recent years. Furthermore, the extraction and characterization of NEPs are considered a challenge because the developed analytical methodologies present some limitations. Thus, the present literature review summarizes current knowledge of NEPs and the different methodologies for the extraction of these compounds, with a particular focus on hydrolysis treatments. In addition, this review provides information on the most recent developments in the purification, separation, identification and quantification of NEPs from plants. Copyright © 2017 Elsevier B.V. All rights reserved.

  14. 7 CFR 1940.552 - Definitions.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... REGULATIONS (CONTINUED) GENERAL Methodology and Formulas for Allocation of Loan and Grant Program Funds § 1940..., funds will be controlled by the National Office. (b) Basic formula criteria, data source and weight. Basic formulas are used to calculate a basic state factor as a part of the methodology for allocating...

  15. 7 CFR 1940.552 - Definitions.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... REGULATIONS (CONTINUED) GENERAL Methodology and Formulas for Allocation of Loan and Grant Program Funds § 1940..., funds will be controlled by the National Office. (b) Basic formula criteria, data source and weight. Basic formulas are used to calculate a basic state factor as a part of the methodology for allocating...

  16. 7 CFR 1940.552 - Definitions.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... REGULATIONS (CONTINUED) GENERAL Methodology and Formulas for Allocation of Loan and Grant Program Funds § 1940..., funds will be controlled by the National Office. (b) Basic formula criteria, data source and weight. Basic formulas are used to calculate a basic state factor as a part of the methodology for allocating...

  17. 7 CFR 1940.552 - Definitions.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... REGULATIONS (CONTINUED) GENERAL Methodology and Formulas for Allocation of Loan and Grant Program Funds § 1940..., funds will be controlled by the National Office. (b) Basic formula criteria, data source and weight. Basic formulas are used to calculate a basic state factor as a part of the methodology for allocating...

  18. Effect of intra-fraction motion on the accumulated dose for free-breathing MR-guided stereotactic body radiation therapy of renal-cell carcinoma

    NASA Astrophysics Data System (ADS)

    Stemkens, Bjorn; Glitzner, Markus; Kontaxis, Charis; de Senneville, Baudouin Denis; Prins, Fieke M.; Crijns, Sjoerd P. M.; Kerkmeijer, Linda G. W.; Lagendijk, Jan J. W.; van den Berg, Cornelis A. T.; Tijssen, Rob H. N.

    2017-09-01

    Stereotactic body radiation therapy (SBRT) has shown great promise in increasing local control rates for renal-cell carcinoma (RCC). Characterized by steep dose gradients and high fraction doses, these hypo-fractionated treatments are, however, prone to dosimetric errors as a result of variations in intra-fraction respiratory-induced motion, such as drifts and amplitude alterations. This may lead to significant variations in the deposited dose. This study aims to develop a method for calculating the accumulated dose for MRI-guided SBRT of RCC in the presence of intra-fraction respiratory variations and determine the effect of such variations on the deposited dose. For this, RCC SBRT treatments were simulated while the underlying anatomy was moving, based on motion information from three motion models with increasing complexity: (1) STATIC, in which static anatomy was assumed, (2) AVG-RESP, in which 4D-MRI phase-volumes were time-weighted, and (3) PCA, a method that generates 3D volumes with sufficient spatio-temporal resolution to capture respiration and intra-fraction variations. Five RCC patients and two volunteers were included and treatment delivery was simulated, using motion derived from subject-specific MR imaging. Motion was most accurately estimated using the PCA method, with root-mean-squared errors of 2.7, 2.4, and 1.0 mm for STATIC, AVG-RESP and PCA, respectively. The heterogeneous patient group demonstrated relatively large dosimetric differences between the STATIC and AVG-RESP, and the PCA reconstructed dose maps, with hotspots up to 40% of the D99 and an underdosed GTV in three out of the five patients. This shows the potential importance of including intra-fraction motion variations in dose calculations.

  19. Equivalence in Dose Fall-Off for Isocentric and Nonisocentric Intracranial Treatment Modalities and Its Impact on Dose Fractionation Schemes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ma Lijun, E-mail: lijunma@radonc.ucsf.ed; Sahgal, Arjun; Descovich, Martina

    2010-03-01

    Purpose: To investigate whether dose fall-off characteristics would be significantly different among intracranial radiosurgery modalities and the influence of these characteristics on fractionation schemes in terms of normal tissue sparing. Methods and Materials: An analytic model was developed to measure dose fall-off characteristics near the target independent of treatment modalities. Variations in the peripheral dose fall-off characteristics were then examined and compared for intracranial tumors treated with Gamma Knife, Cyberknife, or Novalis LINAC-based system. Equivalent uniform biologic effective dose (EUBED) for the normal brain tissue was calculated. Functional dependence of the normal brain EUBED on varying numbers of fractions (1 to 30) was studied for the three modalities. Results: The derived model fitted remarkably well for all the cases (R² > 0.99). No statistically significant differences in the dose fall-off relationships were found between the three modalities. Based on the extent of variations in the dose fall-off curves, normal brain EUBED was found to decrease with increasing number of fractions for the targets, with alpha/beta ranging from 10 to 20. This decrease was most pronounced for hypofractionated treatments with fewer than 10 fractions. Additionally, EUBED was found to increase slightly with increasing number of fractions for targets with alpha/beta ranging from 2 to 5. Conclusion: Nearly identical dose fall-off characteristics were found for the Gamma Knife, Cyberknife, and Novalis systems. Based on EUBED calculations, normal brain sparing was found to favor hypofractionated treatments for fast-growing tumors with alpha/beta ranging from 10 to 20 and single fraction treatment for abnormal tissues with low alpha/beta values such as alpha/beta = 2.
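    The fractionation trend described in the conclusion can be illustrated with the standard linear-quadratic BED formula, holding the target BED fixed and scaling the normal-tissue dose by an assumed dose fall-off factor; this is a sketch of the general idea, not the paper's analytic fall-off model or its EUBED calculation.

        import numpy as np

        def dose_per_fraction(bed_target, n, ab):
            """Solve n*d*(1 + d/ab) = bed_target for the per-fraction dose d (LQ model)."""
            return 0.5 * ab * (np.sqrt(1 + 4 * bed_target / (n * ab)) - 1)

        def normal_tissue_bed(bed_target, n, ab_tumor, ab_normal, falloff=0.5):
            """Normal-tissue BED when the target BED is held fixed and the peripheral
            tissue receives an assumed fraction `falloff` of the target dose."""
            d = dose_per_fraction(bed_target, n, ab_tumor)
            dn = falloff * d
            return n * dn * (1 + dn / ab_normal)

        # Illustrative comparison: fast-growing target (alpha/beta = 10) versus
        # late-responding normal brain (alpha/beta = 2), fixed target BED of 60 Gy10.
        for n in (1, 3, 5, 10, 30):
            print("%2d fx -> normal-tissue BED %.1f Gy2" % (n, normal_tissue_bed(60, n, 10, 2)))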

  20. Venom from the snake Bothrops asper Garman. Purification and characterization of three phospholipases A2

    PubMed Central

    Anagón, Alejandro C.; Molinar, Ricardo R.; Possani, Lourival D.; Fletcher, Paul L.; Cronan, John E.; Julia, Jordi Z.

    1980-01-01

    The water-soluble venom of Bothrops asper Garman (San Juan Evangelista, Veracruz, México) showed 15 polypeptide bands on polyacrylamide-gel electrophoresis. This material exhibited phospholipase, hyaluronidase, N-benzoyl-l-arginine ethyl hydrolase, N-benzoyl-l-tyrosine ethyl hydrolase and phosphodiesterase activity, but no alkaline phosphatase or acid phosphatase activity. Fractionation on Sephadex G-75 afforded seven protein fractions, which were apparently less toxic than the whole venom (LD50=4.3μg/g mouse wt.). Subsequent separation of the phospholipase-positive fraction (II) on DEAE-cellulose with potassium phosphate buffers (pH7.55) gave several fractions, two being phospholipase-positive (II.6 and II.8). These fractions were further purified on DEAE-cellulose columns with potassium phosphate buffers (pH8.6). Fraction II.8.4 was rechromatographed in the same DEAE-cellulose column, giving a pure protein designated phospholipase 1. The fraction II.6.3 was further separated by gel disc electrophoresis yielding two more pure proteins designated phospholipase 2 and phospholipase 3. Analysis of phospholipids hydrolysed by these enzymes has shown that all three phospholipases belong to type A2. Amino acid analysis has shown that phospholipase A2 (type 1) has 97 residues with a calculated mol.wt. of 10978±11. Phospholipase A2 (type 2) has 96 residues with a mol.wt. of 10959±11. Phospholipase A2 (type 3) has 266 residues with 16 half-cystine residues and a calculated mol.wt. of 29042±31. Automated Edman degradation showed the N-terminal sequence to be: Asx-Leu-Trp-Glx-Phe-Gly-Glx-Met-Met-Ser-Asx-Val-Met-Arg-Lys-Asx-Val-Val-Phe-Lys-Tyr-Leu- for phospholipase A2 (type 2). PMID:7387631

  1. Methods for estimating properties of hydrocarbons comprising asphaltenes based on their solubility

    DOEpatents

    Schabron, John F.; Rovani, Jr., Joseph F.

    2016-10-04

    Disclosed herein is a method of estimating a property of a hydrocarbon comprising the steps of: preparing a liquid sample of a hydrocarbon, the hydrocarbon having asphaltene fractions therein; precipitating at least some of the asphaltenes of a hydrocarbon from the liquid sample with one or more precipitants in a chromatographic column; dissolving at least two of the different asphaltene fractions from the precipitated asphaltenes during a successive dissolution protocol; eluting the at least two different dissolved asphaltene fractions from the chromatographic column; monitoring the amount of the fractions eluted from the chromatographic column; using detected signals to calculate a percentage of a peak area for a first of the asphaltene fractions and a peak area for a second of the asphaltene fractions relative to the total peak areas, to determine a parameter that relates to the property of the hydrocarbon; and estimating the property of the hydrocarbon.

  2. Effect of radiation protraction on BED in the case of large fraction dose

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kuperman, V. Y.

    2013-08-15

    Purpose: To investigate the effect of radiation protraction on biologically effective dose (BED) in the case when dose per fraction is significantly greater than the standard dose of 2 Gy. Methods: By using the modified linear-quadratic model with monoexponential repair, the authors investigate the effect of long treatment times combined with dose escalation. Results: The dependences of the protraction factor and the corresponding BED on fraction time were determined for different doses per fraction typical for stereotactic radiosurgery (SRS) and stereotactic body radiation therapy (SBRT). In the calculations, the authors consider changes in the BED to the normal tissue under the condition of fixed BED to the target. Conclusion: The obtained results demonstrate that simultaneous increase in fraction time and dose per fraction can be beneficial for SRS and SBRT because of the related decrease in BED to normal structures while BED to the target is fixed.
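    A small sketch of a protraction-factor calculation using the standard Lea-Catcheside factor for a single continuous fraction with monoexponential repair; the repair rate, doses and fraction times are assumptions, and the paper's modified linear-quadratic model may differ in detail.

        import numpy as np

        def lea_catcheside(T, mu):
            """Protraction (Lea-Catcheside) factor for a single continuous fraction of
            duration T with monoexponential sublethal-damage repair rate mu (1/h)."""
            x = mu * T
            return 2.0 * (x - 1.0 + np.exp(-x)) / x**2

        def bed_protracted(dose, T, alpha_beta, mu=0.5):
            """BED for one fraction of size `dose` delivered over time T (hours)."""
            g = lea_catcheside(T, mu)
            return dose * (1.0 + g * dose / alpha_beta)

        # Illustrative: an 18 Gy single fraction evaluated for normal tissue
        # (alpha/beta = 3 Gy) at several delivery times; the repair rate mu is assumed.
        for T in (0.25, 0.5, 1.0, 1.5):
            print("T = %.2f h -> BED = %.1f Gy3" % (T, bed_protracted(18, T, 3)))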

  3. A simple reaction-rate model for turbulent diffusion flames

    NASA Technical Reports Server (NTRS)

    Bangert, L. H.

    1975-01-01

    A simple reaction rate model is proposed for turbulent diffusion flames in which the reaction rate is proportional to the turbulence mixing rate. The reaction rate is also dependent on the mean mass fraction and the mean square fluctuation of mass fraction of each reactant. Calculations are compared with experimental data and are generally successful in predicting the measured quantities.

  4. Three phase heat and mass transfer model for unsaturated soil freezing process: Part 1 - model development

    NASA Astrophysics Data System (ADS)

    Xu, Fei; Zhang, Yaning; Jin, Guangri; Li, Bingxi; Kim, Yong-Song; Xie, Gongnan; Fu, Zhongbin

    2018-04-01

    A three-phase model capable of predicting the heat transfer and moisture migration for soil freezing process was developed based on the Shen-Chen model and the mechanisms of heat and mass transfer in unsaturated soil freezing. The pre-melted film was taken into consideration, and the relationship between film thickness and soil temperature was used to calculate the liquid water fraction in both frozen zone and freezing fringe. The force that causes the moisture migration was calculated by the sum of several interactive forces and the suction in the pre-melted film was regarded as an interactive force between ice and water. Two kinds of resistance were regarded as a kind of body force related to the water films between the ice grains and soil grains, and a block force instead of gravity was introduced to keep balance with gravity before soil freezing. Lattice Boltzmann method was used in the simulation, and the input variables for the simulation included the size of computational domain, obstacle fraction, liquid water fraction, air fraction and soil porosity. The model is capable of predicting the water content distribution along soil depth and variations in water content and temperature during soil freezing process.

  5. Boundary Layer Aerosol Composition over Sierra Nevada Mountains using 9.11- and 10.59-micron CW Lidars and Modeled Backscatter from Size Distribution Data

    NASA Technical Reports Server (NTRS)

    Cutten, D. R.; Jarzembski, M. A.; Srivastava, V.; Pueschel, R. F.; Howard, S. D.; McCaul, E. W., Jr.

    2003-01-01

    An inversion technique has been developed to determine volume fractions of an atmospheric aerosol composed primarily of ammonium sulfate and ammonium nitrate and water combined with fixed concentration of elemental and organic carbon. It is based on measured aerosol backscatter obtained with 9.11- and 10.59-micron wavelength continuous wave CO2 lidars and modeled backscatter from aerosol size distribution data. The technique is demonstrated during a flight of the NASA DC-8 aircraft over the Sierra Nevada Mountain Range, California on 19 September, 1995. Volume fraction of each component and effective complex refractive index of the composite particle were determined assuming an internally mixed composite aerosol model. The volume fractions were also used to re-compute aerosol backscatter, providing good agreement with the lidar-measured data. The robustness of the technique for determining volume fractions was extended with a comparison of calculated 2.1-micron backscatter from size distribution data with the measured lidar data converted to 2.1-micron backscatter using an earlier derived algorithm, verifying the algorithm as well as the backscatter calculations.

  6. Skin dose for head and neck cancer patients treated with intensity-modulated radiation therapy (IMRT)

    NASA Astrophysics Data System (ADS)

    Fu, Hsiao-Ju; Li, Chi-Wei; Tsai, Wei-Ta; Chang, Chih-Chia; Tsang, Yuk-Wah

    2017-11-01

    The research focus is the reliability of ultrathin thermoluminescent dosimeters (TLDs) and ISP Gafchromic EBT2 film for measuring the surface dose in a phantom and the skin dose in head-and-neck patients treated with the intensity-modulated radiation therapy (IMRT) technique. Seven-field treatment plans with a prescribed dose of 180 cGy were generated on the Eclipse treatment planning system, which utilized the pencil beam calculation algorithm (PBC). In calibration tests, the coefficient of variation of the ultrathin TLDs was within 3%, and the points on the calibration curve of the Gafchromic film were within 1% variation. Five measurements were taken on the phantom using ultrathin TLDs and EBT2 film, respectively. The mean surface doses measured with the ultrathin TLDs and the EBT2 film agreed within 5%. Skin doses of 6 patients were measured for the initial 5 fractions and the mean dose per fraction was calculated. If the extrapolated doses for 30 fractions were below 4000 cGy, the skin reaction grading observed according to the Radiation Therapy Oncology Group (RTOG) scale was either grade 1 or grade 2. If the surface dose exceeded 5000 cGy in 32 fractions, then grade 3 skin reactions were observed.

  7. Numerical simulation of convective heat transfer of nonhomogeneous nanofluid using Buongiorno model

    NASA Astrophysics Data System (ADS)

    Sayyar, Ramin Onsor; Saghafian, Mohsen

    2017-08-01

    The aim is to assess the flow and convective heat transfer of laminar developing flow of an Al2O3-water nanofluid inside a vertical tube. A finite volume method procedure on a structured grid was used to solve the governing partial differential equations. The adopted model (Buongiorno model) assumes that the nanofluid is a mixture of a base fluid and nanoparticles, with the relative motion caused by Brownian motion and thermophoretic diffusion. The results showed that the distribution of nanoparticles remained almost uniform except in a region near the hot wall where the nanoparticle volume fraction was reduced as a result of thermophoresis. The simulation results also indicated that there is an optimal nanoparticle volume fraction of about 1-2% at each Reynolds number for which the maximum performance evaluation criterion is obtained. The differences in Nusselt number and nondimensional pressure drop between the two-phase model and the single-phase model were less than 5% at all nanoparticle volume fractions and can be neglected. In natural convection, for a 4% nanoparticle volume fraction, more than 15% enhancement of the Nusselt number was achieved at Gr = 10, but at Gr = 300 it was less than 1%.

  8. Identification of Hierarchies of Student Learning about Percentages Using Rasch Analysis

    ERIC Educational Resources Information Center

    Burfitt, Joan

    2013-01-01

    A review of the research literature indicated that there were probable orders in which students develop understandings and skills for calculating with percentages. Such calculations might include using models to represent percentages, knowing fraction equivalents, selection of strategies to solve problems and determination of percentage change. To…

  9. Discovery

    ERIC Educational Resources Information Center

    de Mestre, Neville

    2010-01-01

    All common fractions can be written in decimal form. In this Discovery article, the author suggests that teachers ask their students to calculate the decimals by actually doing the divisions themselves, and later on they can use a calculator to check their answers. This article presents a lesson based on the research of Bolt (1982).

  10. Statistical power calculations for mixed pharmacokinetic study designs using a population approach.

    PubMed

    Kloprogge, Frank; Simpson, Julie A; Day, Nicholas P J; White, Nicholas J; Tarning, Joel

    2014-09-01

    Simultaneous modelling of dense and sparse pharmacokinetic data is possible with a population approach. To determine the number of individuals required to detect the effect of a covariate, simulation-based power calculation methodologies can be employed. The Monte Carlo Mapped Power method (a simulation-based power calculation methodology using the likelihood ratio test) was extended in the current study to perform sample size calculations for mixed pharmacokinetic studies (i.e. both sparse and dense data collection). A workflow guiding an easy and straightforward pharmacokinetic study design, considering also the cost-effectiveness of alternative study designs, was used in this analysis. Initially, data were simulated for a hypothetical drug and then for the anti-malarial drug, dihydroartemisinin. Two datasets (sampling design A: dense; sampling design B: sparse) were simulated using a pharmacokinetic model that included a binary covariate effect and subsequently re-estimated using (1) the same model and (2) a model not including the covariate effect in NONMEM 7.2. Power calculations were performed for varying numbers of patients with sampling designs A and B. Study designs with statistical power >80% were selected and further evaluated for cost-effectiveness. The simulation studies of the hypothetical drug and the anti-malarial drug dihydroartemisinin demonstrated that the simulation-based power calculation methodology, based on the Monte Carlo Mapped Power method, can be utilised to evaluate and determine the sample size of mixed (part sparsely and part densely sampled) study designs. The developed method can contribute to the design of robust and efficient pharmacokinetic studies.
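    A simplified sketch of the Monte Carlo Mapped Power idea: per-individual contributions to the likelihood-ratio statistic are resampled to map power against the number of subjects. Here the per-individual dOFV values are synthetic stand-ins for what would come from fitting the full and reduced models in NONMEM.

        import numpy as np
        from scipy.stats import chi2

        def mcmp_power(delta_ofv_individuals, n_subjects, n_resamples=2000, df=1, alpha=0.05):
            """Monte Carlo Mapped Power (simplified): resample individual dOFV contributions,
            sum them for a study of n_subjects, and count how often the summed
            likelihood-ratio statistic exceeds the chi-square critical value."""
            rng = np.random.default_rng(3)
            crit = chi2.ppf(1 - alpha, df)
            hits = 0
            for _ in range(n_resamples):
                sample = rng.choice(delta_ofv_individuals, size=n_subjects, replace=True)
                hits += sample.sum() > crit
            return hits / n_resamples

        # Synthetic stand-in for per-individual dOFV values obtained from one large
        # simulated dataset fitted with and without the covariate effect.
        rng = np.random.default_rng(4)
        dofv = rng.gamma(shape=0.4, scale=0.5, size=1000)

        for n in (20, 50, 100, 200):
            print(f"n = {n:3d}  power ~ {mcmp_power(dofv, n):.2f}")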

  11. Biochemometrics for Natural Products Research: Comparison of Data Analysis Approaches and Application to Identification of Bioactive Compounds.

    PubMed

    Kellogg, Joshua J; Todd, Daniel A; Egan, Joseph M; Raja, Huzefa A; Oberlies, Nicholas H; Kvalheim, Olav M; Cech, Nadja B

    2016-02-26

    A central challenge of natural products research is assigning bioactive compounds from complex mixtures. The gold standard approach to address this challenge, bioassay-guided fractionation, is often biased toward abundant, rather than bioactive, mixture components. This study evaluated the combination of bioassay-guided fractionation with untargeted metabolite profiling to improve active component identification early in the fractionation process. Key to this methodology was statistical modeling of the integrated biological and chemical data sets (biochemometric analysis). Three data analysis approaches for biochemometric analysis were compared, namely, partial least-squares loading vectors, S-plots, and the selectivity ratio. Extracts from the endophytic fungi Alternaria sp. and Pyrenochaeta sp. with antimicrobial activity against Staphylococcus aureus served as test cases. Biochemometric analysis incorporating the selectivity ratio performed best in identifying bioactive ions from these extracts early in the fractionation process, yielding altersetin (3, MIC 0.23 μg/mL) and macrosphelide A (4, MIC 75 μg/mL) as antibacterial constituents from Alternaria sp. and Pyrenochaeta sp., respectively. This study demonstrates the potential of biochemometrics coupled with bioassay-guided fractionation to identify bioactive mixture components. A benefit of this approach is the ability to integrate multiple stages of fractionation and bioassay data into a single analysis.

  12. A sup-score test for the cure fraction in mixture models for long-term survivors.

    PubMed

    Hsu, Wei-Wen; Todem, David; Kim, KyungMann

    2016-12-01

    The evaluation of cure fractions in oncology research under the well known cure rate model has attracted considerable attention in the literature, but most of the existing testing procedures have relied on restrictive assumptions. A common assumption has been to restrict the cure fraction to a constant under alternatives to homogeneity, thereby neglecting any information from covariates. This article extends the literature by developing a score-based statistic that incorporates covariate information to detect cure fractions, with the existing testing procedure serving as a special case. A complication of this extension, however, is that the implied hypotheses are not typical and standard regularity conditions to conduct the test may not even hold. Using empirical processes arguments, we construct a sup-score test statistic for cure fractions and establish its limiting null distribution as a functional of mixtures of chi-square processes. In practice, we suggest a simple resampling procedure to approximate this limiting distribution. Our simulation results show that the proposed test can greatly improve efficiency over tests that neglect the heterogeneity of the cure fraction under the alternative. The practical utility of the methodology is illustrated using ovarian cancer survival data with long-term follow-up from the surveillance, epidemiology, and end results registry. © 2016, The International Biometric Society.

  13. A finite element formulation preserving symmetric and banded diffusion stiffness matrix characteristics for fractional differential equations

    NASA Astrophysics Data System (ADS)

    Lin, Zeng; Wang, Dongdong

    2017-10-01

    Due to the nonlocal property of the fractional derivative, the finite element analysis of fractional diffusion equation often leads to a dense and non-symmetric stiffness matrix, in contrast to the conventional finite element formulation with a particularly desirable symmetric and banded stiffness matrix structure for the typical diffusion equation. This work first proposes a finite element formulation that preserves the symmetry and banded stiffness matrix characteristics for the fractional diffusion equation. The key point of the proposed formulation is the symmetric weak form construction through introducing a fractional weight function. It turns out that the stiffness part of the present formulation is identical to its counterpart of the finite element method for the conventional diffusion equation and thus the stiffness matrix formulation becomes trivial. Meanwhile, the fractional derivative effect in the discrete formulation is completely transferred to the force vector, which is obviously much easier and efficient to compute than the dense fractional derivative stiffness matrix. Subsequently, it is further shown that for the general fractional advection-diffusion-reaction equation, the symmetric and banded structure can also be maintained for the diffusion stiffness matrix, although the total stiffness matrix is not symmetric in this case. More importantly, it is demonstrated that under certain conditions this symmetric diffusion stiffness matrix formulation is capable of producing very favorable numerical solutions in comparison with the conventional non-symmetric diffusion stiffness matrix finite element formulation. The effectiveness of the proposed methodology is illustrated through a series of numerical examples.

  14. Bioconversion of hybrid poplar to ethanol and co-products using an organosolv fractionation process: optimization of process yields.

    PubMed

    Pan, Xuejun; Gilkes, Neil; Kadla, John; Pye, Kendall; Saka, Shiro; Gregg, David; Ehara, Katsunobu; Xie, Dan; Lam, Dexter; Saddler, Jack

    2006-08-05

    An organosolv process involving extraction with hot aqueous ethanol has been evaluated for bioconversion of hybrid poplar to ethanol. The process resulted in fractionation of poplar chips into a cellulose-rich solids fraction, an ethanol organosolv lignin (EOL) fraction, and a water-soluble fraction containing hemicellulosic sugars, sugar breakdown products, degraded lignin, and other components. The influence of four independent process variables (temperature, time, catalyst dose, and ethanol concentration) on product yields was analyzed over a broad range using a small composite design and response surface methodology. Center point conditions for the composite design (180 degrees C, 60 min, 1.25% H(2)SO(4), and 60% ethanol), yielded a solids fraction containing approximately 88% of the cellulose present in the untreated poplar. Approximately 82% of the total cellulose in the untreated poplar was recovered as monomeric glucose after hydrolysis of the solids fraction for 24 h using a low enzyme loading (20 filter paper units of cellulase/g cellulose); approximately 85% was recovered after 48 h hydrolysis. Total recovery of xylose (soluble and insoluble) was equivalent to approximately 72% of the xylose present in untreated wood. Approximately 74% of the lignin in untreated wood was recovered as EOL. Other cooking conditions resulted in either similar or inferior product yields although the distribution of components between the various fractions differed markedly. Data analysis generated regression models that describe process responses for any combination of the four variables. (c) 2006 Wiley Periodicals, Inc.

  15. Complexity metric based on fraction of penumbra dose - initial study

    NASA Astrophysics Data System (ADS)

    Bäck, A.; Nordström, F.; Gustafsson, M.; Götstedt, J.; Karlsson Hauer, A.

    2017-05-01

    Volumetric modulated arc therapy improves radiotherapy outcomes for many patients compared to conventional three-dimensional conformal radiotherapy, but requires a more extensive, most often measurement-based, quality assurance. Multileaf collimator (MLC) aperture-based complexity metrics have been suggested as a way to distinguish complex treatment plans unsuitable for treatment without time-consuming measurements. This study introduces a spatially resolved complexity score that correlates with the fraction of penumbra dose and gives information on the spatial distribution and the clinical relevance of the calculated complexity. The complexity metric is described and an initial study on the correlation between the complexity score and the difference between measured and calculated dose for 30 MLC openings is presented. The complexity scores were found to correlate with differences between measurements and calculations with a Pearson’s r-value of 0.97.

  16. A DIRECT MEASUREMENT OF THE BARYONIC MASS FUNCTION OF GALAXIES AND IMPLICATIONS FOR THE GALACTIC BARYON FRACTION

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Papastergis, Emmanouil; Huang, Shan; Giovanelli, Riccardo

    We use both an H I-selected and an optically selected galaxy sample to directly measure the abundance of galaxies as a function of their 'baryonic' mass (stars + atomic gas). Stellar masses are calculated based on optical data from the Sloan Digital Sky Survey and atomic gas masses are calculated using atomic hydrogen (H I) emission line data from the Arecibo Legacy Fast ALFA survey. By using the technique of abundance matching, we combine the measured baryonic function of galaxies with the dark matter halo mass function in a ΛCDM universe, in order to determine the galactic baryon fraction as a function of host halo mass. We find that the baryon fraction of low-mass halos is much smaller than the cosmic value, even when atomic gas is taken into account. We find that the galactic baryon deficit increases monotonically with decreasing halo mass, in contrast with previous studies which suggested an approximately constant baryon fraction at the low-mass end. We argue that the observed baryon fractions of low-mass halos cannot be explained by reionization heating alone, and that additional feedback mechanisms (e.g., supernova blowout) must be invoked. However, the outflow rates needed to reproduce our result are not easily accommodated in the standard picture of galaxy formation in a ΛCDM universe.
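    A schematic sketch of the abundance-matching step itself, using placeholder Schechter-like baryonic and halo mass functions; the functional forms, normalizations and resulting baryon fractions are illustrative only and do not reproduce the survey measurements.

        import numpy as np

        def cumulative_number_density(masses, phi):
            """Cumulative number density n(>M) from a differential mass function phi(M) dlogM."""
            dlogm = np.diff(np.log10(masses)).mean()
            return np.cumsum((phi * dlogm)[::-1])[::-1]

        # Placeholder Schechter-like mass functions (arbitrary but consistent units).
        m_b = np.logspace(7, 12, 200)                    # baryonic masses [Msun]
        m_h = np.logspace(9, 15, 200)                    # halo masses     [Msun]
        phi_b = (m_b / 1e10) ** -0.3 * np.exp(-m_b / 1e11)
        phi_h = (m_h / 1e12) ** -0.9 * np.exp(-m_h / 1e14)

        n_b = cumulative_number_density(m_b, phi_b)
        n_h = cumulative_number_density(m_h, phi_h)

        # Abundance matching: a galaxy of baryonic mass M_b is assigned the halo mass M_h
        # that has the same cumulative number density.
        mh_of_mb = np.interp(n_b[::-1], n_h[::-1], m_h[::-1])[::-1]
        baryon_fraction = m_b / mh_of_mb

        for i in (0, 100, 199):
            print("M_b = %.1e -> M_h = %.1e -> f_b = %.2e" % (m_b[i], mh_of_mb[i], baryon_fraction[i]))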

  17. Generation of dark hollow beams by using a fractional radial Hilbert transform system

    NASA Astrophysics Data System (ADS)

    Xie, Qiansen; Zhao, Daomu

    2007-07-01

    The radial Hilbert transform has been extended to the fractional domain, yielding what may be called the fractional radial Hilbert transform (FRHT). Using the edge-enhancement characteristics of this transform, we convert a Gaussian light beam into a variety of dark hollow beams (DHBs). Based on the fact that a hard-edged aperture can be expanded approximately as a finite sum of complex Gaussian functions, the analytical expression of a Gaussian beam passing through an FRHT system has been derived. As a numerical example, the properties of the DHBs with different fractional orders are illustrated graphically. The calculation results obtained by use of the analytical method and the integral method are also compared.

  18. A semi-analytical method for the computation of the Lyapunov exponents of fractional-order systems

    NASA Astrophysics Data System (ADS)

    Caponetto, Riccardo; Fazzino, Stefano

    2013-01-01

    Fractional-order differential equations are interesting for their applications in the construction of mathematical models in finance, materials science or diffusion. In this paper, a well-known transformation technique, the Differential Transform Method (DTM), is applied to fractional differential equations for calculating Lyapunov exponents of fractional-order systems. It is known that the Lyapunov exponents, first introduced by Oseledec, play a crucial role in characterizing the behaviour of dynamical systems. They can be used to analyze the sensitive dependence on initial conditions and the presence of chaotic attractors. The results reveal that the proposed method is very effective and simple and leads to accurate, approximately convergent solutions.

  19. Analytical solution to the fractional polytropic gas spheres

    NASA Astrophysics Data System (ADS)

    Nouh, Mohamed I.; Abdel-Salam, Emad A.-B.

    2018-04-01

    The Lane-Emden equation can be used to model stellar interiors, star clusters and many configurations in astrophysics. Unfortunately, there is an exact solution only for the polytropic indices n = 0, 1 and 5. In the present paper, a series solution for the fractional Lane-Emden equation is presented. The solution is performed in the framework of modified Riemann-Liouville derivatives. The obtained results recover the well-known series solutions when α = 1. The fractional model of n = 3 is calculated and the mass-radius relation, density ratio, pressure ratio and temperature ratio are investigated. The fractional star appears much different from the integer star, as it is denser, more stressed and hotter than the integer star.

  20. Individual Rocks Segmentation in Terrestrial Laser Scanning Point Cloud Using Iterative Dbscan Algorithm

    NASA Astrophysics Data System (ADS)

    Walicka, A.; Jóźków, G.; Borkowski, A.

    2018-05-01

    Fluvial transport is an important aspect of hydrological and geomorphologic studies. Knowledge about the movement parameters of different-size fractions is essential in many applications, such as the exploration of watercourse changes, the calculation of river bed parameters or the investigation of the frequency and nature of weather events. Traditional techniques used for fluvial transport investigations do not provide any information about the long-term horizontal movement of the rocks. This information can be gained by means of terrestrial laser scanning (TLS). However, this is a complex issue consisting of several stages of data processing. In this study, a methodology for individual rock segmentation from a TLS point cloud is proposed, which is the first step of a semi-automatic algorithm for movement detection of individual rocks. The proposed algorithm is executed in two steps. Firstly, the point cloud is classified as rocks or background using only geometrical information. Secondly, the DBSCAN algorithm is executed iteratively on points classified as rocks until only one rock is detected in each segment. The number of rocks in each segment is determined using principal component analysis (PCA) and a simple derivative method for peak detection. As a result, several segments that correspond to individual rocks are formed. Numerical tests were executed on two test samples. The results of the semi-automatic segmentation were compared to results acquired by manual segmentation. The proposed methodology successfully segmented 76% and 72% of the rocks in test sample 1 and test sample 2, respectively.
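    A minimal sketch of the iterative-DBSCAN idea using scikit-learn on a synthetic point cloud; the ground/rock classification step is omitted, and the paper's PCA-plus-peak-detection test for deciding whether a segment holds a single rock is replaced by a crude elongation check, so the names and thresholds here are assumptions.

        import numpy as np
        from sklearn.cluster import DBSCAN
        from sklearn.decomposition import PCA

        rng = np.random.default_rng(2)

        def fake_rock(center, n=300, size=0.1):
            """Synthetic stand-in for a scanned rock: a noisy ellipsoidal point blob."""
            return center + rng.normal(scale=size, size=(n, 3)) * np.array([1.0, 0.7, 0.4])

        def looks_like_single_rock(points, elongation_limit=4.0):
            """Crude stand-in for the single-rock test: accept the segment if its first
            principal axis is not much longer than the second."""
            var = PCA(n_components=3).fit(points).explained_variance_
            return var[0] / max(var[1], 1e-9) < elongation_limit

        def iterative_dbscan(points, eps=0.3, min_samples=20, max_rounds=5):
            """Split segments with DBSCAN, shrinking eps each round, until every segment
            passes the single-rock test or the round limit is reached."""
            segments, queue = [], [points]
            for _ in range(max_rounds):
                if not queue:
                    break
                next_queue = []
                for seg in queue:
                    if looks_like_single_rock(seg):
                        segments.append(seg)
                        continue
                    labels = DBSCAN(eps=eps, min_samples=min_samples).fit_predict(seg)
                    clusters = [seg[labels == lab] for lab in set(labels) - {-1}]
                    next_queue.extend(clusters if clusters else [seg])
                queue, eps = next_queue, eps * 0.7
            return segments + queue          # any leftover segments are returned unsplit

        cloud = np.vstack([fake_rock([0, 0, 0]), fake_rock([1.5, 0, 0]), fake_rock([3.0, 0, 0])])
        print("segments found:", len(iterative_dbscan(cloud)))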

  1. Greenhouse gas emissions from vegetation fires in Southern Africa.

    PubMed

    Scholes, R J

    1995-01-01

    Methane (CH4), carbon monoxide (CO), nitrogen oxides (NOx), volatile organic carbon, and aerosols emitted as a result of the deliberate or accidental burning of natural vegetation constitute a large component of the greenhouse gas emissions of many African countries, but the data needed for calculating these emissions by the IPCC methodology is sparse and subject to estimation errors. An improved procedure for estimating emissions from fires in southern Africa has been developed. The proposed procedure involves reclassifying existing vegetation maps into one of eleven broad, functional vegetation classes. Fuel loads are calculated within each 0.5 × 0.5° cell based on empirical relationships to climate data for each class. The fractional area of each class that burns is estimated by using daily low-resolution satellite fire detection, which is calibrated against a subsample of pre- and post-fire high-resolution satellite images. The emission factors that relate the quantity of gas released to the mass of fuel burned are based on recent field campaigns in Africa and are related to combustion efficiency, which is in turn related to the fuel mix. The emissions are summed over the 1989 fire season for Africa south of the equator. The estimated emissions from vegetation burning in the subcontinent are 0.5 Tg CH4, 14.9 Tg CO, 1.05 Tg NOx, and 1.08 Tg of particles smaller than 2.5µm. The 324 Tg CO2 emitted is expected to be reabsorbed in subsequent years. These estimates are smaller than previous estimates.
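    A sketch of the bottom-up bookkeeping such an estimate rests on (burned area × fuel load × combustion completeness × emission factor, summed over vegetation classes and species); the class areas, fuel loads and emission factors below are order-of-magnitude placeholders, not the paper's calibrated values.

        # Illustrative vegetation classes: area (km^2), fraction burned, fuel load (t/ha)
        # and combustion completeness. All numbers are placeholders.
        classes = {
            "moist savanna": dict(area_km2=1.2e6, frac_burned=0.25, fuel_t_ha=6.0, completeness=0.85),
            "arid savanna":  dict(area_km2=2.0e6, frac_burned=0.15, fuel_t_ha=2.5, completeness=0.75),
        }

        # Assumed emission factors in g of species per kg of dry matter burned.
        emission_factors = {"CO2": 1650.0, "CO": 65.0, "CH4": 2.3, "NOx": 3.5}

        def emissions_tg(classes, emission_factors):
            """Total emissions per species in Tg, summed over vegetation classes."""
            out = {sp: 0.0 for sp in emission_factors}
            for c in classes.values():
                burned_ha = c["area_km2"] * 100 * c["frac_burned"]        # km^2 -> ha
                dry_matter_kg = burned_ha * c["fuel_t_ha"] * 1000 * c["completeness"]
                for sp, ef in emission_factors.items():
                    out[sp] += dry_matter_kg * ef / 1e12                  # g -> Tg
            return out

        for sp, tg in emissions_tg(classes, emission_factors).items():
            print(f"{sp}: {tg:.2f} Tg")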

  2. Direct Simulation of Extinction in a Slab of Spherical Particles

    NASA Technical Reports Server (NTRS)

    Mackowski, D.W.; Mishchenko, Michael I.

    2013-01-01

    The exact multiple sphere superposition method is used to calculate the coherent and incoherent contributions to the ensemble-averaged electric field amplitude and Poynting vector in systems of randomly positioned nonabsorbing spherical particles. The target systems consist of cylindrical volumes, with radius several times larger than length, containing spheres with positional configurations generated by a Monte Carlo sampling method. Spatially dependent values for coherent electric field amplitude, coherent energy flux, and diffuse energy flux, are calculated by averaging of exact local field and flux values over multiple configurations and over spatially independent directions for fixed target geometry, sphere properties, and sphere volume fraction. Our results reveal exponential attenuation of the coherent field and the coherent energy flux inside the particulate layer and thereby further corroborate the general methodology of the microphysical radiative transfer theory. An effective medium model based on plane wave transmission and reflection by a plane layer is used to model the dependence of the coherent electric field on particle packing density. The effective attenuation coefficient of the random medium, computed from the direct simulations, is found to agree closely with effective medium theories and with measurements. In addition, the simulation results reveal the presence of a counter-propagating component to the coherent field, which arises due to the internal reflection of the main coherent field component by the target boundary. The characteristics of the diffuse flux are compared to, and found to be consistent with, a model based on the diffusion approximation of the radiative transfer theory.

  3. Tuberculosis DALY-Gap: Spatial and Quantitative Comparison of Disease Burden Across Urban Slum and Non-slum Census Tracts.

    PubMed

    Marlow, Mariel A; Maciel, Ethel Leonor Noia; Sales, Carolina Maia Martins; Gomes, Teresa; Snyder, Robert E; Daumas, Regina Paiva; Riley, Lee W

    2015-08-01

    To quantitatively assess disease burden due to tuberculosis between populations residing in and outside of urban informal settlements in Rio de Janeiro, Brazil, we compared disability-adjusted life years (DALYs), or "DALY-gap." Using the 2010 Brazilian census definition of informal settlements as aglomerados subnormais (AGSN), we allocated tuberculosis (TB) DALYs to AGSN vs non-AGSN census tracts based on geocoded addresses of TB cases reported to the Brazilian Information System for Notifiable Diseases in 2005 and 2010. DALYs were calculated based on the 2010 Global Burden of Disease methodology. DALY-gap was calculated as the difference between age-adjusted DALYs/100,000 population between AGSN and non-AGSN. Total TB DALY in Rio in 2010 was 16,731 (266 DALYs/100,000). DALYs were higher in AGSN census tracts (306 vs 236 DALYs/100,000), yielding a DALY-gap of 70 DALYs/100,000. Attributable DALY fraction for living in an AGSN was 25.4%. DALY-gap was highest for males 40-59 years of age (501 DALYs/100,000) and in census tracts with <60% electricity (12,327 DALYs/100,000). DALY-gap comparison revealed spatial and quantitative differences in TB burden between slum vs non-slum census tracts that were not apparent using traditional measures of incidence and mortality. This metric could be applied to compare TB burden or burden for other diseases in mega-cities with large informal settlements for more targeted resource allocation and evaluation of intervention programs.
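
    A hedged sketch of the DALY-gap arithmetic: a DALY rate per 100,000 population for each tract group and the difference between the slum (AGSN) and non-slum rates. The rates reuse the figures quoted above; the helper functions, their names and the population figure are assumptions.

    def rate_per_100k(dalys, population):
        """DALYs per 100,000 population (age adjustment assumed done upstream)."""
        return dalys / population * 1e5

    def daly_gap(agsn_rate, non_agsn_rate):
        """DALY-gap: difference in age-adjusted DALYs/100,000 between tract groups."""
        return agsn_rate - non_agsn_rate

    print(round(rate_per_100k(16_731, 6.3e6)))                 # ~266 DALYs/100,000 city-wide
    print(daly_gap(agsn_rate=306.0, non_agsn_rate=236.0))      # 70 DALYs/100,000, as reported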

  4. A novel continuous fractional sliding mode control

    NASA Astrophysics Data System (ADS)

    Muñoz-Vázquez, A. J.; Parra-Vega, V.; Sánchez-Orta, A.

    2017-10-01

    A new fractional-order controller is proposed, whose novelty is twofold: (i) it withstands a class of continuous but not necessarily differentiable disturbances as well as uncertainties and unmodelled dynamics, and (ii) based on a principle of dynamic memory resetting of the differintegral operator, it enforces an invariant sliding mode in finite time. Both (i) and (ii) account for exponential convergence of tracking errors, and this principle is instrumental in demonstrating closed-loop stability, robustness and a sustained sliding motion, as well as in showing that high frequencies are filtered out from the control signal. The proposed methodology is illustrated with a representative simulation study.
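
    Controllers of this kind are built on a fractional differintegral operator; the sketch below shows a generic Grünwald-Letnikov discretisation of D^α, whose growing sum over past samples is the "memory" that such schemes truncate or reset. It is a textbook numerical illustration under assumed names and a test signal, not the paper's controller.

    import numpy as np

    def gl_fractional_derivative(f, alpha, h):
        """Grünwald-Letnikov approximation of D^alpha f on a uniform grid of step h."""
        n = len(f)
        w = np.empty(n)
        w[0] = 1.0
        for j in range(1, n):                   # recursive binomial weights (-1)^j * C(alpha, j)
            w[j] = w[j - 1] * (1.0 - (alpha + 1.0) / j)
        d = np.array([np.dot(w[:k + 1], f[k::-1]) for k in range(n)])
        return d / h ** alpha

    t = np.linspace(0.0, 1.0, 201)
    approx = gl_fractional_derivative(t, 0.5, t[1] - t[0])
    print(approx[-1])   # D^0.5 of f(t) = t at t = 1 is 2/sqrt(pi) ~ 1.128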

  5. Numerical investigation of gapped edge states in fractional quantum Hall-superconductor heterostructures

    NASA Astrophysics Data System (ADS)

    Repellin, Cécile; Cook, Ashley M.; Neupert, Titus; Regnault, Nicolas

    2018-03-01

    Fractional quantum Hall-superconductor heterostructures may provide a platform towards non-abelian topological modes beyond Majoranas. However, their quantitative theoretical study remains extremely challenging. We propose and implement a numerical setup for studying edge states of fractional quantum Hall droplets with a superconducting instability. The fully gapped edges carry a topological degree of freedom that can encode quantum information protected against local perturbations. We simulate such a system numerically using exact diagonalization by restricting the calculation to the quasihole subspace of a (time-reversal symmetric) bilayer fractional quantum Hall system of Laughlin ν = 1/3 states. We show that the edge ground states are permuted by spin-dependent flux insertion and demonstrate their fractional 6π Josephson effect, evidencing their topological nature and the Cooper pairing of fractionalized quasiparticles. The versatility and efficiency of our setup make it a well-suited method to tackle wider questions of edge phases and phase transitions in fractional quantum Hall systems.

  6. Application of Fuzzy Logic to Matrix FMECA

    NASA Astrophysics Data System (ADS)

    Shankar, N. Ravi; Prabhu, B. S.

    2001-04-01

    A methodology combining the benefits of Fuzzy Logic and Matrix FMEA is presented in this paper. The presented methodology extends risk prioritization beyond the conventional Risk Priority Number (RPN) method. Fuzzy logic is used to calculate the criticality rank. The matrix approach is also improved further to develop a pictorial representation that retains all relevant qualitative and quantitative information about the relationships among several FMEA elements. The methodology is demonstrated by application to an illustrative example.
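
    A hedged sketch of how a fuzzy criticality rank can be formed: severity and occurrence ratings are fuzzified with triangular membership functions, a small rule base is evaluated with min-inference, and a crisp rank is recovered by a weighted average. The membership functions, rules and ratings here are illustrative assumptions, not the authors' design.

    def tri(x, a, b, c):
        """Triangular membership with feet a, c and peak b."""
        if x <= a or x >= c:
            return 0.0
        return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

    def fuzzify(rating):
        """Membership degrees of a 1-10 rating in 'low', 'medium' and 'high'."""
        return {"low": tri(rating, 0, 1, 5),
                "medium": tri(rating, 3, 5.5, 8),
                "high": tri(rating, 6, 10, 11)}

    # Illustrative rule base: (severity term, occurrence term) -> criticality rank (1-10).
    RULES = {("low", "low"): 2, ("low", "medium"): 3, ("low", "high"): 5,
             ("medium", "low"): 4, ("medium", "medium"): 6, ("medium", "high"): 8,
             ("high", "low"): 6, ("high", "medium"): 8, ("high", "high"): 10}

    def criticality_rank(severity, occurrence):
        s, o = fuzzify(severity), fuzzify(occurrence)
        num = den = 0.0
        for (st, ot), rank in RULES.items():
            strength = min(s[st], o[ot])        # min-inference for rule firing strength
            num += strength * rank
            den += strength
        return num / den if den else 0.0        # weighted-average defuzzification

    print(round(criticality_rank(severity=8, occurrence=4), 2))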

  7. MCNPX CALCULATIONS OF SPECIFIC ABSORBED FRACTIONS IN SOME ORGANS OF THE HUMAN BODY DUE TO APPLICATION OF 133Xe, 99mTc and 81mKr RADIONUCLIDES.

    PubMed

    Jovanovic, Z; Krstic, D; Nikezic, D; Ros, J M Gomez; Ferrari, P

    2018-03-01

    Monte Carlo simulations were performed to evaluate treatment doses with the widely used radionuclides 133Xe, 99mTc and 81mKr. These different radionuclides are used in perfusion or ventilation examinations in nuclear medicine and as indicators for cardiovascular and pulmonary diseases. The objective of this work was to estimate the specific absorbed fractions in surrounding organs and tissues when these radionuclides are incorporated in the lungs. For this purpose, a voxel thorax model was developed and compared with the ORNL phantom. All calculations and simulations were performed by means of the MCNP5/X code.
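
    A hedged sketch of the specific-absorbed-fraction bookkeeping that such Monte Carlo tallies feed: the fraction of energy emitted in the source organ that is deposited in the target, divided by the target mass. The numbers and names are illustrative assumptions, not results of the MCNP5/X runs.

    def specific_absorbed_fraction(energy_deposited, energy_emitted, target_mass_kg):
        """SAF (kg^-1): absorbed fraction per unit target mass."""
        return (energy_deposited / energy_emitted) / target_mass_kg

    # Example: 2% of the energy emitted in the lungs is deposited in a 0.3 kg organ.
    print(specific_absorbed_fraction(2.0e4, 1.0e6, 0.3))  # ~0.067 kg^-1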

  8. Redefining relative biological effectiveness in the context of the EQDX formalism: implications for alpha-particle emitter therapy.

    PubMed

    Hobbs, Robert F; Howell, Roger W; Song, Hong; Baechler, Sébastien; Sgouros, George

    2014-01-01

    Alpha-particle radiopharmaceutical therapy (αRPT) is currently enjoying increasing attention as a viable alternative to chemotherapy for targeting of disseminated micrometastatic disease. In theory, αRPT can be personalized through pre-therapeutic imaging and dosimetry. However, in practice, given the particularities of α-particle emissions, a dosimetric methodology that accurately predicts the thresholds for organ toxicity has not been reported. This is in part due to the fact that the biological effects caused by α-particle radiation differ markedly from the effects caused by traditional external beam (photon or electron) radiation or β-particle emitting radiopharmaceuticals. The concept of relative biological effectiveness (RBE) is used to quantify the ratio of absorbed doses required to achieve a given biological response with alpha particles versus a reference radiation (typically a beta emitter or external beam radiation). However, as conventionally defined, the RBE varies as a function of absorbed dose and therefore a single RBE value is limited in its utility because it cannot be used to predict response over a wide range of absorbed doses. Therefore, efforts are underway to standardize bioeffect modeling for different fractionation schemes and dose rates for both nuclear medicine and external beam radiotherapy. Given the preponderant use of external beams of radiation compared to nuclear medicine in cancer therapy, the more clinically relevant quantity, the 2 Gy equieffective dose, EQD2(α/β), has recently been proposed by the ICRU. In concert with EQD2(α/β), we introduce a new, redefined RBE quantity, named RBE2(α/β), as the ratio of the two linear coefficients that characterize the α particle absorbed dose-response curve and the low-LET megavoltage photon 2 Gy fraction equieffective dose-response curve. The theoretical framework for the proposed new formalism is presented along with its application to experimental data obtained from irradiation of a breast cancer cell line. Radiobiological parameters are obtained using the linear quadratic model to fit cell survival data for MDA-MB-231 human breast cancer cells that were irradiated with either α particles or a single fraction of low-LET 137Cs γ rays. From these, the linear coefficients for both the biologically effective dose (BED) and the EQD2(α/β) response lines were derived for fractionated irradiation. The standard RBE calculation, using the traditional single fraction reference radiation, gave RBE values that ranged from 2.4 for a surviving fraction of 0.82 to 6.0 for a surviving fraction of 0.02, while the dose-independent RBE2(4.6) value was 4.5 for all surviving fraction values. Furthermore, bioeffect modeling with RBE2(α/β) and EQD2(α/β) demonstrated the capacity to predict the surviving fraction of cells irradiated with acute and fractionated low-LET radiation, α particles and chronic exponentially decreasing dose rates of low-LET radiation. RBE2(α/β) is independent of absorbed dose for α-particle emitters and it provides a more logical framework for data reporting and conversion to equieffective dose than the conventional dose-dependent definition of RBE. Moreover, it provides a much needed foundation for the ongoing development of an α-particle dosimetry paradigm and will facilitate the use of tolerance dose data available from external beam radiation therapy, thereby helping to develop αRPT as a single modality as well as for combination therapies.
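
    A hedged sketch contrasting the two RBE definitions discussed above within the linear-quadratic (LQ) model: a conventional RBE that depends on the surviving-fraction level, and an RBE2 formed from the ratio of linear coefficients (the 2 Gy/fraction equieffective response slope being α + 2β). The parameter values are illustrative assumptions chosen only to show the qualitative dose-dependent versus dose-independent behaviour, not the MDA-MB-231 fit results.

    import math

    alpha_ref, beta_ref = 0.2, 0.0435   # assumed low-LET LQ parameters (alpha/beta = 4.6 Gy)
    alpha_alpha = 1.0                   # assumed alpha-particle response, taken as purely linear

    def conventional_rbe(surviving_fraction):
        """Ratio of reference dose to alpha-particle dose giving the same surviving fraction."""
        effect = -math.log(surviving_fraction)
        d_alpha = effect / alpha_alpha
        d_ref = (-alpha_ref + math.sqrt(alpha_ref**2 + 4.0 * beta_ref * effect)) / (2.0 * beta_ref)
        return d_ref / d_alpha

    # RBE2: ratio of the alpha-response linear coefficient to the EQD2 response slope.
    rbe2 = alpha_alpha / (alpha_ref + 2.0 * beta_ref)

    for sf in (0.8, 0.1, 0.02):
        print(f"SF={sf}: conventional RBE={conventional_rbe(sf):.2f}, RBE2={rbe2:.2f}")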

  9. Redefining Relative Biological Effectiveness in the Context of the EQDX Formalism: Implications for Alpha-Particle Emitter Therapy.

    PubMed

    Hobbs, Robert F; Howell, Roger W; Song, Hong; Baechler, Sébastien; Sgouros, George

    2013-12-30

    Alpha-particle radiopharmaceutical therapy (αRPT) is currently enjoying increasing attention as a viable alternative to chemotherapy for targeting of disseminated micrometastatic disease. In theory, αRPT can be personalized through pre-therapeutic imaging and dosimetry. However, in practice, given the particularities of α-particle emissions, a dosimetric methodology that accurately predicts the thresholds for organ toxicity has not been reported. This is in part due to the fact that the biological effects caused by α-particle radiation differ markedly from the effects caused by traditional external beam (photon or electron) radiation or β-particle emitting radiopharmaceuticals. The concept of relative biological effectiveness (RBE) is used to quantify the ratio of absorbed doses required to achieve a given biological response with alpha particles versus a reference radiation (typically a beta emitter or external beam radiation). However, as conventionally defined, the RBE varies as a function of absorbed dose and therefore a single RBE value is limited in its utility because it cannot be used to predict response over a wide range of absorbed doses. Therefore, efforts are underway to standardize bioeffect modeling for different fractionation schemes and dose rates for both nuclear medicine and external beam radiotherapy. Given the preponderant use of external beams of radiation compared to nuclear medicine in cancer therapy, the more clinically relevant quantity, the 2 Gy equieffective dose, EQD2(α/β), has recently been proposed by the ICRU. In concert with EQD2(α/β), we introduce a new, redefined RBE quantity, named RBE2(α/β), as the ratio of the two linear coefficients that characterize the α particle absorbed dose-response curve and the low-LET megavoltage photon 2 Gy fraction equieffective dose-response curve. The theoretical framework for the proposed new formalism is presented along with its application to experimental data obtained from irradiation of a breast cancer cell line. Radiobiological parameters are obtained using the linear quadratic model to fit cell survival data for MDA-MB-231 human breast cancer cells that were irradiated with either α particles or a single fraction of low-LET 137Cs γ rays. From these, the linear coefficients for both the biologically effective dose (BED) and the EQD2(α/β) response lines were derived for fractionated irradiation. The standard RBE calculation, using the traditional single fraction reference radiation, gave RBE values that ranged from 2.4 for a surviving fraction of 0.82 to 6.0 for a surviving fraction of 0.02, while the dose-independent RBE2(4.6) value was 4.5 for all surviving fraction values. Furthermore, bioeffect modeling with RBE2(α/β) and EQD2(α/β) demonstrated the capacity to predict the surviving fraction of cells irradiated with acute and fractionated low-LET radiation, α particles and chronic exponentially decreasing dose rates of low-LET radiation. RBE2(α/β) is independent of absorbed dose for α-particle emitters and it provides a more logical framework for data reporting and conversion to equieffective dose than the conventional dose-dependent definition of RBE. Moreover, it provides a much needed foundation for the ongoing development of an α-particle dosimetry paradigm and will facilitate the use of tolerance dose data available from external beam radiation therapy, thereby helping to develop αRPT as a single modality as well as for combination therapies.

  10. Risk-based high-throughput chemical screening and prioritization using exposure models and in vitro bioactivity assays

    DOE PAGES

    Shin, Hyeong -Moo; Ernstoff, Alexi; Arnot, Jon A.; ...

    2015-05-01

    We present a risk-based high-throughput screening (HTS) method to identify chemicals for potential health concerns or for which additional information is needed. The method is applied to 180 organic chemicals as a case study. We first obtain information on how the chemical is used and identify relevant use scenarios (e.g., dermal application, indoor emissions). For each chemical and use scenario, exposure models are then used to calculate a chemical intake fraction, or a product intake fraction, accounting for chemical properties and the exposed population. We then combine these intake fractions with use scenario-specific estimates of chemical quantity to calculate daily intake rates (iR; mg/kg/day). These intake rates are compared to oral equivalent doses (OED; mg/kg/day), calculated from a suite of ToxCast in vitro bioactivity assays using in vitro-to-in vivo extrapolation and reverse dosimetry. Bioactivity quotients (BQs) are calculated as iR/OED to obtain estimates of potential impact associated with each relevant use scenario. Of the 180 chemicals considered, 38 had maximum iRs exceeding minimum OEDs (i.e., BQs > 1). For most of these compounds, exposures are associated with direct intake, food/oral contact, or dermal exposure. The method provides high-throughput estimates of exposure and important input for decision makers to identify chemicals of concern for further evaluation with additional information or more refined models.
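
    A hedged sketch of the screening arithmetic described above: a daily intake rate is formed from a use-scenario quantity and an intake fraction, and the bioactivity quotient BQ = iR/OED flags chemicals with BQ > 1. All values and names are illustrative assumptions, not the case-study data.

    def intake_rate(product_intake_fraction, quantity_used_mg_day, body_weight_kg=70.0):
        """Daily intake rate (mg/kg/day) for one use scenario."""
        return product_intake_fraction * quantity_used_mg_day / body_weight_kg

    def bioactivity_quotient(intake_rate_mg_kg_day, oed_mg_kg_day):
        return intake_rate_mg_kg_day / oed_mg_kg_day

    iR = intake_rate(product_intake_fraction=0.05, quantity_used_mg_day=200.0)
    print(iR, bioactivity_quotient(iR, oed_mg_kg_day=0.1))   # BQ > 1 -> flag for further evaluation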

  11. 42 CFR 416.171 - Determination of payment rates for ASC services.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... 42 Public Health 3 2010-10-01 2010-10-01 false Determination of payment rates for ASC services... Determination of payment rates for ASC services. (a) Standard methodology. The standard methodology for determining the national unadjusted payment rate for ASC services is to calculate the product of the...

  12. A TRAJECTORY-CLUSTERING CORRELATION METHODOLOGY FOR EXAMINING THE LONG-RANGE TRANSPORT OF AIR POLLUTANTS. (R825260)

    EPA Science Inventory

    We present a robust methodology for examining the relationship between synoptic-scale atmospheric transport patterns and pollutant concentration levels observed at a site. Our approach entails calculating a large number of back-trajectories from the observational site over a long...

  13. 75 FR 81533 - Antidumping Proceedings: Calculation of the Weighted Average Dumping Margin and Assessment Rate...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-12-28

    ... non-dumped comparisons. Several World Trade Organization (``WTO'') dispute settlement reports have... methodologies have been challenged as being inconsistent with the World Trade Organization (``WTO'') General... comparisons in reviews in a manner that parallels the WTO-consistent methodology the Department currently...

  14. Methodologies for extracting kinetic constants for multiphase reacting flow simulation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chang, S.L.; Lottes, S.A.; Golchert, B.

    1997-03-01

    Flows in industrial reactors often involve complex reactions of many species. A computational fluid dynamics (CFD) computer code, ICRKFLO, was developed to simulate multiphase, multi-species reacting flows. ICRKFLO uses a hybrid technique to calculate species concentration and reaction for a large number of species in a reacting flow. This technique includes a hydrodynamic and reacting flow simulation with a small but sufficient number of lumped reactions to compute flow field properties, followed by a calculation of local reaction kinetics and transport of many subspecies (order of 10 to 100). Kinetic rate constants of the numerous subspecies chemical reactions are difficult to determine. A methodology has been developed to extract kinetic constants from experimental data efficiently. A flow simulation of a fluid catalytic cracking (FCC) riser was successfully used to demonstrate this methodology.
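
    As a hedged illustration of the general idea of extracting a kinetic constant from experimental data by least squares, the sketch below fits a first-order rate constant to synthetic conversion measurements; it is a generic example, not the ICRKFLO/FCC riser procedure, and all values are assumptions.

    import numpy as np

    residence_time = np.array([0.5, 1.0, 2.0, 4.0])     # s
    conversion = np.array([0.21, 0.37, 0.60, 0.84])     # measured fractional conversion

    # For first-order kinetics, 1 - X = exp(-k t), so ln(1 - X) is linear in t.
    k_fit = -np.polyfit(residence_time, np.log(1.0 - conversion), 1)[0]
    print(f"fitted rate constant k ~ {k_fit:.3f} 1/s")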

  15. 77 FR 53059 - Risk-Based Capital Guidelines: Market Risk

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-08-30

    ...The Office of the Comptroller of the Currency (OCC), Board of Governors of the Federal Reserve System (Board), and Federal Deposit Insurance Corporation (FDIC) are revising their market risk capital rules to better capture positions for which the market risk capital rules are appropriate; reduce procyclicality; enhance the rules' sensitivity to risks that are not adequately captured under current methodologies; and increase transparency through enhanced disclosures. The final rule does not include all of the methodologies adopted by the Basel Committee on Banking Supervision for calculating the standardized specific risk capital requirements for debt and securitization positions due to their reliance on credit ratings, which is impermissible under the Dodd-Frank Wall Street Reform and Consumer Protection Act of 2010. Instead, the final rule includes alternative methodologies for calculating standardized specific risk capital requirements for debt and securitization positions.

  16. Parametric Criticality Safety Calculations for Arrays of TRU Waste Containers

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gough, Sean T.

    The Nuclear Criticality Safety Division (NCSD) has performed criticality safety calculations for finite and infinite arrays of transuranic (TRU) waste containers. The results of these analyses may be applied in any technical area onsite (e.g., TA-54, TA-55, etc.), as long as the assumptions herein are met. These calculations are designed to update the existing reference calculations for waste arrays documented in Reference 1, in order to meet current guidance on calculational methodology.

  17. Some physical aspects of fluid-fluxed melting

    NASA Astrophysics Data System (ADS)

    Patiño Douce, A.

    2012-04-01

    Fluid-fluxed melting is thought to play a crucial role in the origin of many terrestrial magmas. We can visualize the fundamental physics of the process as follows. An infinitesimal amount of fluid infiltrates dry rock at the temperature of its dry solidus. In order to restore equilibrium, the temperature must drop, so that enthalpy is released and immediately reabsorbed as enthalpy of melting. The amount of melt produced must be such that the energy balance and thermodynamic equilibrium conditions are simultaneously satisfied. We wish to understand how an initially dry rock melts in response to progressive fluid infiltration, under both batch and fractional melting constraints. The simplest physical model for this process is a binary system in which one of the components makes up a pure solid phase and the other component a pure fluid phase, and in which a binary melt phase exists over a certain temperature range. Melting point depression is calculated under the assumption of ideal mixing. The equations of energy balance and thermodynamic equilibrium are solved simultaneously for temperature and melt fraction, using an iterative procedure that allows addition of fluid in infinitesimal increments. Batch melting and fractional melting are simulated by allowing successive melt increments to remain in the system (batch) or not (fractional). Despite their simplified nature, these calculations reveal some important aspects of fluid-fluxed melting. The model confirms that, if the solubility of the fluid in the melt is sufficiently high, fluid-fluxed melting is an efficient mechanism of magma generation. One might expect that the temperature of the infiltrating fluid would have a significant effect on melt productivity, but the results of the calculations show this not to be the case, because a relatively small mass of low molecular weight fluid has a strong effect on the melting point of minerals with much higher molecular weights. The calculations reveal the somewhat surprising result that fluid infiltration produces more melt during fractional melting than during batch melting. This behavior, which is opposite to that of decompression melting of a dry solid, arises because the melting point depression effect of the added fluid is greater during fractional melting than during batch melting, which results in a greater release of enthalpy and, therefore, greater melt production for fractional melting than for batch melting, for the same total amount of fluid added. The difference may be considerable. As an example, suppose that 0.1 mol of H2O infiltrate 1 mol of silicate rock. Depending on the rock composition, this may correspond to ~1 wt% H2O. For a given choice of model parameters (initial temperature, heat capacity and entropy of fusion), about 28% of the rock melts during fractional melting, versus some 23% during batch melting. Fluid fluxing is a robust process of melt generation, without which magmatism at Earth's convergent plate margins would be impossible.
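
    A hedged sketch of the batch-style calculation described above: fluid dissolved in an ideal binary melt depresses the melting point (cryoscopic relation), and the sensible heat released by cooling the rock is consumed as enthalpy of melting, so temperature and melt amount are found together. The parameter values, and the neglect of the fluid's own sensible heat and of heat of mixing, are simplifying assumptions for illustration; this is not the paper's full model.

    import math

    R = 8.314          # J/mol/K
    T_M = 1500.0       # dry melting point of the solid component, K (assumed)
    DH_FUS = 45_000.0  # enthalpy of fusion, J/mol (assumed)
    CP = 80.0          # heat capacity per mol of rock, J/mol/K (assumed)

    def equilibrium_state(n_fluid, n_rock=1.0):
        """Solve energy balance + ideal freezing-point depression for T and melt amount."""
        def residual(T):
            x_sil = math.exp(-DH_FUS / R * (1.0 / T - 1.0 / T_M))  # silicate fraction in melt
            n_melted = n_fluid * x_sil / (1.0 - x_sil)             # mols of solid in the melt
            return n_rock * CP * (T_M - T) - DH_FUS * n_melted     # heat released - heat consumed
        lo, hi = 1000.0, T_M - 1e-6
        for _ in range(80):                                        # bisection on temperature
            mid = 0.5 * (lo + hi)
            if residual(mid) > 0.0:
                lo = mid
            else:
                hi = mid
        T = 0.5 * (lo + hi)
        x_sil = math.exp(-DH_FUS / R * (1.0 / T - 1.0 / T_M))
        return T, n_fluid * x_sil / (1.0 - x_sil)

    # 0.1 mol fluid per mol rock; with these assumed parameters roughly a quarter of the rock melts.
    T, melted = equilibrium_state(n_fluid=0.1)
    print(f"equilibrium T ~ {T:.0f} K, melted fraction of rock ~ {melted:.2f}")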

  18. Composition of the black crusts from the Saint Denis Basilica, France, as revealed by gas chromatography-mass spectrometry.

    PubMed

    Gaviño, Maria; Hermosin, Bernardo; Vergès-Belmin, Véronique; Nowik, Witold; Saiz-Jimenez, Cesareo

    2004-05-01

    The organic fraction of black crusts from Saint Denis Basilica, France, is composed of a complex mixture of aliphatic and aromatic compounds. These compounds were studied by two different analytical approaches: tetramethyl ammonium hydroxide (TMAH) thermochemolysis in combination with gas chromatography-mass spectrometry (GC-MS), and solvent extraction, fractionation by silica column, and identification of the fraction components by GC-MS. The first approach, feasible at the microscale level, is able to supply fairly general information on a wide range of compounds. Using the second approach, we were able to separate the complex mixture of compounds into four fractions, enabling a better identification of the extractable compounds. These compounds belong to different classes: aliphatic hydrocarbons (n-alkanes, n-alkenes), aliphatic and aromatic carboxylic acids (n-fatty acids, alpha,omega-dicarboxylic acids, and benzenecarboxylic acids), polycyclic aromatic hydrocarbons (PAH), and molecular biomarkers (isoprenoid hydrocarbons, diterpenoids, and triterpenoids). With each approach, similar classes of compounds were identified, although TMAH thermochemolysis failed to identify compounds present at low concentrations in black crusts. The two proposed methodological approaches are complementary, particularly in the study of polar fractions.

  19. Inter-fraction variations in respiratory motion models

    NASA Astrophysics Data System (ADS)

    McClelland, J. R.; Hughes, S.; Modat, M.; Qureshi, A.; Ahmad, S.; Landau, D. B.; Ourselin, S.; Hawkes, D. J.

    2011-01-01

    Respiratory motion can vary dramatically between the planning stage and the different fractions of radiotherapy treatment. Motion predictions used when constructing the radiotherapy plan may be unsuitable for later fractions of treatment. This paper presents a methodology for constructing patient-specific respiratory motion models and uses these models to evaluate and analyse the inter-fraction variations in the respiratory motion. The internal respiratory motion is determined from the deformable registration of Cine CT data and related to a respiratory surrogate signal derived from 3D skin surface data. Three different models for relating the internal motion to the surrogate signal have been investigated in this work. Data were acquired from six lung cancer patients. Two full datasets were acquired for each patient, one before the course of radiotherapy treatment and one at the end (approximately 6 weeks later). Separate models were built for each dataset. All models could accurately predict the respiratory motion in the same dataset, but had large errors when predicting the motion in the other dataset. Analysis of the inter-fraction variations revealed that most variations were spatially varying base-line shifts, but changes to the anatomy and the motion trajectories were also observed.
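
    A hedged sketch of a minimal surrogate-driven motion model of the general kind compared in such work: an internal motion trace is related to a respiratory surrogate signal and its temporal gradient by linear least squares, the gradient term allowing inhalation and exhalation to differ. The synthetic signals and the specific model form are illustrative assumptions, not the authors' registration-based models.

    import numpy as np

    rng = np.random.default_rng(1)
    t = np.linspace(0.0, 20.0, 400)
    surrogate = np.sin(2 * np.pi * t / 4.0)                  # skin-surface surrogate signal
    motion = (8.0 * surrogate + 2.0 * np.gradient(surrogate, t)
              + 1.5 + 0.2 * rng.normal(size=t.size))         # synthetic internal motion (mm)

    # Design matrix [s, ds/dt, 1]; least-squares fit of the surrogate model.
    A = np.column_stack([surrogate, np.gradient(surrogate, t), np.ones_like(t)])
    coeffs, *_ = np.linalg.lstsq(A, motion, rcond=None)
    residual_rms = float(np.sqrt(np.mean((motion - A @ coeffs) ** 2)))
    print("fitted coefficients:", np.round(coeffs, 2))
    print("RMS residual (mm):", round(residual_rms, 2))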

  20. Methodology for the optimal design of an integrated first and second generation ethanol production plant combined with power cogeneration.

    PubMed

    Bechara, Rami; Gomez, Adrien; Saint-Antonin, Valérie; Schweitzer, Jean-Marc; Maréchal, François

    2016-08-01

    The application of methodologies for the optimal design of integrated processes has seen increased interest in the literature. This article builds on previous works and applies a systematic methodology to an integrated first and second generation ethanol production plant with power cogeneration. The methodology comprises process simulation, heat integration, thermo-economic evaluation, multi-variable evolutionary optimization trading off exergy efficiency against capital costs, and process selection via profitability maximization. Optimization generated Pareto solutions with exergy efficiency ranging between 39.2% and 44.4% and capital costs from 210 M$ to 390 M$. The Net Present Value was positive for only two scenarios, both low-efficiency, low-hydrolysis points. The minimum cellulosic ethanol selling price that brings the NPV to zero was sought for the high-efficiency, high-hydrolysis alternatives. The obtained optimal configuration presented maximum exergy efficiency, hydrolyzed bagasse fraction, capital costs and ethanol production rate, and minimum cooling water consumption and power production rate. Copyright © 2016 Elsevier Ltd. All rights reserved.
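
    A hedged sketch of the profitability screening used for process selection: a net present value (NPV) that discounts yearly cash flows against capital cost, and a bisection scan for the selling price at which NPV reaches zero. All numbers and names are illustrative assumptions, not the paper's results.

    def npv(capital_cost, yearly_cash_flow, years=20, discount_rate=0.10):
        return -capital_cost + sum(yearly_cash_flow / (1 + discount_rate) ** y
                                   for y in range(1, years + 1))

    def minimum_selling_price(capital_cost, yearly_output_units, yearly_opex):
        """Price per unit at which NPV is zero (bisection over price)."""
        lo, hi = 0.0, 10.0
        for _ in range(60):
            price = 0.5 * (lo + hi)
            if npv(capital_cost, price * yearly_output_units - yearly_opex) < 0.0:
                lo = price
            else:
                hi = price
        return 0.5 * (lo + hi)

    # Example: 300 M$ capital, 400 million litres/year, 150 M$/year operating cost.
    print(round(minimum_selling_price(300e6, 400e6, 150e6), 3), "$ per litre")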
