Determination of the spatial resolution required for the HEDR dose code
DOE Office of Scientific and Technical Information (OSTI.GOV)
Napier, B.A.; Simpson, J.C.
1992-12-01
A series of scoping calculations has been undertaken to evaluate the doses that may have been received by individuals living in the vicinity of the Hanford site. This scoping calculation (Calculation 007) examined the spatial distribution of potential doses resulting from releases in the year 1945. This study builds on the work initiated in the first scoping calculation, of iodine in cow's milk; the third scoping calculation, which added additional pathways; the fifth calculation, which addressed the uncertainty of the dose estimates at a point; and the sixth calculation, which extrapolated the doses throughout the atmospheric transport domain. A projection of dose to representative individuals throughout the proposed HEDR atmospheric transport domain was prepared on the basis of the HEDR source term. Addressed in this calculation were the contributions to iodine-131 thyroid dose of infants from (1) air submersion and groundshine external dose, (2) inhalation, (3) ingestion of soil by humans, (4) ingestion of leafy vegetables, (5) ingestion of other vegetables and fruits, (6) ingestion of meat, (7) ingestion of eggs, and (8) ingestion of cows' milk from Feeding Regime 1 as described in scoping calculation 001.
Determination of dose distributions and parameter sensitivity
DOE Office of Scientific and Technical Information (OSTI.GOV)
Napier, B.A.; Farris, W.T.; Simpson, J.C.
1992-12-01
A series of scoping calculations has been undertaken to evaluate the absolute and relative contribution of different radionuclides and exposure pathways to doses that may have been received by individuals living in the vicinity of the Hanford site. This scoping calculation (Calculation 005) examined the contributions of numerous parameters to the uncertainty distribution of doses calculated for environmental exposures and accumulation in foods. This study builds on the work initiated in the first scoping study of iodine in cow's milk and the third scoping study, which added additional pathways. Addressed in this calculation were the contributions to thyroid dose of infants from (1) air submersion and groundshine external dose, (2) inhalation, (3) ingestion of soil by humans, (4) ingestion of leafy vegetables, (5) ingestion of other vegetables and fruits, (6) ingestion of meat, (7) ingestion of eggs, and (8) ingestion of cows' milk from Feeding Regime 1 as described in Calculation 001.
Scoping Calculations of Power Sources for Nuclear Electric Propulsion
NASA Technical Reports Server (NTRS)
Difilippo, F. C.
1994-01-01
This technical memorandum describes models and calculational procedures to fully characterize the nuclear island of power sources for nuclear electric propulsion. Two computer codes were written: one for the gas-cooled NERVA derivative reactor and the other for liquid metal-cooled fuel pin reactors. These codes are going to be interfaced by NASA with the balance of plant in order to make scoping calculations for mission analysis.
Music, pandas, and muggers: on the affective psychology of value.
Hsee, Christopher K; Rottenstreich, Yuval
2004-03-01
This research investigated the relationship between the magnitude or scope of a stimulus and its subjective value by contrasting 2 psychological processes that may be used to construct preferences: valuation by feeling and valuation by calculation. The results show that when people rely on feeling, they are sensitive to the presence or absence of a stimulus (i.e., the difference between 0 and some scope) but are largely insensitive to further variations of scope. In contrast, when people rely on calculation, they reveal relatively more constant sensitivity to scope. Thus, value is nearly a step function of scope when feeling predominates and is closer to a linear function when calculation predominates. These findings may allow for a novel interpretation of why most real-world value functions are concave and how the processes responsible for nonlinearity of value may also contribute to nonlinear probability weighting.
The Case for Programmable Calculators in Schools.
ERIC Educational Resources Information Center
Inglis, Norman J.
1981-01-01
Programmable calculators are useful tools in the classroom that are often overlooked. This report gives examples of problems and activities that can be brought within the scope of such calculators. (MP)
CELSS scenario analysis: Breakeven calculations
NASA Technical Reports Server (NTRS)
Mason, R. M.
1980-01-01
A model of the relative mass requirements of food production components in a controlled ecological life support system (CELSS) based on regenerative concepts is described. Included are a discussion of model scope, structure, and example calculations. Computer programs for cultivar and breakeven calculations are also included.
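The breakeven logic behind such calculations is simple arithmetic: a regenerative component pays off once its fixed launch mass is offset by the resupply mass it avoids. A minimal sketch in Python, with every mass and rate invented purely for illustration (the actual model uses NASA-derived requirements):

    # Breakeven sketch for one regenerative food-production component.
    # All numbers are hypothetical, purely for illustration.
    fixed_mass_regen = 5000.0    # kg: one-time mass of regenerative hardware
    consumables_regen = 1.0      # kg/day: make-up mass the system still consumes
    resupply_rate = 12.0         # kg/day: stored-food mass required without regeneration

    # fixed + consumables*t = resupply*t  =>  t* = fixed / (resupply - consumables)
    t_star = fixed_mass_regen / (resupply_rate - consumables_regen)
    print(f"breakeven after {t_star:.0f} days ({t_star / 365:.1f} years)")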
Investment Return Calculations and Senior School Mathematics
ERIC Educational Resources Information Center
Fitzherbert, Richard M.; Pitt, David G. W.
2010-01-01
The methods for calculating returns on investments are taught to undergraduate-level business students. In this paper, the authors demonstrate how such calculations are within the scope of senior school students of mathematics. In providing this demonstration the authors hope to give teachers and students alike an illustration of the power and the…
Gupta, Deepak; Wang, Hong
2011-12-01
Objective: to calculate the costs per intubation of reusable fiberoptic scopes versus single-use intubation scopes. Design: open-label retrospective study. Setting: university-affiliated hospital. The one-year intubation records of intubations performed with reusable intubation scopes, the one-year maintenance costs of these scopes, and their three-year repair cost records were analyzed. A total of 166 intubations were performed with reusable fiberoptic scopes in 2009. Costs per intubation were calculated from the documented records at our institution. The total cost of an intubation, the repair-to-intubation ratio, and the repair cost per intubation were determined. The total cost of an intubation at our institution in 2009, using reusable scopes, was $119.75 [US dollars (USD)], which included $20.15 (purchasing), $53.48 (repair), $33.16 (maintenance), and $12.96 (labor). The repair-to-intubation ratio was 1:55. Repair costs were $53.48 per intubation and $2,959.44 per instance of repair. The Ambu aScope, a single-use intubation scope, is a new addition to video laryngoscopy. Its price should fall within 10% of our per-intubation cost ($120.00 to $132.00 per single-use intubation scope).
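The per-intubation figures in this abstract follow from simple division over the 166 cases; a sketch reproducing the arithmetic (all inputs taken from the abstract, with agreement up to input rounding):

    # Reproduce the cost arithmetic reported in the abstract.
    n_intubations = 166
    purchasing, repair, maintenance, labor = 20.15, 53.48, 33.16, 12.96  # USD/intubation
    total_per_intubation = purchasing + repair + maintenance + labor      # -> 119.75 USD
    n_repairs = round(n_intubations / 55)                  # 1:55 ratio implies 3 repairs
    cost_per_repair = repair * n_intubations / n_repairs   # -> ~2959 USD per repair
    print(f"${total_per_intubation:.2f} per intubation, ${cost_per_repair:.2f} per repair")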
The V-Scope: An "Oscilloscope" for Motion.
ERIC Educational Resources Information Center
Ronen, Miky; Lipman, Aharon
1991-01-01
Proposes the V-Scope as a teaching aid to measure, analyze, and display three-dimensional multibody motion. Describes experiment setup considerations, how measurements are calculated, graphic representation capabilities, and modes of operation of this microcomputer-based system. (MDH)
An emulator for minimizing finite element analysis implementation resources
NASA Technical Reports Server (NTRS)
Melosh, R. J.; Utku, S.; Salama, M.; Islam, M.
1982-01-01
A finite element analysis emulator providing a basis for efficiently establishing an optimum computer implementation strategy when many calculations are involved is described. The SCOPE emulator determines the computer resources required as a function of the structural model, the structural load-deflection equation characteristics, the storage allocation plan, and the computer hardware capabilities. It thereby provides data for trading off analysis implementation options to arrive at a best strategy. The models contained in SCOPE lead to micro-operation computer counts of each finite element operation as well as overall computer resource cost estimates. Application of SCOPE to the Memphis-Arkansas bridge analysis provides measures of the accuracy of resource assessments. Data indicate that predictions are within 17.3 percent for calculation times and within 3.2 percent for peripheral storage resources for the ELAS code.
Sensitivity analysis for large-scale problems
NASA Technical Reports Server (NTRS)
Noor, Ahmed K.; Whitworth, Sandra L.
1987-01-01
The development of efficient techniques for calculating sensitivity derivatives is studied. The objective is to present a computational procedure for calculating sensitivity derivatives as part of performing structural reanalysis for large-scale problems. The scope is limited to framed type structures. Both linear static analysis and free-vibration eigenvalue problems are considered.
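The sensitivity derivatives in question follow from differentiating the governing equations; for linear statics Ku = f and the free-vibration problem (K − λM)φ = 0, the standard textbook results (generic notation, not this paper's) are

    K\,\frac{\partial u}{\partial x} = \frac{\partial f}{\partial x} - \frac{\partial K}{\partial x}\,u,
    \qquad
    \frac{\partial \lambda}{\partial x} = \frac{\phi^{T}\left(\frac{\partial K}{\partial x} - \lambda\,\frac{\partial M}{\partial x}\right)\phi}{\phi^{T} M \phi},

so each derivative reuses the already-factored stiffness matrix K, which is what makes reanalysis of large framed structures affordable.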
Using MCBEND for neutron or gamma-ray deterministic calculations
NASA Astrophysics Data System (ADS)
Dobson, Geoff; Bird, Adam; Tollit, Brendan; Smith, Paul
2017-09-01
MCBEND 11 is the latest version of the general radiation transport Monte Carlo code from AMEC Foster Wheeler's ANSWERS® Software Service. MCBEND is well established in the UK shielding community for radiation shielding and dosimetry assessments. MCBEND supports a number of acceleration techniques, for example the use of an importance map in conjunction with splitting/Russian roulette. MCBEND has a well-established automated tool to generate this importance map, commonly referred to as the MAGIC module, which uses a diffusion adjoint solution. This method is fully integrated with the MCBEND geometry and material specification, and can easily be run as part of a normal MCBEND calculation. An often overlooked feature of MCBEND is the ability to use this method for forward scoping calculations, which can be run as a very quick deterministic method. Additionally, the development of the Visual Workshop environment for results display provides new capabilities for the use of the forward calculation as a productivity tool. In this paper, we illustrate the use of the combination of the old and the new in order to provide an enhanced analysis capability. We also explore the use of more advanced deterministic methods for scoping calculations used in conjunction with MCBEND, with a view to providing a suite of methods to accompany the main Monte Carlo solver.
DOE Office of Scientific and Technical Information (OSTI.GOV)
H. Marr
2006-10-25
The purpose of this calculation is to evaluate the thermal performance of the Naval Long and Naval Short spent nuclear fuel (SNF) waste packages (WP) in the repository emplacement drift. The scope of this calculation is limited to the determination of the temperature profiles upon the surfaces of the Naval Long and Short SNF waste package for up to 10,000 years of emplacement. The temperatures on the top of the outside surface of the naval canister are the thermal interfaces for the Naval Nuclear Propulsion Program (NNPP). The results of this calculation are intended to support Licensing Application design activities.
Maths anxiety and medication dosage calculation errors: A scoping review.
Williams, Brett; Davis, Samantha
2016-09-01
A student's accuracy on drug calculation tests may be influenced by maths anxiety, which can impede one's ability to understand and complete mathematical problems. It is important for healthcare students to overcome this barrier when calculating drug dosages in order to avoid administering the incorrect dose to a patient in the clinical setting. The aim of this study was to examine the effects of maths anxiety on healthcare students' ability to accurately calculate drug dosages by performing a scoping review of the existing literature. This review utilised a six-stage methodology using the following databases: CINAHL, Embase, Medline, Scopus, PsycINFO, Google Scholar, Trip database (http://www.tripdatabase.com/) and Grey Literature report (http://www.greylit.org/). After an initial title/abstract review of relevant papers, and then a full-text review of the remaining papers, six articles were selected for inclusion in this study. Of the six articles included, there were three experimental studies, two quantitative studies and one mixed-methods study. All studies addressed nursing students and the presence of maths anxiety. No relevant studies from other disciplines were identified in the existing literature. Three studies took place in the U.S., the remainder in Canada, Australia and the United Kingdom. Upon analysis of these studies, four factors including maths anxiety were identified as having an influence on a student's drug dosage calculation abilities. Ultimately, the results from this review suggest more research is required in nursing and other relevant healthcare disciplines regarding the effects of maths anxiety on drug dosage calculations. This additional knowledge will be important to further inform the development of strategies to decrease the potentially serious effects of errors in drug dosage calculation on patient safety.
An Effective Algorithm Research of Scenario Voxelization Organization and Occlusion Culling
NASA Astrophysics Data System (ADS)
Lai, Guangling; Ding, Lu; Qin, Zhiyuan; Tong, Xiaochong
2016-11-01
Compared with traditional triangulation approaches, voxelized point cloud data reduce scenario sensitivity and computational complexity. Organizing a scenario directly on the point cloud with fine voxels is possible, but it increases memory consumption, so an efficient voxel representation method is necessary. At present, specific studies of voxel visualization algorithms are scarce. This paper improves the ray tracing algorithm by exploiting the characteristics of the voxel configuration. First, the extent of the point cloud data determines the range of pixels on the screen. Then, the ray vector emanating from each pixel is calculated. Finally, the voxel configuration rules are used to find all voxels penetrated by each ray. The voxels closest to the viewpoint are marked visible; all the others are occluded. Experiments show that the method efficiently realizes voxelization organization and voxel occlusion culling of the scenario and improves rendering efficiency.
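The traversal step described here (march each pixel's ray through the grid and keep the first occupied voxel) is conventionally implemented as a 3-D digital differential analyzer; a minimal Python sketch under the assumption of an axis-aligned cubic grid, with grid as a hypothetical n×n×n boolean occupancy array and a ray origin inside the grid:

    import math

    def first_hit_voxel(origin, direction, grid, voxel_size, n):
        # 3-D DDA (Amanatides & Woo): step along the ray voxel by voxel and
        # return the index of the first occupied voxel, or None if the ray
        # leaves the grid without a hit.
        idx = [int(origin[k] // voxel_size) for k in range(3)]
        step, t_max, t_delta = [0] * 3, [math.inf] * 3, [math.inf] * 3
        for k in range(3):
            if direction[k] > 0:
                step[k] = 1
                t_max[k] = ((idx[k] + 1) * voxel_size - origin[k]) / direction[k]
                t_delta[k] = voxel_size / direction[k]
            elif direction[k] < 0:
                step[k] = -1
                t_max[k] = (idx[k] * voxel_size - origin[k]) / direction[k]
                t_delta[k] = -voxel_size / direction[k]
        while all(0 <= idx[k] < n for k in range(3)):
            if grid[idx[0]][idx[1]][idx[2]]:
                return tuple(idx)                       # closest occupied voxel: visible
            k = min(range(3), key=lambda a: t_max[a])   # axis whose boundary is nearest
            idx[k] += step[k]
            t_max[k] += t_delta[k]
        return None                                     # no hit: background pixel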
NASA Astrophysics Data System (ADS)
Maris, E.; Froelich, D.
The designers of products subject to the European regulations on waste have an obligation to improve the recyclability of their products from the very first design stages. The statutory texts refer to ISO standard 22628, which proposes a method to calculate vehicle recyclability. Several scientific studies propose other calculation methods as well. Yet the feedback from the CREER club, a group of manufacturers and suppliers expert in ecodesign and recycling, is that the product recyclability calculation method proposed in this standard is not satisfactory, since only a mass indicator is used, the calculation scope is not clearly defined, and common data on the recycling industry do not exist to allow comparable calculations to be made for different products. For these reasons, it is difficult for manufacturers to have access to a method and common data for calculation purposes.
Code of Federal Regulations, 2010 CFR
2010-10-01
..., review, negotiation and approval of competitive bids for prescription drug plans and MA-PD plans; the calculation of the national average bid amount; and the determination of enrollee premiums. ...
Recent advances in QM/MM free energy calculations using reference potentials
Duarte, Fernanda; Amrein, Beat A.; Blaha-Nelson, David; Kamerlin, Shina C.L.
2015-01-01
Background: Recent years have seen enormous progress in the development of methods for modeling (bio)molecular systems. This has allowed for the simulation of ever larger and more complex systems. However, as such complexity increases, the requirements needed for these models to be accurate and physically meaningful become more and more difficult to fulfill. The use of simplified models to describe complex biological systems has long been shown to be an effective way to overcome some of the limitations associated with this computational cost in a rational way. Scope of review: Hybrid QM/MM approaches have rapidly become one of the most popular computational tools for studying chemical reactivity in biomolecular systems. However, the high cost involved in performing high-level QM calculations has limited the applicability of these approaches when calculating free energies of chemical processes. In this review, we present some of the advances in using reference potentials and mean field approximations to accelerate high-level QM/MM calculations. We present illustrative applications of these approaches and discuss challenges and future perspectives for the field. Major conclusions: The use of physically based simplifications has been shown to effectively reduce the cost of high-level QM/MM calculations. In particular, lower-level reference potentials enable one to reduce the cost of expensive free energy calculations, thus expanding the scope of problems that can be addressed. General significance: As was already demonstrated 40 years ago, the usage of simplified models still allows one to obtain cutting-edge results with substantially reduced computational cost. This article is part of a Special Issue entitled Recent developments of molecular dynamics. PMID:25038480
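The reference-potential trick the review surveys is, at its core, a free energy perturbation from the cheap to the expensive Hamiltonian, evaluated over configurations sampled only at the low level (Zwanzig's formula, in generic notation rather than the review's):

    \Delta A_{\mathrm{LL}\to\mathrm{HL}} = -\frac{1}{\beta}\,\ln\Big\langle e^{-\beta\,(E_{\mathrm{HL}} - E_{\mathrm{LL}})}\Big\rangle_{\mathrm{LL}},

so a full high-level free energy difference can be assembled as the low-level profile plus end-state corrections, with the expensive E_HL evaluated only on low-level snapshots.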
NASA Technical Reports Server (NTRS)
Maskew, B.
1976-01-01
A discrete singularity method has been developed for calculating the potential flow around two-dimensional airfoils. The objective was to calculate velocities at any arbitrary point in the flow field, including points that approach the airfoil surface. That objective was achieved and is demonstrated here on a Joukowski airfoil. The method used combined vortices and sources "submerged" a small distance below the airfoil surface and incorporated a near-field subvortex technique developed earlier. When a velocity calculation point approached the airfoil surface, the number of discrete singularities effectively increased (but only locally) to keep the point just outside the error region of the submerged singularity discretization. The method could be extended to three dimensions, and should improve nonlinear methods, which calculate interference effects between multiple wings, and which include the effects of force-free trailing vortex sheets. The capability demonstrated here would extend the scope of such calculations to allow the close approach of wings and vortex sheets (or vortices).
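The building blocks of such a method are elementary: in complex notation, a point vortex of circulation Γ and a point source of strength q at z₀ together induce the velocity (a standard potential-flow result, with the usual sign conventions)

    u - iv = \frac{q - i\,\Gamma}{2\pi\,(z - z_{0})},

and the "error region" the abstract refers to is the neighborhood of each discrete singularity, of radius comparable to the local spacing, inside which the discrete sum no longer approximates the continuous sheet.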
Main steam-line break core shroud loading calculations for BWRs
DOE Office of Scientific and Technical Information (OSTI.GOV)
Shoop, U.; Feltus, M.A.; Baratta, A.J.
1995-12-31
In July 1994, the U.S. Nuclear Regulatory Commission sent Generic Letter 94-03 to all boiling water reactor (BWR) licensees in the United States, informing them of intergranular stress corrosion cracking of core shrouds found in two reactors. The letter directed all licensees to perform safety analyses of their BWR units. Penn State performed scoping calculations to determine the forces experienced by the core shroud during a main steam-line break transient.
Ionizing radiation calculations and comparisons with LDEF data
NASA Technical Reports Server (NTRS)
Armstrong, T. W.; Colborn, B. L.; Watts, J. W., Jr.
1992-01-01
In conjunction with the analysis of LDEF ionizing radiation dosimetry data, a calculational program is in progress to aid in data interpretation and to assess the accuracy of current radiation models for future mission applications. To estimate the ionizing radiation environment at the LDEF dosimeter locations, scoping calculations for a simplified (one dimensional) LDEF mass model were made of the primary and secondary radiations produced as a function of shielding thickness due to trapped proton, galactic proton, and atmospheric (neutron and proton cosmic ray albedo) exposures. Preliminary comparisons of predictions with LDEF induced radioactivity and dose measurements were made to test a recently developed model of trapped proton anisotropy.
Introducing MCgrid 2.0: Projecting cross section calculations on grids
NASA Astrophysics Data System (ADS)
Bothmann, Enrico; Hartland, Nathan; Schumann, Steffen
2015-11-01
MCgrid is a software package that provides access to interpolation tools for Monte Carlo event generator codes, allowing for the fast and flexible variation of scales, coupling parameters and PDFs in cutting-edge leading- and next-to-leading-order QCD calculations. We present the upgrade to version 2.0, which has a broader scope of interfaced interpolation tools, now providing access to fastNLO, and features an approximated treatment for the projection of MC@NLO-type calculations onto interpolation grids. MCgrid 2.0 also now supports the extended information provided through the HepMC event record used in the recent SHERPA version 2.2.0. The additional information provided therein allows for the support of multi-jet merged QCD calculations in a future update of MCgrid.
RESRAD for Radiological Risk Assessment. Comparison with EPA CERCLA Tools - PRG and DCC Calculators
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yu, C.; Cheng, J. -J.; Kamboj, S.
The purpose of this report is two-fold. First, the risk assessment methodology for both RESRAD and the EPA's tools is reviewed. This includes a review of the EPA's justification for using a dose-to-risk conversion factor to reduce the dose-based protective ARAR from 15 to 12 mrem/yr. Second, the models and parameters used in RESRAD and the EPA PRG and DCC Calculators are compared in detail, and the results are summarized and discussed. Although there are suites of software tools in the RESRAD family of codes and the EPA Calculators, the scope of this report is limited to the RESRAD (onsite) code for soil contamination and the EPA's PRG and DCC Calculators, also for soil contamination.
37 CFR 102.21 - Purpose and scope.
Code of Federal Regulations, 2011 CFR
2011-07-01
... a citizen of the United States or an alien lawfully admitted for permanent residence into the United... not limited to, test calculations of retirement benefits, explanations of health and life insurance...
37 CFR 102.21 - Purpose and scope.
Code of Federal Regulations, 2012 CFR
2012-07-01
... a citizen of the United States or an alien lawfully admitted for permanent residence into the United... not limited to, test calculations of retirement benefits, explanations of health and life insurance...
37 CFR 102.21 - Purpose and scope.
Code of Federal Regulations, 2014 CFR
2014-07-01
... a citizen of the United States or an alien lawfully admitted for permanent residence into the United... not limited to, test calculations of retirement benefits, explanations of health and life insurance...
37 CFR 102.21 - Purpose and scope.
Code of Federal Regulations, 2010 CFR
2010-07-01
... a citizen of the United States or an alien lawfully admitted for permanent residence into the United... not limited to, test calculations of retirement benefits, explanations of health and life insurance...
37 CFR 102.21 - Purpose and scope.
Code of Federal Regulations, 2013 CFR
2013-07-01
... a citizen of the United States or an alien lawfully admitted for permanent residence into the United... not limited to, test calculations of retirement benefits, explanations of health and life insurance...
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bai, D.; Levine, S.L.; Luoma, J.
1992-01-01
The Three Mile Island unit 1 core reloads have been designed using fast but accurate scoping codes, PSUI-LEOPARD and ADMARC. PSUI-LEOPARD has been normalized to EPRI-CPM2 results and used to calculate the two-group constants, whereas ADMARC is a modern two-dimensional, two-group diffusion theory nodal code. Problems in accuracy were encountered for cycles 8 and higher as the core lifetime was increased beyond 500 effective full-power days. This is because the heavier loaded cores in both 235U and 10B have harder neutron spectra, which produces a change in the transport effect in the baffle reflector region, and the burnable poison (BP) simulations were not accurate enough for the cores containing the increased amount of 10B required in the BP rods. In the authors' study, a technique has been developed to take into account the change in the transport effect in the baffle region by modifying the fast neutron diffusion coefficient as a function of cycle length and core exposure or burnup. A more accurate BP simulation method is also developed, using integral transport theory and CPM2 data, to calculate the BP contribution to the equivalent fuel assembly (supercell) two-group constants. The net result is that the accuracy of the scoping codes is as good as that produced by CASMO/SIMULATE or CPM2/SIMULATE when comparing with measured data.
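For reference, the two-group diffusion eigenvalue problem that such nodal codes solve has the standard textbook form (generic notation, not the paper's), with the baffle correction described above acting on the fast-group coefficient D₁:

    -\nabla\cdot D_1\nabla\phi_1 + (\Sigma_{a1} + \Sigma_{1\to 2})\,\phi_1 = \frac{1}{k_{\mathrm{eff}}}\left(\nu\Sigma_{f1}\,\phi_1 + \nu\Sigma_{f2}\,\phi_2\right),
    \qquad
    -\nabla\cdot D_2\nabla\phi_2 + \Sigma_{a2}\,\phi_2 = \Sigma_{1\to 2}\,\phi_1.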
WebScope: A New Tool for Fusion Data Analysis and Visualization
NASA Astrophysics Data System (ADS)
Yang, Fei; Dang, Ningning; Xiao, Bingjia
2010-04-01
A visualization tool was developed through a web browser, based on Java applets embedded into HTML pages, in order to provide worldwide access to the EAST experimental data. It can display data from various trees in different servers in a single panel. With WebScope, it is easier to make comparisons between different data sources and perform simple calculations over them.
Pretest Predictions for Ventilation Tests
DOE Office of Scientific and Technical Information (OSTI.GOV)
Y. Sun; H. Yang; H.N. Kalia
The objective of this calculation is to predict the temperatures of the ventilating air, waste package surface, concrete pipe walls, and insulation that will be developed during the ventilation tests involving various test conditions. The results will be used as input to the following three areas: (1) decisions regarding testing set-up and performance; (2) assessing how best to scale the test phenomena measured; (3) validating the numerical approach for modeling continuous ventilation. The scope of the calculation is to identify the physical mechanisms and parameters related to thermal response in the ventilation tests, and to develop and describe numerical methods that can be used to calculate the effects of continuous ventilation. Sensitivity studies to assess the impact of variation of linear power densities (linear heat loads) and ventilation air flow rates are included. The calculation is limited to thermal effects only.
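In its simplest steady-state form, the continuous-ventilation effect such predictions capture is a duct heat balance along the drift axis (a schematic relation for orientation, not the calculation's actual numerical model):

    \dot m \, c_p \, \frac{dT_{\mathrm{air}}}{dx} = h\,P\,\big(T_{\mathrm{wall}}(x) - T_{\mathrm{air}}(x)\big),

where ṁ is the air mass flow rate, c_p its specific heat, h the convective heat transfer coefficient, and P the wetted perimeter; higher flow rates flatten the axial rise of T_air, which is why the air flow rate is a key sensitivity parameter.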
DOE Office of Scientific and Technical Information (OSTI.GOV)
L.M. Montierth
2000-09-15
The objective of this calculation is to characterize the nuclear criticality safety concerns associated with the codisposal of the U.S. Department of Energy's (DOE) Shippingport Light Water Breeder Reactor (SP LWBR) Spent Nuclear Fuel (SNF) in a 5-Defense High-Level Waste (5-DHLW) Waste Package (WP), which is to be placed in a Monitored Geologic Repository (MGR). The scope of this calculation is limited to the determination of the effective neutron multiplication factor (k_eff) for intact- and degraded-mode internal configurations of the codisposal WP containing Shippingport LWBR seed-type assemblies. The results of this calculation will be used to evaluate criticality issues and support the analysis planned to demonstrate the viability of the codisposal concept for the MGR. This calculation is associated with the waste package design and was performed in accordance with the DOE SNF Analysis Plan for FY 2000 (see Ref. 22). The document has been prepared in accordance with the Administrative Procedure AP-3.12Q, Calculations (Ref. 23).
A Simple Hückel Molecular Orbital Plotter
ERIC Educational Resources Information Center
Ramakrishnan, Raghunathan
2013-01-01
A program is described and presented to readily plot the molecular orbitals from a Hückel calculation. The main features of the program and the scope of its applicability are discussed through some example organic molecules. (Contains 2 figures.)
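A Hückel calculation of the kind such a program plots reduces to diagonalizing a connectivity matrix; a minimal Python sketch for 1,3-butadiene, with energies in the usual α, β reduced units (β < 0, so the most positive eigenvalues are the lowest-lying MOs):

    import numpy as np

    # Hueckel Hamiltonian for 1,3-butadiene in units of beta, taking alpha = 0:
    # H[i, j] = 1 when atoms i and j are pi-bonded neighbours, else 0.
    n = 4
    H = np.zeros((n, n))
    for i in range(n - 1):
        H[i, i + 1] = H[i + 1, i] = 1.0

    x, c = np.linalg.eigh(H)        # eigenvalue x gives orbital energy alpha + x*beta
    for xi, ci in zip(x, c.T):
        print(f"E = alpha + {xi:+.3f} beta,  coefficients {np.round(ci, 3)}")

The four eigenvalues come out as ±1.618 and ±0.618, the textbook values for butadiene's π system.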
Zuend, Stephan J; Jacobsen, Eric N
2007-12-26
The mechanism of the enantioselective cyanosilylation of ketones catalyzed by tertiary amino-thiourea derivatives was investigated using a combination of experimental and theoretical methods. The kinetic analysis is consistent with a cooperative mechanism in which both the thiourea and the tertiary amine of the catalyst are involved productively in the rate-limiting cyanide addition step. Density functional theory calculations were used to distinguish between mechanisms involving thiourea activation of ketone or of cyanide in the enantioselectivity-determining step. The strong correlation obtained between experimental and calculated ee's for a range of substrates and catalysts provides support for the most favorable calculated transition structures involving amine-bound HCN adding to thiourea-bound ketone. The calculations suggest that enantioselectivity arises from direct interactions between the ketone substrate and the amino-acid derived portion of the catalyst. On the basis of this insight, more enantioselective catalysts with broader substrate scope were prepared and evaluated experimentally.
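The experiment-versus-theory ee comparison in such studies rests on the standard transition-state relation between the computed difference in activation free energies of the two diastereomeric pathways and the enantiomeric excess (a generic result, not this paper's derivation):

    \frac{k_R}{k_S} = e^{\Delta\Delta G^{\ddagger}/RT},
    \qquad
    \mathrm{ee} = \frac{k_R - k_S}{k_R + k_S} = \tanh\!\left(\frac{\Delta\Delta G^{\ddagger}}{2RT}\right),

so at 298 K a ΔΔG‡ of only about 1.3 kcal/mol already corresponds to roughly 80% ee, which is why small errors in calculated transition-structure energies matter.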
10 CFR 474.1 - Purpose and Scope.
Code of Federal Regulations, 2013 CFR
2013-01-01
... OF ENERGY ENERGY CONSERVATION ELECTRIC AND HYBRID VEHICLE RESEARCH, DEVELOPMENT, AND DEMONSTRATION... procedures for calculating a value for the petroleum-equivalent fuel economy of electric vehicles, as... regulations at 40 CFR Part 600—Fuel Economy of Motor Vehicles. ...
10 CFR 474.1 - Purpose and Scope.
Code of Federal Regulations, 2014 CFR
2014-01-01
... OF ENERGY ENERGY CONSERVATION ELECTRIC AND HYBRID VEHICLE RESEARCH, DEVELOPMENT, AND DEMONSTRATION... procedures for calculating a value for the petroleum-equivalent fuel economy of electric vehicles, as... regulations at 40 CFR Part 600—Fuel Economy of Motor Vehicles. ...
10 CFR 474.1 - Purpose and Scope.
Code of Federal Regulations, 2011 CFR
2011-01-01
... OF ENERGY ENERGY CONSERVATION ELECTRIC AND HYBRID VEHICLE RESEARCH, DEVELOPMENT, AND DEMONSTRATION... procedures for calculating a value for the petroleum-equivalent fuel economy of electric vehicles, as... regulations at 40 CFR Part 600—Fuel Economy of Motor Vehicles. ...
10 CFR 474.1 - Purpose and Scope.
Code of Federal Regulations, 2012 CFR
2012-01-01
... OF ENERGY ENERGY CONSERVATION ELECTRIC AND HYBRID VEHICLE RESEARCH, DEVELOPMENT, AND DEMONSTRATION... procedures for calculating a value for the petroleum-equivalent fuel economy of electric vehicles, as... regulations at 40 CFR Part 600—Fuel Economy of Motor Vehicles. ...
10 CFR 474.1 - Purpose and Scope.
Code of Federal Regulations, 2010 CFR
2010-01-01
... OF ENERGY ENERGY CONSERVATION ELECTRIC AND HYBRID VEHICLE RESEARCH, DEVELOPMENT, AND DEMONSTRATION... procedures for calculating a value for the petroleum-equivalent fuel economy of electric vehicles, as... regulations at 40 CFR Part 600—Fuel Economy of Motor Vehicles. ...
Federal Register 2010, 2011, 2012, 2013, 2014
2012-03-23
... Scope Should Exclude Off-Road/Non-DOT Specification Stamped Wheels. Comment 2: Whether Double Remedies... Centurion's Indirect Selling Expense Calculation. Comment 8: Hot-Rolled Steel Surrogate Value. Comment 9...
Pretest Predictions for Phase II Ventilation Tests
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yiming Sun
The objective of this calculation is to predict the temperatures of the ventilating air, waste package surface, and concrete pipe walls that will be developed during the Phase II ventilation tests involving various test conditions. The results will be used as inputs for validating the numerical approach for modeling continuous ventilation, and to support the repository subsurface design. The scope of the calculation is to identify the physical mechanisms and parameters related to thermal response in the Phase II ventilation tests, and describe numerical methods that are used to calculate the effects of continuous ventilation. The calculation is limited to thermal effects only. This engineering work activity is conducted in accordance with the "Technical Work Plan for: Subsurface Performance Testing for License Application (LA) for Fiscal Year 2001" (CRWMS M&O 2000d). This technical work plan (TWP) includes an AP-2.21Q, "Quality Determinations and Planning for Scientific, Engineering, and Regulatory Compliance Activities", activity evaluation (CRWMS M&O 2000d, Addendum A) that has determined this activity is subject to the YMP quality assurance (QA) program. The calculation is developed in accordance with the AP-3.12Q procedure, "Calculations". Additional background information regarding this activity is contained in the "Development Plan for Ventilation Pretest Predictive Calculation" (DP) (CRWMS M&O 2000a).
SCoPE: an efficient method of Cosmological Parameter Estimation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Das, Santanu; Souradeep, Tarun, E-mail: santanud@iucaa.ernet.in, E-mail: tarun@iucaa.ernet.in
The Markov Chain Monte Carlo (MCMC) sampler is widely used for cosmological parameter estimation from CMB and other data. However, due to the intrinsically serial nature of the MCMC sampler, convergence is often very slow. Here we present a fast and independently written Monte Carlo method for cosmological parameter estimation named the Slick Cosmological Parameter Estimator (SCoPE), which employs delayed rejection to increase the acceptance rate of a chain, and pre-fetching that helps an individual chain to run on parallel CPUs. An inter-chain covariance update is also incorporated to prevent clustering of the chains, allowing faster and better mixing of the chains. We use an adaptive method for covariance calculation to calculate and update the covariance automatically as the chains progress. Our analysis shows that the acceptance probability of each step in SCoPE is more than 95% and the convergence of the chains is faster. Using SCoPE, we carry out some cosmological parameter estimations with different cosmological models using WMAP-9 and Planck results. One of the current research interests in cosmology is quantifying the nature of dark energy. We analyze the cosmological parameters from two illustrative commonly used parameterisations of dark energy models. We also assess whether the primordial helium fraction in the universe can be constrained by the present CMB data from WMAP-9 and Planck. The results from our MCMC analysis, on the one hand, help us to understand the workability of SCoPE better; on the other hand, they provide a completely independent estimation of cosmological parameters from WMAP-9 and Planck data.
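Of the ingredients named above, the adaptive covariance update is the simplest to illustrate; a minimal adaptive random-walk Metropolis sketch in Python (a Haario-style recursion; SCoPE's delayed-rejection and pre-fetching layers are omitted, and log_post is any user-supplied log-posterior):

    import numpy as np

    def adaptive_metropolis(log_post, x0, n_steps, adapt_every=100):
        # Random-walk Metropolis whose proposal covariance is periodically
        # re-estimated from the chain history (cf. Haario et al. 2001).
        d = len(x0)
        cov = 0.1 * np.eye(d)
        chain = [np.asarray(x0, dtype=float)]
        lp = log_post(chain[-1])
        accepted = 0
        rng = np.random.default_rng(0)
        for i in range(1, n_steps):
            prop = rng.multivariate_normal(chain[-1], cov)
            lp_prop = log_post(prop)
            if np.log(rng.uniform()) < lp_prop - lp:   # Metropolis accept test
                chain.append(prop)
                lp = lp_prop
                accepted += 1
            else:
                chain.append(chain[-1])
            if i % adapt_every == 0 and i > 2 * d:
                # Scaled empirical covariance plus jitter for numerical stability.
                cov = (2.38**2 / d) * np.cov(np.asarray(chain).T) + 1e-8 * np.eye(d)
        return np.asarray(chain), accepted / (n_steps - 1)

For example, chain, acc = adaptive_metropolis(lambda x: -0.5 * np.sum(x**2), np.zeros(3), 20000) samples a 3-D standard normal while the proposal covariance converges toward the scaled target covariance.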
Scoping estimates of the LDEF satellite induced radioactivity
NASA Technical Reports Server (NTRS)
Armstrong, Tony W.; Colborn, B. L.
1990-01-01
The Long Duration Exposure Facility (LDEF) satellite was recovered after almost six years in space. It was well-instrumented with ionizing radiation dosimeters, including thermoluminescent dosimeters, plastic nuclear track detectors, and a variety of metal foil samples for measuring nuclear activation products. The extensive LDEF radiation measurements provide the type of radiation environments and effects data needed to evaluate and help resolve uncertainties in present radiation models and calculational methods. A calculational program was established to aid in LDEF data interpretation and to utilize LDEF data for assessing the accuracy of current models. A summary of the calculational approach is presented. The purpose of the reported calculations is to obtain a general indication of: (1) the importance of different space radiation sources (trapped, galactic, and albedo protons, and albedo neutrons); (2) the importance of secondary particles; and (3) the spatial dependence of the radiation environments and effects expected within the spacecraft. The calculational method uses the High Energy Transport Code (HETC) to estimate the importance of different sources and secondary particles in terms of fluence, absorbed dose in tissue and silicon, and induced radioactivity as a function of depth in aluminum.
End-to-End Modeling with the Heimdall Code to Scope High-Power Microwave Systems
2007-06-01
Swegle, John A.; Savannah River National Laboratory
We describe the expert-system code HEIMDALL, which is used to model full high-power microwave systems using over 60 systems-engineering models, developed in … of our calculations of the mass of a Supersystem producing 500-MW, 15-ns output pulses in the X band for bursts of 1 s, interspersed with 10-s …
5 CFR 1201.72 - Explanation and scope of discovery.
Code of Federal Regulations, 2010 CFR
2010-01-01
... obtain relevant information, including the identification of potential witnesses, from another person or a party, that the other person or party has not otherwise provided. Relevant information includes information that appears reasonably calculated to lead to the discovery of admissible evidence. This...
42 CFR 422.300 - Basis and scope.
Code of Federal Regulations, 2010 CFR
2010-10-01
... for making payments to Medicare Advantage (MA) organizations offering local and regional MA plans, including calculation of MA capitation rates and benchmarks, conditions under which payment is based on plan....458 in subpart J for rules on risk sharing payments to MA regional organizations. ...
Developing a data governance model in health care.
Reeves, Mary G; Bowen, Rita
2013-02-01
When building a data governance model, finance leaders should: Establish a leadership team and define the program's scope. Calculate the return using the confidence in data-dependent assumptions metric. Identify specific areas of deficiency and create a budget to address these areas.
Fast calculation of low altitude disturbing gravity for ballistics
NASA Astrophysics Data System (ADS)
Wang, Jianqiang; Wang, Fanghao; Tian, Shasha
2018-03-01
Fast calculation of disturbing gravity is a key technology in ballistics, and spherical cap harmonic (SCH) theory can be used to solve this problem. By using adjusted spherical cap harmonic (ASCH) methods, the spherical cap coordinates are projected onto global coordinates, and the non-integer-degree associated Legendre functions (ALF) of SCH are replaced by the integer-degree ALF of spherical harmonics (SH). This new method is called virtual spherical harmonics (VSH), and numerical experiments were performed to test its effectiveness. The results of an Earth gravity model were taken as the theoretical observations, and a model of the regional gravity field was constructed with the new method. Simulation results show that the approximation errors are less than 5 mGal in the low-altitude range of the central region. In addition, numerical experiments were conducted to compare the calculation speeds of the SH, SCH and VSH models, and the results show that the calculation speed of the VSH model is raised by one order of magnitude over a small region.
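For orientation, the underlying representation in all three models is a truncated harmonic expansion of the disturbing potential T, from which disturbing gravity follows by differentiation (standard geodesy notation, not the paper's):

    T(r,\theta,\lambda) = \frac{GM}{r}\sum_{n=2}^{N}\left(\frac{a}{r}\right)^{n}\sum_{m=0}^{n}\left(\Delta\bar C_{nm}\cos m\lambda + \Delta\bar S_{nm}\sin m\lambda\right)\bar P_{nm}(\cos\theta),

e.g. the radial disturbing-gravity component is δg_r = −∂T/∂r; the SCH and VSH variants change only the basis functions over the cap, not this overall structure.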
Nuclear physics from Lattice QCD
NASA Astrophysics Data System (ADS)
Shanahan, Phiala
2017-09-01
I will discuss the current state and future scope of numerical Lattice Quantum Chromodynamics (LQCD) calculations of nuclear matrix elements. The goal of the program is to provide direct QCD calculations of nuclear observables relevant to experimental programs, including double-beta decay matrix elements, nuclear corrections to axial matrix elements relevant to long-baseline neutrino experiments and nuclear sigma terms needed for theory predictions of dark matter cross-sections at underground detectors. I will discuss the progress and challenges on these fronts, and also address recent work constraining a gluonic analogue of the EMC effect, which will be measurable at a future electron-ion collider.
45 CFR 305.60 - Types and scope of Federal audits.
Code of Federal Regulations, 2010 CFR
2010-10-01
... HUMAN SERVICES PROGRAM PERFORMANCE MEASURES, STANDARDS, FINANCIAL INCENTIVES, AND PENALTIES § 305.60... more frequently if the State fails to meet performance standards and reliability of data requirements... used to process the data in calculating performance indicators under this part; (b) Also, OCSE will...
Ballard, Andrew; Ahmad, Hiwa O.; Narduolo, Stefania; Rosa, Lucy; Chand, Nikki; Cosgrove, David A.; Varkonyi, Peter; Asaad, Nabil; Tomasi, Simone
2017-01-01
Racemization has a large impact upon the biological properties of molecules, but the chemical scope of compounds with known rate constants for racemization in aqueous conditions was hitherto limited. To address this remarkable blind spot, we have measured the kinetics of racemization of 28 compounds using circular dichroism and 1H NMR spectroscopy. We show that rate constants for racemization (measured by ourselves and others) correlate well with deprotonation energies from quantum mechanical (QM) and group contribution calculations. Such calculations thus provide predictions of the second-order rate constants for general-base-catalyzed racemization that are usefully accurate. When applied to recent publications describing the stereoselective synthesis of compounds of purported biological value, the calculations reveal that racemization would be sufficiently fast to render these expensive syntheses pointless. PMID:29072355
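For context, rate constants in such measurements are usually extracted from the first-order decay of enantiomeric excess, since under enantiomerization kinetics (a standard result, not this paper's derivation)

    \mathrm{ee}(t) = \mathrm{ee}(0)\, e^{-2 k_{\mathrm{enant}} t}, \qquad k_{\mathrm{rac}} = 2\,k_{\mathrm{enant}},

the factor of two arising because every enantiomerization event both depletes the major enantiomer and feeds the minor one.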
Determination of NMR chemical shifts for cholesterol crystals from first-principles
NASA Astrophysics Data System (ADS)
Kucukbenli, Emine; de Gironcoli, Stefano
2011-03-01
Solid-state Nuclear Magnetic Resonance (NMR) is a powerful tool in crystallography when combined with theoretical predictions. So far, empirical calculations of spectra have been employed for unambiguous identification. However, many complex systems are outside the scope of these methods. Our implementation of ultrasoft and projector augmented wave pseudopotentials within the ab initio gauge-including projector augmented wave (GIPAW) method in the Quantum Espresso simulation package allows affordable calculations of NMR spectra for systems of thousands of electrons. We report here the first ab initio determination of NMR spectra for several crystal structures of cholesterol. Cholesterol crystals, the main component of human gallstones, are of interest to medical research as their structural properties can shed light on the pathologies of the gallbladder. With our application we show that ab initio calculations can be employed to aid NMR crystallography.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pytel, K.; Mieleszczenko, W.; Lechniak, J.
2010-03-01
The presented paper contains neutronic and thermal-hydraulic (steady and unsteady state) calculation results prepared to support an annex to the Safety Analysis Report for the MARIA reactor, in order to obtain approval for a program of testing low-enriched uranium (LEU) lead test fuel assemblies (LTFAs) manufactured by CERCA. This includes presentation of the limits and operational constraints to be in effect during the fuel testing investigations. Also described is the scope of the testing program (which began in August 2009), including additional measurements and monitoring procedures.
NASA Astrophysics Data System (ADS)
Fu, Qingshan; Xue, Yongqiang; Cui, Zixiang; Duan, Huijuan
2017-07-01
A rational melting model is indispensable to address the fundamental issue regarding the melting of nanoparticles. To ascertain the rationality and the application scopes of the three classical thermodynamic models, namely the Pawlow, Rie, and Reiss melting models, corresponding accurate equations for the size-dependent melting temperature of nanoparticles were derived. Comparison of the melting temperatures of Au, Al, and Sn nanoparticles calculated by the accurate equations with available experimental results demonstrates that both the Reiss and Rie melting models are rational and capable of accurately describing the melting behaviors of nanoparticles at different melting stages. The former (surface pre-melting) is applicable to the stage from initial melting to the critical thickness of the liquid shell, while the latter (solid particles surrounded by a great deal of liquid) applies from the critical thickness to complete melting. The melting temperatures calculated by the accurate equation based on the Reiss melting model are in good agreement with experimental results within the whole size range of calculation, compared with those from other theoretical models. In addition, the critical thickness of the liquid shell is found to decrease with decreasing particle size and presents a linear variation with particle size. The accurate thermodynamic equations based on the Reiss and Rie melting models enable us to quantitatively and conveniently predict and explain the melting behaviors of nanoparticles of all sizes in the whole melting process.
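The size dependence that all of these models refine is, to leading order, a Gibbs-Thomson-type depression; in the simplest liquid-drop form for a spherical particle of diameter d (a schematic relation, not the paper's exact equations, whose surface-energy terms differ between the Pawlow, Rie, and Reiss pictures):

    T_m(d) \approx T_m^{\infty}\left(1 - \frac{4\,\sigma_{sl}}{\Delta H_f\,\rho_s\,d}\right),

where σ_sl is the solid-liquid interfacial energy, ΔH_f the bulk latent heat of fusion per unit mass, and ρ_s the solid density; the 1/d scaling is why melting-point depression becomes dramatic only below roughly 10 nm.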
Naval Waste Package Design Sensitivity
DOE Office of Scientific and Technical Information (OSTI.GOV)
T. Schmitt
2006-12-13
The purpose of this calculation is to determine the sensitivity of the structural response of the Naval waste packages to varying inner cavity dimensions when subjected to a corner drop and tip-over from an elevated surface. This calculation will also determine the sensitivity of the structural response of the Naval waste packages to the upper bound of the naval canister masses. The scope of this document is limited to reporting the calculation results in terms of through-wall stress intensities in the outer corrosion barrier. This calculation is intended for use in support of the preliminary design activities for the license application design of the Naval waste package. It examines the effects of small changes between the naval canister and the inner vessel; in these dimensions, the Naval Long and Naval Short waste packages are similar. Therefore, only the Naval Long waste package is used in this calculation, based on the proposed potential designs presented by the drawings and sketches in References 2.1.10 to 2.1.17 and 2.1.20. All conclusions are valid for both the Naval Long and Naval Short waste packages.
Electronic and Spectral Properties of RRhSn (R = Gd, Tb) Intermetallic Compounds
NASA Astrophysics Data System (ADS)
Knyazev, Yu. V.; Lukoyanov, A. V.; Kuz'min, Yu. I.; Gupta, S.; Suresh, K. G.
2018-02-01
Investigations of the electronic structure and optical properties of GdRhSn and TbRhSn were carried out. Calculations of the band spectrum, taking spin polarization into account, were performed in the local electron density approximation with a correction for strong correlation effects in the 4f shell of the rare-earth metal (LSDA+U method). The optical studies were done by ellipsometry over a wide range of wavelengths, and a set of spectral and electronic characteristics was determined. It was shown that the optical absorption in the region of interband transitions is satisfactorily explained by the calculated density of electronic states.
NASA Astrophysics Data System (ADS)
Kressig, A.
2017-12-01
BACKGROUND: The Greenhouse Gas Protocol (GHGP) Scope 2 Guidance standardizes how companies measure greenhouse gas emissions from purchased or independently generated electricity (called "scope 2 emissions"). Additionally, the interlinkages between industrial or commercial (nonresidential) energy requirements and water demands have been studied extensively, mostly at the national or provincial scale, focused on industries involved in power generation. However, there is little guidance available for companies to systematically and effectively quantify the water withdrawals and consumption (herein referred to as "water demand") associated with purchased or acquired electricity (what we call "Scope 2 Water"). This lack of guidance on measuring a company's water demand from electricity use is due to a lack of data on average consumption and withdrawal rates of water associated with purchased electricity. OBJECTIVE: There is growing demand from companies in the food, beverage, manufacturing, information communication and technology, and other sectors for a methodology to quantify Scope 2 water demands. By understanding Scope 2 water demands, companies could evaluate their exposure to water-related risks associated with purchased or acquired electricity, and quantify the water benefits of changing to less water-intensive sources of electricity and energy generation such as wind and solar. However, there has never been a way of quantifying Scope 2 Water consumption and withdrawals for a company across its international supply chain, even with strong interest in understanding exposure to water-related risk and measuring water-use reductions. WRI's Power Watch provides the data needed to make Scope 2 Water accounting possible, because it will provide water withdrawal and consumption rates associated with purchased electricity at the power plant level. By calculating the average consumption and withdrawal rates per unit of electricity produced across a grid region, companies can measure their water demand from facilities in that region. WRI is now developing a global dataset of grid-level water consumption rates and developing guidance for companies to report water demand across their supply chains and measure their reductions.
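Given grid-level intensity factors of the kind Power Watch is meant to supply, the Scope 2 Water arithmetic itself is a weighted sum; a Python sketch with entirely hypothetical factors and facility data:

    # Hypothetical water-intensity factors per grid region, in liters per kWh of
    # purchased electricity; real values would come from a dataset such as Power Watch.
    intensity = {
        "grid_A": {"withdrawal": 75.0, "consumption": 1.9},
        "grid_B": {"withdrawal": 15.0, "consumption": 1.1},
    }

    # (facility, grid region it buys from, kWh purchased per year) -- invented data
    facilities = [
        ("plant_1", "grid_A", 1.2e6),
        ("plant_2", "grid_B", 3.4e6),
    ]

    totals = {"withdrawal": 0.0, "consumption": 0.0}
    for _name, grid, kwh in facilities:
        for kind in totals:
            totals[kind] += kwh * intensity[grid][kind]   # liters per year

    print({kind: f"{liters / 1e6:.1f} ML/yr" for kind, liters in totals.items()})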
DOE Office of Scientific and Technical Information (OSTI.GOV)
Durant, W.S.; Robinette, R.J.; Kirchner, J.R.
1994-03-01
In essence, this study was envisioned as the "combination" of existing accident dose and risk calculations from safety analyses of individual facilities. However, because of the extended time period over which the safety analyses were prepared, calculational assumptions and methodologies differed between the analyses. The scope of this study therefore included the standardization of assumptions and calculations as necessary to insure that the analytical logic was consistent for all the facilities. Each of the nonseismic external events considered in the analyses is addressed in an individual section of this report. In Section 2, extreme straight-line winds are examined. Section 3 addresses tornadoes, and Section 4 addresses other external events [floods, other extreme weather events (lightning, hail, and extremes in temperature or precipitation), vehicle impact, accidents involving adjacent facilities, aircraft impact, and meteorite impact]. Section 5 provides a summary of the general conclusions of the report.
Kalininskaya, A A; Mescheryakov, D G; Ildarov, R B
2013-01-01
The article presents the scope of work, algorithms of labor operations, and standardization of the work of a therapeutic dentist working together with a dental assistant in four-handed practice. Calculations of the standard number of dentist positions under the new working conditions are given.
Determine Baseline Energy Consumption | Climate Neutral Research Campuses |
… the campus boundary and any off-site energy impacts you will be calculating. … the fuel … usually included in the baseline. However, the impacts of joint ventures that take place off-site are … Scope 3: transportation impacts from commuters and business travel, which can be derived …
Mission Options Scoping Tool for Mars Orbiters: Mass Cost Calculator (MC2)
NASA Technical Reports Server (NTRS)
Sturm, Eric J., II; Deutsch, Marie-Jose; Harmon, Corey; Nakagawa, Roy; Kinsey, Robert; Lopez, Nino; Kudrle, Paul; Evans, Alex
2007-01-01
Prior to developing the details of an advanced mission study, the mission architecture trade space is typically explored to assess the scope of feasible options. This paper describes the main features of an Excel-based tool, called the Mass-Cost-Calculator (MC2), which is used to perform rapid, high-level mass and cost options analyses of Mars orbiter missions. MC2 consists of a combination of databases, analytical solutions, and parametric relationships to enable quick evaluation of new mission concepts and comparison of multiple architecture options. The tool's outputs provide program management and planning teams with answers to "what if" queries, as well as an understanding of the driving mission elements, during the pre-project planning phase. These outputs have been validated against the outputs generated by the Advanced Projects Design Team (Team X) at NASA's Jet Propulsion Laboratory (JPL). The architecture of the tool allows for future expansion to other orbiters beyond Mars, and to non-orbiter missions, such as those involving fly-by spacecraft, probes, landers, rovers, or other mission elements.
Gourlaouen, Christophe; Piquemal, Jean-Philip; Parisel, Olivier
2006-05-07
Within the scope of studying the molecular implications of the Pb(2+) cation in environmental and polluting processes, this paper reports Hartree-Fock and density functional theory (B3LYP) four-component relativistic calculations using an all-electron basis set applied to [Pb(H(2)O)](2+) and [Pb(OH)](+), two complexes expected to be found in the terrestrial atmosphere. It is shown that fully relativistic calculations validate the use of scalar relativistic approaches within the framework of density functional theory. [Pb(H(2)O)](2+) is found C(2v) at all levels of calculation, whereas [Pb(OH)](+) can be found bent or linear depending on the computational methodology used. When the C(s) form is found, the barrier to inversion through the C(∞v) structure is very low and can be overcome at high enough temperature, making the molecule floppy. In order to get a better understanding of the bonding occurring between the Pb(2+) cation and the H(2)O and OH(-) ligands, natural bond orbital and atoms-in-molecules calculations have been performed. These approaches are supplemented by a topological analysis of the electron localization function. Finally, the description of these complexes is refined using constrained-space orbital variation complexation energy decompositions.
Weiss, Manfred; Marx, Gernot; Iber, Thomas
2017-01-01
Intensive care medicine remains one of the most cost-driving areas within hospitals, with high personnel costs. Under the scope of limited budgets and reimbursement, realistic needs are essential to justify personnel staffing. Unfortunately, all existing staffing models are top-down calculations with a high variability in results. We present a workload-oriented model, integrating quality of care, efficiency of processes, and legal, educational, controlling, local, organisational and economic aspects. In our model, the physician's workload solely related to the intensive care unit depends on three tasks: patient-oriented tasks, divided into basic tasks (performed in every patient) and additional tasks (necessary only in patients with specific diagnostic and therapeutic requirements depending on their specific illness), and non-patient-oriented tasks. All three tasks have to be taken into account for calculating the required number of physicians. The calculation tool further allows one to determine minimal personnel staffing and the distribution of the calculated personnel demand by type of employee according to working hours per year, shift work or standby duty. This model was first introduced and described by the German Board of Anesthesiologists and the German Society of Anesthesiology and Intensive Care Medicine in 2008, and has since been implemented and updated (2012) in Germany. The modular, flexible nature of the Excel-based calculation tool should allow adaptation to the respective legal and organizational demands of different countries. After 8 years of experience with this calculation, we report the generalizable key aspects which may help physicians all around the world to justify realistic workload-oriented personnel staffing needs. PMID:28828300
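The three-task structure translates directly into a bottom-up head-count; a sketch with invented workload numbers (the published tool is Excel-based and far more detailed, and every figure below is a placeholder):

    # Bottom-up ICU physician staffing sketch; all numbers are illustrative only.
    beds, occupancy = 12, 0.85
    basic_min_per_patient_day = 180        # patient-oriented basic tasks (every patient)
    additional_min_per_patient_day = 60    # diagnosis-specific additional tasks, averaged
    non_patient_hours_per_day = 6.0        # management, teaching, documentation, QA

    patient_days_per_day = beds * occupancy
    workload_hours_per_day = (
        patient_days_per_day
        * (basic_min_per_patient_day + additional_min_per_patient_day) / 60.0
        + non_patient_hours_per_day
    )

    net_hours_per_fte_year = 1540          # after leave, illness, training
    ftes = workload_hours_per_day * 365 / net_hours_per_fte_year
    print(f"required physician FTEs: {ftes:.1f}")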
Metallization and superconductivity in Ca-intercalated bilayer MoS2
NASA Astrophysics Data System (ADS)
Szczęśniak, R.; Durajski, A. P.; Jarosik, M. W.
2017-12-01
Two-dimensional molybdenum disulfide (MoS2) has attracted significant interest recently due to its outstanding physical, chemical and optoelectronic properties. In this paper, using first-principles calculations, the dynamical stability, electronic structure and superconducting properties of Ca-intercalated bilayer MoS2 are investigated. The calculated electron-phonon coupling constant implies that the stable form of the investigated system is a strong-coupling superconductor (λ = 1.05) with a low critical temperature (TC = 13.3 K). Moreover, results obtained within the framework of the isotropic Migdal-Eliashberg formalism prove that Ca-intercalated bilayer MoS2 exhibits behavior that goes beyond the scope of the conventional BCS theory.
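For a quick consistency check on numbers like these, the McMillan/Allen-Dynes formula estimates Tc from the coupling constant. λ = 1.05 is taken from the abstract; the logarithmic phonon frequency and Coulomb pseudopotential below are assumed placeholder values, not results from the paper:

```python
import math

def allen_dynes_tc(lam, omega_log_K, mu_star=0.10):
    """McMillan/Allen-Dynes estimate of the superconducting critical
    temperature from the electron-phonon coupling constant lambda."""
    return (omega_log_K / 1.2) * math.exp(
        -1.04 * (1.0 + lam) / (lam - mu_star * (1.0 + 0.62 * lam)))

print(f"Tc ~ {allen_dynes_tc(lam=1.05, omega_log_K=180.0):.1f} K")
```

With these assumed inputs the estimate lands near the reported 13.3 K, which is the kind of sanity check the formula is typically used for.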
Music, Pandas, and Muggers: On the Affective Psychology of Value
ERIC Educational Resources Information Center
Hsee, Christopher K.; Rottenstreich, Yuval
2004-01-01
This research investigated the relationship between the magnitude or scope of a stimulus and its subjective value by contrasting 2 psychological processes that may be used to construct preferences: valuation by feeling and valuation by calculation. The results show that when people rely on feeling, they are sensitive to the presence or absence of…
Federal Register 2010, 2011, 2012, 2013, 2014
2013-05-16
...'s statements of account shall set forth each step of its calculations with sufficient information to... available by persons making digital phonorecord deliveries.'' Register's Division of Authority Decision... type of information in a notice of use (but not in the statement of account) to be served on the...
NASA Astrophysics Data System (ADS)
Lei, S.; Osborne, P.
2016-12-01
The Scoping of Options and Analyzing Risk (SOAR) model was developed by the U.S. Nuclear Regulatory Commission staff to assist in their evaluation of potential high-level radioactive waste disposal options. It is a 1-D contaminant transport code that contains a biosphere module to calculate mass fluxes and radiation dose to humans. As part of the Canadian Nuclear Safety Commission (CNSC)'s Coordinated Assessment Program to assist with the review of proposals for deep geological repositories (DGRs) for nuclear fuel wastes, CNSC conducted a research project to determine whether SOAR can be used by CNSC staff as an independent scoping tool to assist review of proponents' submissions related to safety assessment for DGRs. In the research, SOAR was applied to the post-closure safety assessment for a hypothetical DGR in sedimentary rock, as described in the 5th Case Study report by the Nuclear Waste Management Organization (NWMO) of Canada (2011). The report contains, among other things, modeling of transport and releases of radionuclides at various locations within the geosphere and the radiation dose to humans over a period of one million years. One aspect covered was 1-D modeling of various scenarios and sensitivity cases with both deterministic and probabilistic approaches using SYVAC3-CC4, which stands for Systems Variability Analysis Code (generation 3, Canadian Concept generation 4), developed by Atomic Energy of Canada Limited (Kitson et al., 2000). Radionuclide fluxes and radiation dose to humans calculated using SOAR were compared with those from NWMO's modeling. Overall, the results from the two models were similar, although SOAR gave lower mass fluxes and peak dose, mainly due to differences in modeling the waste package configurations. Sensitivity analyses indicate that both models are most sensitive to the diffusion coefficient of the geological media. The research leads to the conclusion that SOAR is a robust, user-friendly, and flexible scoping tool that CNSC staff may use for safety assessments; however, some improvements may be needed, such as including dose contributions from other pathways in addition to drinking water and being more flexible in modeling different waste package configurations.
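The 1-D scoping calculations such codes automate can be illustrated with the classical Ogata-Banks solution for advection-dispersion from a constant-concentration inlet (radioactive decay and sorption omitted). The parameter values below are illustrative, not numbers from the case study:

```python
import numpy as np
from scipy.special import erfc

def ogata_banks(x, t, v, D, c0=1.0):
    """Relative concentration c/c0 at distance x (m) after time t (s) for
    pore velocity v (m/s) and dispersion coefficient D (m^2/s)."""
    a = (x - v * t) / (2.0 * np.sqrt(D * t))
    b = (x + v * t) / (2.0 * np.sqrt(D * t))
    return 0.5 * c0 * (erfc(a) + np.exp(v * x / D) * erfc(b))

YEAR = 3.156e7  # seconds per year
# diffusion-dominated sedimentary rock: very low velocity, small D
print(ogata_banks(x=100.0, t=1.0e6 * YEAR, v=1.0e-12, D=1.0e-10))
```

A biosphere module would then convert such a concentration (e.g., in well water) to dose via intake rates and dose coefficients.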
Systematic void fraction studies with RELAP5, FRANCESCA and HECHAN
DOE Office of Scientific and Technical Information (OSTI.GOV)
Stosic, Z.; Preusche, G.
1996-08-01
In enhancing the scope of standard thermal-hydraulic code applications beyond their basic capabilities, i.e. coupling with a one- and/or three-dimensional kinetics core model, the void fraction transferred from thermal-hydraulics to the core model plays a determining role in the normal operating range and at high core flow, as the generated heat and axial power profiles are direct functions of the void distribution in the core. Hence, it is very important to know whether the void-quality models in the programs to be coupled are compatible, to allow the interactive exchange of data based on these constitutive void-quality relations. The presented void fraction study is performed in order to provide the basis for concluding whether a transient core simulation using the RELAP5 void fractions can calculate the axial power shapes adequately. To that end, the void fractions calculated with RELAP5 are compared with those calculated by the BWR safety licensing code FRANCESCA and by HECHAN, a best-estimate model for pre- and post-dryout calculation in a BWR heated channel. In addition, a comparison with standard experimental void-quality benchmark tube data is performed for the HECHAN code.
Neonatal nurse practitioners: distribution, roles and scope of practice.
Freed, Gary L; Dunham, Kelly M; Lamarand, Kara E; Loveland-Cherry, Carol; Martyn, Kristy K
2010-11-01
We sought to determine the distribution and scope of practice of the neonatal nurse practitioner (NNP) workforce across the United States. To determine distribution, we used counts of certified NNPs from the National Certification Corp (Chicago, IL). We calculated state NNP/child population ratios as the number of NNPs divided by the state population 0 to 17 years of age. We calculated NNP/NICU bed ratios as the number of NNPs divided by the total number of NICU beds per state. To characterize roles and scope of practice, we conducted a mail survey of a random national sample of 300 NNPs in states that license nurse practitioners to practice independently and 350 NNPs in states that require physician involvement. The greatest concentrations of NNPs per capita were in the Midwest, South, and Mid-Atlantic region. Thirty-one states had <100 total NNPs. The survey response rate was 77.1%. More than one-half of NNP respondents (54% [n = 211]) reported that they spent the majority of their time in a community hospital, whereas more than one-third (37% [n = 144]) were in an academic health center. Only 2% (n = 7) reported that they engaged in independent practice. As with many health care professionals, the supply of NNPs may not be distributed according to need. With increasing concern regarding the availability of NNPs, comprehensive studies that examine the demand for NNPs and the roles of other clinicians in the NICU should provide a greater understanding of appropriate NICU workforce capacity and needs.
Power, Nicholas E; Silberstein, Jonathan L; Ghoneim, Tarek P; Guillonneau, Bertrand; Touijer, Karim A
2012-12-01
To attempt to quantitate the carbon footprint of minimally invasive surgery (MIS) through approximated scope 1 to 3 CO(2) emissions to identify its potential role in global warming. To estimate national usage, we determined the number of inpatient and outpatient MIS procedures using International Classification of Diseases, ninth revision-clinical modification codes for all MIS procedures in a 2009 sample collected in national databases. Need for surgery was considered essential, and therefore traditional open surgery was used as the comparator. Scope 1 (direct) CO(2) emissions resulting from CO(2) gas used for insufflation were based on both escaping procedural CO(2) and metabolic CO(2) eliminated via respiration. Scopes 2 and 3 (indirect) emissions related to capture, compression, and transportation of CO(2) to hospitals and the disposal of single-use equipment not used in open surgery were calculated. The total CO(2) emissions were calculated to be 355,924 tonnes/year. For perspective, if MIS in the United States was considered a country, it would rank 189th on the United Nations 2008 list of countries' carbon emissions per year. Limitations include the inability to account for uncertainty using the various models and tools for approximating CO(2) emissions. CO(2) emission of MIS in the United States may have a significant environmental impact. This is the first attempt to quantify CO(2) emissions related to MIS in the United States. Strategies for reduction, while maintaining high quality medical care, should be considered.
Waste Characterization Process
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lambert, Patrick E.
2014-11-01
The purpose is to provide guidance to the Radiological Characterization Reviewer to complete the radiological characterization of waste items. This information is used for Department of Transportation (DOT) shipping and disposal, typically at the Nevada National Security Site (NNSS). Complete characterization ensures compliance with DOT shipping laws and the NNSS Waste Acceptance Criteria (WAC). The fines for noncompliance can be extreme, to say nothing of possible bad press and endangerment to the public, employees and the environment. A Radiological Characterization Reviewer has an important role in the organization. The scope is to outline the characterization process, not to cover every possible situation. The Radiological Characterization Reviewer position requires a strong background in Health Physics; therefore, these concepts are only minimally addressed. The characterization process includes many Excel spreadsheets developed by Michael Enghauser, known as the WCT software suite. New Excel spreadsheets developed as part of this project include the Ra-226 Decider and the Density Calculator by Jesse Bland, and the MicroShield Density Calculator and Molecular Weight Calculator by Pat Lambert.
System Design of One-chip Wave Particle Interaction Analyzer for SCOPE mission.
NASA Astrophysics Data System (ADS)
Fukuhara, Hajime; Ueda, Yoshikatsu; Kojima, Hiro; Yamakawa, Hiroshi
In past science spacecraft such as GEOTAIL, electric and magnetic field waveforms and energetic electron and ion velocity distributions are captured by separate sensors. Plasma wave-particle interactions are then analyzed from these respective data, and the discussions are sometimes restricted by differences in time resolution and by data loss in the regions of interest. The One-chip Wave Particle Interaction Analyzer (OWPIA) conducts direct quantitative observations of wave-particle interaction by computing 'E dot v' on board. This new instrument has the capability to use all plasma waveform data and electron particle information. In the OWPIA system, the digital observation data must be calibrated and transformed into a common coordinate system. All necessary calculations are processed in a Field Programmable Gate Array (FPGA). In this study, we introduce the basic concept of the OWPIA system and an optimization method for each calculation function implemented in the FPGA. We also discuss processing speed, FPGA utilization efficiency, and total power consumption.
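The on-board quantity is the instantaneous rate of work the wave field does on each particle, W = qE·v. A toy off-line version with NumPy (the flight version computes this in FPGA logic; the sample field and velocity data below are random placeholders):

```python
import numpy as np

QE = 1.602176634e-19  # elementary charge, C

def wave_particle_power(E_wave, v_particles, q=-QE):
    """Per-particle energy transfer rate W = q E.v [W] at matched sample
    times. E_wave: (N, 3) field samples [V/m]; v_particles: (N, 3) [m/s]."""
    return q * np.einsum('ij,ij->i', E_wave, v_particles)

rng = np.random.default_rng(0)
E = 1e-3 * rng.standard_normal((1000, 3))   # mV/m-scale wave field
v = 1e6 * rng.standard_normal((1000, 3))    # ~1000 km/s electrons
W = wave_particle_power(E, v)
print("mean energy transfer rate [W]:", W.mean())
```

A nonzero mean of W over many particles is the signature of net wave growth or damping that such an instrument is designed to detect directly.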
A retention index calculator simplifies identification of plant volatile organic compounds.
Lucero, Mary; Estell, Rick; Tellez, María; Fredrickson, Ed
2009-01-01
Plant volatiles (PVOCs) are important targets for studies in natural products, chemotaxonomy and biochemical ecology. The complexity of PVOC profiles often limits research to studies targeting only easily identified compounds. With the availability of mass spectral libraries and recent growth of retention index (RI) libraries, PVOC identification can be achieved using only gas chromatography coupled to mass spectrometry (GCMS). However, RI library searching is not typically automated, and until recently, RI libraries were both limited in scope and costly to obtain. To automate RI calculation and lookup functions commonly utilised in PVOC analysis. Formulae required for calculating retention indices from retention time data were placed in a spreadsheet along with lookup functions and a retention index library. Retention times obtained from GCMS analysis of alkane standards and Koeberlinia spinosa essential oil were entered into the spreadsheet to determine retention indices. Indices were used in combination with mass spectral analysis to identify compounds contained in Koeberlinia spinosa essential oil. Eighteen compounds were positively identified. Total oil yield was low, with only 5 ppm in purple berries. The most abundant compounds were octen-3-ol and methyl salicylate. The spreadsheet accurately calculated RIs of the detected compounds. The downloadable spreadsheet tool developed for this study provides a calculator and RI library that works in conjunction with GCMS or other analytical techniques to identify PVOCs in plant extracts.
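The RI formula such a spreadsheet automates is the linear (van den Dool and Kratz) index for temperature-programmed GC, followed by a tolerance lookup against a library. A minimal sketch; the alkane retention times and library entries are invented placeholders:

```python
from bisect import bisect_right

# retention times (min) of n-alkane standards, keyed by carbon number
ALKANES = {9: 5.10, 10: 6.85, 11: 8.72, 12: 10.60, 13: 12.41}

def retention_index(rt):
    """RI = 100 * (n + (rt - rt_n) / (rt_{n+1} - rt_n)), where the
    n- and (n+1)-alkanes bracket the analyte's retention time rt."""
    carbons = sorted(ALKANES)
    times = [ALKANES[c] for c in carbons]
    i = bisect_right(times, rt) - 1                  # bracketing alkane
    n, rt_n, rt_next = carbons[i], times[i], times[i + 1]
    return 100.0 * (n + (rt - rt_n) / (rt_next - rt_n))

LIBRARY = {978: "1-octen-3-ol", 1192: "methyl salicylate"}  # illustrative RIs

def lookup(ri, tol=5.0):
    return [name for lib_ri, name in LIBRARY.items() if abs(lib_ri - ri) <= tol]

ri = retention_index(6.47)
print(round(ri), lookup(ri))
```

Identification then combines the RI match with the mass-spectral match, as described in the abstract.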
User's manual for a computer program for simulating intensively managed allowable cut.
Robert W. Sassaman; Ed Holt; Karl Bergsvik
1972-01-01
Detailed operating instructions are described for SIMAC, a computerized forest simulation model which calculates the allowable cut assuming volume regulation for forests with intensively managed stands. A sample problem illustrates the required inputs and expected output. SIMAC is written in FORTRAN IV and runs on a CDC 6400 computer with a SCOPE 3.3 operating system....
Sperandio, Naiara; Morais, Dayane de Castro; Priore, Silvia Eloiza
2018-02-01
The scope of this systematic review was to compare the food insecurity scales validated and used in the countries of Latin America and the Caribbean, and to analyze the methods used in the validation studies. A search was conducted in the Lilacs, SciELO and Medline electronic databases. The publications were pre-selected by titles and abstracts, and subsequently by a full reading. Of the 16,325 studies reviewed, 14 were selected. Twelve validated scales were identified for the following countries: Venezuela, Brazil, Colombia, Bolivia, Ecuador, Costa Rica, Mexico, Haiti, the Dominican Republic, Argentina and Guatemala. Besides these, there is the Latin American and Caribbean scale, whose scope is regional. The scales differed in the reference standard used, the number of questions, and the diagnosis of insecurity. The methods used by the studies for internal validation were calculation of Cronbach's alpha and the Rasch model; for external validation, the authors calculated association and/or correlation with socioeconomic and food consumption variables. The successful experience of Latin America and the Caribbean in the development of national and regional scales can be an example for other countries that do not have this important indicator capable of measuring the phenomenon of food insecurity.
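Of the internal-validation statistics mentioned, Cronbach's alpha is the simplest to reproduce. A sketch on simulated binary item responses (the data generation below is made up purely to exercise the formula):

```python
import numpy as np

def cronbach_alpha(items):
    """items: (n_respondents, k_items) array.
    alpha = k/(k-1) * (1 - sum(item variances) / variance(total score))."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1.0) * (1.0 - item_vars / total_var)

rng = np.random.default_rng(1)
severity = rng.random((200, 1))                          # latent insecurity level
answers = (rng.random((200, 8)) < severity).astype(int)  # 8 correlated yes/no items
print(f"alpha = {cronbach_alpha(answers):.2f}")
```

Rasch-model validation additionally checks that item difficulties order respondents consistently, which needs a dedicated library rather than a one-liner.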
Electron impact ionization of metastable 2P-state hydrogen atoms in the coplanar geometry
NASA Astrophysics Data System (ADS)
Dhar, S.; Nahar, N.
Triple differential cross sections (TDCS) for the ionization of metastable 2P-state hydrogen atoms by electrons are calculated for various kinematic conditions in the asymmetric coplanar geometry. In this calculation, the final state is described by a multiple-scattering theory for ionization of hydrogen atoms by electrons. Results show qualitative agreement with the available experimental data, with other theoretical computational results for ionization of hydrogen atoms from the ground state, and with our first Born results. No other theoretical results or experimental data are available for ionization of hydrogen atoms from the 2P state. The present study therefore offers wide scope for experimental study of ionization of hydrogen atoms from the metastable 2P state.
Torsion effect of swing frame on the measurement of horizontal two-plane balancing machine
NASA Astrophysics Data System (ADS)
Wang, Qiuxiao; Wang, Dequan; He, Bin; Jiang, Pan; Wu, Zhaofu; Fu, Xiaoyan
2017-03-01
In this paper, the vibration model of the swing frame of a two-plane balancing machine is first established to calculate the position of the vibration center of the swing frame. The torsional stiffness formula for the spring plate twisting around the vibration center is then deduced using the superposition principle. Finally, dynamic balancing experiments demonstrate the inadequacy of the A-B-C algorithm, which ignores the torsion effect, and show that the torsional stiffness deduced from experiments is consistent with the torsional stiffness calculated by theory. The experimental data show the influence of the torsion effect of the swing frame on the separation ratio of such balancing machines, which reveals the sources of measurement error and assesses the scope of application of the A-B-C algorithm.
Calculating Free Energy Changes in Continuum Solvation Models
Ho, Junming; Ertem, Mehmed Z.
2016-02-27
We recently showed for a large dataset of pKas and reduction potentials that free energies calculated directly within the SMD continuum model compare very well with corresponding thermodynamic cycle calculations in both aqueous and organic solvents (Phys. Chem. Chem. Phys. 2015, 17, 2859). In this paper, we significantly expand the scope of our study to examine the suitability of this approach for the calculation of general solution-phase kinetics and thermodynamics, in conjunction with several commonly used solvation models (SMD-M062X, SMD-HF, CPCM-UAKS, and CPCM-UAHF) for a broad range of systems and reaction types. This includes cluster-continuum schemes for pKa calculations, as well as various neutral, radical and ionic reactions such as enolization, cycloaddition, hydrogen and chlorine atom transfer, and bimolecular SN2 and E2 reactions. On the basis of this benchmarking study, we conclude that the accuracies of both approaches are generally very similar: the mean errors for Gibbs free energy changes of neutral and ionic reactions are approximately 5 kJ mol-1 and 25 kJ mol-1, respectively. In systems where there are significant structural changes due to solvation, as is the case for certain ionic transition states and amino acids, the direct approach generally affords free energy changes that are in better agreement with experiment. The results indicate that when appropriate combinations of electronic structure methods are employed, the direct approach provides a reliable alternative to thermodynamic cycle calculations of solution-phase kinetics and thermodynamics across a broad range of organic reactions.
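The two routes being compared can be written compactly. In standard notation (not copied from the paper), the thermodynamic cycle assembles the solution-phase free energy change from gas-phase energetics plus solvation free energies, while the direct approach takes everything from calculations performed inside the continuum model:

```latex
% Thermodynamic-cycle route (standard-state corrections between the
% 1 atm gas-phase and 1 M solution conventions not shown):
\Delta G^{*}_{\mathrm{soln}}
  = \Delta G^{\circ}_{\mathrm{gas}}
  + \sum_{\mathrm{products}} \Delta G^{*}_{\mathrm{solv}}
  - \sum_{\mathrm{reactants}} \Delta G^{*}_{\mathrm{solv}}

% Direct route: geometries, frequencies and energies all evaluated
% within the continuum solvation model itself:
\Delta G^{*}_{\mathrm{soln}}
  = \sum_{\mathrm{products}} G^{*}_{\mathrm{SMD}}
  - \sum_{\mathrm{reactants}} G^{*}_{\mathrm{SMD}}
```

The paper's observation is that the two agree closely except where solvation reshapes the structures themselves.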
Recent advances in QM/MM free energy calculations using reference potentials.
Duarte, Fernanda; Amrein, Beat A; Blaha-Nelson, David; Kamerlin, Shina C L
2015-05-01
Recent years have seen enormous progress in the development of methods for modeling (bio)molecular systems. This has allowed for the simulation of ever larger and more complex systems. However, as such complexity increases, the requirements needed for these models to be accurate and physically meaningful become more and more difficult to fulfill. The use of simplified models to describe complex biological systems has long been shown to be an effective way to overcome some of the limitations associated with this computational cost in a rational way. Hybrid QM/MM approaches have rapidly become one of the most popular computational tools for studying chemical reactivity in biomolecular systems. However, the high cost involved in performing high-level QM calculations has limited the applicability of these approaches when calculating free energies of chemical processes. In this review, we present some of the advances in using reference potentials and mean field approximations to accelerate high-level QM/MM calculations. We present illustrative applications of these approaches and discuss challenges and future perspectives for the field. The use of physically-based simplifications has been shown to effectively reduce the cost of high-level QM/MM calculations. In particular, lower-level reference potentials enable one to reduce the cost of expensive free energy calculations, thus expanding the scope of problems that can be addressed. As was already demonstrated 40 years ago, the usage of simplified models still allows one to obtain cutting edge results with substantially reduced computational cost. This article is part of a Special Issue entitled Recent developments of molecular dynamics. Copyright © 2014. Published by Elsevier B.V.
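The reference-potential idea rests on a standard free-energy-perturbation identity (notation mine, not the review's): sample on the cheap surface, then correct to the expensive one at the end states only.

```latex
% Perturbation from the low-level (reference) to the high-level
% Hamiltonian, averaged over configurations sampled at the low level:
\Delta G_{\mathrm{low}\rightarrow\mathrm{high}}
  = -k_{B}T \,
    \ln\left\langle
      e^{-(E_{\mathrm{high}} - E_{\mathrm{low}})/k_{B}T}
    \right\rangle_{\mathrm{low}}

% A high-level activation free energy then needs only end-point
% corrections at the reactant state (RS) and transition state (TS):
\Delta G^{\ddagger}_{\mathrm{high}}
  = \Delta G^{\ddagger}_{\mathrm{low}}
  + \Delta G^{\mathrm{TS}}_{\mathrm{low}\rightarrow\mathrm{high}}
  - \Delta G^{\mathrm{RS}}_{\mathrm{low}\rightarrow\mathrm{high}}
```

The expensive QM/MM energies thus enter only through averages over a modest number of low-level snapshots, which is where the cost saving comes from.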
ERIC Educational Resources Information Center
Streak, Judith Christine; Yu, Derek; Van der Berg, Servaas
2009-01-01
This paper offers evidence on the sensitivity of child poverty in South Africa to changes in the adult equivalence scale (AES) and updates the child poverty profile based on the Income and Expenditure Survey 2005/06. Setting the poverty line at the 40th percentile of households calculated with different AESs, the scope and composition of child…
Bárcenas, M; Reyes, Y; Romero-Martínez, A; Odriozola, G; Orea, P
2015-02-21
Coexistence and interfacial properties of a triangle-well (TW) fluid are obtained with the aim of mimicking the Lennard-Jones (LJ) potential and approaching the properties of noble gases. For this purpose, the scope of the TW is varied to match vapor-liquid densities and surface tension. Surface tension and coexistence curves of TW systems with different ranges were calculated with replica exchange Monte Carlo and compared to data previously reported in the literature for truncated and shifted (STS), truncated (ST), and full Lennard-Jones (full-LJ) potentials. We observed that the scope of the TW potential must be increased to approach the STS, ST, and full-LJ properties. In spite of the simplicity of the TW expression, a remarkable agreement is found. Furthermore, the variable scope of the TW allows for a good match of the experimental data of argon and xenon.
Clarifying changes in student empathy throughout medical school: a scoping review.
Ferreira-Valente, Alexandra; Monteiro, Joana S; Barbosa, Rita M; Salgueira, Ana; Costa, Patrício; Costa, Manuel J
2017-12-01
Despite the increasing awareness of the relevance of empathy in patient care, some findings suggest that medical schools may be contributing to the deterioration of students' empathy. Therefore, it is important to clarify the magnitude and direction of changes in empathy during medical school. We employed a scoping review to elucidate trends in students' empathy changes/differences throughout medical school and to examine potential bias associated with research design. The literature published in English, Spanish, Portuguese and French from 2009 to 2016 was searched. Two hundred and nine potentially relevant citations were identified. Twenty articles met the inclusion criteria. Effect sizes of empathy score variations were calculated to assess the practical significance of results. Our results demonstrate that the scoped studies differed considerably in their design, measures used, sample sizes and results. Most studies (12 out of 20) reported either positive or non-statistically significant changes/differences in empathy regardless of the measure used. The predominant trend in cross-sectional studies (ten out of 13) was of significantly higher empathy scores in later years or of similar empathy scores across years, while most longitudinal studies presented either mixed results or empathy declines. There was no generalized international trend in changes in students' empathy throughout medical school. Although statistically significant changes/differences were detected in 13 out of 20 studies, the calculated effect sizes were small in all but two studies, suggesting little practical significance. At the present moment, the literature does not offer clear conclusions relative to changes in student empathy throughout medical school.
Comparing Institution Nitrogen Footprints: Metrics for Assessing and Tracking Environmental Impact
Leach, Allison M.; Compton, Jana E.; Galloway, James N.; Andrews, Jennifer
2017-01-01
Abstract When multiple institutions with strong sustainability initiatives use a new environmental impact assessment tool, there is an impulse to compare. The first seven institutions to calculate nitrogen footprints using the Nitrogen Footprint Tool have worked collaboratively to improve calculation methods, share resources, and suggest methods for reducing their footprints. This article compares those seven institutions’ results to reveal the common and unique drivers of institution nitrogen footprints. The footprints were compared by scope and sector, and the results were normalized by multiple factors (e.g., population, amount of food served). The comparisons found many consistencies across the footprints, including the large contribution of food. The comparisons identified metrics that could be used to track progress, such as an overall indicator for the nitrogen sustainability of food purchases. The comparisons also pointed to differences in system bounds of the calculations, which are important to standardize when comparing across institutions. The footprints were influenced by factors both within and outside of the institutions’ ability to control, such as size, location, population, and campus use. However, these comparisons also point to a pathway forward for standardizing nitrogen footprint tool calculations, identifying metrics that can be used to track progress, and determining a sustainable institution nitrogen footprint. PMID:29350218
Network-Based Analysis of Software Change Propagation
Wang, Rongcun; Qu, Binbin
2014-01-01
Object-oriented software systems frequently evolve to meet new change requirements. Understanding the characteristics of changes helps testers and system designers improve software quality, and identifying important modules becomes a key issue in the process of evolution. In this context, a novel network-based approach is proposed to comprehensively investigate change distributions and the correlation between centrality measures and the scope of change propagation. First, software dependency networks are constructed at the class level. Then, the number of times classes are co-changed is mined from software repositories. From the dependency relationships and the co-change counts among classes, the scope of change propagation is calculated. Spearman rank correlation is used to analyze the correlation between centrality measures and the scope of change propagation. Three case studies on the Java open source projects FindBugs, Hibernate, and Spring are conducted to investigate the characteristics of change propagation. Experimental results show that (i) the change distribution is very uneven; (ii) PageRank, Degree, and CIRank are significantly correlated with the scope of change propagation. In particular, CIRank shows a higher correlation coefficient, which suggests it can be a more useful indicator for measuring the scope of change propagation of classes in object-oriented software systems. PMID:24790557
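The correlation step is easy to reproduce in miniature: rank a centrality measure against the measured propagation scope per class. The class-level numbers below are invented for illustration:

```python
from scipy.stats import spearmanr

# hypothetical per-class PageRank scores and measured propagation scopes
pagerank = [0.042, 0.011, 0.087, 0.019, 0.063, 0.008, 0.031]
propagation_scope = [14, 6, 22, 3, 17, 2, 9]   # classes affected per change

rho, p = spearmanr(pagerank, propagation_scope)
print(f"Spearman rho = {rho:.2f}, p = {p:.4f}")
```

Spearman's rho is the right choice here because it tests monotonic association without assuming either quantity is normally distributed.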
Vogtmann, Emily; Hua, Xing; Zhou, Liang; Wan, Yunhu; Suman, Shalabh; Zhu, Bin; Dagnall, Casey L; Hutchinson, Amy; Jones, Kristine; Hicks, Belynda D; Sinha, Rashmi; Shi, Jianxin; Abnet, Christian C
2018-05-01
Background: Few studies have prospectively evaluated the association between oral microbiota and health outcomes. Precise estimates of the intrasubject microbial metric stability will allow better study planning. Therefore, we conducted a study to evaluate the temporal variability of oral microbiota. Methods: Forty individuals provided six oral samples using the OMNIgene ORAL kit and Scope mouthwash oral rinses approximately every two months over 10 months. DNA was extracted using the QIAsymphony and the V4 region of the 16S rRNA gene was amplified and sequenced using the MiSeq. To estimate temporal variation, we calculated intraclass correlation coefficients (ICCs) for a variety of metrics and examined stability after clustering samples into distinct community types using Dirichlet multinomial models (DMMs). Results: The ICCs for the alpha diversity measures were high, including for number of observed bacterial species [0.74; 95% confidence interval (CI): 0.65-0.82 and 0.79; 95% CI: 0.75-0.94] from OMNIgene ORAL and Scope mouthwash, respectively. The ICCs for the relative abundance of the top four phyla and beta diversity matrices were lower. Three clusters provided the best model fit for the DMM from the OMNIgene ORAL samples, and the probability of remaining in a specific cluster was high (59.5%-80.7%). Conclusions: The oral microbiota appears to be stable over time for multiple metrics, but some measures, particularly relative abundance, were less stable. Impact: We used this information to calculate stability-adjusted power calculations that will inform future field study protocols and experimental analytic designs. Cancer Epidemiol Biomarkers Prev; 27(5); 594-600. ©2018 American Association for Cancer Research.
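The stability metric in this study, the intraclass correlation coefficient, can be computed with a one-way random-effects estimator, ICC(1,1). A sketch on simulated data (the subject/visit structure below mimics the design but is not study data):

```python
import numpy as np

def icc_oneway(y):
    """ICC(1,1) for y: (n_subjects, k_repeats) measurements of one metric.
    ICC = (MSB - MSW) / (MSB + (k - 1) * MSW), one-way random effects."""
    n, k = y.shape
    msb = k * ((y.mean(axis=1) - y.mean()) ** 2).sum() / (n - 1)      # between
    msw = ((y - y.mean(axis=1, keepdims=True)) ** 2).sum() / (n * (k - 1))
    return (msb - msw) / (msb + (k - 1) * msw)

rng = np.random.default_rng(2)
subject = rng.normal(3.0, 1.0, size=(40, 1))            # stable per-subject signal
visits = subject + rng.normal(0.0, 0.5, size=(40, 6))   # 6 visits of noise
print(f"ICC = {icc_oneway(visits):.2f}")
```

With these noise settings the true ICC is 1.0 / (1.0 + 0.5**2) = 0.8, in the range the abstract reports for the alpha-diversity measures.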
NASA Astrophysics Data System (ADS)
Kaiba, Tanja; Radulović, Vladimir; Žerovnik, Gašper; Snoj, Luka; Fourmentel, Damien; Barbot, Loïc; Destouches, Christophe
2018-01-01
Preliminary calculations were performed with the aim of establishing optimal experimental conditions for the measurement campaign within the collaboration between the Jožef Stefan Institute (JSI) and the Commissariat à l'Énergie Atomique et aux Énergies Alternatives (CEA Cadarache). The goal of the project is to further characterize the neutron spectrum inside the JSI TRIGA reactor core, with focus on measurement of the epithermal and fast parts of the spectrum. Measurements will be performed with fission chambers containing different fissile materials (235U, 237Np and 242Pu) covered with thermal neutron filters (Cd and Gd). The changes in the detected signal and neutron flux spectrum with and without the transmission filter were studied. Additional effort was put into evaluating the effect of the filter geometry (e.g. an opening at the top end of the filter) on the detector signal. After analysis of the scoping calculations it was concluded to position the experiment in the outer core ring, inside one of the empty fuel element positions.
The Use of Computers as a Design Tool.
1980-01-01
design programs for the technical management of complex fighter development projects. AIAA Paper No. 70-364, March 1970. 22. J. Kondo: Application of... the scope and effectiveness of their use are sometimes considered suspect, especially by managers and decision makers who must depend, to some... uncertainty and the fact that the measured and calculated data cannot be easily combined often leave the project manager or designer...
Architecture earth-sheltered buildings: Design Manual 1.4
NASA Astrophysics Data System (ADS)
1984-03-01
Design guidance is presented for use by experienced engineers and architects. The types of buildings within the scope of this manual include slab-on-grade, partially-buried (bermed) or fully-buried, and large (single-story or multistory) structures. New criteria unique to earth-sheltered design are included for the following disciplines: Planning, Landscape Design, Life-Cycle Analysis, Architectural, Structural, Mechanical (criteria include below-grade heat flux calculation procedures), and Electrical.
Jiang, Zhaoqin; Yang, Hui; Han, Xiao; Luo, Jie; Wong, Ming Wah; Lu, Yixin
2010-03-21
Primary amino acids and their derivatives were investigated as catalysts for the direct asymmetric aldol reactions between ketones and aldehydes in the presence of water, and L-tryptophan was shown to be the best catalyst. Solvent effects, substrate scope and the influence of water on the reactions were investigated. Quantum chemical calculations were performed to understand the origin of the observed stereoselectivity.
Assessment of nursing workload in adult psychiatric inpatient units: a scoping review.
Sousa, C; Seabra, P
2018-05-16
No systematic reviews on measurement tools in adult psychiatric inpatient settings exist in the literature, and thus further research is required on ways to identify approaches to calculate safe nurse staffing levels based on patients' care needs in adult psychiatric inpatient units. To identify instruments that enable an assessment of nursing workload in psychiatric settings. Method: A scoping review was conducted. Four studies were identified, with five instruments used to support the calculation of staff needs and workload. All four studies present methodological limitations. Two instruments have already been adapted to this specific context, but validation studies are lacking. The findings indicate that the tools used to evaluate nursing workload in these settings require further development, with the concomitant need for more research to clarify the definition of nursing workload as well as to identify factors with the greatest impact on nursing workload. This review highlights the need to develop tools to assess workload in psychiatric inpatient units that embrace patient-related and non-patient-related activities. The great challenge is to enable a sensitive perception of workload resulting from nurses' psychotherapeutic interventions, an important component of treatment for many patients. This article is protected by copyright. All rights reserved.
EBR-II Static Neutronic Calculations by PHISICS / MCNP6 codes
DOE Office of Scientific and Technical Information (OSTI.GOV)
Paolo Balestra; Carlo Parisi; Andrea Alfonsi
2016-02-01
The International Atomic Energy Agency (IAEA) launched a Coordinated Research Project (CRP) on the Shutdown Heat Removal Tests (SHRT) performed in the '80s at the Experimental Breeder Reactor EBR-II, USA. The scope of the CRP is to improve and validate the simulation tools for the study and design of liquid-metal-cooled fast reactors. Training the next generation of fast reactor analysts is also considered a goal of the CRP. In this framework, a static neutronic model was developed using state-of-the-art neutron transport codes, SCALE/PHISICS (deterministic solution) and MCNP6 (stochastic solution). A comparison between the two solutions is briefly illustrated in this summary.
DOE Office of Scientific and Technical Information (OSTI.GOV)
J.P. Nicot
The objective of this calculation is to estimate the quantity of fissile material that could accumulate in fractures in the rock beneath plutonium-ceramic (Pu-ceramic) and Mixed-Oxide (MOX) waste packages (WPs) as they degrade in the potential monitored geologic repository at Yucca Mountain. This calculation feeds another calculation (Ref. 31), which computes the probability of criticality in the systems described in Section 6, and ultimately a more general report on the impact of plutonium on the performance of the proposed repository (Ref. 32), both developed concurrently with this work. This calculation is done in accordance with development plan TDP-DDC-MD-000001 (Ref. 9), item 5. The original document described in item 5 has been split into two documents: this calculation and Ref. 4. The scope of the calculation is limited to very low flow rates, because they lead to the most conservative cases for Pu accumulation and, more generally, are consistent with the way the effluent from the WP (called the source term in this calculation) was calculated (Ref. 4). Ref. 4 (''In-Drift Accumulation of Fissile Material from WPs Containing Plutonium Disposition Waste Forms'') details the evolution through time (breach time is the initial time) of the chemical composition of the solution inside the WP as degradation of the fuel and other materials proceeds. This chemical solution is used as the source term in this calculation. Ref. 4 takes that same source term and reacts it with the invert; this calculation reacts it with the rock. In addition to reactions with the rock minerals (which release Si and Ca), the basic mechanisms for actinide precipitation are dilution and mixing with resident water, as explained in Section 2.1.4. No other potential mechanism, such as flow through a reducing zone, is investigated in this calculation. No attempt was made to use the effluent water from the bottom of the invert instead of the effluent water taken directly from the WP. This calculation supports disposal criticality analysis and has been prepared in accordance with AP-3.12Q, Calculations (Ref. 49). This calculation uses results from Ref. 4 on actinide accumulation in the invert and, more generally, references the cited calculation heavily. In addition to the information provided in this calculation, the reader is referred to the cited calculation for a more thorough treatment of items applying to both the invert and the fracture system, such as the choice of the thermodynamic database, the composition of J-13 well water, tuff composition, dissolution rate laws, Pu(OH)4 solubility, and details on the source term composition. The flow conditions (seepage rate, water velocity in fractures) in the drift and the fracture system beneath initially referred to the TSPA-VA, because this work was prepared before the release of the work feeding the TSPA-SR. Some new information feeding the TSPA-SR has since been included. Similarly, the soon-to-be-qualified thermodynamic database data0.ymp has not yet been released.
Reliability model of disk arrays RAID-5 with data striping
NASA Astrophysics Data System (ADS)
Rahman, P. A.; D'K Novikova Freyre Shavier, G.
2018-03-01
Within the scope of this scientific paper, a simplified reliability model of RAID-5 disk arrays (redundant arrays of inexpensive disks) is discussed alongside an advanced reliability model proposed by the authors, which takes into consideration the nonzero replacement time of a faulty disk and the different failure rates of disks when the array is in its normal state and in the degraded and rebuild states. The formula obtained by the authors for calculating the mean time to data loss (MTTDL) of RAID-5 disk arrays on the basis of the advanced model is also presented. Finally, a technique for estimating the initial reliability parameters used in the reliability model is described, and calculation examples of the mean time to data loss of RAID-5 disk arrays for different numbers of disks are given.
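For orientation, the classical simplified estimate (constant failure and repair rates, a two-state Markov argument) is the baseline such advanced models refine; it is not the authors' formula. A sketch with illustrative disk parameters:

```python
# Classical simplified RAID-5 estimate: data are lost when a second disk
# fails while the first failed disk is still being repaired/rebuilt.
#   MTTDL ~ MTTF^2 / (N * (N - 1) * MTTR)
def mttdl_raid5_hours(n_disks, mttf_h=1.0e6, mttr_h=24.0):
    return mttf_h ** 2 / (n_disks * (n_disks - 1) * mttr_h)

HOURS_PER_YEAR = 8766.0
for n in (4, 8, 16):
    years = mttdl_raid5_hours(n) / HOURS_PER_YEAR
    print(f"N={n:2d} disks: MTTDL ~ {years:,.0f} years")
```

The paper's advanced model additionally accounts for the nonzero time to replace the faulty disk and for failure rates that differ between the normal, degraded, and rebuild states.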
NASA Technical Reports Server (NTRS)
Bensalem, Saddek; Ganesh, Vijay; Lakhnech, Yassine; Munoz, Cesar; Owre, Sam; Ruess, Harald; Rushby, John; Rusu, Vlad; Saiedi, Hassen; Shankar, N.
2000-01-01
To become practical for assurance, automated formal methods must be made more scalable, automatic, and cost-effective. Such an increase in scope, scale, automation, and utility can be derived from an emphasis on a systematic separation of concerns during verification. SAL (Symbolic Analysis Laboratory) attempts to address these issues. It is a framework for combining different tools to calculate properties of concurrent systems. The heart of SAL is a language, developed in collaboration with Stanford, Berkeley, and Verimag, for specifying concurrent systems in a compositional way. Our instantiation of the SAL framework augments PVS with tools for abstraction, invariant generation, program analysis (such as slicing), theorem proving, and model checking to separate concerns as well as calculate properties (i.e., perform symbolic analysis) of concurrent systems. We describe the motivation, the language, the tools, their integration in SAL/PVS, and some preliminary experience of their use.
Cross-Section Measurements via the Activation Technique at the Cologne Clover Counting Setup
NASA Astrophysics Data System (ADS)
Heim, Felix; Mayer, Jan; Netterdon, Lars; Scholz, Philipp; Zilges, Andreas
The activation technique is a widely used method for determining cross-section values of charged-particle induced reactions at astrophysically relevant energies. Since network calculations of nucleosynthesis processes often depend on reaction rates calculated within the scope of the Hauser-Feshbach statistical model, these cross sections can be used to improve nuclear-physics input parameters such as optical-model potentials (OMP), γ-ray strength functions, and nuclear level densities. In order to extend the available experimental database, the 108Cd(α, n)111Sn reaction cross section was investigated at ten energies between 10.2 and 13.5 MeV. As this reaction at these energies is sensitive almost exclusively to the α-decay width, the results were compared to statistical model calculations using different models for the α-OMP. The irradiation as well as the subsequent γ-ray counting were performed at the Institute for Nuclear Physics of the University of Cologne using the 10 MV FN-Tandem accelerator and the Cologne Clover Counting Setup. This setup consists of two clover-type high-purity germanium (HPGe) detectors in a close face-to-face geometry covering a solid angle of almost 4π.
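The activation analysis itself reduces to one standard relation between the γ-ray counts accumulated after irradiation and the cross section. A sketch with every numerical input invented for illustration (efficiency, γ intensity, flux, target atoms, half-life, and timing are placeholders, not values from this experiment):

```python
import math

def activation_xs(counts, half_life_s, eps, i_gamma, flux, n_target,
                  t_irr, t_cool, t_count):
    """sigma = C * lambda / (eps * I_g * Phi * N_t * growth * decay * counting),
    the standard relation for a single-step activation measurement."""
    lam = math.log(2.0) / half_life_s
    growth = 1.0 - math.exp(-lam * t_irr)       # buildup during irradiation
    decay = math.exp(-lam * t_cool)             # decay before counting starts
    counting = 1.0 - math.exp(-lam * t_count)   # fraction decaying while counted
    return counts * lam / (eps * i_gamma * flux * n_target
                           * growth * decay * counting)

sigma_cm2 = activation_xs(counts=5.2e3, half_life_s=35.0 * 60.0,  # ~35 min
                          eps=0.03, i_gamma=0.30, flux=3.0e9,
                          n_target=1.0e19, t_irr=7200.0,
                          t_cool=600.0, t_count=3600.0)
print(f"sigma ~ {sigma_cm2 * 1e27:.0f} mb")
```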
DOE Office of Scientific and Technical Information (OSTI.GOV)
L. D. Cecil; L. L. Knobel; J. R. Green
2000-06-01
The purpose of this report is to describe the calculated contribution to ground water of natural, in situ produced 36Cl in the eastern Snake River Plain aquifer, and to compare these concentrations in ground water with measured concentrations near a nuclear facility in southeastern Idaho. The scope focused on isotopic and chemical analyses and associated 36Cl in situ production calculations on 25 whole-rock samples from 6 major water-bearing rock types present in the eastern Snake River Plain. The rock types investigated were basalt, rhyolite, limestone, dolomite, shale, and quartzite. Determining the contribution of in situ production to 36Cl inventories in ground water facilitated the identification of the source of this radionuclide in environmental samples. On the basis of the calculations reported here, in situ production of 36Cl was determined to be insignificant compared to concentrations measured in ground water near buried and injected nuclear waste at the INEEL. Maximum estimated 36Cl concentrations in ground water from in situ production are of the same order of magnitude as natural concentrations in meteoric water.
Extracting Loop Bounds for WCET Analysis Using the Instrumentation Point Graph
NASA Astrophysics Data System (ADS)
Betts, A.; Bernat, G.
2009-05-01
Every calculation engine proposed in the literature of Worst-Case Execution Time (WCET) analysis requires upper bounds on loop iterations. Existing mechanisms to procure this information are either error prone, because they are gathered from the end-user, or limited in scope, because automatic analyses target very specific loop structures. In this paper, we present a technique that obtains bounds completely automatically for arbitrary loop structures. In particular, we show how to employ the Instrumentation Point Graph (IPG) to parse traces of execution (generated by an instrumented program) in order to extract bounds relative to any loop-nesting level. With this technique, therefore, non-rectangular dependencies between loops can be captured, allowing more accurate WCET estimates to be calculated. We demonstrate the improvement in accuracy by comparing WCET estimates computed through our HMB framework against those computed with state-of-the-art techniques.
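The trace-parsing idea can be shown in miniature: walk a trace of instrumentation-point IDs and record, for each loop header, the maximum iteration count observed per entry of its enclosing scope, so inner-loop bounds are relative to the nesting level. The IDs and nesting map below are hypothetical, not the IPG of any real program:

```python
def loop_bounds(trace, headers, entry_of):
    """trace: sequence of instrumentation-point IDs.
    headers: set of loop-header IDs.
    entry_of: maps each header to the ID that re-enters its enclosing scope
    (and therefore resets that header's iteration counter)."""
    count = {h: 0 for h in headers}
    bound = {h: 0 for h in headers}
    for point in trace:
        if point in headers:
            count[point] += 1
            bound[point] = max(bound[point], count[point])
        for h in headers:
            if entry_of[h] == point:      # enclosing scope re-entered
                count[h] = 0
    return bound

# function entry 1; outer loop header 2 (reset at 1); inner header 3 (reset at 2)
trace = [1, 2, 3, 3, 3, 2, 3, 3, 2, 1, 2, 3]
print(loop_bounds(trace, headers={2, 3}, entry_of={2: 1, 3: 2}))   # {2: 3, 3: 3}
```

Bounds extracted per nesting level in this way capture non-rectangular loop dependencies that a single global bound would overestimate.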
Chemical reaction mechanisms in solution from brute force computational Arrhenius plots.
Kazemi, Masoud; Åqvist, Johan
2015-06-01
Decomposition of activation free energies of chemical reactions, into enthalpic and entropic components, can provide invaluable signatures of mechanistic pathways both in solution and in enzymes. Owing to the large number of degrees of freedom involved in such condensed-phase reactions, the extensive configurational sampling needed for reliable entropy estimates is still beyond the scope of quantum chemical calculations. Here we show, for the hydrolytic deamination of cytidine and dihydrocytidine in water, how direct computer simulations of the temperature dependence of free energy profiles can be used to extract very accurate thermodynamic activation parameters. The simulations are based on empirical valence bond models, and we demonstrate that the energetics obtained is insensitive to whether these are calibrated by quantum mechanical calculations or experimental data. The thermodynamic activation parameters are in remarkable agreement with experiment results and allow discrimination among alternative mechanisms, as well as rationalization of their different activation enthalpies and entropies.
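The extraction step described above amounts to a linear fit of the computed barrier against temperature: with ΔG‡(T) = ΔH‡ − TΔS‡, the intercept gives the activation enthalpy and minus the slope gives the activation entropy. A sketch on synthetic ΔG‡(T) values (not simulation results):

```python
import numpy as np

T = np.array([280.0, 290.0, 300.0, 310.0, 320.0])    # K
dG = np.array([105.6, 105.1, 104.7, 104.2, 103.8])   # activation dG, kJ/mol

slope, intercept = np.polyfit(T, dG, 1)              # dG = dH - T*dS
dH, dS = intercept, -slope                           # kJ/mol, kJ/(mol K)
print(f"dH = {dH:.1f} kJ/mol, dS = {dS * 1000.0:.0f} J/(mol K)")
```

Because the slope is small relative to the noise of free energy estimates, this is where the 'brute force' of the title comes in: each ΔG‡(T) point must be converged tightly enough for the temperature trend to be meaningful.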
A nonequilibrium model for a moderate pressure hydrogen microwave discharge plasma
NASA Technical Reports Server (NTRS)
Scott, Carl D.
1993-01-01
This document describes a simple nonequilibrium energy exchange and chemical reaction model to be used in a computational fluid dynamics calculation for a hydrogen plasma excited by microwaves. The model takes into account the exchange between the electrons and excited states of molecular and atomic hydrogen. Specifically, electron-translation, electron-vibration, translation-vibration, ionization, and dissociation are included. The model assumes three temperatures, translational/rotational, vibrational, and electron, each describing a Boltzmann distribution for its respective energy mode. The energy from the microwave source is coupled to the energy equation via a source term that depends on an effective electric field which must be calculated outside the present model. This electric field must be found by coupling the results of the fluid dynamics and kinetics solution with a solution to Maxwell's equations that includes the effects of the plasma permittivity. The solution to Maxwell's equations is not within the scope of this present paper.
ELF: An Extended-Lagrangian Free Energy Calculation Module for Multiple Molecular Dynamics Engines.
Chen, Haochuan; Fu, Haohao; Shao, Xueguang; Chipot, Christophe; Cai, Wensheng
2018-06-18
Extended adaptive biasing force (eABF), a collective variable (CV)-based importance-sampling algorithm, has proven to be very robust and efficient compared with the original ABF algorithm. Its implementation in Colvars, a software addition to molecular dynamics (MD) engines, is, however, currently limited to NAMD and LAMMPS. To broaden the scope of eABF and its variants, like its generalized form (egABF), and make them available to other MD engines, e.g., GROMACS, AMBER, CP2K, and openMM, we present a PLUMED-based implementation, called extended-Lagrangian free energy calculation (ELF). This implementation can be used as a stand-alone gradient estimator for other CV-based sampling algorithms, such as temperature-accelerated MD (TAMD) and extended-Lagrangian metadynamics (MtD). ELF provides the end user with a convenient framework to help select the best-suited importance-sampling algorithm for a given application without any commitment to a particular MD engine.
Meta-analysis and Harmonization of Life Cycle Assessment Studies for Algae Biofuels.
Tu, Qingshi; Eckelman, Matthew; Zimmerman, Julie
2017-09-05
Algae biodiesel (BioD) and renewable diesel (RD) have been recognized as potential solutions for mitigating fossil-fuel consumption and the associated environmental issues. Life cycle assessment (LCA) has been used by many researchers to evaluate the potential environmental impacts of these algae-derived fuels, yielding a wide range of results and, in some cases, even differing on whether these fuels are preferable to petroleum-derived fuels. This meta-analysis reviews the methodological preferences and the results for energy consumption, greenhouse gas emissions, and water consumption in 54 LCA studies that considered algae BioD and RD. The significant variation in reported results can be attributed primarily to differences in scope, assumptions, and data sources. To minimize the variation in life cycle inventory calculations, a harmonized inventory dataset including both nominal and uncertainty data is calculated for each stage of the algae-derived fuel life cycle.
Remanent dose rates around the collimators of the LHC beam cleaning insertions.
Brugger, M; Roesler, S
2005-01-01
The LHC will require an extremely powerful and unprecedented collimation system. As approximately 30% of the LHC beam is lost in the cleaning insertions, these will become some of the most radioactive locations around the entire LHC ring. Thus, remanent dose rates to be expected during later repair or maintenance interventions must be considered in the design phase itself. As a consequence, the beam cleaning insertions form a unique test bed for a recently developed approach to calculate remanent dose rates. A set of simulations, different in complexity, is used in order to evaluate methods for the estimation of remanent dose rates. The scope, as well as the restrictions, of the omega-factor method are shown and compared with the explicit simulation approach. The latter is then used to calculate remanent dose rates in the beam cleaning insertions. Furthermore, a detailed example for maintenance dose planning is given.
Gunko, N V
2015-12-01
Evaluation, from the perspective of radiation biology, of the efficacy of managed population resettlement from the zone of obligatory (compulsory) resettlement as a civil protection measure after the Chernobyl NPP accident. The informational background of the study comprised the legislative and regulatory documents governing managed resettlement from the radiologically contaminated territories of Ukraine, together with data of the Ukrainian State Service of Statistics on the timing and scope of resettlement from contaminated settlements. Averted doses due to resettlement were calculated from retrospective and predicted radiation doses to the population of settlements radiologically contaminated after the Chernobyl disaster, summarized for the period 1986-1997 and projected up to 2055. A battery of basic empirical evidence review methods was applied within a systemic, biomedical approach. Resettlement from the zone of obligatory (compulsory) resettlement (hereafter Zone 2), undertaken to terminate radiation exposure, was found to be scientifically substantiated and expedient as a civil protection measure against emergency ionizing radiation. Assessment of the managed resettlement from the "dose-effect" perspective and under the "benefit/harm" principle is hampered by the absence of data on individual radiation doses to the migrants. Resettlement in 1990 and 1991 was the most effective in terms of averted lifetime dose. Owing to resettlement, the averted lifetime dose to the most vulnerable group of Chernobyl disaster survivors, children aged 0 years, ranged from 11.2 to 28.8 mSv (calculated for the Perejizdiv village council of Zhytomyr province). Since 2000 there has been almost no resettlement, and it was never accomplished to the scheduled extent. The delay and incompleteness of resettlement diminished the efficacy of this measure within the framework of radiological protection of the population.
1982-04-23
explained the limited scope of his study and in doing so shed some light on the current thought concerning Schofield’s later career: Viewing the Civil War...war became the preoccupation of military forces in times of peace. In light of these developments, officership became universally recognized as a...calculated method of combat leadership reflected his urbane, sophisticated personality. While he did not achieve the spectacular results that commanders
Dynamic stability of maglev systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cai, Y.; Chen, S.S.; Mulcahy, T.M.
1994-05-01
Because dynamic instabilities are not acceptable in any commercial maglev system, it is important to consider dynamic instability in the development of all maglev systems. This study considers the stability of maglev systems based on experimental data, scoping calculations, and simple mathematical models. Divergence and flutter are obtained for coupled vibration of a three-degree-of-freedom maglev vehicle on a guideway consisting of double L-shaped aluminum segments. The theory and analysis developed in this study provide basic stability characteristics and identify future research needs for maglev systems.
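A minimal sketch of how divergence and flutter are diagnosed from such a linearized multi-degree-of-freedom model (the matrices below are invented placeholders, not the report's vehicle parameters): assemble the first-order state matrix and inspect its eigenvalues; a real eigenvalue crossing zero signals divergence, while a complex-conjugate pair crossing into the right half-plane signals flutter.

    import numpy as np

    # Hypothetical linearized 3-DOF maglev model: M q'' + C q' + K q = 0.
    def stability_eigenvalues(M, C, K):
        n = M.shape[0]
        Minv = np.linalg.inv(M)
        # First-order state matrix for x = [q, q']:
        A = np.block([[np.zeros((n, n)), np.eye(n)],
                      [-Minv @ K, -Minv @ C]])
        return np.linalg.eigvals(A)

    # Illustrative numbers only (not from the report):
    M = np.diag([1.0, 1.0, 0.5])
    C = 0.02 * np.eye(3)
    K = np.array([[ 4.0, -1.2,  0.0],
                  [-1.2,  3.0, -0.8],
                  [ 0.0, -0.8,  2.0]])
    lam = stability_eigenvalues(M, C, K)
    print("unstable" if np.any(lam.real > 0) else "stable", np.round(lam, 3))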
The development and present status of the SOP model of associative learning.
Vogel, Edgar H; Ponce, Fernando P; Wagner, Allan R
2018-05-01
The Sometimes Opponent Processes (SOP) model in its original form was especially calculated to address how an expected unconditioned stimulus (US) and conditioned stimulus (CS) are rendered less effective than their novel counterparts in Pavlovian conditioning. Its several elaborations, embracing the same essential notion, have extended the scope of the model to integrate a much greater number of phenomena of Pavlovian conditioning. Here, we trace the development of the model and add further thoughts about its extension and refinement.
Scoping Study on DRDC Toronto Future Research Regarding Naval Mine Countermeasures
2012-06-01
personnel participating in the exercise also contributed additional information about non-observed deficiencies, in the areas of: g) effects of...remotely piloted underwater vehicles; d) underwater communications; e) C2 communications; and f) software for planning and risk calculation. In addition...additional information about the non-observed deficiencies in the following areas: g) the effects of underwater explosions on divers; h) the
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kirkham, Harold
2012-03-31
NERC has proposed a standard for specifying clearances between vegetation and power lines. The purpose of the rule is to reduce the probability of flashover to a calculably low level. This report was commissioned by FERC’s Office of Electrical Reliability. The scope of the study was an analysis of the mathematics, and of the documentation of the technical justification, behind the application of the Gallet equation and the assumptions used in the technical reference paper.
Aerodynamic Drag Scoping Work.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Voskuilen, Tyler; Erickson, Lindsay Crowl; Knaus, Robert C.
This memo summarizes the aerodynamic drag scoping work done for Goodyear in early FY18. The work is to evaluate the feasibility of using Sierra/Low-Mach (Fuego) for drag predictions of rolling tires, particularly focused on the effects of tire features such as lettering, sidewall geometry, rim geometry, and interaction with the vehicle body. The work is broken into two parts. Part 1 consisted of investigation of a canonical validation problem (turbulent flow over a cylinder) using existing tools with different meshes and turbulence models. Part 2 involved calculating drag differences over plate geometries with simple features (ridges and grooves), defined by Goodyear, of approximately the size of interest for a tire. The results of Part 1 show the level of noise to be expected in a drag calculation and highlight the sensitivity of absolute predictions to model parameters such as mesh size and turbulence model. There is 20-30% noise in the experimental measurements on the canonical cylinder problem, and a similar level of variation between different meshes and turbulence models. Part 2 shows that there is a notable difference in the predicted drag on the sample plate geometries; however, the computational cost of extending the LES model to a full tire would be significant. This cost could be reduced by implementing more sophisticated wall and turbulence models (e.g., detached eddy simulation, DES) and by focusing the mesh refinement on feature subsets, with the goal of comparing configurations rather than absolute predictivity for the whole tire.
SF-36 total score as a single measure of health-related quality of life: Scoping review
Lins, Liliane; Carvalho, Fernando Martins
2016-01-01
According to the developers of the 36-Item Short Form Health Survey questionnaire, a global measure of health-related quality of life such as the “SF-36 Total/Global/Overall Score” cannot be generated from the questionnaire. However, studies keep on reporting such a measure. This study aimed to evaluate the frequency and to describe some characteristics of articles reporting the SF-36 Total/Global/Overall Score in the scientific literature. The Preferred Reporting Items for Systematic Reviews and Meta-Analyses method was adapted to a scoping review. We performed searches in the PubMed, Web of Science, SCOPUS, BVS, and Cochrane Library databases for articles using such scores. We found 172 articles published between 1997 and 2015; 110 (64.0%) of them were published from 2010 onwards, and 30.0% appeared in journals with an Impact Factor of 3.00 or greater. Overall, 129 (75.0%) of the 172 studies did not specify the method for calculating the “SF-36 Total Score”; 13 studies did not specify their methods but referred to the SF-36 developers’ studies or others; and 30 articles used different strategies for calculating such a score, the most frequent being arithmetic averaging of the eight SF-36 domain scores. We concluded that the “SF-36 Total/Global/Overall Score” has been increasingly reported in the scientific literature. Researchers should be aware of this procedure and of its possible impacts upon human health. PMID:27757230
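For concreteness, the most frequent strategy found in the review, arithmetic averaging of the eight domain scores, amounts to the following sketch (the domain values are invented, and the questionnaire developers do not endorse such a total score):

    # Plain arithmetic mean of the eight 0-100 SF-36 domain scores.
    domains = {
        "physical_functioning": 85.0, "role_physical": 75.0,
        "bodily_pain": 70.0, "general_health": 60.0,
        "vitality": 65.0, "social_functioning": 80.0,
        "role_emotional": 90.0, "mental_health": 72.0,
    }
    total = sum(domains.values()) / len(domains)
    print(f"SF-36 'Total Score' (arithmetic mean): {total:.1f}")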
DIDEM - An integrated model for comparative health damage costs calculation of air pollution
NASA Astrophysics Data System (ADS)
Ravina, Marco; Panepinto, Deborah; Zanetti, Maria Chiara
2018-01-01
Air pollution represents a continuous hazard to human health. Administrations, companies, and the public need efficient indicators of the possible effects of a change in decision, strategy, or habit. The monetary quantification of the health effects of air pollution through the definition of external costs is increasingly recognized as a useful indicator to support decisions and information at all levels. The development of modelling tools for the calculation of external costs can support analysts in producing consistent and comparable assessments. In this paper, the DIATI Dispersion and Externalities Model (DIDEM) is presented. The DIDEM model calculates the delta-external costs of air pollution by comparing two alternative emission scenarios. This tool integrates CALPUFF's advanced dispersion modelling with the latest WHO recommendations on concentration-response functions. The model is based on the impact pathway method. It was designed to work with a fine spatial resolution and a local or national geographic scope. The modular structure allows users to input their own data sets. The DIDEM model was tested on a real case study, a comparative analysis of the district heating system in Turin, Italy. Additional advantages and drawbacks of the tool are discussed in the paper, and a comparison with other existing models worldwide is reported.
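A minimal sketch of one link of the impact-pathway chain such a model automates (the function name and all numbers are illustrative, not the DIDEM implementation): a concentration change at a receptor is converted into health cases via a concentration-response function (CRF), then into money via a unit cost.

    # delta-concentration -> exposed population -> cases -> monetary cost
    def delta_external_cost(delta_conc, population, crf_slope, unit_cost):
        """delta_conc: concentration change (ug/m3) in one receptor cell;
        crf_slope: extra cases per person per (ug/m3), from WHO-type CRFs;
        unit_cost: monetary value per case (EUR)."""
        cases = delta_conc * population * crf_slope
        return cases * unit_cost

    # One receptor cell, scenario B minus scenario A (illustrative values):
    print(delta_external_cost(delta_conc=0.4, population=12000,
                              crf_slope=6e-5, unit_cost=60000.0))  # EUR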
Stevensson, Baltzar; Edén, Mattias
2011-03-28
We introduce a novel interpolation strategy, based on nonequispaced fast transforms involving spherical harmonics or Wigner functions, for efficient calculations of powder spectra in (nuclear) magnetic resonance spectroscopy. The fast Wigner transform (FWT) interpolation minimizes the time-consuming calculation stages by sampling over a small number of Gaussian spherical quadrature (GSQ) orientations, which are then exploited to determine the spectral frequencies and amplitudes of a 10-70 times larger GSQ set. This results in almost the same orientational averaging accuracy as if the expanded grid were utilized explicitly, in an order of magnitude slower computation. FWT interpolation is applicable to spectral simulations involving any time-independent or time-dependent and noncommuting spin Hamiltonian. We further show that merging FWT interpolation with the well-established ASG procedure of Alderman, Solum, and Grant [J. Chem. Phys. 84, 3717 (1986)] speeds up simulations by 2-7 times relative to using ASG alone (besides greatly extending its scope of application), and by one to two orders of magnitude compared with direct orientational averaging in the absence of interpolation. Demonstrations of efficient spectral simulations are given for several magic-angle spinning scenarios in NMR, encompassing half-integer quadrupolar spins and homonuclear dipolar-coupled ¹³C systems.
Ziegelwanger, Harald; Majdak, Piotr; Kreuzer, Wolfgang
2015-01-01
Head-related transfer functions (HRTFs) can be numerically calculated by applying the boundary element method to the geometry of a listener’s head and pinnae. The calculation results are determined by geometrical, numerical, and acoustical parameters, such as the microphone used in acoustic measurements. The scope of this study was to estimate the requirements on the size and position of the microphone model, and on the discretization of the boundary geometry as a triangular polygon mesh, for accurate sound localization. The evaluation involved the analysis of localization errors predicted by a sagittal-plane localization model, the comparison of equivalent head radii estimated by a time-of-arrival model, and the analysis of actual localization errors obtained in a sound-localization experiment. While the average edge length (AEL) of the mesh had a negligible effect on localization performance in the lateral dimension, the localization performance in sagittal planes degraded for larger AELs, with the geometrical error as the dominant factor. A microphone position at an arbitrary point at the entrance of the ear canal, a microphone size of 1 mm radius, and a mesh with 1 mm AEL yielded a localization performance similar to or better than that observed with acoustically measured HRTFs. PMID:26233020
Favazza, Christopher P; Fetterly, Kenneth A; Hangiandreou, Nicholas J; Leng, Shuai; Schueler, Beth A
2015-01-01
Evaluation of flat-panel angiography equipment through conventional image quality metrics is limited by the scope of standard spatial-domain metrics, such as contrast-to-noise ratio and spatial resolution, or by restricted access to the data needed to calculate Fourier-domain measurements, such as the modulation transfer function, noise power spectrum, and detective quantum efficiency. Observer models have been shown to be capable of overcoming these limitations and can comprehensively evaluate medical imaging systems. We present a spatial-domain channelized Hotelling observer model to calculate the detectability index (DI) of disks of different sizes and to compare the performance of different imaging conditions and angiography systems. When appropriate, changes in DIs were compared with expectations based on the classical Rose model of signal detection to assess the linearity of the model with quantum signal-to-noise ratio (SNR) theory. For these experiments, the estimated uncertainty of the DIs was less than 3%, allowing precise comparison of imaging systems and conditions. For most experimental variables, DI changes were linear with expectations based on quantum SNR theory. DIs calculated for the smallest objects demonstrated nonlinearity with quantum SNR theory due to system blur. Two angiography systems with different detector element sizes were shown to perform similarly across the majority of the detection tasks.
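A minimal channelized-Hotelling sketch (illustrative, not the authors' implementation): image data from signal-present and signal-absent conditions are projected onto a small set of channels, and the DI follows from the channelized mean difference and covariance.

    import numpy as np

    def detectability_index(g_signal, g_noise, T):
        """g_signal, g_noise: (num_images, num_pixels); T: (num_pixels, num_channels)."""
        v_s, v_n = g_signal @ T, g_noise @ T          # channel outputs
        dv = v_s.mean(0) - v_n.mean(0)                # mean channelized signal
        K = 0.5 * (np.cov(v_s, rowvar=False) + np.cov(v_n, rowvar=False))
        return float(np.sqrt(dv @ np.linalg.solve(K, dv)))  # d' = sqrt(dv^T K^-1 dv)

    # Placeholder channels and data; a weak uniform "disk" signal on noise:
    rng = np.random.default_rng(0)
    T = rng.standard_normal((256, 10))
    g_n = rng.standard_normal((500, 256))
    g_s = g_n + 0.15
    print(detectability_index(g_s, g_n, T))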
NASA Astrophysics Data System (ADS)
Nowak, Bernard; Życzkowski, Piotr; Łuczak, Rafał
2017-03-01
The authors of this article address the modeling of the thermodynamic and thermokinetic properties (parameters) of refrigerants. Knowledge of these parameters is essential to design refrigeration equipment, to perform energy efficiency analyses, and to compare the efficiency of air refrigerators using different refrigerants. One of the refrigerants used in mine air compression refrigerators is R407C. For this refrigerant, 23 dependencies were developed, determining its thermodynamic and thermokinetic parameters in the states of saturated liquid, dry saturated vapour, superheated vapour, and subcooled liquid, and in the two-phase region. The resulting formulas are presented in Tables 2, 5, 8, 10, and 12, respectively. The scope of application of these formulas is wider than the range of states encountered by this refrigerant during the normal operation of mine refrigeration equipment. The article ends with a statistical verification of the developed dependencies. For this purpose, correlation coefficients and coefficients of determination were calculated for each model, as well as absolute and relative deviations between the values given by the program REFPROP 7 (Lemmon et al., 2002) and the calculated ones. The results of these calculations are contained in Tables 14 and 15.
NASA Astrophysics Data System (ADS)
Mosunova, N. A.
2018-05-01
The article describes the basic models included in the EUCLID/V1 integrated code intended for safety analysis of liquid-metal (sodium, lead, and lead-bismuth) cooled fast reactors using fuel rods with a gas gap and pelletized dioxide, mixed oxide, or mixed nitride uranium-plutonium fuel, under normal operation, anticipated operational occurrences, and accident conditions, by carrying out interconnected thermal-hydraulic, neutronics, and thermal-mechanical calculations. Information about the Russian and foreign analogs of the EUCLID/V1 integrated code is given. The modeled objects, the equation systems in differential form solved in each module of the EUCLID/V1 integrated code (the thermal-hydraulic, neutronics, and fuel rod analysis modules, and the burnup and decay heat calculation modules), the main calculated quantities, and the limitations on application of the code are presented. The article also gives data on the scope of functions performed by the integrated code's thermal-hydraulic module, with which both one- and two-phase processes occurring in the coolant can be described. It is shown that, owing to the availability of the fuel rod analysis module in the integrated code, it becomes possible to estimate the performance of fuel rods in different regimes of reactor operation. It is also shown that the models implemented in the code for calculating neutron-physical processes make it possible to take into account the neutron field distribution over the fuel assembly cross section as well as other features important for the safety assessment of fast reactors.
Abrahamsson, Sara; McQuilken, Molly; Mehta, Shalin B.; Verma, Amitabh; Larsch, Johannes; Ilic, Rob; Heintzmann, Rainer; Bargmann, Cornelia I.; Gladfelter, Amy S.; Oldenbourg, Rudolf
2015-01-01
We have developed an imaging system for 3D time-lapse polarization microscopy of living biological samples. Polarization imaging reveals the position, alignment and orientation of submicroscopic features in label-free as well as fluorescently labeled specimens. Optical anisotropies are calculated from a series of images where the sample is illuminated by light of different polarization states. Due to the number of images necessary to collect both multiple polarization states and multiple focal planes, 3D polarization imaging is most often prohibitively slow. Our MF-PolScope system employs multifocus optics to form an instantaneous 3D image of up to 25 simultaneous focal-planes. We describe this optical system and show examples of 3D multi-focus polarization imaging of biological samples, including a protein assembly study in budding yeast cells. PMID:25837112
Clustering on Magnesium Surfaces - Formation and Diffusion Energies.
Chu, Haijian; Huang, Hanchen; Wang, Jian
2017-07-12
The formation and diffusion energies of atomic clusters on Mg surfaces determine the surface roughness and the formation of faulted structures, which in turn affect the mechanical deformation of Mg. This paper reports first-principles density functional theory (DFT) calculations of atomic clustering on the low-energy surfaces {0001} and [Formula: see text]. In parallel, molecular statics calculations serve to test the validity of two interatomic potentials and to extend the scope of the DFT studies. On a {0001} surface, a compact cluster consisting of fewer than three atoms energetically prefers face-centered-cubic stacking, serving as a nucleus of a stacking fault. On a [Formula: see text] surface, clusters of any size always prefer hexagonal-close-packed stacking. Adatom diffusion on the [Formula: see text] surface is highly anisotropic, while it is isotropic on the (0001) surface. Three-dimensional Ehrlich-Schwoebel barriers converge once the step height reaches three atomic layers. Adatom diffusion along steps proceeds via a hopping mechanism, and diffusion down steps via an exchange mechanism.
Doubleday, Charles; Armas, Randy; Walker, Dana; Cosgriff, Christopher V; Greer, Edyta M
2017-10-09
Multidimensional tunneling calculations are carried out for 13 reactions to test the scope of heavy-atom tunneling in organic chemistry and to check the accuracy of one-dimensional tunneling models. The reactions include pericyclic, cycloaromatization, radical cyclization and ring-opening, and SN2 reactions. When compared at the temperatures that give the same effective rate constant of 3×10⁻⁵ s⁻¹, tunneling accounts for 25-95% of the rate in 8 of the 13 reactions. Values of transmission coefficients predicted by Bell's formula, κ_Bell, agree well with multidimensional tunneling (canonical variational transition state theory with small-curvature tunneling), κ_SCT. Mean unsigned deviations of κ_Bell vs. κ_SCT are 0.08, 0.04, and 0.02 at 250, 300, and 400 K, respectively. This suggests that κ_Bell is a useful first choice for predicting transmission coefficients in heavy-atom tunneling. © 2017 Wiley-VCH Verlag GmbH & Co. KGaA, Weinheim.
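Bell's formula referenced here, in its simplest truncated-parabola form, estimates the transmission coefficient from just the barrier's imaginary frequency ν‡ and the temperature (a sketch of the standard expression, which diverges as u approaches 2π):

    \kappa_\mathrm{Bell} \approx \frac{u/2}{\sin(u/2)}, \qquad u = \frac{h\nu^{\ddagger}}{k_B T} .

Expanding for small u recovers the Wigner correction 1 + u²/24, marking the high-temperature regime where such one-dimensional estimates are expected to track the multidimensional κ_SCT most closely.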
Variationally Optimized Free-Energy Flooding for Rate Calculation.
McCarty, James; Valsson, Omar; Tiwary, Pratyush; Parrinello, Michele
2015-08-14
We propose a new method to obtain kinetic properties of infrequent events from molecular dynamics simulation. The procedure employs a recently introduced variational approach [Valsson and Parrinello, Phys. Rev. Lett. 113, 090601 (2014)] to construct a bias potential as a function of several collective variables that is designed to flood the associated free energy surface up to a predefined level. The resulting bias potential effectively accelerates transitions between metastable free energy minima while ensuring bias-free transition states, thus allowing accurate kinetic rates to be obtained. We test the method on a few illustrative systems for which we obtain an order of magnitude improvement in efficiency relative to previous approaches and several orders of magnitude relative to unbiased molecular dynamics. We expect an even larger improvement in more complex systems. This and the ability of the variational approach to deal efficiently with a large number of collective variables will greatly enhance the scope of these calculations. This work is a vindication of the potential that the variational principle has if applied in innovative ways.
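The rate recovery underlying such flooding approaches can be stated in one line (standard hyperdynamics-style rescaling, sketched here in generic notation, not this paper's exact derivation): provided the bias V(s) vanishes in the transition-state regions, the physical escape time is recovered from the biased trajectory by

    t_\mathrm{phys} = \sum_i \Delta t \, e^{\beta V(s(t_i))}, \qquad \beta = 1/k_B T ,

so flooding the free energy surface only up to a predefined level below the barriers, as described above, is what keeps the transition states bias-free and the reweighted rates accurate.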
Sun, Xiang; Li, Xinyao; Song, Song; Zhu, Yuchao; Liang, Yu-Feng; Jiao, Ning
2015-05-13
An efficient Mn-catalyzed aerobic oxidative hydroxyazidation of olefins for the synthesis of β-azido alcohols has been developed. The aerobic oxidative generation of the azido radical, employing air as the terminal oxidant, is disclosed as the key process of this transformation. The reaction is distinguished by its broad substrate scope, inexpensive Mn catalyst, high efficiency, easy operation under air, and mild conditions at room temperature. This chemistry provides a novel approach to high-value-added β-azido alcohols, which are useful precursors of aziridines, β-amino alcohols, and other important N- and O-containing heterocyclic compounds. It also provides an unexpected approach to azido-substituted cyclic peroxy alcohol esters. A DFT calculation indicates that the Mn catalyst plays key dual roles, as an efficient catalyst for the generation of the azido radical and as a stabilizer of the peroxyl radical intermediate. Further calculations rationalize the proposed mechanism governing C-C bond cleavage versus formation of β-azido alcohols.
Higher curvature self-interaction corrections to Hawking radiation
NASA Astrophysics Data System (ADS)
Fairoos, C.; Sarkar, Sudipta; Yogendran, K. P.
2017-07-01
The purely thermal nature of Hawking radiation from evaporating black holes leads to the information loss paradox. A possible route to its resolution would be if (enough) correlations were shown to be present in the radiation emitted from evaporating black holes. A reanalysis of Hawking's derivation including the effects of self-interactions in general relativity shows that the emitted radiation does deviate from pure thermality; however, no correlations exist between successively emitted Hawking quanta. We extend the calculations to Einstein-Gauss-Bonnet gravity and investigate whether higher curvature corrections to the action lead to new correlations in the Hawking spectra. The effective trajectory of a massless shell is determined by solving the constraint equations, and the semiclassical tunneling probability is calculated. As in the case of general relativity, the radiation is no longer thermal, and there is no correlation between successive emissions. The absence of any extra correlations in the emitted radiation even in Gauss-Bonnet gravity suggests that the resolution of the paradox is beyond the scope of semiclassical gravity.
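The correlation test invoked here can be made explicit in the general-relativity case (standard Parikh-Wilczek-style notation, units G = c = ħ = k_B = 1; the Gauss-Bonnet case modifies the entropy but not the structure of the argument). The self-interaction-corrected emission probability of a quantum of energy ω from a hole of mass M is

    \Gamma(\omega) \sim e^{\Delta S_{BH}} = \exp\!\left[-8\pi\omega M\left(1 - \frac{\omega}{2M}\right)\right] ,

which is non-thermal, and the statistical correlation between two successive quanta,

    C(\omega_1, \omega_2) = \ln\Gamma(\omega_1+\omega_2) - \ln\Gamma(\omega_1) - \ln\Gamma(\omega_2\,|\,\omega_1) ,

vanishes identically once the second emission is taken from the reduced mass M - ω₁; this is the "no correlations" statement the abstract extends to Gauss-Bonnet gravity.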
Mission and system optimization of nuclear electric propulsion vehicles for lunar and Mars missions
NASA Technical Reports Server (NTRS)
Gilland, James H.
1991-01-01
The detailed mission and system optimization of low-thrust electric propulsion missions is a complex, iterative process involving interaction between orbital mechanics and system performance. Through the use of appropriate approximations, initial system optimization and analysis can be performed for a range of missions. The intent of these calculations is to provide system and mission designers with simple methods to assess system design without requiring access to, or detailed knowledge of, numerical calculus-of-variations optimization codes and methods. Approximations for the mission/system optimization of Earth orbital transfer and Mars missions have been derived. The analyses include the variation of thruster efficiency with specific impulse. Optimum specific impulse, payload fraction, and power/payload ratios are calculated. The accuracy of these methods is tested and found to be reasonable for initial scoping studies. Results of optimization for Space Exploration Initiative lunar cargo and Mars missions are presented for a range of power system and thruster options.
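The flavor of such approximations can be conveyed by a classic Stuhlinger-type sizing relation (a sketch in generic notation, not the paper's exact derivation). With power-plant specific mass α, thrust time t, and thruster efficiency η, define the characteristic velocity v_ch = \sqrt{2\eta t/\alpha}; the payload fraction then follows from the rocket equation plus the power-plant mass penalty:

    \frac{m_{pl}}{m_0} = e^{-\Delta V/c} - \frac{c^2}{v_{ch}^2}\left(1 - e^{-\Delta V/c}\right), \qquad c = g\, I_{sp} ,

so for a given ΔV the exhaust velocity c (hence I_sp) has an interior optimum: too low wastes propellant, too high makes the power plant dominate the mass budget. Letting η itself vary with I_sp, as in the analyses above, shifts this optimum.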
Dynamic stability of maglev systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cai, Y.; Chen, S.S.; Mulcahy, T.M.
1992-04-01
Because dynamic instability is not acceptable for any commercial maglev systems, it is important to consider this phenomenon in the development of all maglev systems. This study considers the stability of maglev systems based on experimental data, scoping calculations, and simple mathematical models. Divergence and flutter are obtained for coupled vibration of a three-degree-of-freedom maglev vehicle on a guideway consisting of double L-shaped aluminum segments attached to a rotating wheel. The theory and analysis developed in this study identifies basic stability characteristics and future research needs of maglev systems.
Space-Time Dependent Transport, Activation, and Dose Rates for Radioactivated Fluids.
NASA Astrophysics Data System (ADS)
Gavazza, Sergio
Two methods are developed to calculate the space- and time-dependent mass transport of radionuclides, their production and decay, and the associated dose rates generated by radioactivated fluids flowing through pipes. The work couples space- and time-dependent phenomena that are treated as only space- or time-dependent in the open literature. The transport and activation methodology (TAM) is used to numerically calculate the space- and time-dependent transport and activation of radionuclides in fluids flowing through pipes exposed to radiation fields, and the volumetric radioactive sources created by radionuclide motions. The computer program Radionuclide Activation and Transport in Pipe (RNATPA1) performs the numerical calculations required in TAM. The gamma ray dose methodology (GAM) is used to numerically calculate space- and time-dependent gamma ray dose equivalent rates from the volumetric radioactive sources determined by TAM. The computer program Gamma Ray Dose Equivalent Rate (GRDOSER) performs the numerical calculations required in GAM. The scope of conditions considered by TAM and GAM herein includes (a) laminar flow in straight pipe, (b) recirculating flow schemes, (c) time-independent fluid velocity distributions, (d) space-dependent monoenergetic neutron flux distributions, (e) the space- and time-dependent activation of a single parent nuclide and the transport and decay of a single daughter radionuclide, and (f) the assessment of space- and time-dependent gamma ray dose rates, outside the pipe, generated by the space- and time-dependent source term distributions inside it. The methodologies, however, can be readily extended to all situations of interest for the phenomena addressed in this dissertation. Results obtained with the described calculational procedures are compared with analytical expressions. The physics of the problems addressed by the new technique, and its increased accuracy relative to non-space- and time-dependent methods, are presented, and the value of the methods is discussed. It has been demonstrated that TAM and GAM can be used to enhance understanding of the space- and time-dependent mass transport of radionuclides, their production and decay, and the associated dose rates for radioactivated fluids flowing through pipes.
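The coupled problem TAM solves can be indicated schematically (notation illustrative, neglecting diffusion): for a daughter radionuclide of concentration N(r, z, t) carried by laminar flow through an irradiated pipe section,

    \frac{\partial N}{\partial t} + v_z(r)\,\frac{\partial N}{\partial z} = \sigma_{act}\,\phi(z)\,N_p - \lambda N, \qquad v_z(r) = 2\bar{v}\left(1 - \frac{r^2}{R^2}\right) ,

where φ(z) is the space-dependent monoenergetic neutron flux, N_p the parent nuclide concentration, σ_act the activation cross section, and λ the daughter decay constant. GAM then integrates the resulting volumetric source N(r, z, t) over the pipe to obtain the external gamma ray dose equivalent rates.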
NASA Astrophysics Data System (ADS)
Fekete, Tamás
2018-05-01
Structural integrity calculations play a crucial role in designing large-scale pressure vessels. Used in the electric power generation industry, these vessels undergo extensive safety analyses and certification procedures before being deemed fit for long-term operation. The calculations are nowadays directed and supported by international standards and guides based on state-of-the-art results of applied research and technical development. However, their ability to predict a vessel's behavior under accidental circumstances after long-term operation is largely limited by the strong dependence of the analysis methodology on empirical models that are correlated to the behavior of structural materials and their changes during material aging. Recently, a new scientific-engineering paradigm, structural integrity, has been developing; it is essentially a synergistic collaboration among a number of scientific and engineering disciplines, modeling, experiments, and numerics. Although the application of the structural integrity paradigm has contributed greatly to improving the accuracy of safety evaluations of large-scale pressure vessels, the predictive power of the analysis methodology has not yet improved significantly. This is because existing structural integrity calculation methodologies are based on the widespread and commonly accepted 'traditional' engineering thermal stress approach, which rests on a weakly coupled model of thermomechanics and fracture mechanics. Recently, research has been initiated at MTA EK with the aim of reviewing and evaluating current methodologies and models applied in structural integrity calculations, including their scope of validity. The research intends to reach a better understanding of the physical problems inherently present in the pool of structural integrity problems of reactor pressure vessels, and ultimately to find a theoretical framework that could serve as a well-grounded foundation for a new modeling framework for structural integrity. This paper presents the first findings of the research project.
NASA Astrophysics Data System (ADS)
Maspero, Matteo; van den Berg, Cornelis A. T.; Landry, Guillaume; Belka, Claus; Parodi, Katia; Seevinck, Peter R.; Raaymakers, Bas W.; Kurz, Christopher
2017-12-01
A magnetic resonance (MR)-only radiotherapy workflow can reduce cost, radiation exposure, and the uncertainties introduced by CT-MRI registration. A crucial prerequisite is generating the so-called pseudo-CT (pCT) images for accurate dose calculation and planning. Many pCT generation methods have been proposed in the scope of photon radiotherapy. This work aims at verifying for the first time whether a commercially available photon-oriented pCT generation method can be employed for accurate intensity-modulated proton therapy (IMPT) dose calculation. A retrospective study was conducted on ten prostate cancer patients. For pCT generation from MR images, a commercial solution for creating bulk-assigned pCTs, called MR for Attenuation Correction (MRCAT), was employed. The assigned pseudo-Hounsfield Unit (HU) values were adapted to yield increased agreement with the reference CT in terms of proton range. Internal air cavities were copied from the CT to minimise inter-scan differences. CT- and MRCAT-based dose calculations for opposing-beam IMPT plans were compared by gamma analysis and by evaluation of clinically relevant target and organ-at-risk dose volume histogram (DVH) parameters. The proton range in beam’s eye view (BEV) was compared using single field uniform dose (SFUD) plans. On average, a (2%, 2 mm) gamma pass rate of 98.4% was obtained using a 10% dose threshold after adaptation of the pseudo-HU values. Mean differences between CT- and MRCAT-based dose in the DVH parameters were below 1 Gy (<1.5%). The median proton range difference was 0.1 mm, with on average 96% of all BEV dose profiles showing a range agreement better than 3 mm. The results suggest that accurate MR-based proton dose calculation using an automatic commercial bulk-assignment pCT generation method, originally designed for photon radiotherapy, is feasible following adaptation of the assigned pseudo-HU values.
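The (2%, 2 mm) gamma criterion quoted here compares two dose distributions point by point (standard definition, with δD the 2% dose tolerance and δd the 2 mm distance-to-agreement tolerance):

    \gamma(\mathbf{r}) = \min_{\mathbf{r}'} \sqrt{ \frac{|\mathbf{r}' - \mathbf{r}|^2}{\delta d^2} + \frac{\left(D'(\mathbf{r}') - D(\mathbf{r})\right)^2}{\delta D^2} } ,

where D is the reference (here CT-based) and D' the evaluated (MRCAT-based) distribution; a point passes if γ(r) ≤ 1, and the pass rate is the fraction of passing points among those above the 10% dose threshold.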
MO-F-204-00: Preparing for the ABR Diagnostic and Nuclear Medical Physics Exams
DOE Office of Scientific and Technical Information (OSTI.GOV)
NONE
Adequate, efficient preparation for the ABR Diagnostic and Nuclear Medical Physics exams is key to successfully obtaining ABR certification. Each part of the ABR exam presents its own challenges. Part I: determine the scope of basic medical physics study material, efficiently review this material, and solve related written questions/problems. Part II: understand imaging principles, modalities, and systems, including image acquisition, processing, and display; understand the relationship between imaging techniques, image quality, patient dose and safety; and solve related written questions/problems. Part III: gain crucial, practical, clinical medical physics experience; effectively communicate and explain the practice, performance, and significance of all aspects of clinical medical physics. All parts of the ABR exam require specific skill sets and preparation: mastery of basic physics and imaging principles; written problem solving, often involving rapid calculation; and responding clearly and succinctly to oral questions about the practice, methods, and significance of clinical medical physics. This symposium focuses on the preparation necessary for each part of the ABR exam. Although there is some overlap, the nuclear exam covers a different body of knowledge than the diagnostic exam. A separate speaker will address the unique aspects of the nuclear exam, and how preparing for a second specialty differs from the first. Medical physicists who recently completed each ABR exam portion will share their experiences, insights, and preparation methods to help attendees best prepare for the challenges of each part of the ABR exam. In accordance with ABR exam security policy, no recalls or exam questions will be discussed. Learning Objectives: (1) how to prepare for Part 1 of the ABR exam by determining the scope of basic medical physics study material and related problem solving/calculations; (2) how to prepare for Part 2 of the ABR exam by understanding diagnostic and/or nuclear imaging physics, systems, dosimetry, safety, and related problem solving/calculations; (3) how to prepare for Part 3 of the ABR exam by effectively communicating the practice, methods, and significance of clinical diagnostic and/or nuclear medical physics.
MO-F-204-02: Preparing for Part 2 of the ABR Diagnostic Physics Exam
DOE Office of Scientific and Technical Information (OSTI.GOV)
Szczykutowicz, T.
MO-F-204-03: Preparing for Part 3 of the ABR Diagnostic Physics Exam
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zambelli, J.
MO-F-204-01: Preparing for Part 1 of the ABR Diagnostic Physics Exam
DOE Office of Scientific and Technical Information (OSTI.GOV)
McKenney, S.
MO-F-204-04: Preparing for Parts 2 & 3 of the ABR Nuclear Medicine Physics Exam
DOE Office of Scientific and Technical Information (OSTI.GOV)
MacDougall, R.
WE-D-213-04: Preparing for Parts 2 & 3 of the ABR Nuclear Medicine Physics Exam
DOE Office of Scientific and Technical Information (OSTI.GOV)
MacDougall, R.
Adequate, efficient preparation for the ABR Diagnostic and Nuclear Medical Physics exams is key to successfully obtaining ABR professional certification. Each part of the ABR exam presents its own challenges. Part I: determine the scope of basic medical physics study material, efficiently review this material, and solve related written questions/problems. Part II: understand imaging principles, modalities, and systems, including image acquisition, processing, and display; understand the relationship between imaging techniques, image quality, patient dose and safety; and solve related written questions/problems. Part III: gain crucial, practical, clinical medical physics experience; effectively communicate and explain the practice, performance, and significance of all aspects of clinical medical physics. All three parts of the ABR exam require specific skill sets and preparation: mastery of basic physics and imaging principles; written problem solving, often involving rapid calculation; and responding clearly and succinctly to oral questions about the practice, methods, and significance of clinical medical physics. This symposium focuses on the preparation and skill sets necessary for each part of the ABR exam. Although there is some overlap, the nuclear exam covers a different body of knowledge than the diagnostic exam. A separate speaker will address the aspects that are unique to the nuclear exam. Medical physicists who have recently completed each part of the ABR exam will share their experiences, insights, and preparation methods to help attendees best prepare for the challenges of each part of the ABR exam. In accordance with ABR exam security policy, no recalls or exam questions will be discussed. Learning Objectives: (1) how to prepare for Part 1 of the ABR exam by determining the scope of basic medical physics study material and related problem solving/calculations; (2) how to prepare for Part 2 of the ABR exam by understanding diagnostic and/or nuclear imaging physics, systems, dosimetry, safety, and related problem solving/calculations; (3) how to prepare for Part 3 of the ABR exam by effectively communicating the practice, methods, and significance of clinical diagnostic and/or nuclear medical physics.
WE-D-213-00: Preparing for the ABR Diagnostic and Nuclear Medicine Physics Exams
DOE Office of Scientific and Technical Information (OSTI.GOV)
NONE
WE-D-213-01: Preparing for Part 1 of the ABR Diagnostic Physics Exam
DOE Office of Scientific and Technical Information (OSTI.GOV)
Simiele, S.
WE-D-213-03: Preparing for Part 3 of the ABR Diagnostic Physics Exam
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bevins, N.
WE-D-213-02: Preparing for Part 2 of the ABR Diagnostic Physics Exam
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zambelli, J.
NASA Astrophysics Data System (ADS)
Jin, Di; Wong, Dennis; Li, Junxiang; Luo, Zhang; Guo, Yiran; Liu, Bifeng; Wu, Qiong; Ho, Chih-Ming; Fei, Peng
2015-12-01
Imaging of live cells in a region of interest is essential to life science research. Unlike the traditional approach of mounting a CO2 incubator onto a bulky microscope for observation, here we propose a wireless microscope (termed w-SCOPE) that is based on the "microscope-in-incubator" concept and can be easily housed in a standard CO2 incubator for prolonged on-site observation of cells. The w-SCOPE is capable of tunable magnification, remote control, and wireless image transmission. At the same time, it is compact, measuring only ~10 cm in each dimension, and cost-effective. With the enhancement of compressive sensing computation, the acquired images achieve a wide field of view (FOV) of ~113 mm2 as well as a cellular resolution of ~3 μm, which enables various forms of follow-up image-based cell analysis. We performed a 12-hour time-lapse study on paclitaxel-treated MCF-7 and HEK293T cell lines using the w-SCOPE. The analytic results, such as the calculated viability and therapeutic window, from our device were validated by standard cell detection assays and an imaging-based cytometer. In addition to those end-point detection methods, the w-SCOPE further uncovered the time course of the cells' response to the drug treatment over the whole period of drug exposure.
Jin, Di; Wong, Dennis; Li, Junxiang; Luo, Zhang; Guo, Yiran; Liu, Bifeng; Wu, Qiong; Ho, Chih-Ming; Fei, Peng
2015-01-01
Imaging of live cells in a region of interest is essential to life science research. Unlike the traditional approach of mounting a CO2 incubator onto a bulky microscope for observation, here we propose a wireless microscope (termed w-SCOPE) that is based on the "microscope-in-incubator" concept and can be easily housed in a standard CO2 incubator for prolonged on-site observation of cells. The w-SCOPE is capable of tunable magnification, remote control, and wireless image transmission. At the same time, it is compact, measuring only ~10 cm in each dimension, and cost-effective. With the enhancement of compressive sensing computation, the acquired images achieve a wide field of view (FOV) of ~113 mm2 as well as a cellular resolution of ~3 μm, which enables various forms of follow-up image-based cell analysis. We performed a 12-hour time-lapse study on paclitaxel-treated MCF-7 and HEK293T cell lines using the w-SCOPE. The analytic results, such as the calculated viability and therapeutic window, from our device were validated by standard cell detection assays and an imaging-based cytometer. In addition to those end-point detection methods, the w-SCOPE further uncovered the time course of the cells' response to the drug treatment over the whole period of drug exposure. PMID:26681552
The added value of thorough economic evaluation of telemedicine networks.
Le Goff-Pronost, Myriam; Sicotte, Claude
2010-02-01
This paper proposes a thorough framework for the economic evaluation of telemedicine networks. A standard cost analysis methodology was used as the initial base, similar to the evaluation method currently being applied to telemedicine, and to which we suggest adding subsequent stages that enhance the scope and sophistication of the analytical methodology. We completed the methodology with a longitudinal and stakeholder analysis, followed by the calculation of a break-even threshold, a calculation of the economic outcome based on net present value (NPV), an estimate of the social gain through external effects, and an assessment of the probability of social benefits. In order to illustrate the advantages, constraints and limitations of the proposed framework, we tested it in a paediatric cardiology tele-expertise network. The results demonstrate that the project threshold was not reached after the 4 years of the study. Also, the calculation of the project's NPV remained negative. However, the additional analytical steps of the proposed framework allowed us to highlight alternatives that can make this service economically viable. These included: use over an extended period of time, extending the network to other telemedicine specialties, or including it in the services offered by other community hospitals. In sum, the results presented here demonstrate the usefulness of an economic evaluation framework as a way of offering decision makers the tools they need to make comprehensive evaluations of telemedicine networks.
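To make the break-even and NPV steps above concrete, here is a minimal Python sketch; the 8% discount rate and all cash-flow figures are invented placeholders, not values from the study.

def npv(rate, cashflows):
    # Net present value; cashflows[0] occurs at time 0 (the investment).
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cashflows))

def breakeven_year(rate, cashflows):
    # First year in which the cumulative discounted cash flow turns positive.
    total = 0.0
    for t, cf in enumerate(cashflows):
        total += cf / (1 + rate) ** t
        if total >= 0:
            return t
    return None  # threshold never reached within the horizon

flows = [-120_000, 20_000, 25_000, 30_000, 30_000]  # year 0 = investment
print(round(npv(0.08, flows)))      # negative, echoing the study's 4-year result
print(breakeven_year(0.08, flows))  # None: break-even not reached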
Radiological assessment. A textbook on environmental dose analysis
DOE Office of Scientific and Technical Information (OSTI.GOV)
Till, J.E.; Meyer, H.R.
1983-09-01
Radiological assessment is the quantitative process of estimating the consequences to humans resulting from the release of radionuclides to the biosphere. It is a multidisciplinary subject requiring the expertise of a number of individuals in order to predict source terms, describe environmental transport, calculate internal and external dose, and extrapolate dose to health effects. Up to this time there has been available no comprehensive book describing, on a uniform and comprehensive level, the techniques and models used in radiological assessment. Radiological Assessment is based on material presented at the 1980 Health Physics Society Summer School held in Seattle, Washington. The material has been expanded and edited to make it comprehensive in scope and useful as a text. Topics covered include (1) source terms for nuclear facilities and medical and industrial sites; (2) transport of radionuclides in the atmosphere; (3) transport of radionuclides in surface waters; (4) transport of radionuclides in groundwater; (5) terrestrial and aquatic food chain pathways; (6) reference man: a system for internal dose calculations; (7) internal dosimetry; (8) external dosimetry; (9) models for special-case radionuclides; (10) calculation of health effects in irradiated populations; (11) evaluation of uncertainties in environmental radiological assessment models; (12) regulatory standards for environmental releases of radionuclides; (13) development of computer codes for radiological assessment; and (14) assessment of accidental releases of radionuclides.
Elastic/Inelastic Measurement Project
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yates, Steven; Hicks, Sally; Vanhoy, Jeffrey
2016-03-01
The work scope involves the measurement of neutron scattering from natural sodium (23Na) and two isotopes of iron, 56Fe and 54Fe. Angular distributions, i.e., differential cross sections, of the scattered neutrons will be measured for 5 to 10 incident neutron energies per year. The work of the first year concentrates on 23Na while the enriched iron samples are procured. Differential neutron scattering cross sections provide information to guide nuclear reaction model calculations in the low-energy (few MeV) fast-neutron region. This region lies just above the isolated resonance region, which in general is well studied; however, model calculations are difficult in this region because overlapping resonance structure is evident and direct nuclear reactions are becoming important. The standard optical model treatment exhibits good predictive ability for the wide-region average cross sections but cannot treat the overlapping resonance features. In addition, models that do predict the direct reaction component must be guided by measurements to describe correctly the strength of the direct component; e.g., β2 must be known to describe the direct component of the scattering to the first excited state. Measurements of the elastic scattering differential cross sections guide the optical model calculations, while inelastic differential cross sections provide the crucial information for correctly describing the direct component. Activities occurring during the performance period are described.
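As a concrete illustration of how such angular distributions are commonly represented, the sketch below fits a Legendre series to a made-up set of differential cross-section points; the angles, values, and polynomial order are placeholders, not measured 23Na data.

import numpy as np
from numpy.polynomial import legendre

angles_deg = [20, 40, 60, 80, 100, 120, 140, 160]
mu = np.cos(np.radians(angles_deg))                      # cos(theta) abscissa
dsdo = np.array([520, 310, 160, 90, 70, 85, 120, 180])   # mb/sr, invented

coef = legendre.legfit(mu, dsdo, deg=4)   # Legendre coefficients a_l
fit = legendre.legval(mu, coef)           # smoothed values at the data points
print(np.round(coef, 1))
print(np.round(fit, 1))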
Bircher, Martin P; Rothlisberger, Ursula
2018-06-12
Linear-response time-dependent density functional theory (LR-TD-DFT) has become a valuable tool in the calculation of excited states of molecules of various sizes. However, standard generalized-gradient approximation and hybrid exchange-correlation (xc) functionals often fail to correctly predict charge-transfer (CT) excitations with low orbital overlap, thus limiting the scope of the method. The Coulomb-attenuation method (CAM) in the form of the CAM-B3LYP functional has been shown to reliably remedy this problem in many CT systems, making accurate predictions possible. However, in spite of a rather consistent performance across different orbital overlap regimes, some pitfalls remain. Here, we present a fully flexible and adaptable implementation of the CAM for Γ-point calculations within the plane-wave pseudopotential molecular dynamics package CPMD and explore how customized xc functionals can improve the optical spectra of some notorious cases. We find that results obtained using plane waves agree well with those from all-electron calculations employing atom-centered bases, and that it is possible to construct a new Coulomb-attenuated xc functional based on simple considerations. We show that such a functional is able to outperform CAM-B3LYP in some cases, while retaining similar accuracy in systems where CAM-B3LYP performs well.
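For reference, the Coulomb-attenuation method mentioned above rests on the standard error-function split of the Coulomb operator, shown below in LaTeX; CAM-B3LYP uses alpha = 0.19, beta = 0.46, and mu = 0.33 bohr^-1, and the customized functionals of the paper would correspond to other choices of these three parameters. The first term is treated with DFT exchange, the second with exact (Hartree-Fock) exchange.

\frac{1}{r_{12}} = \frac{1-\left[\alpha+\beta\,\operatorname{erf}(\mu r_{12})\right]}{r_{12}} + \frac{\alpha+\beta\,\operatorname{erf}(\mu r_{12})}{r_{12}}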
Favazza, Christopher P.; Fetterly, Kenneth A.; Hangiandreou, Nicholas J.; Leng, Shuai; Schueler, Beth A.
2015-01-01
Evaluation of flat-panel angiography equipment through conventional image quality metrics is limited by the scope of standard spatial-domain image quality metrics, such as contrast-to-noise ratio and spatial resolution, or by restricted access to the data needed to calculate Fourier-domain measurements, such as modulation transfer function, noise power spectrum, and detective quantum efficiency. Observer models have been shown capable of overcoming these limitations and are able to comprehensively evaluate medical-imaging systems. We present a spatial-domain channelized Hotelling observer model to calculate the detectability index (DI) of disks of different sizes and to compare the performance of different imaging conditions and angiography systems. When appropriate, changes in DIs were compared to expectations based on the classical Rose model of signal detection to assess linearity of the model with quantum signal-to-noise ratio (SNR) theory. For these experiments, the estimated uncertainty of the DIs was less than 3%, allowing for precise comparison of imaging systems or conditions. For most experimental variables, DI changes were linear with expectations based on quantum SNR theory. DIs calculated for the smallest objects demonstrated nonlinearity with quantum SNR theory due to system blur. Two angiography systems with different detector element sizes were shown to perform similarly across the majority of the detection tasks. PMID:26158086
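A minimal numerical sketch of the channelized Hotelling detectability calculation follows; the channel outputs are simulated Gaussian data, and the channel count and means are arbitrary stand-ins for real disk/background image data.

import numpy as np

def detectability_index(v_signal, v_noise):
    # d' from channel outputs; rows are images, columns are channels.
    dmean = v_signal.mean(axis=0) - v_noise.mean(axis=0)
    S = 0.5 * (np.cov(v_signal, rowvar=False) + np.cov(v_noise, rowvar=False))
    w = np.linalg.solve(S, dmean)   # Hotelling template S^-1 dmean
    return np.sqrt(dmean @ w)       # d'^2 = dmean^T S^-1 dmean

rng = np.random.default_rng(0)
v_noise = rng.normal(0.0, 1.0, size=(500, 10))   # noise-only channel outputs
v_signal = rng.normal(0.3, 1.0, size=(500, 10))  # signal-present outputs
print(detectability_index(v_signal, v_noise))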
Numerical Calculation of the Peaking Factor of a Water-Cooled W/Cu Monoblock for a Divertor
NASA Astrophysics Data System (ADS)
Han, Le; Chang, Haiping; Zhang, Jingyang; Xu, Tiejun
2015-09-01
In order to accurately predict the incident critical heat flux (ICHF, the heat flux at the heated surface when CHF occurs) of a water-cooled W/Cu monoblock for a divertor, exact knowledge of its peaking factor (fp) under one-sided heating conditions with different design parameters is a key issue. In this paper, the heat conduction in the solid domain of a water-cooled W/Cu monoblock is calculated numerically by assuming the local heat transfer coefficients (HTC) of the cooling wall to be functions of the local wall temperature, so as to obtain fp. The reliability of the calculation method is validated against an experimental example, with a maximum error of only 2.1%. The effects of geometric and flow parameters on the fp of a water-cooled W/Cu monoblock are investigated. Within the scope of this study, it is shown that fp increases with increasing dimensionless W/Cu monoblock width and armour thickness (the shortest distance between the heated surface and the Cu layer), with maximum increases of 43.8% and 22.4%, respectively. The dimensionless W/Cu monoblock height and Cu thickness have little effect on fp. Increasing the Reynolds number and Jakob number increases fp, with maximum increases of 6.8% and 9.6%, respectively. Based on the calculated results, an empirical correlation for the peaking factor is obtained via regression. These results provide a valuable reference for the thermal-hydraulic design of water-cooled divertors. This work was supported by the National Magnetic Confinement Fusion Science Program of China (No. 2010GB104005), the Funding of Jiangsu Innovation Program for Graduate Education, China (CXLX12_0170), and the Fundamental Research Funds for the Central Universities of China.
DOE Office of Scientific and Technical Information (OSTI.GOV)
NONE
1995-11-03
In March 1995, Affiliated Engineers SE, Inc. (AESE) was retained by the Mobile District U.S. Army Corps of Engineers to perform a Limited Energy Study for Holston Army Ammunition Plant, Kingsport, Tennessee. The field survey of existing conditions was completed in May 1995. The results of this field survey were subsequently tabulated and used to generate single-line building drawings in AutoCAD. This report summarizes the results obtained from this field investigation and the analysis of various alternative Energy Conservation Opportunities (ECOs). To develop the field data into various alternative ECO concepts or models, we utilized an Excel spreadsheet to tabulate and compare energy consumption, installation, and operating costs for the various ECOs. These ECOs were then analyzed for suitability for the Energy Conservation Investment Program (ECIP) using the government's software package called Life Cycle Cost in Design (LCCID). The Scope of Work developed by the U.S. Army Corps of Engineers gave the following tasks: (1) perform a field survey to gather information on existing operating conditions and equipment at Holston Army Ammunition Plant, Area 'A'; (2) perform a field survey to gather information on existing boilers laid away at Volunteer Army Ammunition Plant in Chattanooga, Tennessee; (3) provide a list of suggested ECOs; (4) analyze ECOs using the LCCID program; (5) perform savings-to-investment ratio (SIR) calculations; (6) rank ECOs per SIRs; (7) provide information on study assumptions and document equations used in calculations; (8) perform Life Cycle Cost Analysis; (9) perform Synergism Analysis; (10) calculate Energy/Cost Ratios; (11) calculate Benefit/Cost Ratios; (12) provide documentation in the form of Project Development Brochures (PDBs) and DD Form 139
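The savings-to-investment ratio used to rank the ECOs reduces to a present-value calculation; the sketch below is a generic version with placeholder numbers, not figures from the Holston study (LCCID applies its own escalation rates and conventions).

def sir(annual_savings, years, rate, investment):
    # Present value of the energy-savings stream divided by installed cost.
    pv = sum(annual_savings / (1 + rate) ** t for t in range(1, years + 1))
    return pv / investment

print(round(sir(annual_savings=12_000, years=20, rate=0.04,
                investment=95_000), 2))  # ~1.72; SIR > 1 suggests a viable ECO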
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhao, Xin; Qiao, Weiye; Li, Yuliang
The structural stabilities and electronic properties of one-dimensional (1D) double-wall nanotubes made of n-gon SiO2 nanotubes encapsulated inside zigzag carbon nanotubes are investigated using an ab initio self-consistent-field crystal orbital method based on density functional theory. It is found that formation of the combined systems is energetically favorable when the distance between the two constituents is around the van der Waals range. The obtained band structures show that all the combined systems are semiconductors with nonzero energy gaps. The frontier energy bands (the highest occupied band and the lowest unoccupied band) of the double-wall nanotubes are mainly derived from the corresponding carbon nanotubes. The mobilities of charge carriers are calculated to be within the range of 10^2-10^4 cm^2 V^-1 s^-1 for the hybrid double-wall nanotubes. Young's moduli are also calculated for the combined systems. For comparison, geometrical and electronic properties of the n-gon SiO2 nanotubes are also calculated and discussed. - Graphical abstract: Structures and band structures of the optimal 1D double-wall nanotubes. The optimized structures are 3-gon SiO2@(15,0), 5-gon SiO2@(17,0), 6-gon SiO2@(18,0), and 7-gon SiO2@(19,0). - Highlights: • The structure and electronic properties of the 1D n-gon SiO2@(m,0)s are studied using the SCF-CO method. • The encapsulation of 1D n-gon SiO2 tubes inside zigzag carbon nanotubes can be energetically favorable. • The 1D n-gon SiO2@(m,0)s are all semiconductors. • The mobility of charge carriers and Young's moduli are calculated.
NASA Technical Reports Server (NTRS)
Krantz, Timothy L.
2011-01-01
The purpose of this study was to assess some calculation methods for quantifying the relationships of bearing geometry, material properties, load, deflection, stiffness, and stress. The scope of the work was limited to two-dimensional modeling of straight cylindrical roller bearings. Preparations for studies of dynamic response of bearings with damaged surfaces motivated this work. Studies were selected to exercise and build confidence in the numerical tools. Three calculation methods were used in this work. Two of the methods were numerical solutions of the Hertz contact approach. The third method used was a combined finite element surface integral method. Example calculations were done for a single roller loaded between an inner and outer raceway for code verification. Next, a bearing with 13 rollers and all-steel construction was used as an example to do additional code verification, including an assessment of the leading order of accuracy of the finite element and surface integral method. Results from that study show that the method is at least first-order accurate. Those results also show that the contact grid refinement has a more significant influence on precision as compared to the finite element grid refinement. To explore the influence of material properties, the 13-roller bearing was modeled as made from Nitinol 60, a material with very different properties from steel and showing some potential for bearing applications. The codes were exercised to compare contact areas and stress levels for steel and Nitinol 60 bearings operating at equivalent power density. As a step toward modeling the dynamic response of bearings having surface damage, static analyses were completed to simulate a bearing with a spall or similar damage.
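For orientation, the closed-form Hertz line-contact solution that such numerical methods are typically verified against can be evaluated in a few lines; the load and radii below are arbitrary steel-on-steel examples, not the paper's 13-roller bearing geometry.

import math

def hertz_line_contact(w, r1, r2, e1, nu1, e2, nu2):
    # Contact half-width b (m) and peak pressure (Pa) for two parallel
    # cylinders; w is load per unit length (N/m); use math.inf for a flat race.
    r_eff = 1.0 / (1.0 / r1 + 1.0 / r2)
    e_star = 1.0 / ((1 - nu1**2) / e1 + (1 - nu2**2) / e2)
    b = math.sqrt(4.0 * w * r_eff / (math.pi * e_star))
    p_max = 2.0 * w / (math.pi * b)
    return b, p_max

b, p = hertz_line_contact(w=1.0e5, r1=0.008, r2=0.030,
                          e1=200e9, nu1=0.3, e2=200e9, nu2=0.3)
print(f"half-width {b*1e6:.1f} um, peak pressure {p/1e9:.2f} GPa")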
HyPEP FY06 Report: Models and Methods
DOE Office of Scientific and Technical Information (OSTI.GOV)
DOE report
2006-09-01
The Department of Energy envisions the next-generation very-high-temperature gas-cooled reactor (VHTR) as a single-purpose or dual-purpose facility that produces hydrogen and electricity. The Ministry of Science and Technology (MOST) of the Republic of Korea also selected the VHTR for the Nuclear Hydrogen Development and Demonstration (NHDD) Project. This research project aims at developing a user-friendly program for evaluating and optimizing cycle efficiencies of producing hydrogen and electricity in a VHTR. Systems for producing electricity and hydrogen are complex, and the calculations associated with optimizing these systems are intensive, involving a large number of operating parameter variations and many different system configurations. This research project will produce the HyPEP computer model, which is specifically designed to be an easy-to-use and fast-running tool for evaluating nuclear hydrogen and electricity production facilities. The model accommodates flexible system layouts, and its cost models will enable HyPEP to be well suited for system optimization. Specific activities of this research are designed to develop the HyPEP model into a working tool, including (a) identifying major systems and components for modeling, (b) establishing system operating parameters and calculation scope, (c) establishing the overall calculation scheme, (d) developing component models, (e) developing cost and optimization models, and (f) verifying and validating the program. Once the HyPEP model is fully developed and validated, it will be used to execute calculations on candidate system configurations. This FY-06 report includes a description of reference designs, methods used in this study, and models and computational strategies developed for the first-year effort. Results from computer codes such as HYSYS and GASS/PASS-H, used by Idaho National Laboratory and Argonne National Laboratory, respectively, will be benchmarked against HyPEP results in the following years.
g-factor calculations from the generalized seniority approach
NASA Astrophysics Data System (ADS)
Maheshwari, Bhoomika; Jain, Ashok Kumar
2018-05-01
The generalized seniority approach we proposed to understand the B(E1)/B(E2)/B(E3) properties of semi-magic nuclei has been widely successful in explaining those properties and has led to an expansion in the scope of seniority isomers. In the present paper, we apply the generalized seniority scheme to understand the behavior of g-factors in semi-magic nuclei. We find that the magnetic moments and g-factors do show a particle-number-independent behavior, as expected, and the understanding is consistent with the explanation of the transition probabilities.
Dynamic stability of maglev systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cai, Y.; Chen, S.S.; Mulcahy, T.M.
1992-09-01
Since the occurrence of dynamic instabilities is not acceptable for any commercial maglev system, it is important to consider dynamic instability in the development of all maglev systems. This study considers the stability of maglev systems based on experimental data, scoping calculations, and simple mathematical models. Divergence and flutter are obtained for coupled vibration of a three-degree-of-freedom maglev vehicle on a guideway consisting of double L-shaped aluminum segments attached to a rotating wheel. The theory and analysis developed in this study provide basic stability characteristics and identify future research needs for maglev systems.
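A generic sketch of the kind of stability screening used in such scoping calculations: assemble a linear mass-damping-stiffness model and classify instabilities from the eigenvalues of its first-order state matrix. The matrices below are arbitrary 2-DOF examples, not the study's vehicle/guideway model.

import numpy as np

def stability_eigs(M, C, K):
    # Eigenvalues of the first-order form of M x'' + C x' + K x = 0.
    n = M.shape[0]
    Minv = np.linalg.inv(M)
    A = np.block([[np.zeros((n, n)), np.eye(n)],
                  [-Minv @ K, -Minv @ C]])
    return np.linalg.eigvals(A)

M = np.diag([1.0, 1.0])
C = 0.02 * np.eye(2)
K = np.array([[2.0, -1.5],
              [1.2, 1.0]])   # nonsymmetric coupling, as magnetic forces can produce
lam = stability_eigs(M, C, K)
# a positive real part with nonzero imaginary part indicates flutter;
# a positive purely real eigenvalue indicates divergence
print(lam[lam.real > 0])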
The use of relative incidence ratios in self-controlled case series studies: an overview.
Hawken, Steven; Potter, Beth K; Little, Julian; Benchimol, Eric I; Mahmud, Salah; Ducharme, Robin; Wilson, Kumanan
2016-09-23
The self-controlled case series (SCCS) is a useful design for investigating associations between outcomes and transient exposures. The SCCS design controls for all fixed covariates, but effect modification can still occur. This can be evaluated by including interaction terms in the model which, when exponentiated, can be interpreted as a relative incidence ratio (RIR): the change in relative incidence (RI) for a unit change in an effect modifier. We conducted a scoping review to investigate the use of RIRs in published primary SCCS studies, and conducted a case study in one of our own primary SCCS studies to illustrate the use of RIRs within an SCCS analysis to investigate subgroup effects in the context of comparing whole-cell (wcp) and acellular (acp) pertussis vaccines. Using this case study, we also illustrated the potential utility of RIRs in addressing the healthy vaccinee effect (HVE) in vaccine safety surveillance studies. Our scoping review identified 122 primary studies reporting an SCCS analysis. Of these, 24 described the use of interaction terms to test for effect modification; 21 of 24 studies reported stratum-specific RIs, 22 of 24 reported the p-value for interaction, and fewer than half (10 of 24) reported the estimate of the interaction term/RIR, the stratum-specific RIs, and the interaction p-values. Our case study demonstrated a nearly two-fold greater RI of ER visits and admissions following wcp vaccination relative to acp vaccination (RIR = 1.82, 95% CI 1.64-2.01), where the RI estimates in each subgroup were clearly impacted by a strong healthy vaccinee effect. Our scoping review demonstrated that calculating RIRs is not a widely utilized strategy. We showed that calculating RIRs across time periods is useful for detecting relative changes in adverse event rates that might otherwise be missed due to the HVE. Many published studies of vaccine-associated adverse events could have missed or underestimated important safety signals masked by the HVE. With further development, our application of RIRs could be an important tool to address the HVE, particularly in the context of self-controlled study designs.
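Mechanically, the RIR reported above is just an exponentiated interaction coefficient with a Wald interval; the beta and standard error below are placeholders chosen to land near the case study's estimate, not the fitted values.

import math

def rir_with_ci(beta, se, z=1.96):
    # Exponentiate an SCCS interaction term and its 95% Wald interval.
    return math.exp(beta), math.exp(beta - z * se), math.exp(beta + z * se)

rir, lo, hi = rir_with_ci(beta=0.60, se=0.05)
print(f"RIR = {rir:.2f} (95% CI {lo:.2f}-{hi:.2f})")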
Improvements in Modeling Au Sphere Non-LTE X-ray Emission
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rosen, M D; Scott, H A; Suter, L J
2008-10-30
We have previously reported on experiments at the Omega laser at UR/LLE in which 1.0-mm-diameter, Au-coated spheres were illuminated at either 10^14 W/cm^2 (10 kJ/3 ns) or 10^15 W/cm^2 (30 kJ/1 ns). Spectral information on the 1 keV thermal x-rays, as well as the multi-keV M-band, was obtained. We compared a variety of non-LTE atomic physics packages to these data with varying degrees of success. In this paper we broaden the scope of the investigation and compare the data to newer models: (1) an improved Detailed Configuration Accounting (DCA) method; and (2) an adjusted version of the standard XSN non-LTE model, tuned so that the coronal emission calculated by XSN better matches that calculated by SCRAM, a more sophisticated stand-alone model. We show some improvements in the agreement with the Omega data when using either of these new approaches.
Evaluation of solvation free energies for small molecules with the AMOEBA polarizable force field
Mohamed, Noor Asidah; Bradshaw, Richard T.
2016-01-01
The effects of electronic polarization in biomolecular interactions will differ depending on the local dielectric constant of the environment, such as in solvent, DNA, proteins, and membranes. Here the performance of the AMOEBA polarizable force field is evaluated under nonaqueous conditions by calculating the solvation free energies of small molecules in four common organic solvents. Results are compared with experimental data and equivalent simulations performed with the GAFF pairwise‐additive force field. Although AMOEBA results give mean errors close to “chemical accuracy,” GAFF performs surprisingly well, with statistically significantly more accurate results than AMOEBA in some solvents. However, for both models, free energies calculated in chloroform show worst agreement to experiment and individual solutes are consistently poor performers, suggesting non‐potential‐specific errors also contribute to inaccuracy. Scope for the improvement of both potentials remains limited by the lack of high quality experimental data across multiple solvents, particularly those of high dielectric constant. © 2016 The Authors. Journal of Computational Chemistry Published by Wiley Periodicals, Inc. PMID:27757978
Clustering on Magnesium Surfaces – Formation and Diffusion Energies
Chu, Haijian; Huang, Hanchen; Wang, Jian
2017-07-12
The formation and diffusion energies of atomic clusters on Mg surfaces determine the surface roughness and the formation of faulted structures, which in turn affect the mechanical deformation of Mg. This paper reports first-principles density functional theory (DFT) calculations of atomic clustering on the low-energy surfaces {0001} and {$\bar{1}$011}. In parallel, molecular statics calculations serve to test the validity of two interatomic potentials and to extend the scope of the DFT studies. On a {0001} surface, a compact cluster consisting of fewer than three atoms energetically prefers face-centered-cubic stacking, serving as a nucleus of a stacking fault. On a {$\bar{1}$011} surface, clusters of any size always prefer hexagonal-close-packed stacking. Adatom diffusion on the {$\bar{1}$011} surface is highly anisotropic, while diffusion on the (0001) surface is isotropic. Three-dimensional Ehrlich-Schwoebel barriers converge once the step height reaches three atomic layers. Finally, adatom diffusion along steps proceeds via a hopping mechanism, while diffusion down steps proceeds via an exchange mechanism.
Clustering on Magnesium Surfaces – Formation and Diffusion Energies
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chu, Haijian; Huang, Hanchen; Wang, Jian
The formation and diffusion energies of atomic clusters on Mg surfaces determine the surface roughness and the formation of faulted structures, which in turn affect the mechanical deformation of Mg. This paper reports first-principles density functional theory (DFT) calculations of atomic clustering on the low-energy surfaces {0001} and {$\bar{1}$011}. In parallel, molecular statics calculations serve to test the validity of two interatomic potentials and to extend the scope of the DFT studies. On a {0001} surface, a compact cluster consisting of fewer than three atoms energetically prefers face-centered-cubic stacking, serving as a nucleus of a stacking fault. On a {$\bar{1}$011} surface, clusters of any size always prefer hexagonal-close-packed stacking. Adatom diffusion on the {$\bar{1}$011} surface is highly anisotropic, while diffusion on the (0001) surface is isotropic. Three-dimensional Ehrlich-Schwoebel barriers converge once the step height reaches three atomic layers. Finally, adatom diffusion along steps proceeds via a hopping mechanism, while diffusion down steps proceeds via an exchange mechanism.
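To connect such barriers to observable kinetics, an Arrhenius estimate of hop rates is often quoted; the attempt frequency and barrier values below are typical placeholders, not the paper's DFT results.

import math

KB_EV = 8.617e-5  # Boltzmann constant in eV/K

def hop_rate(barrier_ev, temp_k, nu0=1e13):
    # Transition-state-theory hop rate with attempt frequency nu0 (1/s).
    return nu0 * math.exp(-barrier_ev / (KB_EV * temp_k))

for ea in (0.05, 0.15, 0.40):  # hypothetical barriers in eV
    print(f"Ea = {ea:.2f} eV -> {hop_rate(ea, 300):.2e} hops/s at 300 K")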
Woo Kim, Hyun; Rhee, Young Min
2012-07-30
Recently, many polarizable force fields have been devised to describe induction effects between molecules. In popular polarizable models based on induced dipole moments, atomic polarizabilities are the essential parameters and should be derived carefully. Here, we present a parameterization scheme for atomic polarizabilities using a minimization target function containing both molecular and atomic information. The main idea is to adopt reference data only from quantum chemical calculations, to perform atomic polarizability parameterizations even when relevant experimental data are scarce as in the case of electronically excited molecules. Specifically, our scheme assigns the atomic polarizabilities of any given molecule in such a way that its molecular polarizability tensor is well reproduced. We show that our scheme successfully works for various molecules in mimicking dipole responses not only in ground states but also in valence excited states. The electrostatic potential around a molecule with an externally perturbing nearby charge also exhibits a near-quantitative agreement with the reference data from quantum chemical calculations. The limitation of the model with isotropic atoms is also discussed to examine the scope of its applicability. Copyright © 2012 Wiley Periodicals, Inc.
Martínez-Rodríguez, Luis; Bandeira, Nuno A G; Bo, Carles; Kleij, Arjan W
2015-05-04
A calix[4]arene host equipped with two bis-[Zn(salphen)] complexes self-assembles into a capsular complex in the presence of a chiral diamine guest, with an unexpected 2:1 ratio between the host and the guest. Effective chirality transfer from the diamine to the calix-salen hybrid host is observed by circular dichroism (CD) spectroscopy, and a high stability constant K2,1 of 1.59×10^11 M^-2 for the assembled host-guest ensemble has been determined, with a substantial cooperativity factor α of 6.4. Density functional calculations are used to investigate the origin of the stability of the host-guest system, and the experimental CD spectrum is compared with those calculated for both possible diastereoisomers, showing that the M,M isomer is the one that is preferentially formed. The current system holds promise for the chirality determination of diamines, as evidenced by the investigated substrate scope and the linear relationship between the ee of the diamine and the amplitude of the observed Cotton effects. © 2015 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Yao, Yanyan; Jiang, Tao; Zhang, Limin; Chen, Xiangyu; Gao, Zhenliang; Wang, Zhong Lin
2016-08-24
Ocean waves are one of the most promising renewable energy sources for large-scope applications due to the abundant water resources on the earth. Triboelectric nanogenerator (TENG) technology could provide a new strategy for water wave energy harvesting. In this work, we investigated the charging characteristics of using a wavy-structured TENG to charge a capacitor under direct water wave impact and under enclosed ball collision, by a combination of theoretical calculations and experimental studies. The analytical equations of the charging characteristics were theoretically derived for the two cases, and they were calculated for various load capacitances, cycle numbers, and structural parameters such as compression deformation depth and ball size or mass. Under direct water wave impact, the stored energy and maximum energy-storage efficiency were found to be controlled by the deformation depth, while under enclosed ball collision the stored energy and maximum efficiency can be optimized via the ball size. Finally, the theoretical results were well verified by the experimental tests. The present work could provide strategies for improving the charging performance of TENGs toward effective water wave energy harvesting and storage.
DTM Generation with Uav Based Photogrammetric Point Cloud
NASA Astrophysics Data System (ADS)
Polat, N.; Uysal, M.
2017-11-01
Nowadays, Unmanned Aerial Vehicles (UAVs) are widely used in many applications for different purposes. Their benefits, however, are not fully realized without the integration of other equipment such as a digital camera, GPS, or laser scanner. The main scope of this paper is to evaluate the performance of a camera-equipped UAV for geomatics applications by generating a Digital Terrain Model (DTM) of a small area. For this purpose, 7 ground control points were surveyed with RTK and 420 photographs were captured. Over 30 million georeferenced points were used in the DTM generation process. The accuracy of the DTM was evaluated with 5 check points; the root mean square error is calculated as 17.1 cm for a flight altitude of 100 m. In addition, a LiDAR-derived DTM was used as a reference in order to calculate the correlation: the UAV-based DTM has a 94.5% correlation with the reference DTM. The outcomes of the study show that it is possible to use UAV photogrammetry data for map production, surveying, and other engineering applications, with the advantages of low cost, time savings, and minimal field work.
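The check-point accuracy figure quoted above is a plain root-mean-square error over elevation differences; the five elevation pairs in this sketch are invented for illustration.

import math

def rmse(predicted, observed):
    # Root-mean-square error between DTM heights and surveyed check points.
    return math.sqrt(sum((p - o) ** 2 for p, o in zip(predicted, observed))
                     / len(observed))

dtm_z   = [102.31, 98.75, 110.02, 95.40, 101.88]  # DTM-derived heights (m)
check_z = [102.10, 98.90, 110.20, 95.25, 101.70]  # RTK check points (m)
print(f"RMSE = {rmse(dtm_z, check_z) * 100:.1f} cm")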
Design of tubesheet for U-tube heat exchangers
DOE Office of Scientific and Technical Information (OSTI.GOV)
Paliwal, D.N.; Saxena, R.M.
1993-02-01
A thorough analysis of the two-side integral tubesheet of a U-tube heat exchanger is carried out using Panc's component theory of plates. Effects of the solid annular rim and of the interaction between the tubesheet and the shell/channel are considered. A design procedure based on the foregoing analysis is proposed. Fictive elastic constants due to Osweiller, as well as effective elastic constants due to Slot and O'Donnell, are employed. Deformations, internal forces, and primary stress intensities are evaluated in both the pitch and diagonal directions. The stress category concept of ASME Sect. VIII Div. 2 is used. The design thickness obtained by this method is compared with the thicknesses calculated using ASME Sect. VIII Div. 1, TEMA, and BS-5500. The method also enables calculation of stresses in the shell and channel in the junction region. The present analysis and design procedure thoroughly investigate tubesheet behavior and lead to a thinner tubesheet. It is concluded that although all the codes based on Gardner's work provide safe and efficient design rules and lie on firm footing, there is still scope for reducing the design thickness of the tubesheet by about ten percent.
Risk-Informed External Hazards Analysis for Seismic and Flooding Phenomena for a Generic PWR
DOE Office of Scientific and Technical Information (OSTI.GOV)
Parisi, Carlo; Prescott, Steve; Ma, Zhegang
This report describes the activities performed during FY2017 for the US-DOE Light Water Reactor Sustainability Risk-Informed Safety Margin Characterization (LWRS-RISMC) program, Industry Application #2. The scope of Industry Application #2 is to deliver a risk-informed external-hazards safety analysis for a representative nuclear power plant. Following the advancements of the previous FYs (toolkit identification, model development), FY2017 focused on increasing the level of realism of the analysis and improving the tools and the coupling methodologies. In particular, the following objectives were achieved: calculation of building pounding and its effects on component seismic fragility; development of SAPHIRE code PRA models for a 3-loop Westinghouse PWR; set-up of a methodology for performing static-dynamic PRA coupling between the SAPHIRE and EMRALD codes; coupling of RELAP5-3D/RAVEN for performing Best-Estimate Plus Uncertainty analysis and automatic limit-surface search; and execution of sample calculations demonstrating the capabilities of the toolkit in performing risk-informed external-hazards safety analyses.
Modeling of information diffusion in Twitter-like social networks under information overload.
Li, Pei; Li, Wei; Wang, Hui; Zhang, Xin
2014-01-01
Due to the existence of information overload in social networks, it becomes increasingly difficult for users to find useful information according to their interests. This paper takes Twitter-like social networks into account and proposes models to characterize the process of information diffusion under information overload. Users are classified into different types according to their in-degrees and out-degrees, and user behaviors are generalized into two categories: generating and forwarding. The view scope is introduced to model the user's information-processing capability under information overload, and the average number of times a message appears in view scopes after it is generated by a given type of user is adopted to characterize the information diffusion efficiency; this quantity is calculated theoretically. To verify the accuracy of the theoretical analysis, we conduct simulations; the simulation results match the theoretical results closely. These results are important for understanding the diffusion dynamics in social networks, and the analysis framework can be extended to consider more realistic situations.
Modeling of Information Diffusion in Twitter-Like Social Networks under Information Overload
Li, Wei
2014-01-01
Due to the existence of information overload in social networks, it becomes increasingly difficult for users to find useful information according to their interests. This paper takes Twitter-like social networks into account and proposes models to characterize the process of information diffusion under information overload. Users are classified into different types according to their in-degrees and out-degrees, and user behaviors are generalized into two categories: generating and forwarding. The view scope is introduced to model the user's information-processing capability under information overload, and the average number of times a message appears in view scopes after it is generated by a given type of user is adopted to characterize the information diffusion efficiency; this quantity is calculated theoretically. To verify the accuracy of the theoretical analysis, we conduct simulations; the simulation results match the theoretical results closely. These results are important for understanding the diffusion dynamics in social networks, and the analysis framework can be extended to consider more realistic situations. PMID:24795541
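A toy simulation of the view-scope idea described in the two abstracts above: a message is counted only while it remains among the `scope` newest items a follower reads in a step. The network size, reading rule, and probabilities are all invented for illustration and are much simpler than the paper's typed-user model.

import random

def avg_view_appearances(n_users=200, n_followers=10, scope=5,
                         gen_prob=0.3, steps=200, seed=1):
    # Average number of view scopes each generated message appears in.
    random.seed(seed)
    followers = {u: random.sample([v for v in range(n_users) if v != u],
                                  n_followers) for u in range(n_users)}
    feeds = {u: [] for u in range(n_users)}
    reads, generated = 0, 0
    for _ in range(steps):
        for u in range(n_users):                 # generation phase
            if random.random() < gen_prob:
                generated += 1
                for f in followers[u]:
                    feeds[f].append(u)
        for u in range(n_users):                 # reading phase
            reads += min(len(feeds[u]), scope)   # only `scope` items fit the view
            feeds[u].clear()                     # unread messages are lost
    return reads / generated

print(avg_view_appearances())               # close to n_followers when unloaded
print(avg_view_appearances(gen_prob=0.9))   # overload pushes the average down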
Rosewarne, P J; Wilson, J M; Svendsen, J C
2016-01-01
Metabolic rate is one of the most widely measured physiological traits in animals and may be influenced by both endogenous (e.g. body mass) and exogenous factors (e.g. oxygen availability and temperature). Standard metabolic rate (SMR) and maximum metabolic rate (MMR) are two fundamental physiological variables providing the floor and ceiling in aerobic energy metabolism. The total amount of energy available between these two variables constitutes the aerobic metabolic scope (AMS). A laboratory exercise aimed at an undergraduate-level physiology class, which details the appropriate data acquisition methods and calculations to measure oxygen consumption rates in rainbow trout Oncorhynchus mykiss, is presented here. Specifically, the teaching exercise employs intermittent-flow respirometry to measure SMR and MMR, derives AMS from the measurements and demonstrates how AMS is affected by environmental oxygen. Students' results typically reveal a decline in AMS in response to environmental hypoxia. The same techniques can be applied to investigate the influence of other key factors on metabolic rate (e.g. temperature and body mass). Discussion of the results develops students' understanding of the mechanisms underlying these fundamental physiological traits and the influence of exogenous factors. More generally, the teaching exercise outlines essential laboratory concepts in addition to metabolic rate calculations, data acquisition and unit conversions that enhance competency in quantitative analysis and reasoning. Finally, the described procedures are generally applicable to other fish species or aquatic breathers such as crustaceans (e.g. crayfish) and provide an alternative to using higher (or more derived) animals to investigate questions related to metabolic physiology. © 2016 The Fisheries Society of the British Isles.
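A minimal sketch of the calculations behind the exercise: convert the oxygen decline measured in each closed respirometry phase into a mass-specific uptake rate, then take SMR from the lowest routine value and MMR from the post-chase value. The solubility coefficient, chamber volume, fish mass, and slopes are textbook-style placeholders, not class data.

def mo2(slope_kpa_per_h, beta_o2, volume_l, mass_kg):
    # Oxygen uptake (mg O2 / kg / h) from the rate of O2 partial-pressure
    # decline; beta_o2 is O2 solubility in mg / L / kPa at test temperature.
    return slope_kpa_per_h * beta_o2 * volume_l / mass_kg

slopes = [15.2, 16.0, 14.8, 68.5]   # kPa O2 per hour (signs already flipped)
rates = [mo2(s, beta_o2=0.4, volume_l=3.0, mass_kg=0.25) for s in slopes]
smr = min(rates[:3])   # quietest routine measurement approximates SMR
mmr = max(rates)       # chase-protocol measurement approximates MMR
print(f"SMR {smr:.0f}, MMR {mmr:.0f}, AMS {mmr - smr:.0f} mg O2/kg/h")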
Richter-Schrag, Hans-Jürgen; Glatz, Torben; Walker, Christine; Fischer, Andreas; Thimme, Robert
2016-11-07
To evaluate rebleeding, primary failure (PF) and mortality of patients in whom over-the-scope clips (OTSCs) were used as first-line and second-line endoscopic treatment (FLET, SLET) of upper and lower gastrointestinal bleeding (UGIB, LGIB). A retrospective analysis of a prospectively collected database identified all patients with UGIB and LGIB in a tertiary endoscopic referral center of the University of Freiburg, Germany, from 04-2012 to 05-2016 (n = 93) who underwent FLET and SLET with OTSCs. The complete Rockall risk scores were calculated for patients with UGIB. The scores were categorized as < or ≥ 7 and were compared with the original Rockall data. Differences between FLET and SLET were calculated. Univariate and multivariate analyses were performed to evaluate the factors that influenced rebleeding after OTSC placement. Primary hemostasis and clinical success of bleeding lesions (without rebleeding) were achieved in 88/100 (88%) and 78/100 (78%), respectively. PF was significantly lower when OTSCs were applied as FLET compared to SLET (4.9% vs 23%, P = 0.008). In multivariate analysis, patients who had OTSC placement as SLET had a significantly higher rebleeding risk compared to those who had FLET (OR 5.3; P = 0.008). Patients with Rockall risk scores ≥ 7 had a significantly higher in-hospital mortality compared to those with scores < 7 (35% vs 10%, P = 0.034). No significant differences were observed in patients with scores < or ≥ 7 in rebleeding and rebleeding-associated mortality. Our data show for the first time that FLET with OTSC might be the best predictor of successfully preventing rebleeding of gastrointestinal bleeding compared to SLET. The type of treatment determines the success of primary hemostasis or primary failure.
Li, C; Bhatt, P P; Johnston, T P
1998-10-01
We have assessed the bioadhesive properties of several different mucoadhesive buccal patches. The patches consisted of custom coformulations of silicone polymers and Carbopol 974P. The contact angle of water was measured for each of the test formulations, using an ophthalmic shadow scope. The corresponding work of adhesion between the water and the patches (W1), and between the patches and freshly excised rabbit buccal mucosa (W2), was then calculated using a modification of Dupre's equation. The bioadhesive strength between the patches and excised rabbit buccal mucosa was also assessed. The results of the contact-angle measurements indicated that the contact angle decreased with an increase in the amount of Carbopol in the formulation. Additionally, the calculated values of both W1 and W2 increased with an increase in the amount of Carbopol in the buccal-patch formulations. A correlation (r = 0.9808) was found between the measured contact angle and the calculated values for W2. The direct measurement of the force required to separate a buccal patch from excised rabbit buccal mucosa with the INSTRON demonstrated that the adhesive strength increased with an increase in the amount of Carbopol. This preliminary study has shown that the measurement of contact angles alone may provide a useful technique for estimating the work of adhesion, and may serve as a convenient and rapid screening procedure to identify potential mucoadhesive buccal-patch formulations.
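The contact-angle-to-work-of-adhesion step can be illustrated with the classical Young-Dupre relation (the study itself applies a modified Dupre equation); the surface tension and angles here are illustrative, not the measured values.

import math

GAMMA_WATER = 72.8e-3  # N/m, water at ~20 C

def work_of_adhesion(contact_angle_deg, gamma=GAMMA_WATER):
    # W = gamma * (1 + cos(theta)), in J/m^2.
    return gamma * (1.0 + math.cos(math.radians(contact_angle_deg)))

for angle in (95, 80, 65):  # angle decreases as Carbopol content increases
    w = work_of_adhesion(angle) * 1e3
    print(f"theta = {angle:3d} deg -> W1 = {w:.1f} mJ/m^2")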
NASA Technical Reports Server (NTRS)
Campbell, James R.; Welton, Ellsworth J.; Spinhirne, James D.; Ji, Qiang; Tsay, Si-Chee; Piketh, Stuart J.; Barenbrug, Marguerite; Holben, Brent; Starr, David OC. (Technical Monitor)
2002-01-01
During the ARREX-1999 and SAFARI-2000 Dry Season experiments, a micropulse lidar (523 nm) instrument was operated at the Skukuza Airport in northeastern South Africa. The lidar was collocated with a diverse array of passive radiometric equipment. For SAFARI-2000, the processed lidar data yield a daytime time series of layer-mean/derived aerosol optical properties, including extinction-to-backscatter ratios and vertical extinction cross-section profiles. Combined with 523 nm aerosol optical depth and spectral Angstrom exponent calculations from available CIMEL sun-photometer data and normalized broadband flux measurements, the temporal evolution of the near-surface aerosol layer optical properties is analyzed for climatological trends. For the densest smoke/haze events the extinction-to-backscatter ratio is found to be between 60 and 80 sr, with corresponding Angstrom exponent calculations near and above 1.75. The optical characteristics of an evolving smoke event from SAFARI-2000 are extensively detailed. The advecting smoke was embedded within two distinct stratified thermodynamic layers, causing the particulate mass to advect over the instrument array in an incoherent manner on the afternoon of its occurrence. Surface broadband flux forcing due to the smoke is calculated, as is the evolution in the vertical aerosol extinction profile as measured by the lidar. Finally, observations of persistent elevated aerosol during ARREX-1999 are presented and discussed. The lack of corroborating observations the following year makes these observations both unique and noteworthy in the scope of regional aerosol transport over southern Africa.
High energy density propulsion systems and small engine dynamometer
NASA Astrophysics Data System (ADS)
Hays, Thomas
2009-07-01
Scope and Method of Study. This study investigates all possible methods of powering small unmanned vehicles, provides reasoning for the propulsion-system down-select, and covers in detail the design and production of a dynamometer to confirm theoretical energy-density calculations for small engines. Initial energy-density calculations are based upon manufacturer data, pressure vessel theory, and ideal thermodynamic cycle efficiencies. Engine tests are conducted with a braking-type dynamometer for constant-load energy-density tests, and show true energy densities in excess of 1400 Wh/lb of fuel. Findings and Conclusions. Theory predicts lithium polymer, the present energy-storage device of choice for unmanned systems, to have much lower energy density than other conversion energy sources. Small engines designed for efficiency, instead of maximum power, would provide the most advantageous method for powering small unmanned vehicles because these engines have widely variable power output, lose mass during flight, and generate rotational power directly. Theoretical predictions for the energy density of small engines have been verified through testing: tested values up to 1400 Wh/lb can be seen under proper operating conditions. The implementation of such a high-energy-density system will require a significant amount of follow-on design work to enable the engines to tolerate the higher temperatures of lean operation. Suggestions are proposed for enabling a reliable small-engine propulsion system in future work. Performance calculations show that a mature system is capable of month-long flight times and unrefueled circumnavigation of the globe.
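Two of the calculations behind these figures are easy to reproduce approximately: shaft power from a braking dynamometer and fuel energy density at an assumed efficiency. The heating value and the 25% efficiency below are assumptions for illustration, not measured values from the thesis.

import math

LHV_GASOLINE_WH_PER_LB = 5560.0   # ~44 MJ/kg lower heating value, converted

def shaft_energy_density(lhv_wh_per_lb, thermal_efficiency):
    # Shaft-output energy per pound of fuel burned.
    return lhv_wh_per_lb * thermal_efficiency

def brake_power_w(torque_nm, rpm):
    # P = 2*pi*N*T/60, the basic braking-dynamometer relation.
    return 2.0 * math.pi * rpm / 60.0 * torque_nm

print(shaft_energy_density(LHV_GASOLINE_WH_PER_LB, 0.25))  # ~1390 Wh/lb
print(brake_power_w(torque_nm=0.5, rpm=8000))              # ~419 W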
Multiscale study of metal nanoparticles
NASA Astrophysics Data System (ADS)
Lee, Byeongchan
Extremely small structures with reduced dimensionality have emerged as a scientific motif for their interesting properties. In particular, metal nanoparticles have been identified as a fundamental material in many catalytic activities; as a consequence, a better understanding of the structure-function relationship of nanoparticles has become crucial. The functional analysis of nanoparticles, reactivity for example, requires an accurate method at the electronic structure level, whereas the structural analysis to find energetically stable local minima is beyond the scope of quantum mechanical methods, as the computational cost becomes prohibitively high. The challenge is that the inherent length scale and accuracy associated with any single method hardly cover the broad scale range spanned by both structural and functional analyses. In order to address this, and effectively explore the energetics and reactivity of metal nanoparticles, a hierarchical multiscale modeling approach is developed, in which methodologies of different length scales, i.e. first principles density functional theory, atomistic calculations, and continuum modeling, are utilized in a sequential fashion. This work has focused on identifying the essential information that bridges two different methods so that a successive use of different methods is seamless. The bond characteristics of low-coordination systems have been obtained with first principles calculations and incorporated into the atomistic simulation. This also rectifies the deficiency of conventional interatomic potentials fitted to bulk properties, and improves the accuracy of atomistic calculations for nanoparticles. For the systematic shape selection of nanoparticles, we have improved the Wulff-type construction using a semi-continuum approach, in which atomistic surface energetics and the crystallinity of materials are added onto the continuum framework. The developed multiscale modeling scheme is applied to the rational design of platinum nanoparticles in the range of 2.4 nm to 3.1 nm: energetically favorable structures have been determined in terms of semi-continuum binding energy, and the reactivity of the selected nanoparticle has been investigated based on the local density of states from first principles calculations. The calculation suggests that the reactivity landscape of particles is more complex than the simple reactivity of clean surfaces, and that the reactivity towards a particular reactant can be predicted for a given structure.
An Individualized Risk Calculator for Research in Prodromal Psychosis
Cannon, Tyrone D.; Yu, Changhong; Addington, Jean; Bearden, Carrie E.; Cadenhead, Kristin S.; Cornblatt, Barbara A.; Heinssen, Robert; Jeffries, Clark D.; Mathalon, Daniel H.; McGlashan, Thomas H.; Perkins, Diana O.; Seidman, Larry J.; Tsuang, Ming T.; Walker, Elaine F.; Woods, Scott W.; Kattan, Michael
2016-01-01
Objective About 20–35% of individuals aged 12–30 years who meet criteria for a prodromal risk syndrome convert to psychosis within two years. However, this estimate ignores the fact that clinical high-risk (CHR) cases vary considerably in risk. Here we sought to create a risk calculator that can ascertain the probability of conversion to psychosis in individual patients based on profiles of risk indicators. The high-risk category predicted by this calculator can inform research criteria going forward. Method Subjects were 596 CHR participants from the second phase of the North American Prodrome Longitudinal Study (NAPLS 2) who were followed up to the time of conversion to psychosis or last contact (up to 2 years). Our scope was limited to predictors supported by prior studies and readily obtainable in general clinical settings. Time-to-event regression was used to build a multivariate model predicting conversion, with internal validation using 1000 bootstrap resamples. Results The 2-year probability of conversion to psychosis in this sample was 16%. Higher levels of unusual thought content and suspiciousness, greater decline in social functioning, lower verbal learning and memory performance, slower speed of processing, and younger age at baseline each contributed to individual risk for psychosis, while stressful life events, traumas, and family history of schizophrenia were not significant predictors. The multivariate model achieved a concordance index of 0.71 and was validated in an independent external dataset. The results are instantiated in a web-based risk prediction tool envisioned to be most useful in research protocols involving the psychosis prodrome. Conclusions A risk calculator comparable in accuracy to those for cardiovascular disease and cancer is available to predict individualized conversion risks in newly ascertained CHR cases. Given that the risk calculator can only be validly applied to patients who screen positive on the Structured Clinical Interview for Psychosis Risk Syndromes, which requires training to administer, its most immediate uses will be in research on psychosis risk factors and in research-driven clinical (prevention) trials. PMID:27363508
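Conceptually, such a calculator turns a fitted time-to-event model into an individualized probability; the sketch below shows the usual Cox-type transform with invented coefficients, predictor means, and baseline survival. It is not the published NAPLS-2 model.

import math

BETA = {"unusual_thought": 0.30, "decline_social": 0.25,
        "verbal_memory": -0.20, "processing_speed": -0.15, "age": -0.10}
MEANS = {"unusual_thought": 2.0, "decline_social": 1.0,
         "verbal_memory": 0.0, "processing_speed": 0.0, "age": 18.0}
S0_2YR = 0.84  # hypothetical baseline 2-year event-free probability

def conversion_risk(patient):
    # Linear predictor centered at sample means, then survival transform:
    # risk = 1 - S0(t)^exp(lp)
    lp = sum(BETA[k] * (patient[k] - MEANS[k]) for k in BETA)
    return 1.0 - S0_2YR ** math.exp(lp)

print(conversion_risk({"unusual_thought": 4, "decline_social": 2,
                       "verbal_memory": -1.0, "processing_speed": -0.5,
                       "age": 16}))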
Increasing prevalence of diabetes in Bangladesh: a scoping review.
Biswas, T; Islam, A; Rawal, L B; Islam, S M S
2016-09-01
The prevalence of type 2 diabetes is increasing rapidly in Bangladesh. However, studies documenting this increasing trend are scarce. The aim of this study was to conduct a scoping review of published literature to ascertain the changing patterns of diabetes prevalence in Bangladesh. We conducted a scoping review based on the York scoping review framework and performed a comprehensive search of literature published between 1994 and 2013 through Medline, BanglaJOL, and Google Scholar. We summarised and calculated the time trends and pooled prevalence of type 2 diabetes among adults (≥18 years) in both urban and rural areas of Bangladesh. Of 152 studies identified, we included 22 studies that met the inclusion criteria. Overall, 11 studies (50%) were conducted in rural areas, eight in urban areas (36%), and three (14%) in semi-urban, semi-rural, and tribal areas. The overall prevalence of type 2 diabetes ranged between 4.5% and 35.0%. The final estimate of diabetes prevalence obtained after pooling data from individual studies among 51,252 participants was 7.4% (95% CI 7.2-7.7%). The prevalence of diabetes was higher in males than females in urban areas and vice versa in rural areas. Analyses of exponential trends revealed an increasing diabetes prevalence among urban and rural populations at rates of 0.05% (R = 0.18) and 0.06% (R = 0.35) per year, respectively. The prevalence of type 2 diabetes showed an increasing trend in both urban and rural populations in Bangladesh. Our findings suggest the need for an all-out effort by the government and stakeholders to implement preventive strategies for diabetes in Bangladesh. Copyright © 2016 The Royal Society for Public Health. Published by Elsevier Ltd. All rights reserved.
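The pooled estimate above can be reproduced to first order with a simple fixed (unweighted) pooling; the case count below is back-calculated for illustration, and a weighted or random-effects pooling as used in such reviews would shift the interval slightly.

import math

def pooled_prevalence(cases, n, z=1.96):
    # Normal-approximation confidence interval for a pooled proportion.
    p = cases / n
    se = math.sqrt(p * (1 - p) / n)
    return p, p - z * se, p + z * se

p, lo, hi = pooled_prevalence(cases=3793, n=51252)  # cases assumed; n from text
print(f"{p:.1%} (95% CI {lo:.1%}-{hi:.1%})")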
Yu, Wenhao
2017-01-01
Regional co-location scoping intends to identify local regions where spatial features of interest are frequently located together. Most previous research in this domain has been conducted on a global scale and assumes that spatial objects are embedded in a 2-D space, but movement in urban space is actually constrained by the street network. In this paper we refine the scope of co-location patterns to 1-D paths consisting of nodes and segments. Furthermore, since the relations between spatial events are usually inversely proportional to their separation distance, the proposed method introduces "Distance Decay Effects" to improve the result. Specifically, our approach first subdivides the street edges into continuous small linear segments. Then a value representing the local distribution intensity of events is estimated for each linear segment using the distance-decay function. Each kind of geographic feature leads to a tessellated network with a density attribute, and the multiple networks generated for the pattern of interest are finally combined into a composite network by calculating the co-location prevalence measure values, which are based on the density variation between different features. Our experiments verify that the proposed approach is effective in urban analysis. PMID:28763496
NASA Astrophysics Data System (ADS)
Taheri, Elmira; Mirjafary, Zohreh; Saeidian, Hamid
2018-04-01
The novel hydroxymethylated 1,4-disubstituted-1,2,3-triazole-based sulfonamides were synthesized in excellent yields and high regioselectivity via a one-pot, two-step, three-component reaction of N-propargylsulfonamides, sodium azide, and epoxide derivatives under green conditions. Green and mild reaction conditions, commercially available and inexpensive starting materials, wide scope, and easy work-up are the key features of the present method. The Li+ and Na+ ion affinities of the model structure have also been investigated by density functional theory (DFT) studies to assess the applicability of these products as ligands in coordination chemistry.
Air Permeability of Renovation Plasters Evaluated with Torrent’s Method
NASA Astrophysics Data System (ADS)
Brasse, Krystian; Tracz, Tomasz
2017-10-01
The aim of the research was to determine the air permeability of renovation plasters using Torrent's method. The scope of this research included three renovation plaster systems. Each was applied on an experimental masonry element and had a different rendering coat. Permeability measurements were performed after 28 days of curing in a natural state. The partial data needed to calculate the coefficient of air permeability (kT) were recorded during the measurements. The test results indicate that the coefficient of air permeability kT can be determined for renovation plasters. At the same time, the results confirm the high porosity of the renovation plasters.
Commercial D-T FRC Power Plant Systems Analysis
NASA Astrophysics Data System (ADS)
Nguyen, Canh; Santarius, John; Emmert, Gilbert; Steinhauer, Loren; Stubna, Michael
1998-11-01
Results of an engineering issues scoping study of a Field-Reversed Configuration (FRC) burning D-T fuel will be presented. The study primarily focuses on engineering issues, such as tritium-breeding blanket design, radiation shielding, neutron damage, activation, safety, and environment. This presentation will concentrate on plasma physics, current drive, economics, and systems integration, which are important for the overall systems analysis. A systems code serves as the key tool in defining a reference point for detailed physics and engineering calculations plus parametric variations, and typical cases will be presented. Advantages of the cylindrical geometry and high beta (plasma pressure/magnetic-field pressure) are evident.
TOPAS/Geant4 configuration for ionization chamber calculations in proton beams
NASA Astrophysics Data System (ADS)
Wulff, Jörg; Baumann, Kilian-Simon; Verbeek, Nico; Bäumer, Christian; Timmermann, Beate; Zink, Klemens
2018-06-01
Monte Carlo (MC) calculations are a fundamental tool for the investigation of ionization chambers (ICs) in radiation fields, and for calculations in the scope of IC reference dosimetry. Geant4, as used for the toolkit TOPAS, is a major general-purpose code, generally suitable for investigating ICs in primary proton beams. To provide reliable results, the impact of parameter settings and the limitations of the underlying condensed history (CH) algorithm need to be known. A Fano cavity test was implemented in Geant4 (10.03.p1) for protons, based on the existing version for electrons distributed with the Geant4 release. This self-consistent test allows the calculation to be compared with the expected result for the typical IC-like geometry of an air-filled cavity surrounded by a higher density material. Various user-selectable parameters of the CH implementation in the EMStandardOpt4 physics-list were tested for incident proton energies between 30 and 250 MeV. Using TOPAS (3.1.p1), the influence of production cuts was investigated for bare air cavities in water, irradiated by primary protons. Detailed IC geometries for an NACP-02 plane-parallel chamber and an NE2571 Farmer chamber were created. The overall factor f_Q, the ratio between the dose-to-water and the dose to the sensitive air volume, was calculated for incident proton energies between 70 and 250 MeV. The Fano test demonstrated the EMStandardOpt4 physics-list with the WentzelIV multiple scattering model to be appropriate for IC calculations. If protons start perpendicular to the air cavity, no further step-size limitations are required to pass the test within 0.1%. For an isotropic source, limitations of the maximum step length within the air cavity and its surrounding, as well as a limitation of the maximum fractional energy loss per step, were required to pass within 0.2%. A production cut of ⩽5 μm or ∼15 keV for all particles yielded a constant result for f_Q of bare air-filled cavities. The overall factor f_Q for the detailed NACP-02 and NE2571 chamber models calculated with TOPAS agreed with the values of Gomà et al (2016 Phys. Med. Biol. 61 2389) within statistical uncertainties (1σ) of <0.3% for almost all energies, with a maximum deviation of 0.6% at 250 MeV for the NE2571. The selection of hadronic scattering models (QGSP_BIC versus QGSP_BERT) in TOPAS impacted the results at the highest energies by 0.3% ± 0.1%. Based on the Fano cavity test, the Geant4/TOPAS Monte Carlo code, in its investigated version, can provide reliable results for IC calculations. Agreement with the detailed IC models and the published values of Gomà et al can be achieved when production cuts are reduced from the TOPAS default values. The calculations confirm the reported agreement of Gomà et al for f_Q with IAEA TRS-398 values within the given uncertainties. An additional uncertainty of ∼0.3% for the MC-calculated f_Q due to hadronic interaction models should be considered.
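In symbols, the overall factor described above is simply the ratio (notation f_Q taken from the abstract's wording):

```latex
f_Q \;=\; \frac{D_\mathrm{w}}{\bar{D}_\mathrm{air}},
```

with D_w the dose to water at the measurement point and D̄_air the mean dose to the chamber's sensitive air volume, both evaluated for the same beam quality Q.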
NASA Astrophysics Data System (ADS)
Winkel, B. V.
1995-03-01
The purpose of this report is to document the Multi-Function Waste Tank Facility (MWTF) Project position on the concrete mechanical properties needed to perform design/analysis calculations for the MWTF secondary concrete structure. This report provides a position on MWTF concrete properties for the Title 1 and Title 2 calculations. The scope of the report is limited to mechanical properties and does not include the thermophysical properties of concrete needed to perform heat transfer calculations. In the 1970s, a comprehensive series of tests was performed at Construction Technology Laboratories (CTL) on two different Hanford concrete mix designs. Statistical correlations of the CTL data were later generated by Pacific Northwest Laboratories (PNL). These test results and property correlations have been utilized in various design/analysis efforts for Hanford waste tanks. However, due to changes in the concrete design mix and the lower range of MWTF operating temperatures, plus uncertainties in the CTL data and PNL correlations, it was prudent to evaluate the CTL database and PNL correlations relative to the MWTF application and to develop a defensible position. The CTL test program for Hanford concrete involved two different mix designs: a 3 kip/sq in mix and a 4.5 kip/sq in mix. The proposed 28-day design strength for the MWTF tanks is 5 kip/sq in. In addition to this design strength difference, there are also differences between the CTL and MWTF mix design details. Also of interest is the appropriate application of the MWTF concrete properties in performing calculations demonstrating ACI Code compliance. Mix design details and ACI Code issues are addressed in Sections 3.0 and 5.0, respectively. The CTL test program and PNL data correlations focused on a temperature range of 250 to 450 °F. The temperature range of interest for the MWTF tank concrete application is 70 to 200 °F.
Assessment of doses caused by electrons in thin layers of tissue-equivalent materials, using MCNP.
Heide, Bernd
2013-10-01
Absorbed doses caused by electron irradiation were calculated with the Monte Carlo N-Particle transport code (MCNP) for thin layers of tissue-equivalent materials. The layers were so thin that the calculation of energy deposition was on the border of the scope of MCNP. Therefore, the application of three different methods of calculating energy deposition is discussed in this article. This was done by means of two scenarios: in the first, electrons were emitted from the centre of a sphere of water and also recorded in that sphere; in the second, an irradiation with the PTB Secondary Standard BSS2 was modelled, where electrons were emitted from a (90)Sr/(90)Y area source and recorded inside a cuboid phantom made of tissue-equivalent material. The speed and accuracy of the different methods were of interest. While a significant difference in accuracy was visible for one method in the first scenario, the difference in accuracy of the three methods was insignificant for the second. Considerable differences in speed were found for both scenarios. In order to demonstrate the need for calculating the dose in thin small zones, a third scenario was constructed and simulated as well. The third scenario was nearly identical to the second, but with a spike of lead assumed to be inside the phantom. A dose enhancement (caused by the lead spike) of ∼113 % was recorded for a thin hollow cylinder at a depth of 0.007 cm, the depth conventionally assigned to the basal skin layer. Dose enhancements between 68 and 88 % were found for a slab with a radius of 0.09 cm at all depths. All dose enhancements were hardly noticeable for a slab with a cross-sectional area of 1 cm(2), the area usually applied in operational radiation protection.
NASA Astrophysics Data System (ADS)
Diamond, Roger E.; Jack, Sam
2018-04-01
Changes in the stable isotope composition of water can, with the aid of climatic parameters, be used to calculate the quantity of evaporation from a water body. Previous workers have mostly focused on small research catchments with abundant data but limited scope. This study aimed to expand such work to a regional or sub-continental scale. The first full-length isotope survey of the Gariep River quantifies evaporation on the river and the man-made reservoirs for the first time, and proposes a technique to calculate abstraction from the river. The theoretically determined final isotope composition for an evaporating water body in the given climate lies on the empirically determined local evaporation line, validating the assumptions and inputs of the Craig-Gordon evaporation model that was used. Evaporation from the Gariep River amounts to around 20% of flow, or 40 m3/s, of which about half is due to evaporation from the surface of the Gariep and Vanderkloof Reservoirs, showing the wastefulness of large surface water impoundments. This compares well with previous estimates based on evapotranspiration calculations, and equates to around 1300 GL/a of water, or about the annual water consumption of Johannesburg and Pretoria, where over 10 million people reside. Using similar evaporation calculations and applying existing transpiration estimates to a gauged length of river, the remaining quantity can be attributed to abstraction, amounting to 175 L/s/km in the lower middle reaches of the river. Given that high water demand and climate change are global problems, and with the challenges of maintaining water monitoring networks, stable isotopes are shown to be applicable over regional to national scales for modelling hydrological flows. Stable isotopes provide a complementary method to conventional flow gauging for understanding the hydrology and management of large water resources, particularly in arid areas subject to significant evaporation.
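For orientation, a simplified Rayleigh-type relation (not the full Craig-Gordon treatment applied in the study) shows how an evaporated fraction can be backed out of isotopic enrichment:

```latex
\delta \;\approx\; \delta_0 + \varepsilon \ln f
\qquad\Longrightarrow\qquad
f \;\approx\; \exp\!\left(\frac{\delta - \delta_0}{\varepsilon}\right),
```

where δ0 and δ are the upstream and downstream compositions, ε is the effective enrichment factor (equilibrium plus kinetic terms, both climate-dependent), f is the fraction of water remaining, and 1 − f the evaporated fraction.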
Nuclear modules for space electric propulsion
NASA Technical Reports Server (NTRS)
Difilippo, F. C.
1998-01-01
Analysis of interplanetary cargo and piloted missions requires calculations of the performances and masses of subsystems to be integrated in a final design. In a preliminary and scoping stage the designer needs to evaluate options iteratively by using fast computer simulations. The Oak Ridge National Laboratory (ORNL) has been involved in the development of models and calculational procedures for the analysis (neutronic and thermal-hydraulic) of power sources for nuclear electric propulsion. The nuclear modules will be integrated into the whole simulation of the nuclear electric propulsion system. The vehicles use either a Brayton direct-conversion cycle, using the heated helium from a NERVA-type reactor, or a potassium Rankine cycle, with the working fluid heated on the secondary side of a heat exchanger and lithium on the primary side coming from a fast reactor. Given a set of input conditions, the codes calculate composition, dimensions, volumes, and masses of the core, reflector, control system, pressure vessel, and neutron and gamma shields, as well as the thermal-hydraulic conditions of the coolant, clad and fuel. Input conditions are power, core life, pressure and temperature of the coolant at the inlet of the core, either the temperature of the coolant at the outlet of the core or the coolant mass flow, and the fluences and integrated doses at the cargo area. Using state-of-the-art neutron cross sections and transport codes, a database was created for the neutronic performance of both reactor designs. The free parameters of the models are the moderator/fuel mass ratio for the NERVA reactor and the enrichment and the pitch of the lattice for the fast reactor. Reactivity and energy balance equations are simultaneously solved to find the reactor design. Thermal-hydraulic conditions are calculated by solving the one-dimensional versions of the equations of conservation of mass, energy, and momentum with compressible flow.
NASA Astrophysics Data System (ADS)
Quinn, P.
2015-12-01
The Arctic Monitoring and Assessment Programme (AMAP) established an Expert Group on Short-Lived Climate Forcers (SLCFs) in 2009 with the goal of reviewing the state of science surrounding SLCFs in the Arctic and recommending science tasks to improve the state of knowledge and its application to policy-making. In 2011, the result of the Expert Group's work was published in a technical report entitled The Impact of Black Carbon on Arctic Climate (AMAP, 2011). That report focused entirely on black carbon (BC) and co-emitted organic carbon (OC). The SLCFs Expert Group then expanded its scope to include all species co-emitted with BC as well as tropospheric ozone. An assessment report, entitled Black Carbon and Tropospheric Ozone as Arctic Climate Forcers, was published in 2015. The assessment includes summaries of measurement methods and emissions inventories of SLCFs, atmospheric transport of SLCFs to and within the Arctic, modeling methods for estimating the impact of SLCFs on Arctic climate, model-measurement inter-comparisons, trends in concentrations of SLCFs in the Arctic, and a literature review of Arctic radiative forcing and climate response. In addition, three Chemistry Climate Models and five Chemistry Transport Models were used to calculate Arctic burdens of SLCFs and precursor species, radiative forcing, and Arctic temperature response to the forcing. Radiative forcing was calculated for the direct atmospheric effect of BC, the BC-snow/ice effect, and cloud indirect effects. Forcing and temperature response associated with different source sectors (Domestic, Energy+Industry+Waste, Transport, Agricultural waste burning, Forest fires, and Flaring) and source regions (United States, Canada, Russia, Nordic Countries, Rest of Europe, East and South Asia, Arctic, mid-latitudes, tropics, southern hemisphere) were calculated. To enable an evaluation of the cost-effectiveness of regional emission mitigation options, the normalized impacts (i.e., impacts per unit emission from each sector and region) were also calculated. Key findings from the 2015 assessment will be presented.
INADEQUACY OF THORON DOSE CALCULATIONS FROM THORON PROGENY MEASUREMENT ALONE.
Lane-Smith, D; Wong, F K
2016-10-01
To determine the dose received domestically from thoron (²²⁰Rn), conventional methods measure the activity concentration of thoron progeny only (namely the ²¹²Pb atoms) and calculate the dose using a set of conversion factors. This may be because the measurement of progeny is simpler: the progeny are longer lived and will be spread evenly throughout the room, whereas the thoron gas, with its short half-life, will exist only near the source and hence is not of major concern for the majority of the room. However, concrete walls are a source of thoron, and spending prolonged amounts of time near them may lead to greatly increased radiation exposure, the degree of which is not revealed through progeny activity alone. The present paper compares the energy received from the ionising radiation of both thoron gas and thoron progeny near its source. Converting the energy dose to radiation dose is not within the scope of this paper. The results suggest that the dose is an order of magnitude higher when the dose received from thoron gas is taken into account. © The Author 2016. Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Freeze, R.A.
Many emerging remediation technologies are designed to remove contaminant mass from source zones at DNAPL sites in response to regulatory requirements. There is often concern in the regulated community as to whether mass removal actually reduces risk, or whether the small risk reductions achieved warrant the large costs incurred. This paper sets out a framework for quantifying the degree to which risk is reduced as mass is removed from shallow, saturated, low-permeability, dual-porosity, DNAPL source zones. Risk is defined in terms of meeting an alternate concentration level (ACL) at a compliance well in an aquifer underlying the source zone. The ACL is back-calculated from a carcinogenic health-risk characterization at a downstream water-supply well. Source-zone mass-removal efficiencies are heavily dependent on the distribution of mass between media (fractures, matrix) and phases (dissolved, sorbed, free product). Due to the uncertainties in currently-available technology performance data, the scope of the paper is limited to developing a framework for generic technologies rather than making risk-reduction calculations for specific technologies. Despite the qualitative nature of the exercise, results imply that very high mass-removal efficiencies are required to achieve significant long-term risk reduction with technology applications of finite duration. 17 refs., 7 figs., 6 tabs.
Volume dependence of N-body bound states
NASA Astrophysics Data System (ADS)
König, Sebastian; Lee, Dean
2018-04-01
We derive the finite-volume correction to the binding energy of an N-particle quantum bound state in a cubic periodic volume. Our results are applicable to bound states with arbitrary composition and total angular momentum, and in any number of spatial dimensions. The only assumption is that the interactions have finite range. The finite-volume correction is a sum of contributions from all possible breakup channels. In the case where the separation is into two bound clusters, our result gives the leading volume dependence up to exponentially small corrections. If the separation is into three or more clusters, there is a power-law factor that is beyond the scope of this work; however, our result again determines the leading exponential dependence. We also present two independent methods that use finite-volume data to determine asymptotic normalization coefficients. These coefficients are useful for determining low-energy capture reactions into weakly bound states relevant for nuclear astrophysics. Using the techniques introduced here, one can even extract the infinite-volume energy limit using data from a single-volume calculation. The derived relations are tested using several exactly solvable systems and numerical examples. We anticipate immediate applications to lattice calculations of hadronic, nuclear, and cold atomic systems.
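Schematically, for breakup into two bound clusters the leading correction has the exponential form (constants suppressed; the prefactor shown is the generic S-wave scaling in three dimensions, and γ denotes the asymptotic normalization coefficient discussed in the abstract):

```latex
\Delta E(L) \;\equiv\; E(L) - E(\infty) \;\propto\; |\gamma|^2 \,\frac{e^{-\kappa L}}{L},
```

with κ the binding momentum of the relative cluster motion and L the box size.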
Shielding calculations for the National Synchrotron Light Source-II experimental beamlines
NASA Astrophysics Data System (ADS)
Job, Panakkal K.; Casey, William R.
2013-01-01
Brookhaven National Laboratory is in the process of building a new electron storage ring for scientific research using synchrotron radiation. This facility, called the "National Synchrotron Light Source II" (NSLS-II), will provide x-ray radiation of ultra-high brightness and exceptional spatial and energy resolution. It will also provide advanced insertion devices, optics, detectors, and robotics, designed to maximize the scientific output of the facility. The project scope includes the design of the electron storage ring, which stores a maximum electron beam current of 500 mA at an energy of 3.0 GeV, and of the experimental beamlines. When fully built, there will be at least 58 beamlines using synchrotron radiation for experimental programs. It is planned to operate the facility primarily in a top-off mode, thereby limiting the variation in the synchrotron radiation flux to <1%. Because of the very demanding requirements on synchrotron radiation brilliance for the experiments, each of the 58 beamlines will be unique in terms of source properties and experimental configuration. This makes the shielding configuration of each of the beamlines unique. The shielding calculation methodology and the results for five representative beamlines of NSLS-II are presented in this paper.
Naska, A; Trichopoulou, A
2001-08-01
The EU-supported project entitled "Compatibility of household budget and individual nutrition surveys and disparities in food habits" aimed at comparing individualised household budget survey (HBS) data with food consumption values derived from individual nutrition surveys (INS). The present paper provides a brief description of the methodology applied for rendering the datasets comparable. Results of the preliminary evaluation of their compatibility are also presented. A non-parametric modelling approach was used for the age- and gender-specific individualisation of the food data collected at household level in the context of the national HBSs, and the bootstrap technique was used for the derivation of 95% confidence intervals. For each food group, INS- and HBS-derived mean values were calculated for twenty-four research units, jointly defined by country (four countries involved), gender (male, female) and age (younger, middle-aged and older). Pearson correlation coefficients were calculated. The results of this preliminary analysis show that there is considerable scope in the nutritional information derived from HBSs. Additional and more sophisticated work is, however, required, putting particular emphasis on addressing limitations present in both surveys and on deriving reliable individual consumption point and interval estimates on the basis of HBS data.
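A minimal sketch of the bootstrap step for a single research unit (the intake data and resample count are invented; the non-parametric individualisation model itself is not reproduced):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical individualised daily intakes (g/day) for one
# country/gender/age research unit.
intakes = rng.gamma(shape=2.0, scale=60.0, size=250)

# Bootstrap the mean: resample with replacement, recompute, take percentiles.
boot_means = [rng.choice(intakes, size=intakes.size, replace=True).mean()
              for _ in range(2000)]
lo, hi = np.percentile(boot_means, [2.5, 97.5])
print(f"mean {intakes.mean():.1f} g/day, 95% bootstrap CI ({lo:.1f}, {hi:.1f})")
```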
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hardin, Ernest; Hadgu, Teklu; Greenberg, Harris
This report is one follow-on to a study of reference geologic disposal design concepts (Hardin et al. 2011a). Based on an analysis of maximum temperatures, that study concluded that certain disposal concepts would require extended decay storage prior to emplacement, or the use of small waste packages, or both. The study used nominal values for thermal properties of host geologic media and engineered materials, demonstrating the need for uncertainty analysis to support the conclusions. This report is a first step that identifies the input parameters of the maximum temperature calculation, surveys published data on measured values, uses an analytical approach to determine which parameters are most important, and performs an example sensitivity analysis. Using results from this first step, temperature calculations planned for FY12 can focus on only the important parameters and can use the uncertainty ranges reported here. The survey of published information on thermal properties of geologic media and engineered materials is intended to be sufficient for use in generic calculations to evaluate the feasibility of reference disposal concepts. A full compendium of literature data is beyond the scope of this report. The term “uncertainty” is used here to represent both measurement uncertainty and spatial variability, or variability across host geologic units. For the most important parameters (e.g., buffer thermal conductivity) the extent of literature data surveyed samples these different forms of uncertainty and variability. Finally, this report is intended to be one chapter or section of a larger FY12 deliverable summarizing all the work on design concepts and thermal load management for geologic disposal (M3FT-12SN0804032, due 15Aug2012).
Sarigiannis, Dimosthenis A.; Karakitsios, Spyros P.; Gotti, Alberto; Papaloukas, Costas L.; Kassomenos, Pavlos A.; Pilidis, Georgios A.
2009-01-01
The objective of the current study was the development of a reliable modeling platform to calculate in real time the personal exposure and the associated health risk for filling station employees evaluating current environmental parameters (traffic, meteorological and amount of fuel traded) determined by the appropriate sensor network. A set of Artificial Neural Networks (ANNs) was developed to predict benzene exposure pattern for the filling station employees. Furthermore, a Physiology Based Pharmaco-Kinetic (PBPK) risk assessment model was developed in order to calculate the lifetime probability distribution of leukemia to the employees, fed by data obtained by the ANN model. Bayesian algorithm was involved in crucial points of both model sub compartments. The application was evaluated in two filling stations (one urban and one rural). Among several algorithms available for the development of the ANN exposure model, Bayesian regularization provided the best results and seemed to be a promising technique for prediction of the exposure pattern of that occupational population group. On assessing the estimated leukemia risk under the scope of providing a distribution curve based on the exposure levels and the different susceptibility of the population, the Bayesian algorithm was a prerequisite of the Monte Carlo approach, which is integrated in the PBPK-based risk model. In conclusion, the modeling system described herein is capable of exploiting the information collected by the environmental sensors in order to estimate in real time the personal exposure and the resulting health risk for employees of gasoline filling stations. PMID:22399936
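The ANN and PBPK models themselves are not reproducible from the abstract; the sketch below only illustrates the Monte Carlo layer, propagating uncertain exposure and susceptibility into a lifetime-risk distribution. All distributions and the unit-risk figure are invented placeholders.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 100_000

# Hypothetical inputs: benzene exposure (mg/m3) as an ANN might predict it,
# and a susceptibility multiplier standing in for inter-individual
# PBPK variability.
exposure = rng.lognormal(mean=np.log(0.05), sigma=0.6, size=n)
susceptibility = rng.lognormal(mean=0.0, sigma=0.3, size=n)
unit_risk = 6e-6  # hypothetical lifetime leukemia risk per (mg/m3); placeholder

risk = unit_risk * exposure * susceptibility  # lifetime probability per person
print(f"median {np.median(risk):.2e}, 95th pct {np.percentile(risk, 95):.2e}")
```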
Analysis of aerobic granular sludge formation based on grey system theory.
Zhang, Cuiya; Zhang, Hanmin
2013-04-01
Based on grey entropy analysis, the relational grade of operational parameters with the granulation indicators of aerobic granular sludge was studied. The former consisted of settling time (ST), aeration time (AT), superficial gas velocity (SGV), height/diameter (H/D) ratio and organic loading rate (OLR); the latter included sludge volume index (SVI) and set-up time. The calculated results showed that for SVI and set-up time, the influence orders and the corresponding grey entropy relational grades (GERG) were: SGV (0.9935) > AT (0.9921) > OLR (0.9894) > ST (0.9876) > H/D (0.9857) and SGV (0.9928) > H/D (0.9914) > AT (0.9909) > OLR (0.9897) > ST (0.9878). The chosen parameters were all key impact factors, as each GERG was larger than 0.98. SGV played an important role in improving SVI transformation and facilitating the set-up process. The influence of ST on SVI and set-up time was relatively low due to its dual functions. SVI transformation and rapid set-up demanded different optimal H/D ratio ranges (10-20 and 16-20). Meanwhile, different functions could be obtained by adjusting the ranges of certain factors.
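The grey-entropy variant used in the paper is not fully specified in the abstract; the sketch below shows the classic (Deng) grey relational coefficient that such grades build on, with invented series and the customary distinguishing coefficient ρ = 0.5.

```python
import numpy as np

def grey_relational_grade(reference, comparison, rho=0.5):
    """Deng's grey relational grade between a reference series (e.g. SVI)
    and a comparison series (e.g. SGV), each normalized to [0, 1]."""
    norm = lambda x: (x - x.min()) / (x.max() - x.min())
    delta = np.abs(norm(reference) - norm(comparison))
    coeff = (delta.min() + rho * delta.max()) / (delta + rho * delta.max())
    return coeff.mean()  # the entropy variant replaces this plain mean
                         # with entropy-based weights

svi = np.array([120.0, 95.0, 80.0, 62.0, 55.0])  # made-up indicator series
sgv = np.array([1.2, 1.6, 2.0, 2.4, 2.8])        # made-up parameter series
print(round(grey_relational_grade(svi, sgv), 4))
```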
The True Cost of Greenhouse Gas Emissions: Analysis of 1,000 Global Companies
Ishinabe, Nagisa; Fujii, Hidemichi; Managi, Shunsuke
2013-01-01
This study elucidated the shadow price of greenhouse gas (GHG) emissions for 1,024 international companies worldwide that were surveyed from 15 industries in 37 major countries. Our results indicate that the shadow price of GHG at the firm level is much higher than indicated in previous studies. The higher shadow price was found in this study as a result of the use of Scope 3 GHG emissions data. The results of this research indicate that a firm would carry a high cost of GHG emissions if Scope 3 GHG emissions were the focus of the discussion of corporate social responsibility. In addition, such shadow prices were determined to differ substantially among countries, among sectors, and within sectors. Although a number of studies have calculated the shadow price of GHG emissions, these studies have employed country-level or industry-level data or a small sample of firm-level data in one country. This new data from a worldwide firm analysis of the shadow price of GHG emissions can play an important role in developing climate policy and promoting sustainable development. PMID:24265710
Nanotechnologies in Latvia: Commercialisation Aspect
NASA Astrophysics Data System (ADS)
Geipele, I.; Staube, T.; Ciemleja, G.; Ekmanis, J.; Zeltins, N.
2014-12-01
The authors consider the possibilities of applying nanotechnology products of Latvian manufacturing industries for further commercialisation. The purpose of the research is to establish preliminary criteria for a system of engineering-economic indicators for multifunctional nanocoating technologies. The article provides new findings and calculations for local nanotechnology market research characterising the development of the nanotechnology industry. The authors outline a set of issues relating to Latvia's low activity rankings in the application of locally produced nanotechnologies and to the efficiency of resource use for nanocoating technologies. For the first time in Latvia, the authors carry out a case-study research effort and summarise the latest performance indicators of the Latvian companies operating in the nanotechnology industry.
Murai Reaction on Furfural Derivatives Enabled by Removable N,N'-Bidentate Directing Groups.
Pezzetta, Cristofer; Veiros, Luis F; Oble, Julie; Poli, Giovanni
2017-06-22
Furfural and related compounds are industrially relevant building blocks obtained from lignocellulosic biomass. To enhance the added value of these renewable resources, a Ru-catalyzed hydrofurylation of alkenes, involving a directed C-H activation at C3 of the furan ring, was developed. A thorough experimental study revealed that a bidentate amino-imine directing group enabled the desired coupling. Removal of the directing group occurred during the purification step, directly releasing the C3-functionalized furfurals. Development of the reaction as well as optimization and scope of the method were described. A mechanism was proposed on the basis of DFT calculations. © 2017 Wiley-VCH Verlag GmbH & Co. KGaA, Weinheim.
Pellet fuelling requirements to allow self-burning on a helical-type fusion reactor
NASA Astrophysics Data System (ADS)
Sakamoto, R.; Miyazawa, J.; Yamada, H.; Masuzaki, S.; Sagara, A.; the FFHR Design Group
2012-08-01
Pellet refuelling conditions to sustain a self-burning plasma have been investigated by extrapolating the confinement property of the LHD plasma, which appears to be governed by a gyro-Bohm-type confinement property. The power balance of the burning plasma is calculated taking into account the profile change with pellet deposition and subsequent density relaxation. A self-burning plasma is achieved within the scope of conventional pellet injection technology. However, a very small burn-up rate of 0.18% is predicted. Higher velocity pellet injection is effective in improving the burn-up rate by deepening particle deposition, whereas deep fuelling leads to undesirable fluctuation of the fusion output.
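The power-balance condition behind such a calculation can be written in a generic zero-dimensional form (the study solves a profile-resolved version accounting for pellet deposition and density relaxation):

```latex
P_\alpha \;=\; n_{\mathrm{D}}\, n_{\mathrm{T}}\, \langle\sigma v\rangle_{\mathrm{DT}}\, E_\alpha\, V
\;\;\geq\;\; \frac{W}{\tau_E} \;+\; P_{\mathrm{rad}},
```

where E_α = 3.5 MeV is the alpha-particle energy, W the stored plasma energy, τ_E the energy confinement time (here extrapolated with the gyro-Bohm-type scaling), and P_rad the radiated power.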
Generalized second law of thermodynamics in f(R,T) theory of gravity
NASA Astrophysics Data System (ADS)
Momeni, D.; Moraes, P. H. R. S.; Myrzakulov, R.
2016-07-01
We present a study of the generalized second law of thermodynamics in the scope of the f(R,T) theory of gravity, with R and T representing the Ricci scalar and trace of the energy-momentum tensor, respectively. From the energy-momentum tensor equation for the f(R,T)=R+f(T) case, we calculate the form of the geometric entropy in such a theory. Then, the generalized second law of thermodynamics is quantified and some relations for its obedience in f(R,T) gravity are presented. Those relations depend on some cosmological quantities, as the Hubble and deceleration parameters, and also on the form of f(T).
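Generically, the law being quantified is the statement that the horizon entropy plus the entropy of the matter inside never decreases, written here for a Hubble horizon (the f(T)-dependent modification of S_h is derived in the paper and not reproduced here):

```latex
\dot{S}_{\mathrm{tot}} \;=\; \dot{S}_{\mathrm{h}} + \dot{S}_{\mathrm{in}} \;\geq\; 0,
\qquad A_{\mathrm{h}} \;=\; \frac{4\pi}{H^2},
```

with H the Hubble parameter and A_h the horizon area entering S_h.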
SEU System Analysis: Not Just the Sum of All Parts
NASA Technical Reports Server (NTRS)
Berg, Melanie D.; Label, Kenneth
2014-01-01
Single event upset (SEU) analysis of complex systems is challenging. Currently, system SEU analysis is performed by component-level partitioning, and then either the most dominant SEU cross-sections are used in system error rate calculations, or the partition cross-sections are summed to eventually obtain a system error rate. In many cases, system error rates are overestimated because these methods generally overlook system-level derating factors. The problem with overestimating is that it can cause overdesign and consequently negatively affect cost, schedule, functionality, and validation/verification. The scope of this presentation is to discuss the risks involved in our current scheme of SEU analysis for complex systems and to provide alternative methods for improvement.
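A toy sketch of why plain summation overestimates: multiplying each partition's raw upset rate by a system-level derating factor (utilization, masking) can change the error budget substantially. All numbers below are invented for illustration.

```python
# Per-partition SEU rates (upsets/day) derived from component cross-sections.
partition_rates = {"fpga_fabric": 1.2e-3, "sram": 4.0e-3, "cpu": 8.0e-4}

# System-level derating: fraction of upsets actually observable as errors
# (duty cycle, logical/architectural masking). Invented values.
derating = {"fpga_fabric": 0.30, "sram": 0.10, "cpu": 0.50}

naive = sum(partition_rates.values())
derated = sum(rate * derating[name] for name, rate in partition_rates.items())
print(f"naive sum: {naive:.2e}/day, derated: {derated:.2e}/day "
      f"(overestimate x{naive / derated:.1f})")
```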
Devlin, Jennifer; Kerr, William J; Lindsay, David M; McCabe, Timothy J D; Reid, Marc; Tuttle, Tell
2015-06-25
Herein we report a combined experimental and theoretical study on the deuterium labelling of benzoate ester derivatives, utilizing our developed iridium N-heterocyclic carbene/phosphine catalysts. A range of benzoate esters were screened, including derivatives with electron-donating and -withdrawing groups in the para- position. The substrate scope, in terms of the alkoxy group, was studied and the nature of the catalyst counter-ion was shown to have a profound effect on the efficiency of isotope exchange. Finally, the observed chemoselectivity was rationalized by rate studies and theoretical calculations, and this insight was applied to the selective labelling of benzoate esters bearing a second directing group.
Petersdorff, Carsten; Boermans, Thomas; Harnisch, Jochen
2006-09-01
GOAL SCOPE AND BACKGROUND: The European Directive on Energy Performance of Buildings, which came into force on 16 December 2002, was to be implemented in the legislation of Member States by 4 January 2006. In addition to the aim of improving the overall energy efficiency of new buildings, large existing buildings will become a target for improvement as soon as they undergo significant renovation. The building sector is responsible for about 40% of Europe's total end energy consumption, and hence this Directive is an important step for the European Union towards reaching the level of saving required by the Kyoto Agreement, under which the EU is committed to reducing CO2 emissions by 8 per cent by 2010, relative to the base year of 1990. But what will be the impact of the new Directive, and how large could the impact be of extending the obligation for energy-efficiency retrofitting to smaller buildings? Can improved insulation offset or reduce the growing energy consumption from the increasing installation of cooling systems? EURIMA, the European Insulation Manufacturers Association, and EuroACE, the European Alliance of Companies for Energy Efficiency in Buildings, asked Ecofys to address these questions. The effect of the EPB Directive on the emissions associated with the heating energy consumption of the total EU-15 building stock has been examined in a model calculation, using the Built Environment Analysis Model (BEAM), which was developed by Ecofys to investigate energy saving measures in the building stock. The great complexity of the EU-15 building stock was simplified by examining five standard buildings with eight insulation standards, which are assigned to building age and renovation status. Furthermore, three climatic regions (cold, moderate, warm) were distinguished for the calculation of the heating energy demand. This yielded a basic set of 210 building types, for which the heating energy demand and CO2 emissions from heating were calculated according to the principles of the European Norm EN 832. The model calculations demonstrate that the main contributor to the total heating-related CO2 emissions of 725 Mt/a from the EU building stock in 2002 is the residential sector (77%), while the remaining 23% originates from non-residential buildings. In the residential sector, single-family houses represent the largest group, responsible for 60% of the total CO2 emissions, equivalent to 435 Mt/a. THE TECHNICAL POTENTIAL: If all retrofit measures in the scope of the Directive were realised immediately for the complete residential and non-residential building stock, the overall CO2 emission savings would add up to 82 Mt/a. An additional saving potential compared to the Directive of 69 Mt/a would be created if the scope of the Directive were extended to cover retrofit measures in multi-family dwellings (200-1000 m2) and non-residential buildings smaller than 1000 m2 used floor space. Including, in addition, the large group of single-family dwellings would lead to a potential for additional CO2 emission reductions compared to the Directive of 316 Mt/a. TEMPORAL MOBILIZATION OF THE POTENTIAL: Calculations based on the building stock as it develops over time with average retrofit rates demonstrated that regulations introduced following the EPB Directive result in a CO2 emissions decrease of 34 Mt/a by the year 2010 compared to the business-as-usual scenario.
If the scope of the EPB Directive were extended to all residential buildings (including single- and multi-family dwellings), the CO2 emission savings potential over the business-as-usual scenario could be doubled to 69 Mt/a in the year 2010. This creates an additional saving potential compared to the Directive of 36 Mt/a. COOLING DEMAND: The analysis demonstrated that in warm climatic zones the cooling demand can be reduced drastically by a combination of lowering the internal heat loads and improving insulation. With the reduction of the heat loads to a moderate level, the cooling demand, e.g. of a terraced house located in Madrid, can be reduced by an additional 85% if the insulation level is improved appropriately. This study demonstrates that the European Directive on Energy Performance of Buildings will have a significant impact on the CO2 emissions of the European building stock. The main saving potential lies in insulation of the existing building stock. Beyond this, CO2 emissions could, however, be greatly reduced if the scope of the Directive were extended to include retrofit of smaller buildings. The reductions should be seen in relation to the remaining gap of 190 Mt CO2 eq. per annum between the current emission levels of the EU-15 and the target under the Kyoto Protocol for the year 2010. The energy and industrial sectors will probably contribute only a fraction of this reduction via the newly established EU emissions trading scheme and connected projects under the flexible mechanisms. In addition, the traffic sector is likely to continue its growth path, leading to a widening of the gap. Thus, there is likely to be considerable pressure on the EU building sector to contribute to the EU climate targets beyond what will be achieved by means of the current EPB Directive. Legislators at the EU and national level are therefore advised to take accelerated action to tap the very significant emission reduction potentials available in the EU building stock.
Longcroft-Wheaton, G; Brown, J; Cowlishaw, D; Higgins, B; Bhandari, P
2012-10-01
The resolution of endoscopes has increased in recent years. Modern Fujinon colonoscopes have a charge-coupled device (CCD) pixel density of 650,000 pixels, compared with the 410,000-pixel CCD in standard-definition scopes. Acquiring high-definition scopes represents a significant capital investment, and their clinical value remains uncertain. The aim of the current study was to investigate the impact of high-definition endoscopes on the in vivo histology prediction of colonic polyps. Colonoscopy procedures were performed using Fujinon colonoscopes and the EPX-4400 processor. Procedures were randomized to be performed using either a standard-definition EC-530 colonoscope or high-definition EC-530 and EC-590 colonoscopes. Polyps of <10 mm were assessed using both white light imaging (WLI) and flexible spectral imaging color enhancement (FICE), and the predicted diagnosis was recorded. Polyps were removed and sent for histological analysis by a pathologist who was blinded to the endoscopic diagnosis. The predicted diagnosis was compared with the histology to calculate the accuracy, sensitivity, and specificity of in vivo assessment using either standard- or high-definition scopes. A total of 293 polyps of <10 mm were examined: 150 polyps using the standard-definition colonoscope and 143 polyps using high-definition colonoscopes. There was no difference in sensitivity, specificity or accuracy between the two scopes when WLI was used (standard vs. high: accuracy 70% [95% CI 62–77] vs. 73% [95% CI 65–80]; P=0.61). When FICE was used, high-definition colonoscopes showed a sensitivity of 93% compared with 83% for standard-definition colonoscopes (P=0.048); specificity was 81% and 82%, respectively. There was no difference between high- and standard-definition colonoscopes when white light was used, but FICE significantly improved the in vivo diagnosis of small polyps when high-definition scopes were used compared with standard definition.
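For reference, the three reported metrics follow directly from the 2×2 table of predicted versus histological diagnosis; the counts below are invented, not the study's.

```python
# Confusion counts: in vivo prediction vs blinded histology (invented numbers).
tp, fp, fn, tn = 52, 11, 4, 76

sensitivity = tp / (tp + fn)  # true positives among histologically confirmed
specificity = tn / (tn + fp)  # true negatives among histologically negative
accuracy = (tp + tn) / (tp + fp + fn + tn)
print(f"sens {sensitivity:.0%}, spec {specificity:.0%}, acc {accuracy:.0%}")
```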
Fernandez-Calle, Pilar; Pelaz, Sandra; Oliver, Paloma; Alcaide, Maria Jose; Gomez-Rioja, Ruben; Buno, Antonio; Iturzaeta, Jose Manuel
2013-01-01
Introduction Technological innovation requires laboratories to ensure that modifications or incorporations of new techniques do not alter the quality of their results. In an ISO 15189 accredited laboratory, flexible scope accreditation facilitates the inclusion of these changes prior to accreditation body evaluation. A strategy to perform the validation of a biochemistry analyzer in an accredited laboratory holding a flexible scope is shown. Materials and methods: A validation procedure including the evaluation of imprecision and bias of two Dimension Vista 1500 analysers was conducted. Comparability of patient results between one of them and the recently replaced Dimension RxL Max was evaluated. All studies followed the respective Clinical and Laboratory Standards Institute (CLSI) protocols. 30 chemistry assays were studied. Coefficients of variation, percent bias and total error were calculated for all tests, and biological variation was used as the acceptance criterion. Quality control material and patient samples were used as test materials. Interchangeability of the results was established by processing forty patients’ samples on both devices. Results: 27 of the 30 studied parameters met allowable performance criteria. Sodium, chloride and magnesium did not fulfil acceptance criteria. Evidence of interchangeability of patient results was obtained for all parameters except magnesium, NT-proBNP, cTroponin I and C-reactive protein. Conclusions: A laboratory with a well-structured and documented validation procedure can opt to obtain a flexible scope of accreditation. In addition, performing these activities prior to use on patient samples may reveal technical issues which must be corrected to minimize their impact on patient results. PMID:23457769
Ni, Zhuoya; Liu, Zhigang; Li, Zhao-Liang; Nerry, Françoise; Huo, Hongyuan; Sun, Rui; Yang, Peiqi; Zhang, Weiwei
2016-01-01
Significant research progress has recently been made in estimating fluorescence in the oxygen absorption bands; however, quantitative retrieval of fluorescence data is still affected by factors such as atmospheric effects. In this paper, top-of-atmosphere (TOA) radiance is generated by the MODTRAN 4 and SCOPE models. Based on simulated data, a sensitivity analysis is conducted to assess the sensitivities of four indicators (depth_absorption_band, depth_nofs-depth_withfs, radiance and Fs/radiance) to atmospheric parameters (sun zenith angle (SZA), sensor height, elevation, visibility (VIS) and water content) in the oxygen absorption bands. The results indicate that the SZA and sensor height are the most sensitive parameters and that variations in these two parameters result in large variations (calculated as the variation value divided by the base value) in the oxygen absorption depth in the O₂-A and O₂-B bands (111.4% and 77.1% in the O₂-A band; 27.5% and 32.6% in the O₂-B band, respectively). A comparison of fluorescence retrieval using three methods (Damm method, Braun method and DOAS) against SCOPE Fs indicates that the Damm method yields good results and that atmospheric correction can improve the accuracy of fluorescence retrieval. The Damm method is an improved 3FLD method that takes atmospheric effects into account. Finally, hyperspectral airborne images combined with other parameters (SZA, VIS and water content) are exploited to estimate fluorescence using the Damm method and the 3FLD method. The retrieved fluorescence is compared with field-measured fluorescence, yielding good results (R² = 0.91 for Damm vs. SCOPE SIF; R² = 0.65 for 3FLD vs. SCOPE SIF). Five types of vegetation, including ailanthus, elm, mountain peach, willow and Chinese ash, exhibit consistent associations between the retrieved fluorescence and field-measured fluorescence. PMID:27058542
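The FLD family of estimators that 3FLD and the Damm method refine can be sketched as follows (the basic two-band FLD is shown; band values are invented, with E denoting downwelling irradiance and L upwelling radiance):

```python
def fld_fluorescence(e_in, l_in, e_out, l_out):
    """Classic two-band Fraunhofer Line Depth (FLD) estimator.

    'in'  = inside the O2 absorption band, 'out' = a nearby shoulder band.
    Solves L = r*E + F, assuming reflectance r and fluorescence F are
    equal at the two wavelengths (3FLD relaxes this using two shoulders).
    """
    return (e_out * l_in - e_in * l_out) / (e_out - e_in)

# Invented radiometric values (mW m-2 sr-1 nm-1) around the O2-A band.
print(round(fld_fluorescence(e_in=120.0, l_in=14.2,
                             e_out=480.0, l_out=52.3), 3))
```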
Wood, Andrew T; Clark, Timothy D; Andrewartha, Sarah J; Elliott, Nicholas G; Frappell, Peter B
Exposure to developmental hypoxia can have long-term impacts on the physiological performance of fish because of irreversible plasticity. Wild and captive-reared Atlantic salmon (Salmo salar) can be exposed to hypoxic conditions during development and continue to experience fluctuating oxygen levels as juveniles and adults. Here, we examine whether developmental hypoxia impacts subsequent hypoxia tolerance and aerobic performance of Atlantic salmon. Individuals at 8°C were exposed to 50% (hypoxia) or 100% (normoxia) dissolved oxygen (DO) saturation (as percent of air saturation) from fertilization for ∼100 d (800 degree days) and then raised in normoxic conditions for a further 15 mo. At 18 mo after fertilization, aerobic scope was calculated in normoxia (100% DO) and acute (18 h) hypoxia (50% DO) from the difference between the minimum and maximum oxygen consumption rates (ṀO₂min and ṀO₂max, respectively) at 10°C. Hypoxia tolerance was determined as the DO at which loss of equilibrium (LOE) occurred in a constantly decreasing DO environment. There was no difference in ṀO₂min, ṀO₂max, or aerobic scope between fish raised in hypoxia or normoxia. There was some evidence that hypoxia tolerance was lower (higher DO at LOE) in hypoxia-raised fish compared with those raised in normoxia, but the magnitude of the effect was small (12.52% DO vs. 11.73% DO at LOE). Acute hypoxia significantly reduced aerobic scope by reducing ṀO₂max, while ṀO₂min remained unchanged. Interestingly, acute hypoxia uncovered individual-level relationships between DO at LOE and ṀO₂min, ṀO₂max, and aerobic scope. We discuss our findings in the context of developmental trajectories and the role of aerobic performance in hypoxia tolerance.
Richter-Schrag, Hans-Jürgen; Glatz, Torben; Walker, Christine; Fischer, Andreas; Thimme, Robert
2016-01-01
AIM To evaluate rebleeding, primary failure (PF) and mortality of patients in whom over-the-scope clips (OTSCs) were used as first-line and second-line endoscopic treatment (FLET, SLET) of upper and lower gastrointestinal bleeding (UGIB, LGIB). METHODS A retrospective analysis of a prospectively collected database identified all patients with UGIB and LGIB in a tertiary endoscopic referral center of the University of Freiburg, Germany, from 04-2012 to 05-2016 (n = 93) who underwent FLET and SLET with OTSCs. The complete Rockall risk scores were calculated for patients with UGIB; the scores were categorized as < or ≥ 7 and were compared with the original Rockall data. Differences between FLET and SLET were calculated. Univariate and multivariate analyses were performed to evaluate the factors that influenced rebleeding after OTSC placement. RESULTS Primary hemostasis and clinical success of bleeding lesions (without rebleeding) were achieved in 88/100 (88%) and 78/100 (78%), respectively. PF was significantly lower when OTSCs were applied as FLET compared to SLET (4.9% vs 23%, P = 0.008). In multivariate analysis, patients who had OTSC placement as SLET had a significantly higher rebleeding risk compared to those who had FLET (OR 5.3; P = 0.008). Patients with Rockall risk scores ≥ 7 had significantly higher in-hospital mortality compared to those with scores < 7 (35% vs 10%, P = 0.034). No significant differences in rebleeding or rebleeding-associated mortality were observed between patients with scores < or ≥ 7. CONCLUSION Our data show for the first time that OTSC placement as FLET, compared with SLET, might be the best predictor of successfully preventing rebleeding in gastrointestinal bleeding. The type of treatment determines the success of primary hemostasis or primary failure. PMID:27895403
Sousa-Figueiredo, José Carlos; Oguttu, David; Adriko, Moses; Besigye, Fred; Nankasi, Andrina; Arinaitwe, Moses; Namukuta, Annet; Betson, Martha; Kabatereine, Narcis B; Stothard, J Russell
2010-08-27
Prompt and correct diagnosis of malaria is crucial for accurate epidemiological assessment and better case management, and while the gold standard of light microscopy is often available, it requires both expertise and time. Portable fluorescent microscopy using the CyScope offers a potentially quicker, easier and more field-applicable alternative. This article reports on the strengths and limitations of this methodology and its diagnostic performance in cross-sectional surveys of young children and women of child-bearing age. 552 adults (99% women of child-bearing age) and 980 children (99% ≤ 5 years of age) from rural and peri-urban regions of Uganda were examined for malaria using light microscopy (Giemsa stain), a lateral-flow test (Paracheck-Pf) and the CyScope. Results from the surveys were used to calculate diagnostic performance (sensitivity and specificity) as well as to perform a receiver operating characteristics (ROC) analysis, using light microscopy as the gold standard. Fluorescent microscopy (qualitative reads) showed reduced specificity (<40%), resulting in higher community prevalence levels than those reported by light microscopy, particularly in adults (+180% in adults and +20% in children). Diagnostic sensitivity was 92.1% in adults and 86.7% in children, with an area under the ROC curve of 0.63. Importantly, optimum performance was achieved for higher parasitaemia (>400 parasites/μL blood): sensitivity of 64.2% and specificity of 86.0%. Overall, the diagnostic performance of the CyScope was found to be inferior to that of Paracheck-Pf. Fluorescent microscopy using the CyScope is certainly a field-applicable and relatively affordable solution for malaria diagnosis, especially in areas where electrical supplies may be lacking. While it is unlikely to miss higher parasitaemia, its application in cross-sectional community-based studies leads to many false positives (i.e. small fluorescent bodies of presently unknown origin mistaken for malaria parasites). Without recourse to other technologies, arbitration of these false positives is presently equivocal, which could ultimately lead to over-treatment; something that should be further explored in future investigations if the CyScope is to be more widely implemented.
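The diagnostic-performance figures above come from comparing CyScope reads against light microscopy as the gold standard. A minimal sketch of that calculation, using hypothetical 2x2 counts rather than the survey data:

```python
# Minimal sketch of the diagnostic-performance calculation described above,
# treating light microscopy as the gold standard. Counts below are
# placeholders, not the survey data.

def sensitivity_specificity(tp: int, fp: int, fn: int, tn: int) -> tuple[float, float]:
    """Return (sensitivity, specificity) from a 2x2 confusion table."""
    sensitivity = tp / (tp + fn)   # true positives among all gold-standard positives
    specificity = tn / (tn + fp)   # true negatives among all gold-standard negatives
    return sensitivity, specificity

sens, spec = sensitivity_specificity(tp=83, fp=310, fn=12, tn=147)  # hypothetical counts
print(f"sensitivity = {sens:.1%}, specificity = {spec:.1%}")
```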
DISSOLVED CONCENTRATION LIMITS OF RADIOACTIVE ELEMENTS
DOE Office of Scientific and Technical Information (OSTI.GOV)
P. Bernot
The purpose of this study is to evaluate dissolved concentration limits (also referred to as solubility limits) of elements with radioactive isotopes under probable repository conditions, based on geochemical modeling calculations using geochemical modeling tools, thermodynamic databases, field measurements, and laboratory experiments. The scope of this activity is to predict dissolved concentrations or solubility limits for elements with radioactive isotopes (actinium, americium, carbon, cesium, iodine, lead, neptunium, plutonium, protactinium, radium, strontium, technetium, thorium, and uranium) relevant to calculated dose. Model outputs for uranium, plutonium, neptunium, thorium, americium, and protactinium are provided in the form of tabulated functions with pH and log fCO{sub 2} as independent variables, plus one or more uncertainty terms. The solubility limits for the remaining elements are either in the form of distributions or single values. Even though selection of an appropriate set of radionuclides documented in Radionuclide Screening (BSC 2002 [DIRS 160059]) includes actinium, transport of Ac is not modeled in the total system performance assessment for the license application (TSPA-LA) model because of its extremely short half-life. Actinium dose is calculated in the TSPA-LA by assuming secular equilibrium with {sup 231}Pa (Section 6.10); therefore, Ac is not analyzed in this report. The output data from this report are fundamental inputs for TSPA-LA used to determine the estimated release of these elements from waste packages and the engineered barrier system. Consistent modeling approaches and environmental conditions were used to develop solubility models for the actinides discussed in this report. These models cover broad ranges of environmental conditions so they are applicable to both waste packages and the invert. Uncertainties from thermodynamic data, water chemistry, temperature variation, and activity coefficients have been quantified or otherwise addressed.
NASA Astrophysics Data System (ADS)
Shmeleva, O. P.
The flare transition layer exists as a relatively steady formation even during impulsive heating. It is maintained by a heat flow from the high-temperature plasma, where the major part of the electron beam energy is absorbed. The lifetime of this plasma is much greater than the impulsive heating time. Intensities of resonance UV lines are calculated using both the model of impulsive nonthermal heating by energetic electrons and the model of continuous thermal heating. The calculated line intensity remains almost constant over a long period. The line Doppler shifts predicted by the former model match observations. This suggests that the model represents sufficiently well the actual dynamics of the flare plasma. The flare transition layer is a thin formation, its column thickness being Δξ = 10²¹ m⁻². It is therefore described adequately within the p = const approximation, though the picture of the hydrodynamic response of the solar atmosphere to impulsive heating by energy flows is, of course, rather complicated and nonsteady. The intensities of the C IV λλ154.8, 155.1 nm and O VI λλ103.2, 103.8 nm lines are calculated within the scope of the model of continuous thermal heating, in which the conductive heating of the flare transition layer is balanced by radiative cooling. The line intensities are proportional to the pressure in the layer, which permits the pressure to be found from the observed line intensities. The analysis reveals that both heating models adequately represent the actual structure and dynamics of plasma in a flare. In the flare transition layer, classical heat conduction remains applicable throughout.
Iversen, B S; Sabbioni, E; Fortaner, S; Pietra, R; Nicolotti, A
2003-01-20
Statistical data treatment is a key point in the assessment of trace element reference values, being the conclusive stage of a comprehensive and organized evaluation process of metal concentrations in human body fluids. The EURO TERVIHT project (Trace Elements Reference Values in Human Tissues) was started to evaluate, check and suggest harmonized procedures for the establishment of trace element reference intervals in body fluids and tissues. Unfortunately, different statistical approaches are being used in this research field, making data comparison difficult and in some cases impossible. Although international organizations such as the International Federation of Clinical Chemistry (IFCC) and the International Union of Pure and Applied Chemistry (IUPAC) have issued recommended guidelines for reference value assessment, including the statistical data treatment, a unique format and a standardized data layout are still missing. The aim of the present study is to present a software package (BioReVa), running under the Microsoft Windows platform, suitable for calculating the reference intervals of trace elements in body matrices. The main aims in creating an easy-to-use application were to control the data distribution, to establish the reference intervals according to the accepted recommendations on the basis of simple statistics, to obtain a standard presentation of experimental data, and to have an application into which further needs could be integrated in the future. BioReVa calculates the IFCC reference intervals as well as the coverage intervals recommended by IUPAC as a supplement to the IFCC intervals. Examples of reference values and reference intervals calculated with the BioReVa software concern Pb and Se in blood; Cd, In and Cr in urine; and Hg and Mo in hair of different general European populations.
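As an illustration of the kind of calculation BioReVa automates, the following sketch derives a nonparametric central-95% reference interval (2.5th to 97.5th percentiles) from simulated concentrations; the IFCC and IUPAC procedures add refinements (e.g., confidence intervals on the limits) not shown here.

```python
# Minimal sketch of a nonparametric reference-interval calculation: the
# central 95% interval bounded by the 2.5th and 97.5th percentiles. The
# concentrations below are simulated, not EURO TERVIHT data.

import random

random.seed(1)
# Hypothetical blood Pb concentrations (ug/L) for a reference population.
values = sorted(random.lognormvariate(3.0, 0.4) for _ in range(120))

def percentile(sorted_vals: list[float], p: float) -> float:
    """Simple linear-interpolation percentile (p in [0, 100])."""
    k = (len(sorted_vals) - 1) * p / 100.0
    lo, hi = int(k), min(int(k) + 1, len(sorted_vals) - 1)
    return sorted_vals[lo] + (sorted_vals[hi] - sorted_vals[lo]) * (k - lo)

lower, upper = percentile(values, 2.5), percentile(values, 97.5)
print(f"reference interval: {lower:.1f} - {upper:.1f} ug/L")
```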
Krejsa, Martin; Janas, Petr; Yilmaz, Işık; Marschalko, Marian; Bouchal, Tomas
2013-01-01
The load-carrying system of each construction should fulfill several conditions which represent reliable criteria in the assessment procedure. It is the theory of structural reliability which determines the probability of keeping the required properties of constructions. Using this theory, it is possible to apply probabilistic computations based on probability theory and mathematical statistics. These methods have become more and more popular; they are used, in particular, in designs of load-carrying structures with the required level of reliability when at least some input variables in the design are random. The objective of this paper is to indicate the current scope which might be covered by the new method, Direct Optimized Probabilistic Calculation (DOProC), in assessments of the reliability of load-carrying structures. DOProC uses a purely numerical approach without any simulation techniques. This provides more accurate solutions to probabilistic tasks and, in some cases, considerably faster completion of computations. DOProC can be used to solve a number of probabilistic computations efficiently. A very good sphere of application for DOProC is the assessment of bolt reinforcement in underground and mining workings. For the purposes above, a special software application—“Anchor”—has been developed. PMID:23935412
DOE Office of Scientific and Technical Information (OSTI.GOV)
Buckingham, P.A.; Cobb, D.D.; Leavitt, A.A.
1981-08-01
This report presents the results of a technical and economic evaluation of producing methanol from bituminous coal using Texaco coal gasification and ICI methanol synthesis. The scope of work included the development of an overall configuration for a large plant comprising coal preparation, air separation, coal gasification, shift conversion, COS hydrolysis, acid gas removal, methanol synthesis, methanol refining, and all required utility systems and off-site facilities. Design data were received from both Texaco and ICI, while a design and cost estimate were received from Lotepro covering the Rectisol acid gas removal unit. The plant processes 14,448 tons per day (dry basis) of Illinois No. 6 bituminous coal and produces 10,927 tons per day of fuel-grade methanol. An overall thermal efficiency of 57.86 percent was calculated on an HHV basis and 52.64 percent on an LHV basis. Total plant investment at an Illinois plant site was estimated to be $1159 million in 1979 dollars. Using EPRI's economic premises, the first-year product cost was calculated to be $4.74 per million Btu (HHV), which is equivalent to 30.3 cents per gallon, and $5.37 per million Btu (LHV).
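The reported 57.86% HHV efficiency can be sanity-checked from the quoted coal and methanol rates. The sketch below assumes typical literature heating values, which the abstract does not state: roughly 9,758 Btu/lb for methanol and roughly 12,750 Btu/lb for dry Illinois No. 6 coal.

```python
# Back-of-envelope check of the reported 57.86% HHV thermal efficiency.
# The heating values below are typical literature figures and are NOT
# given in the abstract (both are assumptions).

LB_PER_TON = 2000

methanol_tpd = 10_927   # tons/day fuel-grade methanol (from abstract)
coal_tpd = 14_448       # tons/day dry coal feed (from abstract)
hhv_methanol = 9_758    # Btu/lb (assumed)
hhv_coal = 12_750       # Btu/lb, dry basis (assumed)

energy_out = methanol_tpd * LB_PER_TON * hhv_methanol
energy_in = coal_tpd * LB_PER_TON * hhv_coal

print(f"HHV efficiency ~ {energy_out / energy_in:.2%}")  # ~57.9%, close to 57.86%
```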
Provisional-Ideal-Point-Based Multi-objective Optimization Method for Drone Delivery Problem
NASA Astrophysics Data System (ADS)
Omagari, Hiroki; Higashino, Shin-Ichiro
2018-04-01
In this paper, we propose a new evolutionary multi-objective optimization method for solving the drone delivery problem (DDP), which can be formulated as a constrained multi-objective optimization problem. In our previous research, we proposed the "aspiration-point-based method" to solve multi-objective optimization problems. However, that method needs the optimal value of each objective function to be calculated in advance. Moreover, it does not consider constraint conditions other than the objective functions. Therefore, it cannot be applied to the DDP, which has many constraint conditions. To resolve these issues, we propose the "provisional-ideal-point-based method." The proposed method defines a "penalty value" to search for feasible solutions. It also defines a new reference solution, named the "provisional-ideal point," to search for the solution preferred by a decision maker. In this way, we eliminate the preliminary calculations and the limited scope of application. The results on benchmark test problems show that the proposed method can generate the preferred solution efficiently. The usefulness of the proposed method is also demonstrated by applying it to the DDP. As a result, the delivery path when combining one drone and one truck drastically reduces the traveling distance and the delivery time compared with the case of using only one truck.
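A minimal sketch of the "penalty value" idea follows, in a generic constraint-violation form rather than the authors' exact formulation: infeasible candidates are ranked by total violation so the search is steered toward feasible solutions first.

```python
# Generic sketch of a penalty value for constrained multi-objective search:
# infeasible candidates are ranked by total constraint violation. This is
# an illustration only, not the authors' exact definition.

from typing import Callable, Sequence

def penalty(x: Sequence[float], constraints: Sequence[Callable]) -> float:
    """Sum of constraint violations; 0 means feasible (g(x) <= 0 convention)."""
    return sum(max(0.0, g(x)) for g in constraints)

# Toy DDP-flavoured example: two objectives (distance, time) and one
# hypothetical payload constraint x <= 1.5.
objectives = [lambda x: x[0] ** 2, lambda x: (x[0] - 2.0) ** 2]
constraints = [lambda x: x[0] - 1.5]

for x in ([0.0], [1.0], [3.0]):
    f = [obj(x) for obj in objectives]
    print(f"x={x} objectives={f} penalty={penalty(x, constraints)}")
# Feasible candidates (penalty 0) would be compared on their objective
# vectors; infeasible ones are compared on penalty alone.
```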
Suspended sediment measurements and calculation of the particle load at HPP Fieschertal
NASA Astrophysics Data System (ADS)
Felix, D.; Albayrak, I.; Abgottspon, A.; Boes, R. M.
2016-11-01
In the scope of a research project on hydro-abrasive erosion of Pelton turbines, a field study was conducted at the high-head HPP Fieschertal in Valais, Switzerland. The suspended sediment mass concentration (SSC) and particle size distribution (PSD) in the penstock have been continuously measured since 2012 using a combination of six measuring techniques. The SSC was on average 0.52 g/l and rose to 50 g/l in a major flood event in July 2012. The median particle size d50 was usually about 15 μm, rising up to 100 μm when particles that had previously settled in the headwater storage tunnel were re-suspended at low water levels. The annual suspended sediment loads (SSL) varied considerably depending on flood events. Moreover, so-called particle loads (PLs) according to the relevant guideline of the International Electrotechnical Commission (IEC 62364) were calculated using four relations between particle size and relative abrasion potential. For the investigated HPP, the time series of the SSL and the PLs had generally similar shapes over the three years. The largest differences among the PLs were observed during re-suspension events, when the particles were considerably coarser than usual. Further investigations on the effects of particle size on hydro-abrasive erosion of splitters and cut-outs of coated Pelton turbines are recommended.
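The particle-load calculation has the structure of a time integral of SSC weighted by a size-dependent abrasion factor. The sketch below illustrates only that structure; the weighting function and data are placeholders, not the IEC 62364 formulation.

```python
# Schematic sketch of a particle-load style calculation: integrating the
# suspended sediment concentration over time, weighted by a size-dependent
# abrasion-potential factor. Structure only; not the IEC 62364 formula.

# Hourly samples (placeholders): SSC in g/l and median grain size in um.
ssc = [0.3, 0.5, 2.0, 0.4]
d50 = [15.0, 18.0, 90.0, 20.0]
dt_hours = 1.0

def size_weight(d_um: float) -> float:
    """Hypothetical relative abrasion potential, increasing with grain size."""
    return (d_um / 15.0) ** 1.5

particle_load = sum(c * size_weight(d) * dt_hours for c, d in zip(ssc, d50))
print(f"particle load (arbitrary units): {particle_load:.2f}")
```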
Quantum mechanical force fields for condensed phase molecular simulations
NASA Astrophysics Data System (ADS)
Giese, Timothy J.; York, Darrin M.
2017-09-01
Molecular simulations are powerful tools for providing atomic-level details into complex chemical and physical processes that occur in the condensed phase. For strongly interacting systems where quantum many-body effects are known to play an important role, density-functional methods are often used to provide the model with the potential energy used to drive dynamics. These methods, however, suffer from two major drawbacks. First, they are often too computationally intensive to practically apply to large systems over long time scales, limiting their scope of application. Second, there remain challenges for these models to obtain the necessary level of accuracy for weak non-bonded interactions to obtain quantitative accuracy for a wide range of condensed phase properties. Quantum mechanical force fields (QMFFs) provide a potential solution to both of these limitations. In this review, we address recent advances in the development of QMFFs for condensed phase simulations. In particular, we examine the development of QMFF models using both approximate and ab initio density-functional models, the treatment of short-ranged non-bonded and long-ranged electrostatic interactions, and stability issues in molecular dynamics calculations. Example calculations are provided for crystalline systems, liquid water, and ionic liquids. We conclude with a perspective for emerging challenges and future research directions.
Cormack, Barbara E; Embleton, Nicholas D; van Goudoever, Johannes B; Hay, William W; Bloomfield, Frank H
2016-06-01
The ultimate goal of neonatal nutrition care is optimal growth, neurodevelopment, and long-term health for preterm babies. International consensus is that increased energy and protein intakes in the neonatal period improve growth and neurodevelopment, but after more than 100 y of research the optimum intakes of energy and protein remain unknown. We suggest an important factor contributing to the lack of progress is the lack of a standardized approach to reporting nutritional intake data and growth in the neonatal literature. We reviewed randomized controlled trials and observational studies documented in MEDLINE and the Web of Science from 2008 to 2015 that compared approximately 3 vs. 4 g·kg⁻¹·d⁻¹ protein for preterm babies in the first month after birth. Consistency might be expected in the calculation of nutritional intake and assessment of growth outcomes in this relatively narrow scope of neonatal nutrition research. Twenty-two studies were reviewed. There was substantial variation in the methods used to estimate and calculate nutritional intakes and in the approaches used in reporting these intakes and measures of infant growth. Such variability makes comparisons amongst studies difficult and meta-analysis unreliable. We propose the StRONNG Checklist (Standardized Reporting Of Neonatal Nutrition and Growth) to address these issues.
Coda Wave Analysis in Central-Western North America Using Earthscope Transportable Array Data
NASA Astrophysics Data System (ADS)
Escudero, C. R.; Doser, D. I.
2011-12-01
We determined seismic wave attenuation in the western and central United States (Washington, Oregon, California, Idaho, Nevada, Montana, Wyoming, Colorado, New Mexico, North Dakota, South Dakota, Nebraska, Kansas, Oklahoma, and Texas) using coda waves. We selected approximately twenty moderate earthquakes (magnitude between 5.5 and 6.5) located along the Mexican subduction zone, Gulf of California, southern and northern California, and off the coast of Oregon for the analysis. These events were recorded by the EarthScope transportable array (TA) network from 2008 to 2011. To calculate coda Q, we implemented a method based on the assumption that coda waves are single backscattered waves from randomly distributed heterogeneities. The frequencies studied lie between 1 and 15 Hz, with scattering attenuation calculated for frequency bands centered at 1.5, 3, 5, 7.5, 10.5, and 13.5 Hz. We present coda Q resolution maps along with a correlation analysis between coda Q and the seismicity, tectonic and geologic setting. We observed higher attenuation (low coda Q values) in regions of sedimentary cover, and lower attenuation (high coda Q values) in hard-rock regions. Using the 4-6 Hz frequency band, we found the best general correlation between coda Q and central-western North America bedrock geology.
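Under the single-backscattering assumption, narrow-band coda amplitude decays as A(t) ~ t^-1 exp(-pi f t / Q), so ln(A·t) is linear in lapse time and Q follows from the slope of a least-squares fit. A minimal sketch with synthetic data:

```python
# Minimal sketch of single-backscattering coda-Q estimation: for a band
# centred at f, A(t) ~ S * t**-1 * exp(-pi*f*t/Q), so ln(A*t) decays
# linearly with t and Q comes from the slope. Data below are synthetic,
# not TA recordings.

import math

f = 3.0          # centre frequency, Hz
q_true = 250.0   # synthetic coda Q used to fabricate the data

times = [20.0 + 2.0 * i for i in range(15)]  # lapse times, s
amps = [100.0 / t * math.exp(-math.pi * f * t / q_true) for t in times]

# Least-squares line through (t, ln(A*t)); slope = -pi*f/Q.
ys = [math.log(a * t) for a, t in zip(amps, times)]
n = len(times)
tbar, ybar = sum(times) / n, sum(ys) / n
slope = sum((t - tbar) * (y - ybar) for t, y in zip(times, ys)) / \
        sum((t - tbar) ** 2 for t in times)

q_est = -math.pi * f / slope
print(f"estimated coda Q: {q_est:.0f}")   # recovers ~250 for noiseless data
```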
Current status of liquid sheet radiator research
NASA Technical Reports Server (NTRS)
Chubb, Donald L.; Calfo, Frederick D.; Mcmaster, Matthew S.
1993-01-01
Initial research on the external-flow, low-mass liquid sheet radiator (LSR) has been concentrated on understanding its fluid mechanics. The surface tension forces acting at the edges of the sheet produce a triangular planform for the radiating surface of width W and length L. It has been experimentally verified that L/W agrees with the theoretical result L/W = (We/8)^(1/2), where We is the Weber number. Instability can cause holes to form in regions of large curvature, such as where the edge cylinders join the sheet of thickness tau. The W/tau limit that will cause hole formation with subsequent destruction of the sheet has yet to be reached experimentally. Although experimental measurements of sheet emissivity have not yet been performed because of limited program scope, calculations of the emissivity and of the sheet lifetime, which is determined by evaporation losses, were made for two silicone-based oils: Dow Corning 705 and Me(sub 2). Emissivities greater than 0.75 are calculated for tau greater than or equal to 200 microns for both oils. Lifetimes for Me(sub 2) are much longer than lifetimes for 705. Therefore, Me(sub 2) is the more attractive working fluid for higher temperatures (T greater than or equal to 400 K).
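A quick numerical illustration of the planform relation L/W = (We/8)^(1/2); the Weber numbers below are placeholders, not experimental values.

```python
# Numerical illustration of the sheet planform relation quoted above,
# L/W = sqrt(We / 8). Weber numbers are placeholders, not measured values.

import math

def length_to_width(weber: float) -> float:
    """Triangular-sheet aspect ratio predicted by L/W = (We/8)**0.5."""
    return math.sqrt(weber / 8.0)

for we in (50.0, 200.0, 800.0):   # hypothetical Weber numbers
    print(f"We = {we:6.0f}  ->  L/W = {length_to_width(we):.2f}")
```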
[A battle won: the elimination of poliomyelitis in Cuba].
Chaple, Enrique Beldarraín
2015-01-01
Poliomyelitis was introduced in Cuba in the late nineteenth century by American residents in Isla de Pinos. The first epidemics occurred in 1906 and 1909 and increased in intensity between 1930 and 1958. The scope of the paper is to reconstruct the history of the disease and its epidemics in Cuba prior to 1961, the first National Polio Vaccination Campaign (1962) and its results, as well as analyze the ongoing annual vaccination campaigns through to certified elimination of the disease (1994). The logical historical method was used and archival documents and statistics from the Ministry of Health on morbidity and mortality through 2000 were reviewed. Gross morbidity and mortality rates were calculated and interviews with key figures were conducted.
Automated Installation Verification of COMSOL via LiveLink for MATLAB
DOE Office of Scientific and Technical Information (OSTI.GOV)
Crowell, Michael W
Verifying that a local software installation performs as the developer intends is a potentially time-consuming but necessary step for nuclear safety-related codes. Automating this process not only saves time, but can increase reliability and scope of verification compared to ‘hand’ comparisons. While COMSOL does not include automatic installation verification as many commercial codes do, it does provide tools such as LiveLink™ for MATLAB® and the COMSOL API for use with Java® through which the user can automate the process. Here we present a successful automated verification example of a local COMSOL 5.0 installation for nuclear safety-related calculations at the Oak Ridge National Laboratory’s High Flux Isotope Reactor (HFIR).
NASA Astrophysics Data System (ADS)
Atgın, Orhan; Çifçi, Günay; Soelien, Christopher; Seeber, Leonardo; Steckler, Michael; Shillington, Donna; Kurt, Hülya; Dondurur, Derman; Okay, Seda; Gürçay, Savaş; Sarıtaş, Hakan; Mert Küçük, H.; Barın, Burcu
2013-04-01
The Marmara Sea is a limelight area for investigations due to its tectonic structure and the remarkable seismic activity of the North Anatolian Fault Zone (NAFZ). As the NAFZ separates into three branches in the Marmara Sea, it has a complicated tectonic structure which gives rise to debates among researchers. The Çınarcık Basin, which is close to Istanbul and very important for its tectonic activity, is studied in this thesis. Two different multichannel seismic reflection data sets were used: the first was acquired in 2008 in the frame of TAMAM (Turkish American Multichannel Project) and the second in 2010 in the frame of TAMAM-2 (PirMarmara), both onboard R/V K. Piri Reis. High-resolution multibeam data provided by the French marine institute IFREMER were also used. In the scope of the TAMAM project, a total of 3000 km of high-resolution multichannel seismic reflection profiles were collected in 2008 and 2010 using 72, 111, and 240 channels of streamer with a 6.25 m group interval. The generator-injector airgun was fired every 12.5 or 18.75 m, and the resulting MCS data have a 10-230 Hz frequency band. In this study, a detailed fault map of the basin is created, and the fault on the southern slope of the basin which has been interpreted by many researchers in many publications was investigated; no evidence was found that such a fault exists on the southern part of the basin. With the multichannel seismic reflection data, seismic stratigraphic interpretations of the basin deposits were performed. The yearly cumulative north-south extension of the basin was calculated from measurements on the most active part of the faulting in the basin. In addition, the tilt angles of parallel tilted sediments were calculated and correlated with global sea level changes to estimate the ages of the deposits in the basin. Keywords: NAFZ, multichannel seismic reflection, Çınarcık Basin
Experiment Needs and Facilities Study Appendix A Transient Reactor Test Facility (TREAT) Upgrade
DOE Office of Scientific and Technical Information (OSTI.GOV)
None
The TREAT Upgrade effort is designed to provide significant new capabilities to satisfy experiment requirements associated with key LMFBR Safety Issues. The upgrade consists of reactor-core modifications to supply the physics performance needed for the new experiments, an Advanced TREAT loop with size and thermal-hydraulics capabilities needed for the experiments, associated interface equipment for loop operations and handling, and facility modifications necessary to accommodate operations with the Loop. The costs and schedules of the tasks to be accomplished under the TREAT Upgrade project are summarized. Cost, including contingency, is about 10 million dollars (1976 dollars). A schedule for execution of 36 months has been established to provide the new capabilities in order to provide timely support of the LMFBR national effort. A key requirement for the facility modifications is that the reactor availability will not be interrupted for more than 12 weeks during the upgrade. The Advanced TREAT loop is the prototype for the STF small-bundle package loop. Modified TREAT fuel elements contain segments of graphite-matrix fuel with graded uranium loadings similar to those of STF. In addition, the TREAT upgrade provides for use of STF-like stainless steel-UO{sub 2} TREAT fuel for tests of fully enriched fuel bundles. This report will introduce the Upgrade study by presenting a brief description of the scope, performance capability, safety considerations, cost schedule, and development requirements. This work is followed by a "Design Description". Because greatly upgraded loop performance is central to the upgrade, a description is given of Advanced TREAT loop requirements prior to description of the loop concept. Performance requirements of the upgraded reactor system are given. An extensive discussion of the reactor physics calculations performed for the Upgrade concept study is provided. Adequate physics performance is essential for performance of experiments with the Advanced TREAT loop, and the stress placed on these calculations reflects this. Additional material on performance and safety is provided. Backup calculations on calculations of plutonium-release limits are described. Cost and schedule information for the Upgrade are presented.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pannala, S; D'Azevedo, E; Zacharia, T
The goal of the radiation modeling effort was to develop and implement a radiation algorithm that is fast and accurate for the underhood environment. As part of this CRADA, a net-radiation model was chosen to simulate radiative heat transfer in the underhood of a car. The assumptions (diffuse-gray and uniform radiative properties in each element) reduce the problem tremendously, and all the view factors for radiation thermal calculations can be calculated once and for all at the beginning of the simulation. The cost for online integration of heat exchanges due to radiation is found to be less than 15% of the baseline CHAD code and thus very manageable. The off-line view factor calculation is constructed to be very modular and has been completely integrated to read CHAD grid files, and the output from this code can be read into the latest version of CHAD. Further integration has to be performed to accomplish the same with STAR-CD. The main outcome of this effort is a highly scalable and portable simulation capability to model view factors for the underhood environment (e.g., a view factor calculation that took 14 hours on a single processor took only 14 minutes on 64 processors). The code has also been validated using a simple test case where analytical solutions are available. This simulation capability gives underhood designers in the automotive companies the ability to account for thermal radiation, which usually is critical in the underhood environment and also turns out to be one of the most computationally expensive components of underhood simulations. This report starts off with the original work plan as elucidated in the proposal in section B. This is followed by the technical work plan to accomplish the goals of the project in section C. In section D, background to the current work is provided with references to the previous efforts this project builds on. The results are discussed in section E. This report ends with conclusions and future scope of work in section F.
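For readers unfamiliar with the net-radiation method, the following toy sketch solves the radiosity equations for a closed two-surface diffuse-gray enclosure once the view factors are known; it is a schematic illustration under simplified assumptions, not the CHAD implementation.

```python
# Toy sketch of the net-radiation (radiosity) method for diffuse-gray
# surfaces: with view factors F precomputed, each radiosity J satisfies
# J_i = eps_i*sigma*T_i**4 + (1 - eps_i) * sum_j F_ij * J_j. The closed
# two-surface enclosure below is a toy case, not the underhood model.

SIGMA = 5.670e-8  # Stefan-Boltzmann constant, W m^-2 K^-4

eps = [0.8, 0.6]    # emissivities (placeholders)
T = [600.0, 350.0]  # surface temperatures, K (placeholders)
F = [[0.0, 1.0],    # view factors for two equal-area facing surfaces
     [1.0, 0.0]]

# Solve the 2x2 radiosity system by fixed-point (Jacobi) iteration.
J = [eps[i] * SIGMA * T[i] ** 4 for i in range(2)]
for _ in range(200):
    J = [eps[i] * SIGMA * T[i] ** 4
         + (1 - eps[i]) * sum(F[i][j] * J[j] for j in range(2))
         for i in range(2)]

# Net radiative flux leaving each surface: q_i = J_i - sum_j F_ij * J_j.
q = [J[i] - sum(F[i][j] * J[j] for j in range(2)) for i in range(2)]
print(f"net fluxes (W/m^2): q1={q[0]:.0f}, q2={q[1]:.0f}")  # q1 = -q2 here
```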
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bouchard, P.J.
A forthcoming revision to the R6 Leak-before-Break Assessment Procedure is briefly described. Practical application of the LbB concepts to safety-critical nuclear plant is illustrated by examples covering both low-temperature and high-temperature (>450°C) operating regimes. The examples highlight a number of issues which can make the development of a satisfactory LbB case problematic: for example, coping with highly loaded components, methodology assumptions and the definition of margins, the effect of crack closure owing to weld residual stresses, complex thermal stress fields or primary bending fields, the treatment of locally high stresses at crack intersections with free surfaces, the choice of local limit load solution when predicting ligament breakthrough, and the scope of calculations required to support even a simplified LbB case for high-temperature steam pipe-work systems.
Handbook of aircraft noise metrics
NASA Technical Reports Server (NTRS)
Bennett, R. L.; Pearsons, K. S.
1981-01-01
Information is presented on 22 noise metrics that are associated with the measurement and prediction of the effects of aircraft noise. Some of the instantaneous frequency weighted sound level measures, such as A-weighted sound level, are used to provide multiple assessment of the aircraft noise level. Other multiple event metrics, such as day-night average sound level, were designed to relate sound levels measured over a period of time to subjective responses in an effort to determine compatible land uses and aid in community planning. The various measures are divided into: (1) instantaneous sound level metrics; (2) duration corrected single event metrics; (3) multiple event metrics; and (4) speech communication metrics. The scope of each measure is examined in terms of its: definition, purpose, background, relationship to other measures, calculation method, example, equipment, references, and standards.
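As an example of one of the multiple-event metrics described above, the following sketch computes day-night average sound level (DNL): an energy average over 24 hourly levels with a 10 dB penalty on nighttime hours. The hourly levels are invented.

```python
# Illustrative day-night average sound level (DNL) calculation: an energy
# average of 24 hourly levels with a 10 dB penalty added to nighttime hours
# (22:00-07:00). Hourly levels below are invented.

import math

hourly_leq = [55.0] * 24                 # placeholder hourly Leq values, dB
NIGHT_HOURS = set(range(0, 7)) | {22, 23}

def dnl(levels: list[float]) -> float:
    total = 0.0
    for hour, level in enumerate(levels):
        penalized = level + (10.0 if hour in NIGHT_HOURS else 0.0)
        total += 10.0 ** (penalized / 10.0)
    return 10.0 * math.log10(total / 24.0)

print(f"DNL = {dnl(hourly_leq):.1f} dB")  # ~61.4 dB for flat 55 dB hours
```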
Handbook of aircraft noise metrics
NASA Astrophysics Data System (ADS)
Bennett, R. L.; Pearsons, K. S.
1981-03-01
Information is presented on 22 noise metrics that are associated with the measurement and prediction of the effects of aircraft noise. Some of the instantaneous frequency weighted sound level measures, such as A-weighted sound level, are used to provide multiple assessment of the aircraft noise level. Other multiple event metrics, such as day-night average sound level, were designed to relate sound levels measured over a period of time to subjective responses in an effort to determine compatible land uses and aid in community planning. The various measures are divided into: (1) instantaneous sound level metrics; (2) duration corrected single event metrics; (3) multiple event metrics; and (4) speech communication metrics. The scope of each measure is examined in terms of its: definition, purpose, background, relationship to other measures, calculation method, example, equipment, references, and standards.
A comparison of different methods to implement higher order derivatives of density functionals
DOE Office of Scientific and Technical Information (OSTI.GOV)
van Dam, Hubertus J.J.
Density functional theory is the dominant approach in electronic structure methods today. To calculate properties, higher-order derivatives of the density functionals are required. These derivatives might be implemented manually, by automatic differentiation, or by symbolic algebra programs. Different authors have cited different reasons for using the particular method of their choice. This paper presents work in which all three approaches were used, and the strengths and weaknesses of each approach are considered. It is found that all three methods produce code that is sufficiently performant for practical applications, despite the fact that our symbolic-algebra-generated code and our automatic differentiation code still have scope for significant optimization. The automatic differentiation approach is the best option for producing readable and maintainable code.
Gemoets, Hannes P. L.; Kalvet, Indrek; Nyuchev, Alexander V.; Erdmann, Nico; Hessel, Volker
2017-01-01
A mild and selective C–H arylation strategy for indoles, benzofurans and benzothiophenes is described. The arylation method engages aryldiazonium salts as arylating reagents in equimolar amounts. The protocol is operationally simple, base free, moisture tolerant and air tolerant. It utilizes low palladium loadings (0.5 to 2.0 mol% Pd), short reaction times, green solvents (EtOAc/2-MeTHF or MeOH) and is carried out at room temperature, providing a broad substrate scope (47 examples) and excellent selectivity (C-2 arylation for indoles and benzofurans, C-3 arylation for benzothiophenes). Mechanistic experiments and DFT calculations support a Heck–Matsuda type coupling mechanism. PMID:28451243
Development of a low energy electron spectrometer for SCOPE
NASA Astrophysics Data System (ADS)
Tominaga, Y.; Saito, Y.; Yokota, S.
2010-12-01
We are developing a new electrostatic analyzer which measures low energy electrons for the future satellite mission SCOPE (cross Scale COupling in the Plasma universE). The main purpose of the SCOPE mission is to understand the cross-scale coupling between macroscopic MHD-scale phenomena and microscopic ion- and electron-scale phenomena. In order to understand the dynamics of plasma on such small scales, we need to observe the plasma with an analyzer which has a high time resolution. In the Earth's magnetosphere, the typical timescale of plasma cyclotron motion is ~10 sec for ions and ~10 msec for electrons, so an analyzer with a very high time resolution (~10 msec) is necessary for electron-scale observations. We have so far settled on a design for the analyzer: it has three nested spherical/toroidal deflectors, which enables us to measure two different energies simultaneously and shorten the time resolution of the experiment. In order to obtain 3D velocity distribution functions of electrons, the analyzer must have a 4-pi steradian field of view; we will install 8 sets of the analyzers on the satellite, and using all of them we will secure a 4-pi steradian field of view at all times. In the experiment, we plan to measure electrons from 10 eV to 22.5 keV in 32 steps. Given that the sampling time of the experiment is 0.5 msec, it takes about 8 msec to measure the whole energy range, so the time resolution of the experiment is 8 msec. The energy and angular resolutions of the inner analyzer are 0.23 and 16 degrees, respectively, and those of the outer analyzer are 0.17 and 11.5 degrees, respectively. To measure enough electrons within the sampling time, the analyzer is designed to have geometric factors (sensitivities) of 7.5e-3 (inner analyzer) and 1.0e-2 (outer analyzer) cm^2 str. However, it is not obvious that these characteristics of the analyzer are really appropriate for the experiment, and there are some operational problems which we have to consider and resolve. In this study, we (1) confirm that the analyzer we designed has characteristics appropriate for the experiment and that it can measure the 3D distribution function and velocity moments of electrons, (2) estimate how the non-uniformity of the analyzer's efficiency affects the velocity moments, and (3) estimate how the spin motion of the satellite affects the velocity moments. Assuming a Maxwellian electron distribution function with known density, bulk velocity, and temperature, we calculated the counts that the analyzer would measure, taking into account the characteristics of the analyzer. Using these counts, we calculated the distribution function and velocity moments, and compared the results with the assumed density, bulk velocity and temperature in order to assess the precision of the experiment. From these calculations we found that (1) the characteristics of the analyzer are good enough to measure the velocity moments of electrons with an error of less than several percent, (2) the non-uniformity of the efficiency of the analyzers will severely affect the bulk velocity of electrons, and (3) we should have special observation modes (changing the time resolution or energy range) depending on the observation area.
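The moment calculation described above amounts to numerically integrating the measured distribution function over velocity space. The following toy sketch recovers the density (zeroth moment) of an assumed Maxwellian on a 32-step log-spaced speed grid, loosely mimicking the analyzer's energy stepping; all parameters are placeholders, not instrument values.

```python
# Toy recovery of electron number density (zeroth velocity moment) from an
# assumed isotropic Maxwellian. All values are placeholders.

import math

n_true = 1.0e6   # electron density, m^-3 (placeholder)
vt2 = 1.0e12     # kT/m, squared thermal speed, m^2 s^-2 (placeholder)

def f(v: float) -> float:
    """Isotropic Maxwellian phase-space density f(v) in s^3 m^-6."""
    a = 1.0 / (2.0 * vt2)
    return n_true * (a / math.pi) ** 1.5 * math.exp(-a * v * v)

# n = 4*pi * integral of f(v) v^2 dv, midpoint rule on a log-spaced grid.
steps, v_lo, v_hi = 32, 1.0e5, 1.0e8
ratio = (v_hi / v_lo) ** (1.0 / steps)
n_est, v = 0.0, v_lo
for _ in range(steps):
    v_next = v * ratio
    v_mid = 0.5 * (v + v_next)
    n_est += 4.0 * math.pi * f(v_mid) * v_mid ** 2 * (v_next - v)
    v = v_next

print(f"recovered density: {n_est:.3e} m^-3 (assumed: {n_true:.1e} m^-3)")
```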
DOE Office of Scientific and Technical Information (OSTI.GOV)
Not Available
1977-04-01
The design calculations for the Waste Isolation Pilot Plant (WIPP) are presented. The following categories are discussed: general nuclear calculations; radwaste calculations; structural calculations; mechanical calculations; civil calculations; electrical calculations; TRU waste surface facility time and motion analysis; shaft sinking procedures; hoist time and motion studies; mining system analysis; mine ventilation calculations; mine structural analysis; and miscellaneous underground calculations.
Role of hearing aids in tinnitus intervention: a scoping review.
Shekhawat, Giriraj Singh; Searchfield, Grant D; Stinear, Cathy M
2013-09-01
Tinnitus can have a devastating impact on the quality of life of the sufferer. Although the mechanisms underpinning tinnitus remain uncertain, hearing loss is often associated with its onset, and hearing aids are among the most commonly used tools for its management. The aim was to conduct a scoping review exploring the role of hearing aids in tinnitus management, based on the six-stage framework of Arksey and O'Malley (2005). Relevant studies were identified using various databases (Scopus, Google Scholar, SpringerLink, and PubMed) and by hand searching of journals and the reference lists of articles. Out of 277 shortlisted articles, 29 studies (18 research studies and 11 reviews) were chosen for charting of data based on their abstracts. Tinnitus assessment measures used in the studies were recorded along with changes in their scores. Measures used in the studies included the Tinnitus Handicap Inventory (THI), Tinnitus Handicap Questionnaire (THQ), Tinnitus Severity Index (TSI), Tinnitus Reaction Questionnaire (TRQ), German version of the Tinnitus Questionnaire (TQ), Beck Depression Inventory (BDI), and visual analogue scale (VAS) of tinnitus intensity. Where possible, the Cohen's d effect-size statistic was calculated. Although the quality of evidence for hearing aids' effect on tinnitus is not strong, the weight of evidence (17 research studies for, 1 against) suggests merit in using hearing aids for tinnitus management. The majority of studies reviewed support the use of hearing aids for tinnitus management. Clinicians should feel reassured that some evidence supports the use of hearing aids for treating tinnitus, but there is still a need for stronger methodology and randomized controlled trials. American Academy of Audiology.
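A minimal sketch of the Cohen's d effect size used in the review, i.e., the difference in group means divided by the pooled standard deviation; the THI scores are invented.

```python
# Minimal sketch of Cohen's d: difference between two group means divided
# by the pooled standard deviation. THI scores below are invented, not
# data from the reviewed studies.

import math
import statistics as st

def cohens_d(group_a: list[float], group_b: list[float]) -> float:
    na, nb = len(group_a), len(group_b)
    va, vb = st.variance(group_a), st.variance(group_b)
    pooled_sd = math.sqrt(((na - 1) * va + (nb - 1) * vb) / (na + nb - 2))
    return (st.mean(group_a) - st.mean(group_b)) / pooled_sd

pre = [58.0, 64.0, 70.0, 55.0, 61.0]    # hypothetical THI before hearing aids
post = [42.0, 50.0, 57.0, 40.0, 48.0]   # hypothetical THI after
print(f"Cohen's d = {cohens_d(pre, post):.2f}")
```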
Harmony Search as a Powerful Tool for Feature Selection in QSPR Study of the Drugs Lipophilicity.
Bahadori, Behnoosh; Atabati, Morteza
2017-01-01
Aims & Scope: Lipophilicity represents one of the most studied and most frequently used fundamental physicochemical properties. In the present work, the harmony search (HS) algorithm is suggested for feature selection in quantitative structure-property relationship (QSPR) modeling to predict the lipophilicity of neutral, acidic, basic and amphoteric drugs as determined by UHPLC. Harmony search is a music-based metaheuristic optimization algorithm, inspired by the observation that the aim of music is to search for a perfect state of harmony. Semi-empirical quantum-chemical calculations at the AM1 level were used to find the optimum 3D geometry of the studied molecules, and 1497 descriptors were calculated by the Dragon software. The nine descriptors selected by the harmony search algorithm were applied for model development using multiple linear regression (MLR). In comparison with other feature selection methods, such as the genetic algorithm and simulated annealing, the harmony search algorithm gave better results. The root mean square errors (RMSE) with and without leave-one-out cross validation (LOOCV) were 0.417 and 0.302, respectively. The results were compared with those obtained from the genetic algorithm and simulated annealing methods, showing that HS is a helpful tool for feature selection with good performance. Copyright© Bentham Science Publishers; For any queries, please email at epub@benthamscience.org.
NASA Astrophysics Data System (ADS)
Jolanta Walery, Maria
2017-12-01
The article describes optimization studies aimed at analysing the impact of changes in the capital and current costs of medical waste incineration on the cost of the waste management system and its structure. The study was conducted on the example of the medical waste management system in the Podlaskie Province, in north-eastern Poland. The operational research carried out under the optimization study was divided into two stages of optimization calculations with assumed technical and economic parameters of the system. In the first stage, the lowest cost of functioning of the analysed system was generated, whereas in the second, the influence of the input parameter of the system, i.e. the capital and current costs of medical waste incineration, on the economic efficiency index (E) and the spatial structure of the system was determined. Optimization studies were conducted for 25%, 50%, 75% and 100% increases in the capital and current costs of the incineration process. As a result of the calculations, the highest cost of system operation, 3143.70 PLN/t, was obtained under the assumption of a 100% increase in the capital and current costs of the incineration process. This represents an increase in the economic efficiency index (E) of about 97% in relation to run 1.
Nonlinear Analysis of Time Series in Genome-Wide Linkage Disequilibrium Data
NASA Astrophysics Data System (ADS)
Hernández-Lemus, Enrique; Estrada-Gil, Jesús K.; Silva-Zolezzi, Irma; Fernández-López, J. Carlos; Hidalgo-Miranda, Alfredo; Jiménez-Sánchez, Gerardo
2008-02-01
The statistical study of large-scale genomic data has turned out to be a very important tool in population genetics. Quantitative methods are essential to understand and implement association studies in the biomedical and health sciences. Nevertheless, the characterization of recently admixed populations has been an elusive problem due to the presence of a number of complex phenomena. For example, linkage disequilibrium structures are thought to be more complex than their non-recently-admixed population counterparts, presenting the so-called ancestry blocks, admixed regions that are not yet smoothed by the effect of genetic recombination. In order to distinguish characteristic features of the various populations, we have implemented several methods, some of them borrowed or adapted from the analysis of nonlinear time series in statistical physics and quantitative physiology. We calculate the main fractal dimensions (Kolmogorov's capacity, information dimension and correlation dimension, usually denoted D0, D1 and D2). We also performed detrended fluctuation analysis and information-based similarity index calculations for the probability distribution of correlations of the linkage disequilibrium coefficient of six recently admixed (mestizo) populations within the Mexican Genome Diversity Project [1] and for the non-recently admixed populations in the International HapMap Project [2]. Nonlinear correlations showed up as a consequence of internal structure within the haplotype distributions. The analysis of these correlations, as well as the scope and limitations of these procedures within the biomedical sciences, are discussed.
Structure-Activity Relationships for Rates of Aromatic Amine Oxidation by Manganese Dioxide.
Salter-Blanc, Alexandra J; Bylaska, Eric J; Lyon, Molly A; Ness, Stuart C; Tratnyek, Paul G
2016-05-17
New energetic compounds are designed to minimize their potential environmental impacts, which includes their transformation and the fate and effects of their transformation products. The nitro groups of energetic compounds are readily reduced to amines, and the resulting aromatic amines are subject to oxidation and coupling reactions. Manganese dioxide (MnO2) is a common environmental oxidant and model system for kinetic studies of aromatic amine oxidation. In this study, a training set of new and previously reported kinetic data for the oxidation of model and energetic-derived aromatic amines was assembled and subjected to correlation analysis against descriptor variables that ranged from general purpose [Hammett σ constants (σ(-)), pKas of the amines, and energies of the highest occupied molecular orbital (EHOMO)] to specific for the likely rate-limiting step [one-electron oxidation potentials (Eox)]. The selection of calculated descriptors (pKa, EHOMO, and Eox) was based on validation with experimental data. All of the correlations gave satisfactory quantitative structure-activity relationships (QSARs), but they improved with the specificity of the descriptor. The scope of correlation analysis was extended beyond MnO2 to include literature data on aromatic amine oxidation by other environmentally relevant oxidants (ozone, chlorine dioxide, and phosphate and carbonate radicals) by correlating relative rate constants (normalized to 4-chloroaniline) to EHOMO (calculated with a modest level of theory).
NASA Astrophysics Data System (ADS)
Lucas, G.; Love, J. J.; Kelbert, A.; Bedrosian, P.; Rigler, E. J.
2017-12-01
Space weather induces significant geoelectric fields within Earth's subsurface that can adversely affect electric power grids. The complex interaction between space weather and the solid Earth has traditionally been approached with the use of simple 1-D impedance functions relating the inducing magnetic field to the induced geoelectric field. Ongoing data collection through the NSF EarthScope program has produced measured impedance data across much of the continental US. In this work, impedance data are convolved with magnetic field variations, obtained from USGS magnetic observatories, during a geomagnetic storm. This convolution produces geoelectric fields within the Earth. These geoelectric fields are then integrated along power transmission lines to determine the voltage generated within each power line as a function of time during a geomagnetic storm. The voltages generated within the electric power grid will be shown for several historic geomagnetic storms. The voltages estimated from 1-D and 3-D impedances differ by more than 100 V across some transmission lines. In combination with grounding resistance data and network topology, these voltage estimates can be used by power companies to estimate geomagnetically induced currents throughout the network. These voltage estimates can indicate which power lines are most vulnerable to geomagnetic storms and assist power grid companies in deciding where to install additional protections within their grid.
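The final step described above, integrating the geoelectric field along a transmission line, reduces for a uniform field and a straight line to a dot product V = E · L. A minimal sketch with placeholder field values and line geometry (the azimuth convention is an assumption of the sketch):

```python
# Minimal sketch of the line-integral step: a uniform geoelectric field
# dotted with the straight-line vector of a transmission line gives the
# induced voltage. Field and line geometry are placeholders.

import math

def induced_voltage(ex_v_per_km: float, ey_v_per_km: float,
                    length_km: float, azimuth_deg: float) -> float:
    """Voltage from a uniform E field dotted with the line vector.

    azimuth_deg is measured clockwise from geographic north (a convention
    assumed by this sketch); Ex points north, Ey east.
    """
    az = math.radians(azimuth_deg)
    lx = length_km * math.cos(az)   # northward component of line vector
    ly = length_km * math.sin(az)   # eastward component
    return ex_v_per_km * lx + ey_v_per_km * ly

# Hypothetical storm-time field of 2 V/km north, 1 V/km east on a 100 km
# line running northeast.
print(f"V = {induced_voltage(2.0, 1.0, 100.0, 45.0):.0f} V")
```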
Validity evidence for the Simulated Colonoscopy Objective Performance Evaluation scoring system.
Trinca, Kristen D; Cox, Tiffany C; Pearl, Jonathan P; Ritter, E Matthew
2014-02-01
Low-cost, objective systems to assess and train endoscopy skills are needed. The aim of this study was to evaluate the ability of Simulated Colonoscopy Objective Performance Evaluation to assess the skills required to perform endoscopy. Thirty-eight subjects were included in this study, all of whom performed 4 tasks. The scoring system measured performance by calculating precision and efficiency. Data analysis assessed the relationship between colonoscopy experience and performance on each task and the overall score. Endoscopic trainees' Simulated Colonoscopy Objective Performance Evaluation scores correlated significantly with total colonoscopy experience (r = .61, P = .003) and experience in the past 12 months (r = .63, P = .002). Significant differences were seen among practicing endoscopists, nonendoscopic surgeons, and trainees (P < .0001). When the 4 tasks were analyzed, each showed significant correlation with colonoscopy experience (scope manipulation, r = .44, P = .044; tool targeting, r = .45, P = .04; loop management, r = .47, P = .032; mucosal inspection, r = .65, P = .001) and significant differences in performance between the endoscopist groups, except for mucosal inspection (scope manipulation, P < .0001; tool targeting, P = .002; loop management, P = .0008; mucosal inspection, P = .27). Simulated Colonoscopy Objective Performance Evaluation objectively assesses the technical skills required to perform endoscopy and shows promise as a platform for proficiency-based skills training. Published by Elsevier Inc.
Life cycle assessment of thermal waste-to-energy technologies: review and recommendations.
Astrup, Thomas Fruergaard; Tonini, Davide; Turconi, Roberto; Boldrin, Alessio
2015-03-01
Life cycle assessment (LCA) has been used extensively within the recent decade to evaluate the environmental performance of thermal Waste-to-Energy (WtE) technologies: incineration, co-combustion, pyrolysis and gasification. A critical review was carried out involving 250 individual case-studies published in 136 peer-reviewed journal articles between 1995 and 2013. The studies were evaluated with respect to critical aspects such as: (i) goal and scope definitions (e.g. functional units, system boundaries, temporal and geographic scopes), (ii) detailed technology parameters (e.g. related to waste composition, technology, gas cleaning, energy recovery, residue management, and inventory data), and (iii) modeling principles (e.g. energy/mass calculation principles, energy substitution, inclusion of capital goods and uncertainty evaluation). Very few of the published studies provided full and transparent descriptions of all these aspects, in many cases preventing an evaluation of the validity of results, and limiting applicability of data and results in other contexts. The review clearly suggests that the quality of LCA studies of WtE technologies and systems including energy recovery can be significantly improved. Based on the review, a detailed overview of assumptions and modeling choices in existing literature is provided in conjunction with practical recommendations for state-of-the-art LCA of Waste-to-Energy. Copyright © 2014 Elsevier Ltd. All rights reserved.
Cox, S; Powell, C; Carter, B; Hurt, C; Mukherjee, Somnath; Crosby, Thomas David Lewis
2016-07-12
Malnutrition is common in oesophageal cancer. We aimed to identify nutritional prognostic factors and survival outcomes associated with nutritional intervention in the SCOPE1 (Study of Chemoradiotherapy in OesoPhageal Cancer with or without Erbitux) trial. Two hundred and fifty-eight patients were randomly allocated to definitive chemoradiotherapy (dCRT) +/- cetuximab. Nutritional Risk Index (NRI) scores were calculated; NRI<100 identified patients at risk of malnutrition. Nutritional intervention included dietary advice, oral supplementation or major intervention (enteral feeding/tube placement). Univariable and multivariable analyses using Cox proportional hazards modelling were conducted. At baseline, NRI<100 strongly predicted reduced overall survival (hazard ratio (HR) 12.45, 95% CI 5.24-29.57; P<0.001). Nutritional intervention improved survival if provided at baseline (dietary advice (HR 0.12, P=0.004), oral supplementation (HR 0.13, P<0.001) or major intervention (HR 0.13, P=0.003)), but not if provided later in the treatment course. Cetuximab patients receiving major nutritional intervention had worse outcomes compared with controls (13 vs 28 months, P=0.003). Pre-treatment assessment and correction of malnutrition may improve survival outcomes in oesophageal cancer patients treated with dCRT. The Nutritional Risk Index is a simple and objective screening tool to identify patients at risk of malnutrition.
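The abstract does not spell out which NRI variant was used; the sketch below uses the commonly cited Buzby formula, NRI = 1.519 × albumin (g/L) + 41.7 × (current weight / usual weight), with the <100 at-risk threshold from the abstract. Inputs are placeholders.

```python
# Sketch of a Nutritional Risk Index calculation using the commonly cited
# Buzby formula; the trial's exact variant is not given in the abstract,
# so treat this as illustrative. Inputs are placeholders.

def nutritional_risk_index(albumin_g_per_l: float,
                           weight_kg: float,
                           usual_weight_kg: float) -> float:
    return 1.519 * albumin_g_per_l + 41.7 * (weight_kg / usual_weight_kg)

nri = nutritional_risk_index(albumin_g_per_l=36.0,   # placeholder labs
                             weight_kg=62.0,
                             usual_weight_kg=70.0)
print(f"NRI = {nri:.1f}  ->  {'at risk' if nri < 100 else 'not at risk'}")
```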
Energetic cost of sexual attractiveness: ultrasonic advertisement in wax moths.
Reinhold; Greenfield; Jang; Broce
1998-04-01
Pair formation in the lesser wax moth, Achroia grisella (Lepidoptera: Pyralidae), is initiated by male ultrasonic signals that attract receptive females. Individual males vary in attractiveness to females, and the most attractive males are distinguished by exaggeration of three signal characters: pulse rate, peak amplitude and asynchrony interval (temporal separation between pulses generated by movements of the left and right wings during a given wing upstroke or downstroke). Using flow-through respirometry, we measured the resting and signalling metabolic rates of males whose relative attractiveness was known. Acoustic recordings and metabolic measurements were made simultaneously, and we calculated net metabolic rates and factorial metabolic scopes as measures for the energetic cost of signalling. On average, attractive males had higher net metabolic rates and factorial metabolic scopes than unattractive ones, but many unattractive males also had high values. Thus, high expenditure of energy on signalling is necessary but not sufficient for attractiveness. This may result because only one of the three signal characters critical for female preference, pulse rate, is correlated with energy expenditure. Although the results are consistent with the good genes model of sexual selection, they do not conflict with other indirect or direct mechanisms of female choice. Copyright 1998 The Association for the Study of Animal Behaviour.
Projected Standard on neutron skyshine.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Westfall, R.M.; Williams, D.S.
1987-07-01
Current interest in neutron skyshine arises from the application of dry fuel handling and storage techniques at reactor sites, at the proposed monitored retrievable storage facility and at other facilities being considered as part of the civilian radioactive waste management programs. The chairman of Standards Subcommittee ANS-6, Radiation Protection and Shielding, has requested that a work group be formed to characterize the neutron skyshine problem and, if necessary, prepare a draft Standard. The work group is comprised of representatives of storage cask vendors, architect engineering firms, nuclear utilities, the academic community and staff members of national laboratories and government agencies. The purpose of this presentation summary is to describe the activities of the work group and the scope and contents of the projected Standard, ANS-6.6.2, "Calculation and Measurement of Direct and Scattered Neutron Radiation from Nuclear Power Operations." The specific source under consideration by the work group is an array of dry fuel casks located at a reactor site. However, it is recognized that the scope of the standard should be broad enough to encompass other neutron sources. The Standard will define appropriate methodology for properly characterizing the neutron dose due to skyshine. This dose characterization is necessary, for example, in demonstrating compliance with pertinent regulatory criteria.
Scoping analysis of the Advanced Test Reactor using SN2ND
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wolters, E.; Smith, M.
2012-07-26
A detailed set of calculations was carried out for the Advanced Test Reactor (ATR) using the SN2ND solver of the UNIC code, which is part of the SHARP multi-physics code being developed under the Nuclear Energy Advanced Modeling and Simulation (NEAMS) program in DOE-NE. The primary motivation of this work is to assess whether high fidelity deterministic transport codes can tackle coupled dynamics simulations of the ATR. The successful use of such codes in a coupled dynamics simulation can impact what experiments are performed and what power levels are permitted during those experiments at the ATR. The advantages of the SN2ND solver over comparable neutronics tools are its superior parallel performance and demonstrated accuracy on large scale homogeneous and heterogeneous reactor geometries. However, it should be noted that virtually no effort in this project was spent constructing a proper cross section generation methodology for the ATR usable in the SN2ND solver. While attempts were made to use cross section data derived from SCALE, only a minimal number of compositional cross section sets was generated, chosen to be consistent with the reference Monte Carlo input specification. The accuracy of any deterministic transport solver is impacted by such an approach, and it clearly causes substantial errors in this work. The reasoning behind this decision is justified given the overall funding dedicated to the task (two months) and the real focus of the work: can modern deterministic tools actually treat complex facilities like the ATR with heterogeneous geometry modeling? SN2ND has been demonstrated to solve problems with upwards of one trillion degrees of freedom, which translates to tens of millions of finite elements, hundreds of angles, and hundreds of energy groups, resulting in a very high-fidelity model of the system unachievable by most deterministic transport codes today. A space-angle convergence study was conducted to determine the meshing and angular cubature requirements for the ATR, and also to demonstrate the feasibility of performing this analysis with a deterministic transport code capable of modeling heterogeneous geometries. The work performed indicates that a minimum of 260,000 linear finite elements combined with an L3T11 cubature (96 angles on the sphere) is required for both eigenvalue and flux convergence of the ATR. A critical finding was that the fuel meat and water channels must each be meshed with at least 3 'radial zones' for accurate flux convergence. A small number of 3D calculations were also performed to show axial mesh and eigenvalue convergence for a full core problem. Finally, a brief analysis was performed with different cross section sets generated from DRAGON and SCALE, and the findings show that more effort will be required to improve the multigroup cross section generation process. The total number of degrees of freedom for a converged 27 group, 2D ATR problem is ~340 million. This number increases to ~25 billion for a 3D ATR problem. This scoping study shows that both 2D and 3D calculations are well within the capabilities of the current SN2ND solver, given the availability of a large-scale computing center such as BlueGene/P. However, dynamics calculations are not realistic without the implementation of improvements in the solver.
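The degree-of-freedom counts quoted above follow from multiplying the spatial, angular, and energy dimensions of the discretization; a rough bookkeeping sketch (the exact spatial count depends on finite-element connectivity, which the abstract does not give):

    def sn_dof(n_spatial_dof, n_angles, n_energy_groups):
        """Total unknowns in a discrete-ordinates transport solve."""
        return n_spatial_dof * n_angles * n_energy_groups

    # Illustrative only: ~340 million DOF for a 27-group, 96-angle 2D problem
    # implies roughly 340e6 / (96 * 27) ~ 1.3e5 spatial unknowns per angle-group pair.
    print(340e6 / (96 * 27))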
Clinical calculators in hospital medicine: Availability, classification, and needs.
Dziadzko, Mikhail A; Gajic, Ognjen; Pickering, Brian W; Herasevich, Vitaly
2016-09-01
Clinical calculators are widely used in modern clinical practice, but are not generally applied to electronic health record (EHR) systems. Important barriers to the application of these clinical calculators into existing EHR systems include the need for real-time calculation, human-calculator interaction, and data source requirements. The objective of this study was to identify, classify, and evaluate the use of available clinical calculators for clinicians in the hospital setting. Dedicated online resources with medical calculators and providers of aggregated medical information were queried for readily available clinical calculators. Calculators were mapped by clinical categories, mechanism of calculation, and the goal of calculation. Online statistics from selected Internet resources and clinician opinion were used to assess the use of clinical calculators. One hundred seventy-six readily available calculators in 4 categories, 6 primary specialties, and 40 subspecialties were identified. The goals of calculation included prediction, severity, risk estimation, diagnostic, and decision-making aid. A combination of summation logic with cutoffs or rules was the most frequent mechanism of computation. Combined results, online resources, statistics, and clinician opinion identified 13 most utilized calculators. Although not an exhaustive list, a total of 176 validated calculators were identified, classified, and evaluated for usefulness. Most of these calculators are used for adult patients in the critical care or internal medicine settings. Thirteen of 176 clinical calculators were determined to be useful in our institution. All of these calculators have an interface for manual input. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
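A "summation logic with cutoffs" calculator, the most frequent computation mechanism found in this survey, can be expressed generically; a minimal sketch with hypothetical criteria and thresholds (not any specific clinical score):

    def score_calculator(patient, criteria, cutoffs):
        """Sum one point per satisfied criterion, then map the total to a risk band."""
        total = sum(1 for test in criteria if test(patient))
        for threshold, label in sorted(cutoffs.items(), reverse=True):
            if total >= threshold:
                return total, label
        return total, "low risk"

    # Hypothetical example: two criteria, two cutoff bands
    criteria = [lambda p: p["age"] > 65, lambda p: p["sbp"] < 90]
    print(score_calculator({"age": 70, "sbp": 120}, criteria,
                           {2: "high risk", 1: "moderate risk"}))  # (1, 'moderate risk')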
Assessment of Spanish Panel Reactive Antibody Calculator and Potential Usefulness.
Asensio, Esther; López-Hoyos, Marcos; Romón, Íñigo; Ontañón, Jesús; San Segundo, David
2017-01-01
The calculated panel reactive antibody (cPRA) values necessary for kidney donor-pair exchange and highly sensitized programs are estimated using different panel reactive antibody (PRA) calculators, based on sufficiently large samples, on the Eurotransplant (EUTR), United Network for Organ Sharing (UNOS), and Canadian Transplant Registry (CTR) websites. However, those calculators can vary depending on the ethnic group to which they are applied. Here, we develop a PRA calculator used in the Spanish Program of Transplant Access for Highly Sensitized patients (PATHI) and validate it against the EUTR, UNOS, and CTR calculators. The anti-human leukocyte antigen (HLA) antibody profile of 42 sensitized patients on the waiting list was defined, and cPRA was calculated with the different PRA calculators. Despite different allelic frequencies derived from population differences in the donor panel behind each calculator, no differences in cPRA between the four calculators were observed. The PATHI calculator includes anti-DQA1 antibody profiles in the cPRA calculation; however, no improvement in the total cPRA of highly sensitized patients was demonstrated. The PATHI calculator provides cPRA results comparable with those from the EUTR, UNOS, and CTR calculators and serves as a tool to develop valid calculators in geographical and ethnic areas different from Europe, the USA, and Canada.
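Conceptually, a cPRA calculator reports the fraction of a donor panel carrying at least one of the patient's unacceptable HLA antigens; a simplified sketch (real calculators work from allele and haplotype frequencies rather than an explicit donor list):

    def cpra(unacceptable_antigens, donor_panel):
        """Percent of panel donors carrying >= 1 unacceptable antigen."""
        unacceptable = set(unacceptable_antigens)
        hits = sum(1 for donor in donor_panel if unacceptable & set(donor))
        return 100.0 * hits / len(donor_panel)

    # Hypothetical three-donor panel
    panel = [{"A1", "B8", "DR3"}, {"A2", "B44", "DR4"}, {"A3", "B7", "DR15"}]
    print(cpra({"A2", "DR15"}, panel))  # 66.7 -> two of three donors excluded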
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kruger, R.
The US National Lung Screening Trial (NLST) was a multi-center randomized, controlled trial comparing low-dose CT (LDCT) to posterior-anterior (PA) chest x-ray (CXR) in screening older, current and former heavy smokers for early detection of lung cancer. Recruitment was launched in September 2002 and ended in April 2004, when 53,454 participants had been randomized in equal proportions at 33 screening sites. Funded by the National Cancer Institute, this trial demonstrated that LDCT screening reduced lung cancer mortality. The US Preventive Services Task Force (USPSTF) cited NLST findings and conclusions in its deliberations and analysis of lung cancer screening. Under the 2010 Patient Protection and Affordable Care Act, the USPSTF's favorable recommendation regarding lung cancer CT screening assisted in obtaining third-party payer coverage for screening. The objective of this session is to provide an introduction to the NLST and the trial findings, in addition to a comprehensive review of the dosimetry investigations and assessments completed using individual NLST participant CT and CXR examinations. Session presentations will review and discuss the findings of two independent assessments: a CXR assessment and a CT investigation calculating individual organ dosimetry values. The CXR assessment reviewed a total of 73,733 chest x-ray exams performed on 92 chest imaging systems, of which 66,157 participant examinations were used. The CT organ dosimetry investigation collected scan parameters from 23,773 CT examinations, a subset of the 75,133 CT examinations performed using 97 multi-detector CT scanners. Organ dose conversion coefficients were calculated using a Monte Carlo code. An experimentally validated CT scanner simulation was coupled with 193 adult hybrid computational phantoms representing the height and weight of the current U.S. population. The dose to selected organs was calculated using the organ dose library and the abstracted scan parameters. This session will review the results and summarize the individualized doses to major organs and the mean effective dose and CTDIvol estimates for the 66,157 PA chest and 23,773 CT examinations, respectively, using size-dependent computational phantoms coupled with Monte Carlo calculations. Learning Objectives: Review and summarize relevant NLST findings and conclusions. Understand the scope and scale of the NLST specific to participant dosimetry. Provide a comprehensive review of NLST participant dosimetry assessments. Summarize the results of an investigation providing individualized organ dose estimates for NLST participant cohorts.
NASA Astrophysics Data System (ADS)
Yamaguchi, Kizashi; Shoji, Mitsuo; Isobe, Hiroshi; Yamanaka, Shusuke; Kawakami, Takashi; Yamada, Satoru; Katouda, Michio; Nakajima, Takahito
2018-03-01
Possible mechanisms for water cleavage in the oxygen evolving complex (OEC) of photosystem II (PSII) have been investigated based on broken-symmetry (BS) hybrid DFT (HDFT)/def2-TZVP calculations in combination with available XRD, XFEL, EXAFS, XES and EPR results. The BS HDFT and experimental results have provided basic concepts for understanding the chemical bonds of the CaMn4O5 cluster in the catalytic site of the OEC of PSII, toward elucidation of the mechanism of photosynthetic water cleavage. The scope and applicability of the hybrid DFT (HDFT) methods have been examined in relation to the relative stabilities of nine possible intermediates such as Mn-hydroxide, Mn-oxo, Mn-peroxo, Mn-superoxo, etc., in order to understand the O-O (O-OH) bond formation in the S3 and/or S4 states of the OEC of PSII. The relative stabilities among these intermediates are variable, depending on the weight of the Hartree-Fock exchange term of HDFT. The Mn-hydroxide, Mn-oxo and Mn-superoxo intermediates are found to be preferable in the weak, intermediate and strong electron correlation regimes, respectively. Recent differing serial femtosecond X-ray (SFX) results in the S3 state are investigated based on the proposed basic concepts under the assumption of different water-insertion steps for water cleavage in the Kok cycle. The observation of water insertion in the S3 state is compatible with previous large-scale QM/MM results and a previous theoretical proposal for the chemical equilibrium mechanism in the S3 state. On the other hand, the non-detection of water insertion in the S3 state in other SFX results is consistent with a previous proposal of O-OH (or O-O) bond formation in the S4 state. Radical coupling and non-adiabatic one-electron transfer (NA-OET) mechanisms for the O-O bond formation are examined using energy diagrams from QM calculations and from QM(UB3LYP)/MM calculations. Possible reaction pathways for the O-O and O-OH bond formations are also investigated based on two water-inlet pathways for oxygen evolution in the OEC of PSII. Future perspectives are discussed in relation to post-HDFT calculations of the energy diagrams for elucidation of the mechanism of water oxidation in the OEC of PSII.
NASA Astrophysics Data System (ADS)
Gülşen, Esra; Kurtulus, Bedri; Necati Yaylim, Tolga; Avsar, Ozgur
2017-04-01
In groundwater studies, quantification and detection of fluid flows in boreholes is an important part of assessing aquifer characteristics at different depths. Monitoring wells disturb the natural flow field, and this disturbance creates different flow paths to an aquifer. Vertical fluid-flow analysis is one of the important techniques for detecting and quantifying these vertical flows in boreholes/monitoring wells. The Liwa region is located about 146 km to the southwest of Abu Dhabi city and about 36 km southwest of Madinat Zayed. The SWSR (Strategic Water Storage & Recovery) Project comprises three Schemes (A, B and C); each scheme contains an infiltration basin in the center, 105 recovery wells and 10 clusters, and each cluster comprises 3 monitoring wells with different depths: shallow (~50 m), intermediate (~75 m) and deep (~100 m). The scope of this study is to calculate the transmissivity values at different depths and evaluate the Fluid Flow Log (FFL) data for Scheme A (105 recovery wells) in order to understand the aquifer characteristics at different depths. The transmissivity values at different depth levels are calculated using the Razack and Huntley (1991) equation for vertical flow rates of 30 m³/h, 60 m³/h, 90 m³/h and 120 m³/h, and Empirical Bayesian Kriging is then used for interpolation in Scheme A using ArcGIS 10.2 software. FFLs are drawn with GeODin software. Derivative analysis of the fluid flow data is done with Microsoft Excel. All statistical analyses are calculated with IBM SPSS software. The interpolation results show that the transmissivity values are higher at the top of the aquifer. In other words, the aquifer is more productive in its upper part. We are very grateful to ZETAS Dubai Inc. for financial support and for providing the data.
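The Razack and Huntley (1991) relation estimates transmissivity empirically from specific capacity; a sketch assuming the commonly cited form T = 15.3 (Q/s)^0.67 with both T and Q/s in m²/day (the coefficients should be checked against the original paper before use):

    def transmissivity_razack_huntley(q_m3_per_h, drawdown_m):
        """Empirical transmissivity (m2/day) from pumping rate and drawdown."""
        specific_capacity = (q_m3_per_h * 24.0) / drawdown_m  # m2/day
        return 15.3 * specific_capacity ** 0.67

    # Hypothetical drawdown of 5 m at the study's four vertical flow rates
    for q in (30, 60, 90, 120):
        print(q, round(transmissivity_razack_huntley(q, drawdown_m=5.0), 1))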
NASA Astrophysics Data System (ADS)
Song, W. M.; Fan, D. W.; Su, L. Y.; Cui, C. Z.
2017-11-01
Calculating the coordinate parameters recorded as key/value pairs in a FITS (Flexible Image Transport System) header is the key to determining a FITS image's position in the celestial coordinate system; hence the general process of calculating these coordinate parameters is of great significance. By combining the CCD-related parameters of an astronomical telescope (such as field of view, focal length, and the celestial coordinates of the optical axis), a star pattern recognition algorithm, and WCS (World Coordinate System) theory, the parameters can be calculated effectively. The CCD parameters determine the scope of the star catalogue, so they can be used to build a reference star catalogue for the celestial region corresponding to an astronomical image. Star pattern recognition completes the matching between the astronomical image and the reference star catalogue, yielding a table relating the CCD plane coordinates of a number of stars to their celestial coordinates. According to the chosen projection of the sphere onto the plane, WCS builds the transfer functions between these two coordinate systems, and the astronomical position of any image pixel can then be determined from the table. FITS images are the mainstream data format for scientific data transmission and analysis, but they can only be viewed, edited, and analyzed in professional astronomy software, which limits their use in popular astronomy education. The realization of a general image visualization method is therefore significant. FITS is first converted to PNG or JPEG images. The coordinate parameters in the FITS header are converted to metadata in the form of AVM (Astronomy Visualization Metadata), and the metadata is then added to the PNG or JPEG header. This method can meet amateur astronomers' general needs for viewing and analyzing astronomical images outside professional astronomy software platforms. The overall design flow is implemented in Java and tested with SExtractor, WorldWide Telescope, picture viewers, and other software.
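The header-to-AVM step described here can be sketched with standard Python astronomy libraries (the paper's own implementation is in Java); a minimal version, assuming the pyavm package's AVM.from_header and embed interfaces and an image already exported as PNG:

    from astropy.io import fits
    from pyavm import AVM

    header = fits.getheader("observation.fits")   # WCS keywords live here
    avm = AVM.from_header(header)                 # translate WCS -> AVM metadata
    avm.embed("observation.png", "observation_tagged.png")  # write the tagged PNG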
Chi, Yujie; Tian, Zhen; Jia, Xun
2016-08-07
Monte Carlo (MC) particle transport simulation on a graphics processing unit (GPU) platform has been extensively studied recently due to the efficiency advantage achieved via massive parallelization. Almost all of the existing GPU-based MC packages were developed for voxelized geometry, which limits their application scope. The purpose of this paper is to develop a module to model parametric geometry and integrate it into GPU-based MC simulations. In our module, each continuous region is defined by its bounding surfaces, which are parameterized by quadratic functions. Particle navigation functions in this geometry were developed. The module was incorporated into two previously developed GPU-based MC packages and was tested in two example problems: (1) low-energy photon transport simulation in a brachytherapy case with a shielded cylinder applicator and (2) MeV coupled photon/electron transport simulation in a phantom containing several inserts of different shapes. In both cases, the calculated dose distributions agreed well with those calculated in the corresponding voxelized geometry; the averaged dose differences were 1.03% and 0.29%, respectively. We also used the developed package to perform simulations of a Varian VS 2000 brachytherapy source and generated a phase-space file. The computation time under the parameterized geometry depended on the memory location storing the geometry data; the highest computational speed was achieved when the data was stored in the GPU's shared memory. Incorporation of parameterized geometry yielded a computation time ~3 times that of the corresponding voxelized geometry. We also developed a strategy that uses an auxiliary index array to reduce the frequency of geometry calculations and hence improve efficiency. With this strategy, the computational time ranged from 1.75 to 2.03 times that of the voxelized geometry for coupled photon/electron transport, depending on the voxel dimension of the auxiliary index array, and from 0.69 to 1.23 times for photon-only transport.
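Particle navigation in quadric-bounded regions reduces to solving a quadratic equation for the distance along the flight direction to each bounding surface; a simplified CPU-side sketch for a single quadric f(r) = r·Ar + b·r + c = 0 (the production GPU kernels are necessarily more involved):

    import numpy as np

    def distance_to_quadric(pos, direction, A, b, c):
        """Smallest positive t with f(pos + t*dir) = 0, or None if no hit."""
        qa = direction @ A @ direction
        qb = 2.0 * pos @ A @ direction + b @ direction
        qc = pos @ A @ pos + b @ pos + c
        if abs(qa) < 1e-12:                  # surface is effectively linear along this ray
            return -qc / qb if qb != 0 and -qc / qb > 0 else None
        disc = qb * qb - 4.0 * qa * qc
        if disc < 0:
            return None
        roots = [(-qb + s * np.sqrt(disc)) / (2.0 * qa) for s in (-1.0, 1.0)]
        hits = [t for t in roots if t > 1e-9]
        return min(hits) if hits else None

    # Unit sphere x^2+y^2+z^2-1=0; from the origin along +x the boundary is at t=1
    print(distance_to_quadric(np.zeros(3), np.array([1.0, 0, 0]), np.eye(3), np.zeros(3), -1.0))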
Effects of the soil pore network architecture on the soil's physical functionalities
NASA Astrophysics Data System (ADS)
Smet, Sarah; Beckers, Eléonore; Léonard, Angélique; Degré, Aurore
2017-04-01
Predicting soil fluid movement is of major interest within agricultural and environmental scopes because many processes ultimately depend on soil fluid dynamics. It is common knowledge that the soil's microscopic pore network structure governs the inner-soil convective fluid flow. There is, however, no general method that considers the pore network structure as a variable in the prediction of the core-scale soil's physical functionalities. There are various possible representations of the microscopic pore network: sample-scale averaged structural parameters, extrapolation of a theoretical pore network, or use of all the information available by modeling within the observed pore network. Different representations imply different analysis methodologies. To our knowledge, few studies have compared the micro- and macroscopic soil characteristics for the same soil core sample. The objective of our study is to explore the relationship between macroscopic physical properties and microscopic pore network structure. The saturated hydraulic conductivity, the air permeability, the retention curve, and other classical physical parameters were measured for ten soil samples from an agricultural field. The pore network characteristics were quantified through the analysis of X-ray micro-computed tomographic images (micro-CT system Skyscan-1172) with a voxel size of 22 µm³. Some of the first results confirmed what other studies had reported. The comparison between macroscopic properties and microscopic parameters then suggested that air movement depends more on pore connectivity and tortuosity than on total porosity volume. We also found that the fractal dimension calculated from the X-ray images and the fractal dimension calculated from the retention curve were significantly different. Our communication will detail those results and discuss the methodology: would the results be similar with a different voxel size? What are the uncertainties of the calculated and measured parameters? Sarah Smet, as a research fellow, acknowledges the support of the National Fund for Scientific Research (Brussels, Belgium).
Process-scale modeling of elevated wintertime ozone in Wyoming.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kotamarthi, V. R.; Holdridge, D. J.; Environmental Science Division
2007-12-31
Measurements of meteorological variables and trace gas concentrations, provided by the Wyoming Department of Environmental Quality for Daniel, Jonah, and Boulder Counties in the state of Wyoming, were analyzed for this project. The data indicate that the highest ozone concentrations were observed at temperatures of -10 °C to 0 °C and at low wind speeds of about 5 mph. The median values for nitrogen oxides (NOx) during these episodes ranged between 10 ppbv and 20 ppbv (parts per billion by volume). Measurements of volatile organic compounds (VOCs) during these periods were insufficient for quantitative analysis. The few available VOC measurements indicated unusually high levels of alkanes and aromatics and low levels of alkenes. In addition, the column ozone concentration during one of the high-ozone episodes was low, on the order of 250 DU (Dobson units), compared with a normal column ozone concentration of approximately 300-325 DU during spring for this region. Analysis of this observation was outside the scope of this project. The data analysis reported here was used to establish criteria for making a large number of sensitivity calculations through use of a box photochemical model. Two different VOC lumping schemes, RACM and SAPRC-98, were used for the calculations. Calculations based on this data analysis indicated that the ozone mixing ratios are sensitive to (a) surface albedo, (b) column ozone, (c) NOx mixing ratios, and (d) available terminal olefins. The RACM model showed a large response to an increase in the lumped species containing propane that was not reproduced by the SAPRC scheme, which models propane as a nearly independent species. The rest of the VOCs produced similar changes in ozone in both schemes. In general, if one assumes that the measured VOCs are fairly representative of conditions at these locations, sufficient precursors might be available to produce ozone in the range of 60-80 ppbv under the conditions modeled.
NASA Astrophysics Data System (ADS)
Adiga, Shreemathi; Saraswathi, A.; Praveen Prakash, A.
2018-04-01
This paper presents an interlinking approach of new Triangular Fuzzy Cognitive Maps (TrFCM) and the Combined Effective Time Dependent (CETD) matrix to rank the problems faced by transgender people. Section one begins with an introduction that briefly describes the scope of Triangular Fuzzy Cognitive Maps (TrFCM) and the CETD Matrix. Section two analyzes the causes of the problems faced by transgender people using the Triangular Fuzzy Cognitive Maps (TrFCM) method and performs the calculations using the data collected among transgender people. Section 3 discusses the reasons for the main causes of these problems. Section 4 describes Charles Spearman's coefficient of rank correlation method, interlinking the Triangular Fuzzy Cognitive Maps (TrFCM) method and the CETD Matrix. Section 5 presents the results of our study.
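The Spearman rank correlation used to interlink the TrFCM and CETD rankings has a closed form; a sketch for rankings without ties:

    def spearman_rho(rank_a, rank_b):
        """rho = 1 - 6*sum(d^2) / (n*(n^2 - 1)) for two rankings of the same items."""
        n = len(rank_a)
        d2 = sum((a - b) ** 2 for a, b in zip(rank_a, rank_b))
        return 1.0 - 6.0 * d2 / (n * (n * n - 1))

    print(spearman_rho([1, 2, 3, 4], [1, 3, 2, 4]))  # 0.8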
Benchmark of Atucha-2 PHWR RELAP5-3D control rod model by Monte Carlo MCNP5 core calculation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pecchia, M.; D'Auria, F.; Mazzantini, O.
2012-07-01
Atucha-2 is a Siemens-designed PHWR reactor under construction in the Republic of Argentina. Its geometrical complexity and peculiarities require the adoption of advanced Monte Carlo codes for performing realistic neutronic simulations. Core models of Atucha-2 were therefore developed using MCNP5. In this work a methodology was set up to collect the flux in the hexagonal mesh by which the Atucha-2 core is represented. The scope of this activity is to evaluate the effect of obliquely inserted control rods on the neutron flux, in order to validate the RELAP5-3D©/NESTLE three-dimensional neutron kinetic coupled thermal-hydraulic model applied by GRNSPG/UNIPI for performing selected transients of Chapter 15 of the Atucha-2 FSAR. (authors)
NASA Technical Reports Server (NTRS)
Sozer, Emre; Brehm, Christoph; Kiris, Cetin C.
2014-01-01
A survey of gradient reconstruction methods for cell-centered data on unstructured meshes is conducted within the scope of accuracy assessment. Formal order of accuracy, as well as error magnitudes for each of the studied methods, is evaluated on a complex mesh of various cell types through consecutive local scaling of an analytical test function. The tests highlighted several gradient operator choices that can consistently achieve first-order accuracy regardless of cell type and shape. The tests further offered error comparisons for given cell types, leading to the observation that the "ideal" gradient operator choice is not universal. Practical implications of the results are explored via CFD solutions of a 2D inviscid standing vortex, portraying the discretization error properties. A relatively naive, yet largely unexplored, approach of local curvilinear stencil transformation exhibited surprisingly favorable properties.
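One of the surveyed reconstruction families, the least-squares gradient for cell-centered data, fits a linear variation to the differences against neighboring cells; a minimal unweighted sketch:

    import numpy as np

    def least_squares_gradient(xc, phic, neighbor_x, neighbor_phi):
        """Gradient at a cell from its value and its neighbors' values."""
        dX = np.asarray(neighbor_x) - xc          # displacement rows
        dphi = np.asarray(neighbor_phi) - phic    # value differences
        grad, *_ = np.linalg.lstsq(dX, dphi, rcond=None)
        return grad

    # phi = 2x + 3y is reconstructed exactly from three neighbors
    print(least_squares_gradient(np.zeros(2), 0.0,
                                 [[1, 0], [0, 1], [1, 1]], [2.0, 3.0, 5.0]))  # [2. 3.]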
Predictability and hierarchy in Drosophila behavior.
Berman, Gordon J; Bialek, William; Shaevitz, Joshua W
2016-10-18
Even the simplest of animals exhibit behavioral sequences with complex temporal dynamics. Prominent among the proposed organizing principles for these dynamics has been the idea of a hierarchy, wherein the movements an animal makes can be understood as a set of nested subclusters. Although this type of organization holds potential advantages in terms of motion control and neural circuitry, measurements demonstrating this for an animal's entire behavioral repertoire have been limited in scope and temporal complexity. Here, we use a recently developed unsupervised technique to discover and track the occurrence of all stereotyped behaviors performed by fruit flies moving in a shallow arena. Calculating the optimally predictive representation of the fly's future behaviors, we show that fly behavior exhibits multiple time scales and is organized into a hierarchical structure that is indicative of its underlying behavioral programs and its changing internal states.
Bethe-Boltzmann hydrodynamics and spin transport in the XXZ chain
NASA Astrophysics Data System (ADS)
Bulchandani, Vir B.; Vasseur, Romain; Karrasch, Christoph; Moore, Joel E.
2018-01-01
Quantum integrable systems, such as the interacting Bose gas in one dimension and the XXZ quantum spin chain, have an extensive number of local conserved quantities that endow them with exotic thermalization and transport properties. We discuss recently introduced hydrodynamic approaches for such integrable systems from the viewpoint of kinetic theory and extend the previous works by proposing a numerical scheme to solve the hydrodynamic equations for finite times and arbitrary locally equilibrated initial conditions. We then discuss how such methods can be applied to describe nonequilibrium steady states involving ballistic heat and spin currents. In particular, we show that the spin Drude weight in the XXZ chain, previously accessible only by rigorous techniques of limited scope or controversial thermodynamic Bethe ansatz arguments, may be evaluated from hydrodynamics in very good agreement with density-matrix renormalization group calculations.
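For context, the hydrodynamic equations referred to here are advection equations for the Bethe quasiparticle density; a sketch of the form standard in this literature (notation assumed, not quoted from the paper):

    \partial_t \rho(\theta; x, t) + \partial_x \!\left[ v^{\mathrm{eff}}(\theta; x, t)\, \rho(\theta; x, t) \right] = 0

where \rho(\theta; x, t) is the local density of quasiparticles at rapidity \theta and v^{\mathrm{eff}} is the effective (dressed) velocity, determined self-consistently from the local state.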
Flow-induced Vibration of SSME Main Injector Liquid-oxygen Posts
NASA Technical Reports Server (NTRS)
Chen, S. S.; Jendrzejczyk, J. A.; Wambsganss, M. W.
1985-01-01
The liquid-oxygen (LOX) posts are exposed to hot hydrogen flowing over the tubes on its way to the combustion chamber. Fatigue cracking of some LOX posts was observed after test firing of the SSMEs. A current design modification consists of attaching impingement shields to the LOX posts in the outer row. The modification improved the vibration/fatigue problem of the LOX posts, but resulted in an increased pressure drop that ultimately shortened the life expectancy of other components. A fundamental study of vibration of the LOX posts was initiated to understand the flow-induced vibration problem and to develop techniques to avoid detrimental vibrational effects with the overall objective of improving engine life. This effort, including an assessment of the problem, scoping calculation and experiment, and a work plan for an integrated theoretical/experimental study of the problem is summarized.
Automated UHPLC separation of 10 pharmaceutical compounds using software-modeling.
Zöldhegyi, A; Rieger, H-J; Molnár, I; Fekhretdinova, L
2018-03-20
Human mistakes are still one of the main causes of regulatory findings and, for compliance with FDA's Data Integrity and Analytical Quality by Design (AQbD) expectations, must be eliminated. To develop smooth, fast and robust methods that are free of human failures, a state-of-the-art automation approach is presented. For the scope of this study, a commercial software package (DryLab) and a model mixture of 10 drugs were subjected to testing. Following AQbD principles, the best available working point was selected and confirmatory experimental runs, i.e. the six worst cases of the conducted robustness calculation, were performed. Simulated results were found to be in excellent agreement with the experimental ones, proving the usefulness and effectiveness of automated, software-assisted analytical method development. Copyright © 2018. Published by Elsevier B.V.
Tropical Ecosystems and Ecological Concepts
NASA Astrophysics Data System (ADS)
Osborne, Patrick L.
2000-09-01
Over one third of the earth's terrestrial surface is situated in the tropics, with environments ranging from hot deserts to tropical rain forests. This introductory textbook, aimed at students studying tropical ecology, provides a comprehensive guide to the major tropical biomes and is unique in its balanced coverage of both aquatic and terrestrial systems. The volume considers the human ecological dimension, covering issues such as population growth, urbanization, agriculture and fisheries, natural resource use, and pollution. It is international in scope and addresses global issues such as conservation of biodiversity, climate change, and the concept of ecological sustainability. The text is supported throughout by boxes containing supplementary material on a range of topics and organisms, mathematical concepts and calculations, and is enlivened with clear line diagrams, maps, and photographs. A cross-referenced glossary, extensive bibliography, and comprehensive index are included as further aids to study.
Cascade Synthesis of Five-Membered Lactones using Biomass-Derived Sugars as Carbon Nucleophiles.
Yamaguchi, Sho; Matsuo, Takeaki; Motokura, Ken; Miyaji, Akimitsu; Baba, Toshihide
2016-06-06
We report the cascade synthesis of five-membered lactones from a biomass-derived triose sugar, 1,3-dihydroxyacetone, and various aldehydes. This achievement provides a new synthetic strategy to generate a wide range of valuable compounds from a single biomass-derived sugar. Among several examined Lewis acid catalysts, homogeneous tin chloride catalysts exhibited the best performance to form carbon-carbon bonds. The scope and limitations of the synthesis of five-membered lactones using aldehyde compounds are investigated. The cascade reaction led to high product selectivity as well as diastereoselectivity, and the mechanism leading to the diastereoselectivity was discussed based on isomerization experiments and density functional theory (DFT) calculations. The present results are expected to support new approaches for the efficient utilization of biomass-derived sugars. © 2016 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Godwin, Aaron
The scope will be limited to analyzing the effect of the EFC within the system and how one improperly installed coupling affects the rest of the HPFL system. The discussion will include normal operations, impaired flow, and service interruptions. Normal operations are defined as two-way flow to buildings. Impaired operations are defined as a building that only has one-way flow being provided to it. Service interruptions occur when a building does not have water available to it. The project will look at the following aspects of the reliability of the HPFL system: mean time to failure (MTTF) of EFCs, mean time between failures (MTBF), series system models, and parallel system models. These calculations will then be used to discuss the reliability of the system when one of the couplings fails, and to compare the reliability of two-way feeds versus one-way feeds.
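The series/parallel reliability bookkeeping described above follows standard formulas; a sketch assuming independent components with constant failure rates (exponential lifetimes):

    import math

    def reliability_series(rs):
        """All components must work: R = product of R_i."""
        return math.prod(rs)

    def reliability_parallel(rs):
        """At least one must work: R = 1 - product of (1 - R_i)."""
        return 1.0 - math.prod(1.0 - r for r in rs)

    def reliability_exponential(mttf_hours, t_hours):
        """R(t) = exp(-t / MTTF) for a constant failure rate."""
        return math.exp(-t_hours / mttf_hours)

    # Two couplings, each R = 0.95: series vs parallel (roughly two-way feed analogy)
    print(reliability_series([0.95, 0.95]), reliability_parallel([0.95, 0.95]))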
Interference problems for nongeostationary satellites
NASA Technical Reports Server (NTRS)
Sollfrey, W.
1984-01-01
The interference problems faced by nongeostationary satellites may be of major significance. A general discussion indicates the scope of the problems and describes several configurations of importance. Computer programs are described, which are employed by NASA/JPL and the U.S. Air Force Satellite Control Facility to provide interference-free scheduling of commands and data transmission. Satellite system mission planners are not concerned with the precise prediction of interference episodes, but rather with the expected total amount of interference, the mean and maximum duration of events, and the mean spacing between episodes. The procedures in the theory of probability developed by the author which permit calculation of such quantities are described and applied to several real cases. It may be anticipated that the problems will become steadily worse in the future as more and more data transmissions attempt to occupy the same frequency band.
ON-LINE CALCULATOR: FORWARD CALCULATION JOHNSON ETTINGER MODEL
On-Site was developed to provide modelers and model reviewers with prepackaged tools ("calculators") for performing site assessment calculations. The philosophy behind OnSite is that the convenience of the prepackaged calculators helps provide consistency for simple calculations,...
Rummer, Jodie L.; Couturier, Christine S.; Stecyk, Jonathan A. W.; Gardiner, Naomi M.; Kinch, Jeff P.; Nilsson, Göran E.; Munday, Philip L.
2015-01-01
Equatorial populations of marine species are predicted to be most impacted by global warming because they could be adapted to a narrow range of temperatures in their local environment. We investigated the thermal range at which aerobic metabolic performance is optimum in equatorial populations of coral reef fish in northern Papua New Guinea. Four species of damselfishes and two species of cardinal fishes were held for 14 d at 29, 31, 33, and 34°C, which incorporated their existing thermal range (29–31°C) as well as projected increases in ocean surface temperatures of up to 3°C by the end of this century. Resting and maximum oxygen consumption rates were measured for each species at each temperature and used to calculate the thermal reaction norm of aerobic scope. Our results indicate that one of the six species, Chromis atripectoralis, is already living above its thermal optimum of 29°C. The other five species appeared to be living close to their thermal optima (approximately 31°C). Aerobic scope was significantly reduced in all species, and approached zero for two species at 3°C above current-day temperatures. One species was unable to survive even short-term exposure to 34°C. Our results indicate that low-latitude reef fish populations are living close to their thermal optima and may be more sensitive to ocean warming than higher-latitude populations. Even relatively small temperature increases (2–3°C) could result in population declines and potentially redistribution of equatorial species to higher latitudes if adaptation cannot keep pace. PMID:24281840
Rummer, Jodie L; Couturier, Christine S; Stecyk, Jonathan A W; Gardiner, Naomi M; Kinch, Jeff P; Nilsson, Göran E; Munday, Philip L
2014-04-01
Equatorial populations of marine species are predicted to be most impacted by global warming because they could be adapted to a narrow range of temperatures in their local environment. We investigated the thermal range at which aerobic metabolic performance is optimum in equatorial populations of coral reef fish in northern Papua New Guinea. Four species of damselfishes and two species of cardinal fishes were held for 14 days at 29, 31, 33, and 34 °C, which incorporated their existing thermal range (29-31 °C) as well as projected increases in ocean surface temperatures of up to 3 °C by the end of this century. Resting and maximum oxygen consumption rates were measured for each species at each temperature and used to calculate the thermal reaction norm of aerobic scope. Our results indicate that one of the six species, Chromis atripectoralis, is already living above its thermal optimum of 29 °C. The other five species appeared to be living close to their thermal optima (ca. 31 °C). Aerobic scope was significantly reduced in all species, and approached zero for two species at 3 °C above current-day temperatures. One species was unable to survive even short-term exposure to 34 °C. Our results indicate that low-latitude reef fish populations are living close to their thermal optima and may be more sensitive to ocean warming than higher-latitude populations. Even relatively small temperature increases (2-3 °C) could result in population declines and potentially redistribution of equatorial species to higher latitudes if adaptation cannot keep pace. © 2013 John Wiley & Sons Ltd.
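Aerobic scope as used in these two records is the difference (or ratio) between maximum and resting oxygen consumption; a sketch of how the thermal reaction norm is summarized (the numbers below are illustrative, not the study's data):

    def aerobic_scope(mo2_max, mo2_rest):
        """Absolute aerobic scope; the factorial version is mo2_max / mo2_rest."""
        return mo2_max - mo2_rest

    # Reaction norm: scope at each test temperature; the optimum is its peak
    temps = [29, 31, 33, 34]
    rates = [(9.0, 3.0), (9.8, 3.2), (7.5, 4.0), (4.5, 4.2)]  # made-up (max, rest) pairs
    scopes = [aerobic_scope(mx, rs) for mx, rs in rates]
    print(temps[scopes.index(max(scopes))])  # 31 under these made-up numbers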
SU-E-T-423: Fast Photon Convolution Calculation with a 3D-Ideal Kernel On the GPU
DOE Office of Scientific and Technical Information (OSTI.GOV)
Moriya, S; Sato, M; Tachibana, H
Purpose: The calculation time is a trade-off for improving the accuracy of convolution dose calculation with fine calculation spacing of the KERMA kernel. We investigated accelerating the convolution calculation using an ideal kernel on graphics processing units (GPU). Methods: The calculation was performed on AMD graphics hardware (Dual FirePro D700), and our algorithm was implemented using Aparapi, which converts Java bytecode to OpenCL. The process of dose calculation was separated into the TERMA and KERMA steps. The dose deposited at the coordinate (x, y, z) was determined in the process. In the dose calculation running on the central processing unit (CPU), an Intel Xeon E5, the calculation loops were performed over all calculation points. In the GPU computation, all of the calculation processes for the points were sent to the GPU and multi-thread computation was performed. In this study, the dose calculation was performed in a water-equivalent homogeneous phantom with 150³ voxels (2 mm calculation grid), and the calculation speed on the GPU was compared with that on the CPU, along with the accuracy of the PDD. Results: The calculation times for the GPU and the CPU were 3.3 s and 4.4 h, respectively; the GPU was 4800 times faster than the CPU. The PDD curve for the GPU perfectly matched that for the CPU. Conclusion: The convolution calculation with the ideal kernel on the GPU was clinically acceptable in time and may be more accurate in inhomogeneous regions. Intensity modulated arc therapy needs dose calculations for different gantry angles at many control points. Thus, it would be more practical for the kernel to use a coarse-spacing technique if the calculation is faster while keeping accuracy similar to a current treatment planning system.
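The superposition step itself is a 3D convolution, which is what parallelizes so well on the GPU; a CPU-side sketch of the same operation on the abstract's 150³ grid (toy TERMA distribution and spread kernel, not the clinical one):

    import numpy as np
    from scipy.ndimage import convolve

    terma = np.zeros((150, 150, 150))        # 2 mm grid, as in the abstract
    terma[75, 75, 10:60] = 1.0               # toy beam energy deposition along one line
    kernel = np.ones((5, 5, 5)) / 125.0      # illustrative normalized spread kernel
    dose = convolve(terma, kernel, mode="constant")
    print(dose.max())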
The Band Structure of Polymers: Its Calculation and Interpretation. Part 2. Calculation.
ERIC Educational Resources Information Center
Duke, B. J.; O'Leary, Brian
1988-01-01
Details ab initio crystal orbital calculations using all-trans-polyethylene as a model. Describes calculations based on various forms of translational symmetry. Compares these calculations with ab initio molecular orbital calculations discussed in a preceding article. Discusses three major approximations made in the crystal case. (CW)
Hybrid dose calculation: a dose calculation algorithm for microbeam radiation therapy
NASA Astrophysics Data System (ADS)
Donzelli, Mattia; Bräuer-Krisch, Elke; Oelfke, Uwe; Wilkens, Jan J.; Bartzsch, Stefan
2018-02-01
Microbeam radiation therapy (MRT) is still a preclinical approach in radiation oncology that uses planar, micrometre-wide beamlets with extremely high peak doses, separated by a few hundred micrometre wide low-dose regions. Abundant preclinical evidence demonstrates that MRT spares normal tissue more effectively than conventional radiation therapy, at equivalent tumour control. In order to launch first clinical trials, accurate and efficient dose calculation methods are an inevitable prerequisite. In this work a hybrid dose calculation approach is presented that is based on a combination of Monte Carlo and kernel-based dose calculation. In various examples the performance of the algorithm is compared to purely Monte Carlo and purely kernel-based dose calculations. The accuracy of the developed algorithm is comparable to conventional pure Monte Carlo calculations. In particular, for inhomogeneous materials the hybrid dose calculation algorithm outperforms purely convolution-based dose calculation approaches. It is demonstrated that the hybrid algorithm can efficiently calculate even complicated pencil beam and cross-firing beam geometries. The required calculation times are substantially lower than for pure Monte Carlo calculations.
Flow dynamics of a spiral-groove dry-gas seal
NASA Astrophysics Data System (ADS)
Wang, Bing; Zhang, Huiqiang; Cao, Hongjun
2013-01-01
The dry-gas seal has been widely used in different industries. With increased spin speed of the rotator shaft, turbulence occurs in the gas film between the stator and rotor seal faces. For the micro-scale flow in the gas film and grooves, turbulence can change the pressure distribution of the gas film. Hence, the seal performance is influenced. However, turbulence effects and methods for their evaluation are not considered in the existing industrial designs of dry-gas seal. The present paper numerically obtains the turbulent flow fields of a spiral-groove dry-gas seal to analyze turbulence effects on seal performance. The direct numerical simulation (DNS) and Reynolds-averaged Navier-Stokes (RANS) methods are utilized to predict the velocity field properties in the grooves and gas film. The key performance parameter, open force, is obtained by integrating the pressure distribution, and the obtained result is in good agreement with the experimental data of other researchers. Very large velocity gradients are found in the sealing gas film because of the geometrical effects of the grooves. Considering turbulence effects, the calculation results show that both the gas film pressure and open force decrease. The RANS method underestimates the performance, compared with the DNS. The solution of the conventional Reynolds lubrication equation without turbulence effects suffers from significant calculation errors and a small application scope. The present study helps elucidate the physical mechanism of the hydrodynamic effects of grooves for improving and optimizing the industrial design or seal face pattern of a dry-gas seal.
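For context, the conventional Reynolds lubrication equation whose limits are noted above is, for an isothermal ideal-gas film of thickness h, commonly written as (a standard textbook form, assumed here rather than quoted from the paper):

    \nabla \cdot \left( p\, h^{3}\, \nabla p \right) = 6 \mu U\, \frac{\partial (p h)}{\partial x} + 12 \mu\, \frac{\partial (p h)}{\partial t}

where p is the film pressure, \mu the gas viscosity, and U the sliding speed of the rotor face; turbulence corrections modify the h³ conductance terms, which is why the laminar equation loses accuracy at high spin speeds.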
Simoens, Steven
2013-01-01
Objectives: This paper aims to assess the methodological quality of economic evaluations included in Belgian reimbursement applications for Class 1 drugs. Materials and Methods: For 19 reimbursement applications submitted during 2011 and Spring 2012, a descriptive analysis assessed the methodological quality of the economic evaluation, evaluated the assessment of that economic evaluation by the Drug Reimbursement Committee and the response to that assessment by the company. Compliance with methodological guidelines issued by the Belgian Healthcare Knowledge Centre was assessed using a detailed checklist of 23 methodological items. The rate of compliance was calculated based on the number of economic evaluations for which the item was applicable. Results: Economic evaluations tended to comply with guidelines regarding perspective, target population, subgroup analyses, comparator, use of comparative clinical data and final outcome measures, calculation of costs, incremental analysis, discounting and time horizon. However, more attention needs to be paid to the description of limitations of indirect comparisons, the choice of an appropriate analytic technique, the expression of unit costs in values for the current year, the estimation and valuation of outcomes, the presentation of results of sensitivity analyses, and testing the face validity of model inputs and outputs. Also, a large variation was observed in the scope and depth of the quality assessment by the Drug Reimbursement Committee. Conclusions: Although general guidelines exist, pharmaceutical companies and the Drug Reimbursement Committee would benefit from the existence of a more detailed checklist of methodological items that need to be reported in an economic evaluation. PMID:24386474
Simoens, Steven
2013-01-01
This paper aims to assess the methodological quality of economic evaluations included in Belgian reimbursement applications for Class 1 drugs. For 19 reimbursement applications submitted during 2011 and Spring 2012, a descriptive analysis assessed the methodological quality of the economic evaluation, evaluated the assessment of that economic evaluation by the Drug Reimbursement Committee and the response to that assessment by the company. Compliance with methodological guidelines issued by the Belgian Healthcare Knowledge Centre was assessed using a detailed checklist of 23 methodological items. The rate of compliance was calculated based on the number of economic evaluations for which the item was applicable. Economic evaluations tended to comply with guidelines regarding perspective, target population, subgroup analyses, comparator, use of comparative clinical data and final outcome measures, calculation of costs, incremental analysis, discounting and time horizon. However, more attention needs to be paid to the description of limitations of indirect comparisons, the choice of an appropriate analytic technique, the expression of unit costs in values for the current year, the estimation and valuation of outcomes, the presentation of results of sensitivity analyses, and testing the face validity of model inputs and outputs. Also, a large variation was observed in the scope and depth of the quality assessment by the Drug Reimbursement Committee. Although general guidelines exist, pharmaceutical companies and the Drug Reimbursement Committee would benefit from the existence of a more detailed checklist of methodological items that need to be reported in an economic evaluation.
NASA Astrophysics Data System (ADS)
Reahard, R. R.; Mitchell, B. S.; Childs, L. M.; Billiot, A.; Brown, T.
2009-12-01
The Chandeleur Islands are the first line of defense against tropical storms and hurricanes for coastal Louisiana. They provide habitats for bird species and are a national wildlife refuge; however, they are eroding and transgressing at an alarming rate. In 1998, Hurricane Georges caused severe damage to the chain, prompting restoration and monitoring efforts by both Federal and State agencies. Since then, storm events have steadily diminished the condition of the islands. Shoreline erosion, vegetation change, and land loss from 1979 to 2009 were quantified through the analysis of imagery from the Landsat 2-4 Multispectral Scanner, Landsat 4 & 5 Thematic Mapper, and Advanced Spaceborne Thermal Emission and Reflection Radiometer (ASTER) sensors. QuickBird imagery was used to validate the accuracy of these results. In addition, this study presents an application of Moderate Resolution Imaging Spectroradiometer (MODIS) data to assist in tracking the landward migration of the Chandeleur Islands. Near infrared reflectance calculated from MOD09 surface reflectance data from 2000 to 2008 was analyzed using the Time Series Product Tool. The scope of this project includes not only assessments of the tropical cyclonic events during this time period, but also the effects of tides, winds, and cold fronts on the spatial extent of the islands. Partnering organizations, such as the Pontchartrain Institute for Environmental Sciences, will utilize these results in an effort to better monitor and address the continual change of the Chandeleur Islands.
Structure-Activity Relationships for Rates of Aromatic Amine Oxidation by Manganese Dioxide
Salter-Blanc, Alexandra J.; Bylaska, Eric J.; Lyon, Molly A.; ...
2016-04-13
New energetic compounds are designed to minimize their potential environmental impacts, which includes their transformation and the fate and effects of their transformation products. The nitro groups of energetic compounds are readily reduced to amines, and the resulting aromatic amines are subject to oxidation and coupling reactions. Manganese dioxide (MnO2) is a common environmental oxidant and model system for kinetic studies of aromatic amine oxidation. In this study, a training set of new and previously reported kinetic data for the oxidation of model and energetic-derived aromatic amines was assembled and subjected to correlation analysis against descriptor variables that ranged from general purpose [Hammett σ constants (σ−), pKas of the amines, and energies of the highest occupied molecular orbital (EHOMO)] to specific for the likely rate-limiting step [one-electron oxidation potentials (Eox)]. The selection of calculated descriptors (pKa, EHOMO, and Eox) was based on validation with experimental data. All of the correlations gave satisfactory quantitative structure-activity relationships (QSARs), but they improved with the specificity of the descriptor. The scope of the correlation analysis was extended beyond MnO2 to include literature data on aromatic amine oxidation by other environmentally relevant oxidants (ozone, chlorine dioxide, and phosphate and carbonate radicals) by correlating relative rate constants (normalized to 4-chloroaniline) to EHOMO (calculated with a modest level of theory).
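The QSAR correlations described here are log-linear fits of rate constants against a single descriptor; a generic sketch with made-up data (the paper's actual training set is not reproduced):

    import numpy as np

    e_homo = np.array([-8.9, -8.6, -8.4, -8.1])   # eV, hypothetical descriptor values
    log_k = np.array([-2.1, -1.2, -0.7, 0.1])     # hypothetical log rate constants
    slope, intercept = np.polyfit(e_homo, log_k, 1)
    print(f"log k = {slope:.2f} * E_HOMO + {intercept:.2f}")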
Sound Power Estimation for Beam and Plate Structures Using Polyvinylidene Fluoride Films as Sensors
Mao, Qibo; Zhong, Haibing
2017-01-01
The theory for calculation and/or measurement of sound power based on the classical velocity-based radiation mode (V-mode) approach is well established for planar structures. However, the current V-mode theory is limited in scope in that it can only be applied with conventional motion sensors (i.e., accelerometers). In this study, in order to estimate the sound power of vibrating beam and plate structures using polyvinylidene fluoride (PVDF) films as sensors, a PVDF-based radiation mode (C-mode) concept is introduced to determine the sound power radiated from the output signals of PVDF films on the vibrating structure. The proposed method is a hybrid of vibration measurement and numerical calculation of C-modes. The proposed C-mode approach has the following advantages: (1) compared to conventional motion sensors, the PVDF films are lightweight, flexible, and low-cost; (2) there is no need for special measuring environments, since the proposed method does not require the measurement of sound fields; (3) in the low frequency range (typically with dimensionless frequency kl < 4), the radiation efficiencies of the C-modes fall off very rapidly with increasing mode order; furthermore, the shapes of the C-modes remain almost unchanged, which means that the computational load can be significantly reduced because only the first few dominant C-modes are involved in the low frequency range. Numerical simulations and experimental investigations were carried out to verify the accuracy and efficiency of the proposed method. PMID:28509870
Generic Degraded Configuration Probability Analysis for DOE Codisposal Waste Package
DOE Office of Scientific and Technical Information (OSTI.GOV)
S.F.A. Deng; M. Saglam; L.J. Gratton
2001-05-23
In accordance with the technical work plan, "Technical Work Plan For: Department of Energy Spent Nuclear Fuel Work Packages" (CRWMS M&O 2000c), this Analysis/Model Report (AMR) was developed for the purpose of screening out degraded configurations for U.S. Department of Energy (DOE) spent nuclear fuel (SNF) types. It performs the degraded configuration parameter and probability evaluations of the overall methodology specified in the "Disposal Criticality Analysis Methodology Topical Report" (YMP 2000, Section 3) for qualifying configurations. Degradation analyses are performed to assess realizable parameter ranges and physical regimes for configurations. Probability calculations are then performed for configurations characterized by keff in excess of the Critical Limit (CL). The scope of this document is to develop a generic set of screening criteria or models to screen out degraded configurations having potential for exceeding a criticality limit. The developed screening criteria include arguments based on physical/chemical processes and probability calculations, and apply to DOE SNF types when codisposed with high-level waste (HLW) glass inside a waste package. The degradation takes place inside the waste package, long after repository licensing has expired. The emphasis of this AMR is on degraded configuration screening, and the probability analysis is one of the approaches used for screening. The intended use of the model is to apply the developed screening criteria to each DOE SNF type following the completion of the degraded-mode criticality analysis internal to the waste package.
Structure-Activity Relationships for Rates of Aromatic Amine Oxidation by Manganese Dioxide
DOE Office of Scientific and Technical Information (OSTI.GOV)
Salter-Blanc, Alexandra J.; Bylaska, Eric J.; Lyon, Molly A.
New energetic compounds are designed to minimize their potential environmental impacts, which includes their transformation and the fate and effects of their transformation products. The nitro groups of energetic compounds are readily reduced to amines, and the resulting aromatic amines are subject to oxidation and coupling reactions. Manganese dioxide (MnO2) is a common environmental oxidant and model system for kinetic studies of aromatic amine oxidation. In this study, a training set of new and previously reported kinetic data for the oxidation of model and energetic-derived aromatic amines was assembled and subjected to correlation analysis against descriptor variables that ranged from general purpose [Hammett σ constants (σ−), pKas of the amines, and energies of the highest occupied molecular orbital (EHOMO)] to specific for the likely rate-limiting step [one-electron oxidation potentials (Eox)]. The selection of calculated descriptors (pKa, EHOMO, and Eox) was based on validation with experimental data. All of the correlations gave satisfactory quantitative structure-activity relationships (QSARs), but they improved with the specificity of the descriptor. The scope of the correlation analysis was extended beyond MnO2 to include literature data on aromatic amine oxidation by other environmentally relevant oxidants (ozone, chlorine dioxide, and phosphate and carbonate radicals) by correlating relative rate constants (normalized to 4-chloroaniline) to EHOMO (calculated with a modest level of theory).
Seismic analysis for translational failure of landfills with retaining walls.
Feng, Shi-Jin; Gao, Li-Ya
2010-11-01
In the seismic impact zone, seismic force can be a major triggering mechanism for translational failures of landfills. The scope of this paper is to develop a three-part wedge method for seismic analysis of translational failures of landfills with retaining walls. An approximate solution for the factor of safety can be calculated. Unlike previous conventional limit equilibrium methods, the new method is capable of revealing the effects of both the solid waste shear strength and the retaining wall on the translational failures of landfills during an earthquake. Parameter studies of the developed method show that the factor of safety decreases with the increase of the seismic coefficient, while it increases quickly with the increase of the minimum friction angle beneath the waste mass for various horizontal seismic coefficients. Increasing the minimum friction angle beneath the waste mass appears to be more effective than any other parameter for increasing the factor of safety under the considered conditions. Thus, selecting liner materials with a higher friction angle will considerably reduce the potential for translational failures of landfills during an earthquake. The factor of safety gradually increases with the increase of the height of the retaining wall for various horizontal seismic coefficients. A higher retaining wall is beneficial to the seismic stability of the landfill; simply ignoring the retaining wall will lead to serious underestimation of the factor of safety. Besides, an approximate solution for the yield acceleration coefficient of the landfill is also presented based on the developed method. Copyright © 2010 Elsevier Ltd. All rights reserved.
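A pseudo-static factor of safety of the kind parameterized here balances resisting against driving forces on the waste mass, with the horizontal seismic coefficient k_h scaling the inertial load; a single-wedge simplification for illustration only (the paper's three-part wedge method resolves inter-wedge forces and is not reproduced here):

    import math

    def factor_of_safety(weight, slope_deg, friction_deg, k_h):
        """Single-wedge pseudo-static FS; frictional resistance over driving force."""
        beta, phi = math.radians(slope_deg), math.radians(friction_deg)
        normal = weight * math.cos(beta) - k_h * weight * math.sin(beta)
        driving = weight * math.sin(beta) + k_h * weight * math.cos(beta)
        return (normal * math.tan(phi)) / driving

    print(factor_of_safety(1000.0, 18.0, 22.0, k_h=0.1))  # FS drops as k_h rises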
NASA Astrophysics Data System (ADS)
Timmermans, Joris; Gastellu-Etchegorry, Jean Philippe; van der Tol, Christiaan; Verhoef, Wout; Vekerdy, Zoltan; Su, Zhongbo
2017-04-01
Accurate estimation of radiative transfer (RT) over vegetation is the cornerstone of agricultural and hydrological remote sensing applications. Present remote sensing sensors mostly use traditional optical, thermal, and microwave observations. With these traditional observations, however, the light-use efficiency and photosynthetic rate can only be characterized indirectly. A promising new method for observing these processes is to use the fluorescent emitted radiation. This approach was recently highlighted by the selection of the FLEX sensor as a future Earth Explorer by the European Space Agency (ESA). Several modelling activities have been undertaken to better understand the technical feasibility of this sensor, and within these studies the SCOPE model has been chosen as the baseline algorithm. This model combines a detailed RT description of the canopy, using a discrete version of the SAIL model, with a description of photosynthetic processes (by use of the Farquhar/Ball-Berry model). Consequently, the model is capable of simultaneously simulating the biophysical processes and the fluorescent, optical, and thermal RT. The SAIL model, however, is a 1D RT model and consequently yields larger uncertainties as vegetation structure becomes more complex. The main objective of this research is to investigate the limitations of the RT component of the SCOPE model over complex canopies, and in particular to evaluate its validity, for increasingly structurally complex canopies, in terms of the bidirectional reflectance distribution functions (BRDF) of these canopies. This was accomplished by evaluating the outgoing radiation simulated by SCOPE/SAIL against simulations with the DART 3D RT model. In total, nine scenarios of increasing structural complexity were simulated with the DART RTM, ranging from the simple 'Plot' scenario to the highly complex 'Multiple Crown' scenario. The canopy parameters were retrieved from a terrestrial laser scan of the Speulderbos in the Netherlands. The comparison between the DART and SCOPE models showed a good match for the simple scenarios: calculated rMSDs were below 7.5% for crown coverage values lower than 0.87, with the near-hotspot viewing angles the largest contributor to the deviation. For the more complex Multiple Crown scenarios, the comparison between SCOPE and DART showed mixed results. Good results were obtained for a crown coverage of 0.93, with rMSDs (6.77% and 5.96%) below the defined threshold value except near the hotspot; for crown coverages lower than 0.93, the rMSDs were too large to validate the use of the SCOPE model. When considering the Soil Leaf Canopy (SLC) model, an improved version of SAIL that accounts for canopy clumping, better results were obtained for these complex scenarios, with good agreement for medium crown coverage values (0.93 and 0.87; rMSDs of 6.33% and 5.99%, and 6.66% and 7.12%, respectively). This indicates that the radiative transfer model within SCOPE might be upgraded in the future.
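For readers wanting to reproduce the comparison metric, a minimal sketch of a relative root-mean-square deviation (rMSD) between two simulated radiance sets follows. The paper does not spell out its exact normalisation, so the definition used here (RMS deviation over the mean reference value) is an assumption.

import numpy as np

def rmsd_percent(model, reference):
    """Relative RMS deviation between two BRDF samplings, in percent
    of the mean reference radiance (assumed definition)."""
    model, reference = np.asarray(model), np.asarray(reference)
    rmsd = np.sqrt(np.mean((model - reference) ** 2))
    return 100.0 * rmsd / np.mean(reference)

# e.g. SCOPE/SAIL vs DART radiances over the same viewing angles
scope = np.array([0.112, 0.098, 0.105, 0.131])
dart = np.array([0.118, 0.101, 0.109, 0.127])
print(f"rMSD = {rmsd_percent(scope, dart):.2f}%")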
Handbook of Industrial Engineering Equations, Formulas, and Calculations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Badiru, Adedeji B; Omitaomu, Olufemi A
The first handbook to focus exclusively on industrial engineering calculations with a correlation to applications, Handbook of Industrial Engineering Equations, Formulas, and Calculations contains a general collection of the mathematical equations often used in the practice of industrial engineering. Many books cover individual areas of engineering and some cover all areas, but none covers industrial engineering specifically, nor do they highlight topics such as project management, materials, and systems engineering from an integrated viewpoint. Written by acclaimed researchers and authors, this concise reference marries theory and practice, making it a versatile and flexible resource. Succinctly formatted for functionality, the book presents: Basic Math Calculations; Engineering Math Calculations; Production Engineering Calculations; Engineering Economics Calculations; Ergonomics Calculations; Facility Layout Calculations; Production Sequencing and Scheduling Calculations; Systems Engineering Calculations; Data Engineering Calculations; Project Engineering Calculations; and Simulation and Statistical Equations. It has been said that engineers make things while industrial engineers make things better. To make something better requires an understanding of its basic characteristics and the underlying equations and calculations that facilitate that understanding. To do this, however, you do not have to be a computational expert; you just have to know where to get the computational resources that are needed. This book elucidates the underlying equations that facilitate the understanding required to improve design processes, continuously improving the answer to the age-old question: What is the best way to do a job?
ON-LINE CALCULATOR: JOHNSON ETTINGER VAPOR INTRUSION MODEL
On-Site was developed to provide modelers and model reviewers with prepackaged tools ("calculators") for performing site assessment calculations. The philosophy behind OnSite is that the convenience of the prepackaged calculators helps provide consistency for simple calculations,...
ERIC Educational Resources Information Center
Cleary, David A.
2014-01-01
The usefulness of the JANAF tables is demonstrated with specific equilibrium calculations. An emphasis is placed on the nature of standard chemical potential calculations. Also, the use of the JANAF tables for calculating partition functions is examined. In the partition function calculations, the importance of the zero of energy is highlighted.
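A minimal worked example of the kind of equilibrium calculation the JANAF tables support: the equilibrium constant follows from the reaction Gibbs energy via K = exp(-ΔrG°/RT). The ΔfG° values below are illustrative placeholders that would be read from the tables at the chosen temperature.

import math

R = 8.314462618  # J mol^-1 K^-1

def equilibrium_constant(delta_f_g_kj, stoich, T):
    """K for a reaction from standard Gibbs energies of formation
    (kJ/mol, as tabulated in the JANAF tables) and stoichiometric
    coefficients (negative for reactants, positive for products)."""
    dG_rxn = 1000.0 * sum(nu * g for nu, g in zip(stoich, delta_f_g_kj))
    return math.exp(-dG_rxn / (R * T))

# CO + 1/2 O2 -> CO2 at 1000 K; the delta_f G values below are
# placeholders to be read from the JANAF tables at that temperature.
dG = [-200.3, 0.0, -395.9]   # CO, O2, CO2 (kJ/mol, illustrative)
nu = [-1.0, -0.5, 1.0]
print(equilibrium_constant(dG, nu, T=1000.0))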
Quantum chemical calculations of Cr2O3/SnO2 using density functional theory method
NASA Astrophysics Data System (ADS)
Jawaher, K. Rackesh; Indirajith, R.; Krishnan, S.; Robert, R.; Das, S. Jerome
2018-03-01
Quantum chemical calculations have been employed to study the molecular effects produced by the optimised Cr2O3/SnO2 structure. The theoretical parameters of the transparent conducting metal oxides were calculated using the DFT/B3LYP/LANL2DZ method. The optimised bond parameters, such as bond lengths, bond angles, and dihedral angles, were calculated at the same level of theory. The non-linear optical property of the title compound was evaluated through a first-order hyperpolarisability calculation. The HOMO-LUMO analysis explains the charge-transfer interaction within the molecule. In addition, MEP and Mulliken atomic charges were also calculated and analysed.
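A hedged sketch of a B3LYP/LANL2DZ HOMO-LUMO gap calculation in the same spirit, using the open-source PySCF package (the software used in the paper is not specified here). The geometry is a hypothetical placeholder, not the optimised Cr2O3/SnO2 structure from the paper, and basis/ECP availability should be checked for each element.

from pyscf import gto, dft

# Hypothetical placeholder geometry; the actual cluster coordinates
# from the paper are not reproduced here.
mol = gto.M(
    atom="""Sn 0.000 0.000 0.000
            O  0.000 0.000 1.950
            O  1.840 0.000 -0.650""",
    basis="lanl2dz",
    ecp={"Sn": "lanl2dz"},  # effective core potential on the heavy atom
    charge=0,
    spin=0,
)
mf = dft.RKS(mol)
mf.xc = "b3lyp"
mf.kernel()

# HOMO/LUMO from the ordered Kohn-Sham orbital energies
nocc = mol.nelectron // 2
homo, lumo = mf.mo_energy[nocc - 1], mf.mo_energy[nocc]
print("HOMO-LUMO gap (eV):", (lumo - homo) * 27.2114)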
Energy levels, oscillator strengths, and transition probabilities for sulfur-like scandium, Sc VI
NASA Astrophysics Data System (ADS)
El-Maaref, A. A.; Abou Halaka, M. M.; Saddeek, Yasser B.
2017-09-01
Energy levels, oscillator strengths, and transition probabilities for sulfur-like scandium are calculated using the CIV3 code. The calculations have been executed in an intermediate coupling scheme using the Breit-Pauli Hamiltonian. The present calculations are compared with experimental data and other theoretical calculations. The LANL code has been used to confirm the accuracy of the present calculations; the CIV3 results agree well with the corresponding LANL values. The calculated energy levels and oscillator strengths are in reasonable agreement with published experimental data and theoretical values. Lifetimes of some excited levels have also been calculated.
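The lifetime calculation mentioned at the end follows directly from the tabulated transition probabilities: the radiative lifetime of a level is the reciprocal of the sum of its downward Einstein A coefficients. A minimal sketch with placeholder rates follows.

def lifetime(transition_rates):
    """Radiative lifetime of an excited level: the reciprocal of the
    sum of the Einstein A coefficients (s^-1) for all downward
    transitions out of that level."""
    return 1.0 / sum(transition_rates)

# Illustrative A values (s^-1) for one level; the actual Sc VI rates
# are those tabulated in the paper.
A_out = [3.2e8, 4.5e7, 1.1e6]
print(f"tau = {lifetime(A_out):.3e} s")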
SPACEBAR: Kinematic design by computer graphics
NASA Technical Reports Server (NTRS)
Ricci, R. J.
1975-01-01
The interactive graphics computer program SPACEBAR, conceived to reduce the time and complexity associated with developing kinematic mechanisms on the design board, is described. This program allows the direct design and analysis of mechanisms at the terminal screen. All input variables, including linkage geometry, stiffness, and applied loading conditions, can be entered or changed at the terminal and may be displayed in three dimensions. All mechanism configurations can be cycled through their range of travel and viewed in their various geometric positions. Output data include the geometric position in orthogonal coordinates of each node point in the mechanism, the velocities and accelerations of the node points, and the internal loads and displacements of the node points and linkages. All analysis calculations take at most a few seconds to complete. Output data can be viewed on the screen and also printed at the discretion of the user.
Properties of Minor Ions in the Solar Wind and Implications for the Background Solar Wind Plasma
NASA Technical Reports Server (NTRS)
Wagner, William (Technical Monitor); Esser, Ruth
2004-01-01
The scope of the investigation is to extract information on the properties of the bulk solar wind from minor ion observations provided by instruments on board NASA spacecraft and from theoretical model studies. Ion charge states measured in situ in interplanetary space are formed in the inner coronal regions below 5 solar radii; hence they carry information on the properties of the solar wind plasma in that region. The plasma parameters that are important in the ion-forming processes are the electron density, the electron temperature, and the flow speeds of the individual ion species. In addition, if the electron distribution function deviates from a Maxwellian already in the inner corona, the enhanced tail of that distribution function, also called the halo, greatly affects the ion composition. This study is carried out using solar wind models, coronal observations, and ion calculations in conjunction with the in situ observations.
[Crossing borders. The motivation of extreme sportsmen].
Opaschowski, H W
2005-08-01
In his article "Crossing borders -- the motivation of extreme sportsmen" the author gets systematically to the bottom of the question of why extreme sportsmen voluntarily take risks and endanger themselves. Within the scope of a representative sampling 217 extreme sportsmen -- from the fields of mountain biking, trekking and free climbing, canoyning, river rafting and deep sea diving, paragliding, parachuting, bungee jumping and survival training -- give information about their personal motives. What fascinates them? The attraction of risk? The search for sensation? Or the drop out of everyday life? And what comes afterwards? Does in the end the whole life become an extreme sport? Fact is: they live extremely, because they want to move beyond well-trodden paths. To escape the boredom of everyday life they are searching for the kick, the thrill, the no-limit experience. It's about calculated risk between altitude flight and deep sea adventure.
NASA Astrophysics Data System (ADS)
Baumgart, M.; Druml, N.; Consani, M.
2018-05-01
This paper presents a simulation approach for Time-of-Flight cameras to estimate sensor performance and accuracy, as well as to help in understanding experimentally discovered effects. The main scope is the detailed simulation of the optical signals. We use a raytracing-based approach with the optical path length as the master parameter for depth calculations. The procedure is described in detail with references to our implementation in Zemax OpticStudio and Python. Our simulation approach supports multiple and extended light sources and accounts for all effects within the geometrical optics model. In particular, multi-object reflection/scattering ray paths, translucent objects, and aberration effects (e.g. distortion caused by the ToF lens) are supported. The optical path length approach also enables the implementation of different ToF sensor types and transient imaging evaluations. The main features are demonstrated on a simple 3D test scene.
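Two of the depth relations implied by the optical-path-length approach can be stated compactly: for direct ToF the depth is half the ray-traced round-trip path, and for continuous-wave ToF the depth follows from the measured phase shift. A minimal sketch, assuming a single bounce and illustrative values:

import math

C = 299_792_458.0  # speed of light, m/s

def depth_from_path_length(opl_m):
    """Direct ToF: the ray-traced optical path length covers the
    round trip source -> scene -> sensor, so depth is half of it
    (single-bounce assumption)."""
    return opl_m / 2.0

def depth_from_phase(phase_rad, f_mod_hz):
    """Continuous-wave ToF: depth from the measured phase shift of
    the modulated signal, valid within the unambiguous range
    c / (2 f_mod)."""
    return C * phase_rad / (4.0 * math.pi * f_mod_hz)

print(depth_from_path_length(7.0))            # 3.5 m
print(depth_from_phase(math.pi / 2, 20e6))    # ~1.87 m at 20 MHz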
NASA Astrophysics Data System (ADS)
Ferland, G. J.; Chatzikos, M.; Guzmán, F.; Lykins, M. L.; van Hoof, P. A. M.; Williams, R. J. R.; Abel, N. P.; Badnell, N. R.; Keenan, F. P.; Porter, R. L.; Stancil, P. C.
2017-10-01
We describe the 2017 release of the spectral synthesis code Cloudy, summarizing the many improvements to the scope and accuracy of the physics which have been made since the previous release. Exporting the atomic data into external data files has enabled many new large datasets to be incorporated into the code. The use of the complete datasets is not realistic for most calculations, so we describe the limited subset of data used by default, which predicts significantly more lines than the previous release of Cloudy. This version is nevertheless faster than the previous release, as a result of code optimizations. We give examples of the accuracy limits using small models, and the performance requirements of large complete models. We summarize several advances in the H- and He-like iso-electronic sequences and use our complete collisional-radiative models to establish the densities where the coronal and local thermodynamic equilibrium approximations work.
Evaluation of Signal Regeneration Impact on the Power Efficiency of Long-Haul DWDM Systems
NASA Astrophysics Data System (ADS)
Pavlovs, D.; Bobrovs, V.; Parfjonovs, M.; Alsevska, A.; Ivanovs, G.
2017-10-01
Due to potential economic benefits and expected environmental impact, the power consumption issue in wired networks has become a major challenge. Furthermore, continuously increasing global Internet traffic demands high spectral efficiency values. As a result, the relationship between spectral efficiency and energy consumption of telecommunication networks has become a popular topic of academic research in recent years, in which power efficiency is a critical parameter. The present research contains calculation results that can be used by optical network designers and operators as guidance for developing more power-efficient communication networks, if the planned system falls within the scope of this paper. The research results are presented as average aggregated traffic curves that provide more flexible data for systems with different spectrum availability. Further investigation may be needed to evaluate the parameters under consideration for particular spectral configurations, e.g., the entire C-band.
Efficient chemoenzymatic dynamic kinetic resolution of 1-heteroaryl ethanols.
Vallin, Karl S A; Wensbo Posaric, David; Hamersak, Zdenko; Svensson, Mats A; Minidis, Alexander B E
2009-12-18
The scope and limitations of the combined ruthenium- and lipase-catalysed dynamic kinetic resolution (DKR), through O-acetylation of racemic heteroaromatic secondary alcohols (i.e., 1-heteroaryl-substituted ethanols), were investigated. After initial screening of reaction conditions, Candida antarctica lipase B (Novozyme 435, N435) with 4-chlorophenyl acetate as acetyl donor for kinetic resolution (KR), in conjunction with the ruthenium-based Shvo catalyst for substrate racemization in toluene at 80 degrees C, enabled DKR of various 1-heteroaryl ethanols, such as oxadiazoles, isoxazoles, 1H-pyrazoles, and 1H-imidazoles, with high yields and stereoselectivities. In addition, DFT calculations based on a simplified catalyst complex model for the catalytic (de)hydrogenation step are in agreement with the previously reported outer-sphere mechanism. These results further the mechanistic understanding of the difference in reactivity between 1-heteroaryl-substituted ethanols and the reference substrates often referred to in the literature.
Building Shadow Detection from Ghost Imagery
NASA Astrophysics Data System (ADS)
Zhou, G.; Sha, J.; Yue, T.; Wang, Q.; Liu, X.; Huang, S.; Pan, Q.; Wei, J.
2018-05-01
Shadow is a basic feature of remote sensing imagery: it carries information about objects that is otherwise lost or corrupted, and shadow removal has always been a difficult problem in remote sensing image processing. This paper mainly analyzes the characteristics and properties of shadows in ghost images (traditional orthorectification). The DBM and the interior and exterior orientation elements of the image are used to calculate the solar zenith angle. The extent of the building shadows, determined from the solar zenith angle, is then combined with a region-growing method to detect building shadow areas. This method lays a solid foundation for the subsequent repair of shadows in ghost images. It will greatly improve the accuracy of building shadow detection and make it more conducive to solving the problems of large-scale urban aerial imagery.
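A minimal sketch of the region-growing step, assuming a simple darkness threshold as the homogeneity criterion; the paper additionally restricts the search to the shadow extent predicted from the solar zenith angle, which is omitted here.

import numpy as np
from collections import deque

def grow_shadow_region(intensity, seed, threshold):
    """4-connected region growing from a seed pixel: a pixel joins
    the shadow region if it is darker than `threshold`. The solar
    zenith angle constraint used in the paper is omitted here."""
    h, w = intensity.shape
    mask = np.zeros((h, w), dtype=bool)
    queue = deque([seed])
    while queue:
        r, c = queue.popleft()
        if not (0 <= r < h and 0 <= c < w) or mask[r, c]:
            continue
        if intensity[r, c] >= threshold:
            continue
        mask[r, c] = True
        queue.extend([(r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)])
    return mask

img = np.random.randint(0, 255, (64, 64)).astype(float)
img[20:30, 15:40] = 30.0                  # a dark synthetic "shadow"
print(grow_shadow_region(img, (25, 20), threshold=60.0).sum())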
Fluctuations of Wigner-type random matrices associated with symmetric spaces of class DIII and CI
NASA Astrophysics Data System (ADS)
Stolz, Michael
2018-02-01
Wigner-type randomizations of the tangent spaces of classical symmetric spaces can be thought of as ordinary Wigner matrices on which additional symmetries have been imposed. In particular, they fall within the scope of a framework, due to Schenker and Schulz-Baldes, for the study of fluctuations of Wigner matrices with additional dependencies among their entries. In this contribution, we complement the results of these authors by explicit calculations of the asymptotic covariances for symmetry classes DIII and CI and thus obtain explicit CLTs for these classes. On the technical level, the present work is an exercise in controlling the cumulative effect of systematically occurring sign factors in an involved sum of products by setting up a suitable combinatorial model for the summands. This aspect may be of independent interest. Research supported by Deutsche Forschungsgemeinschaft (DFG) via SFB 878.
Ganandran, G. S. B.; Mahlia, T. M. I.; Ong, Hwai Chyuan; Rismanchi, B.; Chong, W. T.
2014-01-01
This paper reports the results of an investigation of the potential energy savings of the lighting systems in selected buildings of the Universiti Tenaga Nasional. The scope of this project includes evaluation of the lighting systems in the Library, Admin Building, College of Engineering, College of Information Technology, Apartments, and COE Food Court of the university. The main objectives are to design a proper retrofit scenario and to calculate the potential electricity savings, the payback period, and the potential environmental benefits. In the scenario studied, the policy for retrofitting the old lighting system with new energy-saving LEDs starts with 10% in the first year and continues at a constant rate for 10 years until all the lighting systems have been replaced. The life cycle analysis reveals that the investment in the selected buildings becomes profitable after four years. PMID:25133258
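A minimal sketch of the simple (undiscounted) payback arithmetic behind such a retrofit analysis; all figures below are illustrative placeholders, not the study's actual data, and the study itself uses a fuller life cycle analysis.

def simple_payback(n_fittings, p_old_w, p_new_w, hours_per_year,
                   tariff_per_kwh, cost_per_fitting):
    """Simple (undiscounted) payback period for a lighting retrofit;
    all inputs are illustrative placeholders."""
    saved_kwh = n_fittings * (p_old_w - p_new_w) * hours_per_year / 1000.0
    annual_saving = saved_kwh * tariff_per_kwh
    investment = n_fittings * cost_per_fitting
    return investment / annual_saving

# 1000 fittings, 36 W fluorescent -> 18 W LED, 3000 h/yr
print(f"{simple_payback(1000, 36, 18, 3000, 0.10, 20.0):.1f} years")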
Fast Estimation of Strains for Cross-Beams Six-Axis Force/Torque Sensors by Mechanical Modeling
Ma, Junqing; Song, Aiguo
2013-01-01
Strain distributions are crucial criteria for cross-beam six-axis force/torque sensors. The conventional method for evaluating these criteria is to use Finite Element Analysis (FEA) to obtain numerical solutions. This paper aims to obtain analytical solutions for the strains under external force/torque in each dimension. Generic mechanical models for cross-beam six-axis force/torque sensors are proposed, in which the deformable cross elastic beams and compliant beams are modeled as quasi-static Timoshenko beams. A detailed description of the model assumptions, idealizations, application scope, and model establishment is presented. The results are validated by both numerical FEA simulations and calibration experiments, and the two sets of results are found to be compatible over a wide range of geometric properties. The proposed analytical solutions are demonstrated to be an accurate and highly efficient estimation algorithm. PMID:23686144
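As a much-simplified counterpart to the paper's Timoshenko-beam solutions, the sketch below gives the Euler-Bernoulli bending strain at the root of a rectangular cantilever under a tip load; it omits the shear term that Timoshenko theory adds, and the numbers are placeholders.

def root_surface_strain(force_n, length_m, width_m, height_m, e_pa):
    """Surface strain at the root of a rectangular cantilever under a
    tip load, from Euler-Bernoulli bending theory:
    epsilon = M*c/(E*I) = 6*F*L / (E*b*h^2).
    The paper's Timoshenko model adds shear deformation; this sketch
    shows only the bending term."""
    return 6.0 * force_n * length_m / (e_pa * width_m * height_m ** 2)

# a small aluminium elastic beam, 20 N tip load (illustrative)
print(root_surface_strain(20.0, 0.015, 0.002, 0.002, 70e9))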
NASA Astrophysics Data System (ADS)
Li, Juan; Zhang, Shijie; Shao, Di; Yang, Zhenqing; Zhang, Wansong
2018-03-01
Auxiliary acceptor groups play a crucial role in D-A-π-A structured organic dyes. In this paper, we designed three D-A-π-A structured organic molecules based on the prototype dye QT-1, named ME18-ME20, and investigated their electronic and optical properties with density functional theory (DFT) and time-dependent DFT (TDDFT). The calculated results indicate that the range and intensity of the dyes' absorption spectra change markedly when auxiliary groups are inserted. Compared to QT-1, ME20 shows not only a 152 nm red-shift but also a 78% increase in oscillator strength, and its absorption spectrum broadens up to 1400 nm. We then studied the origin of the effects of the different auxiliary acceptor groups through the dyes' ground-state geometries and energy levels, electron transfer, and recombination rates.
Biofilms of vaginal Lactobacillus in vitro test.
Wei, Xiao-Yu; Zhang, Rui; Xiao, Bing-Bing; Liao, Qin-Ping
2017-01-01
This paper focuses on biofilms of Lactobacillus spp., a component of the normal flora isolated from the healthy vaginas of women of childbearing age, thereby broadening the scope of research on vaginal normal flora. The static slide culture method was adopted to grow biofilms, which were marked by specific fluorescence staining. Laser scanning confocal and scanning electron microscopy were used to observe the microstructure of the biofilms, and the photographs taken of the microstructure were analysed to calculate biofilm density. The Lactobacillus cells, stained red, appeared yellow where they interacted with the green-stained extracellular polysaccharides. The structure of the biofilm and the aquaporin within it were imaged. Lactobacillus density increased over time. This study provides convincing evidence that Lactobacillus can form biofilms and grow over time in vitro. This finding establishes an important and necessary condition for selecting proper strains for the pharmaceutics of vaginal ecology.
Transmitter diversity verification on ARTEMIS geostationary satellite
NASA Astrophysics Data System (ADS)
Mata Calvo, Ramon; Becker, Peter; Giggenbach, Dirk; Moll, Florian; Schwarzer, Malte; Hinz, Martin; Sodnik, Zoran
2014-03-01
Optical feeder links will become the extension of terrestrial fiber communications towards space, increasing data throughput in satellite communications by overcoming the spectrum limitations of classical RF links. The geostationary telecommunication satellite Alphasat and the satellites forming the EDRS system will become the next generation for high-speed data-relay services. The ESA satellite ARTEMIS, a precursor for geostationary orbit (GEO) optical terminals, is still a privileged experimental platform for characterizing the turbulent channel and investigating the challenges of free-space optical communication to GEO. In this framework, two measurement campaigns were conducted with the aim of verifying the benefits of transmitter diversity in the uplink. To evaluate this mitigation technique, intensity measurements were carried out at both ends of the link. The scintillation index is calculated and compared with theory, and the Fried parameter is additionally estimated using a focus camera to monitor the turbulence strength.
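The scintillation index used in such link evaluations is the normalised intensity variance; a minimal sketch with synthetic log-normal fading (a common weak-turbulence assumption, not the campaign data):

import numpy as np

def scintillation_index(intensity):
    """Scintillation index sigma_I^2 = <I^2>/<I>^2 - 1 of a received
    intensity time series (the standard normalised intensity
    variance used in free-space optics)."""
    i = np.asarray(intensity, dtype=float)
    return np.mean(i ** 2) / np.mean(i) ** 2 - 1.0

rng = np.random.default_rng(1)
# log-normal fading is a common weak-turbulence model
samples = rng.lognormal(mean=0.0, sigma=0.3, size=100_000)
print(scintillation_index(samples))  # ~ exp(sigma^2) - 1 ~ 0.094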
Catalytic mechanism of phenylacetone monooxygenases for non-native linear substrates.
Carvalho, Alexandra T P; Dourado, Daniel F A R; Skvortsov, Timofey; de Abreu, Miguel; Ferguson, Lyndsey J; Quinn, Derek J; Moody, Thomas S; Huang, Meilan
2017-10-11
Phenylacetone monooxygenase (PAMO) is the most stable and thermo-tolerant member of the Baeyer-Villiger monooxygenase family, and is therefore an ideal candidate for the synthesis of industrially relevant compounds. However, its narrow substrate scope has largely restricted its industrial applications. In the present work, we provide, for the first time, the catalytic mechanism of PAMO for the native substrate phenylacetone as well as for a linear non-native substrate, 2-octanone, using molecular dynamics simulations, quantum mechanics, and quantum mechanics/molecular mechanics calculations. We provide a theoretical basis for the preference of the enzyme for the native aromatic substrate over non-native linear substrates. Our study provides fundamental atomic-level insights that can be employed in the rational engineering of PAMO for wide application in industrial biocatalysis, in particular in the biotransformation of long-chain aliphatic oils into potential biodiesels.
Techno-economic assessment of novel vanadium redox flow batteries with large-area cells
NASA Astrophysics Data System (ADS)
Minke, Christine; Kunz, Ulrich; Turek, Thomas
2017-09-01
The vanadium redox flow battery (VRFB) is a promising electrochemical storage system for stationary megawatt-class applications. The currently limited cell area, determined by the bipolar plate (BPP), could be enlarged significantly with a novel extruded large-area plate. For the first time, a techno-economic assessment of VRFB in a power range of 1 MW-20 MW and energy capacities of up to 160 MWh is presented on the basis of the production cost model of the large-area BPP. The economic model is based on the configuration of a 250 kW stack and the overall system including stacks, power electronics, electrolyte, and auxiliaries. Final results include a simple function for the calculation of system costs within the scope described above. In addition, the impact of cost reduction potentials for key components (membrane, electrode, BPP, vanadium electrolyte) on stack and system costs is quantified and validated.
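The abstract's "simple function for the calculation of system costs" is not reproduced here; the sketch below shows only the generic structure such a function typically takes (power-dependent plus energy-dependent terms), with placeholder specific costs.

def vrfb_system_cost(power_mw, energy_mwh,
                     c_stack=600.0, c_power_electronics=150.0,
                     c_electrolyte=300.0, c_auxiliaries=80.0):
    """Illustrative VRFB system cost: power-dependent terms (stacks,
    power electronics, auxiliaries) plus energy-dependent terms
    (vanadium electrolyte). Specific costs are placeholder values in
    kEUR per MW or per MWh, not the figures derived in the paper."""
    power_terms = (c_stack + c_power_electronics + c_auxiliaries) * power_mw
    energy_terms = c_electrolyte * energy_mwh
    return power_terms + energy_terms

# 1 MW / 8 MWh system
print(f"{vrfb_system_cost(1.0, 8.0):.0f} kEUR")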
Aura, Ossi; Ahonen, Guy; Ilmarinen, Juhani
2010-12-01
To examine the scope of strategic wellness management (SWM) in Finland, a strategic wellness management index (SWMI) was developed to measure the management of wellness. On the basis of the developed SWM model, an Internet questionnaire was administered to randomly selected employers representing seven business areas and three size categories. Corporate activities and SWMI were calculated for each employer and for business area and size groups. Results highlighted relatively good activity in strategic wellness (SW) processes and a fairly low level of SWM procedures. The average values (± SD) of SWMI were 53.6 ± 12.3 for large, 42.8 ± 11.7 for medium-size, and 32.8 ± 12.1 for small companies. SWMI can be a strong new concept for measuring SW processes and thus improving both the well-being of employees and the productivity of the enterprise.
Thermal Bridge Effect of Aerated Concrete Block Wall in Cold Regions
NASA Astrophysics Data System (ADS)
Li, Baochang; Guo, Lirong; Li, Yubao; Zhang, Tiantian; Tan, Yufei
2018-01-01
As a self-insulating building material that can meet the 65 percent energy-efficiency requirement in the cold regions of China, aerated concrete blocks often suffer mould growth, frost heave, or plaster-layer hollowing at thermal bridges in extremely cold regions, owing to the restrictions of the environmental climate and construction techniques. The L-shaped and T-shaped junctions of aerated concrete walls are the parts most easily affected by the thermal bridge effect. In this paper, a field test is performed to investigate the extent of the thermal bridge effect. Moreover, a heat transfer calculation model for L-shaped and T-shaped walls is developed. Based on the simulation results, the temperature fields of the regions affected by thermal bridges are simulated and analyzed. The research outputs can provide a theoretical basis for the application of aerated concrete walls in extremely cold regions.
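A toy stand-in for the heat transfer calculation model: steady 2-D conduction solved by Jacobi iteration on a square cross-section. The paper's actual L- and T-shaped junction models and boundary conditions are more elaborate; the geometry and temperatures here are illustrative.

import numpy as np

def steady_state_2d(t_inside, t_outside, n=40, tol=1e-5):
    """Jacobi iteration for steady 2-D conduction on a square wall
    cross-section: a crude stand-in for the junction models in the
    paper, enough to see how corner (thermal bridge) regions sit
    between the indoor and outdoor temperatures."""
    t = np.full((n, n), 0.5 * (t_inside + t_outside))
    t[:, 0] = t_inside     # left face: indoor temperature
    t[:, -1] = t_outside   # right face: outdoor temperature
    t[0, :] = t_outside    # top edge: outdoor (an exposed edge)
    t[-1, :] = t_inside    # bottom edge: indoor
    while True:
        t_new = t.copy()
        t_new[1:-1, 1:-1] = 0.25 * (t[:-2, 1:-1] + t[2:, 1:-1] +
                                    t[1:-1, :-2] + t[1:-1, 2:])
        if np.abs(t_new - t).max() < tol:
            return t_new
        t = t_new

field = steady_state_2d(t_inside=18.0, t_outside=-25.0)
print(field[20, :5])  # temperatures through the wall near mid-height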
Bai, Da-Chang; Yu, Fei-Le; Wang, Wan-Ying; Chen, Di; Li, Hao; Liu, Qing-Rong; Ding, Chang-Hua; Chen, Bo; Hou, Xue-Long
2016-01-01
The palladium-catalysed allylic substitution reaction is one of the most important reactions in transition-metal catalysis and has been well studied in the past decades. Most of the reactions proceed through an outer-sphere mechanism, affording linear products when monosubstituted allyl reagents are used. Here, we report an efficient palladium-catalysed protocol for reactions of β-substituted ketones with monosubstituted allyl substrates, simply by using N-heterocyclic carbene as ligand, leading to branched products with up to three contiguous stereocentres in a (syn, anti)-mode with excellent regio- and diastereoselectivities. The scope of the protocol in organic synthesis has been examined preliminarily. Mechanistic studies by both experiments and density functional theory (DFT) calculations reveal that the reaction proceeds via an inner-sphere mechanism: nucleophilic attack of enolate oxygen on palladium followed by C–C bond-forming [3,3']-reductive elimination. PMID:27283477
From metadynamics to dynamics.
Tiwary, Pratyush; Parrinello, Michele
2013-12-06
Metadynamics is a commonly used and successful enhanced sampling method. By the introduction of a history dependent bias which depends on a restricted number of collective variables it can explore complex free energy surfaces characterized by several metastable states separated by large free energy barriers. Here we extend its scope by introducing a simple yet powerful method for calculating the rates of transition between different metastable states. The method does not rely on a previous knowledge of the transition states or reaction coordinates, as long as collective variables are known that can distinguish between the various stable minima in free energy space. We demonstrate that our method recovers the correct escape rates out of these stable states and also preserves the correct sequence of state-to-state transitions, with minimal extra computational effort needed over ordinary metadynamics. We apply the formalism to three different problems and in each case find excellent agreement with the results of long unbiased molecular dynamics runs.
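The rate recovery rests on the acceleration-factor idea: each biased time step counts for exp(V/kT) of unbiased time, where V is the instantaneous bias experienced by the system. A minimal sketch of this rescaling, using a toy bias trajectory rather than a real metadynamics run:

import numpy as np

def rescaled_time(bias_kj_mol, dt_ps, temperature_k=300.0):
    """Time rescaling for rates from metadynamics: each step of
    length dt contributes dt * exp(V(s,t)/kT), where V is the
    instantaneous bias felt by the system, giving the corresponding
    span of unbiased (physical) time."""
    kT = 0.008314462618 * temperature_k  # kJ/mol
    return np.sum(dt_ps * np.exp(np.asarray(bias_kj_mol) / kT))

# toy trajectory: bias ramping up to ~20 kJ/mol before an escape
bias = np.linspace(0.0, 20.0, 5000)
print(f"{rescaled_time(bias, dt_ps=0.002):.1f} ps of physical time")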
NASA Astrophysics Data System (ADS)
Maslyanchuk, O. L.; Solovan, M. M.; Brus, V. V.; Kulchynsky, V. V.; Maryanchuk, P. D.; Fodchuk, I. M.; Gnatyuk, V. A.; Aoki, T.; Potiriadis, C.; Kaissas, Y.
2017-05-01
The charge transport mechanism and spectrometric properties of X-ray and γ-ray detectors fabricated by depositing molybdenum oxide thin films onto semi-insulating p-CdTe crystals were studied. The current transport processes in the Mo-MoOx/p-CdTe/MoOx-Mo structure are well described within the framework of carrier generation in the space-charge region and space-charge-limited current models. The lifetime of charge carriers, the energy of hole traps, and the density of discrete trapping centers were determined by comparing the experimental data with calculations. Spectrometric properties of the Mo-MoOx/p-CdTe/MoOx-Mo structures were also investigated. The investigated heterojunctions demonstrate promising characteristics for practical application in X-ray and γ-ray detector fabrication.
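For the space-charge-limited regime mentioned above, the trap-free limit is the Mott-Gurney law; a minimal sketch with illustrative CdTe-like parameters (the paper's fitted values are not reproduced here):

def mott_gurney_current_density(mobility_cm2, eps_r, voltage_v, thickness_cm):
    """Space-charge-limited current density (Mott-Gurney law),
    J = (9/8) * eps * mu * V^2 / L^3, the trap-free limit of the
    SCLC transport regime discussed in the abstract."""
    eps0 = 8.854e-14  # vacuum permittivity, F/cm
    return 9.0 / 8.0 * eps_r * eps0 * mobility_cm2 * voltage_v ** 2 / thickness_cm ** 3

# illustrative CdTe-like numbers: mu_h ~ 80 cm^2/Vs, eps_r ~ 10.4
print(mott_gurney_current_density(80.0, 10.4, voltage_v=100.0, thickness_cm=0.1))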
Integrated reflector antenna design and analysis
NASA Technical Reports Server (NTRS)
Zimmerman, M. L.; Lee, S. W.; Ni, S.; Christensen, M.; Wang, Y. M.
1993-01-01
Reflector antenna design is a mature field and most of its aspects have been studied. However, most previous work is narrow in scope, analyzing only a particular problem under certain conditions. Methods of analysis of this type are not useful for real-life problems, since they cannot handle the many and various perturbations of a basic antenna design. The idea of integrated design and analysis is therefore proposed: by broadening the scope of the analysis, it becomes possible to deal with the intricacies attendant on modern reflector antenna design problems. The concept of integrated reflector antenna design is put forward, and a number of electromagnetic problems related to reflector antenna design are investigated. Some of these show how tools for reflector antenna design are created. In particular, a method for estimating spillover loss for open-ended waveguide feeds is examined. The problem of calculating and optimizing beam efficiency (an important figure of merit in radiometry applications) is also solved. Other chapters deal with applications of this general analysis. The wide-angle scanning ability of reflector antennas is examined, and a design is proposed for the ATDRSS triband reflector antenna. The development of a general phased-array pattern computation program is discussed, and it is shown how the concept of integrated design can be extended to other types of antennas. The conclusions are contained in the final chapter.
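The core quantity a phased-array pattern program evaluates is the array factor; a minimal sketch for a uniform linear array with progressive-phase steering (uniform weights assumed, not the program described above):

import numpy as np

def array_factor_db(n_elements, spacing_wavelengths, steer_deg, theta_deg):
    """Array factor of a uniform linear array, normalised and in dB.
    Steering is applied as a progressive phase shift across the
    elements; weights are uniform."""
    theta = np.radians(np.asarray(theta_deg, dtype=float))
    theta0 = np.radians(steer_deg)
    k_d = 2.0 * np.pi * spacing_wavelengths
    n = np.arange(n_elements)[:, None]
    phase = k_d * (np.sin(theta)[None, :] - np.sin(theta0)) * n
    af = np.abs(np.exp(1j * phase).sum(axis=0)) / n_elements
    return 20.0 * np.log10(np.maximum(af, 1e-12))

angles = np.linspace(-90, 90, 721)
pattern = array_factor_db(16, 0.5, steer_deg=20.0, theta_deg=angles)
print(angles[np.argmax(pattern)])  # main beam near 20 degrees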
40 CFR 1065.640 - Flow meter calibration calculations.
Code of Federal Regulations, 2010 CFR
2010-07-01
Title 40 (Protection of Environment), CONTROLS, ENGINE-TESTING PROCEDURES, Calculations and Data Requirements, § 1065.640 Flow meter calibration calculations. This section describes the calculations for calibrating various flow meters. After...
Moulton, Haley; Tosteson, Tor D; Zhao, Wenyan; Pearson, Loretta; Mycek, Kristina; Scherer, Emily; Weinstein, James N; Pearson, Adam; Abdu, William; Schwarz, Susan; Kelly, Michael; McGuire, Kevin; Milam, Alden; Lurie, Jon D
2018-06-05
Prospective evaluation of an informational web-based calculator for communicating estimates of personalized treatment outcomes. To evaluate the usability, effectiveness in communicating benefits and risks, and impact on decision quality of a calculator tool for patients with intervertebral disc herniations, spinal stenosis, and degenerative spondylolisthesis who are deciding between surgical and non-surgical treatments. The decision to have back surgery is preference-sensitive and warrants shared decision-making. However, more patient-specific, individualized tools for presenting clinical evidence on treatment outcomes are needed. Using Spine Patient Outcomes Research Trial (SPORT) data, prediction models were designed and integrated into a web-based calculator tool: http://spinesurgerycalc.dartmouth.edu/calc/. Consumer Reports subscribers with back-related pain were invited to use the calculator via email, and patient participants were recruited to use the calculator in a prospective manner following an initial appointment at participating spine centers. Participants completed questionnaires before and after using the calculator. We randomly assigned previously validated questions that tested knowledge about the treatment options to be asked either before or after viewing the calculator. 1,256 Consumer Reports subscribers and 68 patient participants completed the calculator and questionnaires. Knowledge scores were higher in the post-calculator group compared to the pre-calculator group, indicating that calculator usage successfully informed users. Decisional conflict was lower when measured following calculator use, suggesting the calculator was beneficial in the decision-making process. Participants generally found the tool helpful and easy to use. While the calculator is not a comprehensive decision aid, it does focus on communicating individualized risks and benefits for treatment options. Moreover, it appears to be helpful in achieving the goals of more traditional shared decision-making tools. It not only improved knowledge scores but also improved other aspects of decision quality.
A versatile program for the calculation of linear accelerator room shielding.
Hassan, Zeinab El-Taher; Farag, Nehad M; Elshemey, Wael M
2018-03-22
This work aims to design a computer program that calculates the necessary amount of shielding for a given or proposed linear accelerator room design in radiotherapy. The program (Shield Calculation in Radiotherapy, SCR) has been developed using Microsoft Visual Basic. It applies the treatment room shielding calculations of NCRP report no. 151 to calculate proper shielding thicknesses for a given linear accelerator treatment room design. The program is composed of six main user-friendly interfaces. The first enables the user to upload a treatment room design and to measure the distances required for shielding calculations. The second enables the user to calculate the primary barrier thickness for three-dimensional conventional radiotherapy (3D-CRT), intensity-modulated radiotherapy (IMRT), and total body irradiation (TBI). The third calculates the required secondary barrier thickness due to both scattered and leakage radiation. The fourth and fifth provide a means to calculate the photon dose equivalent in door and maze areas for low- and high-energy radiation, respectively. The sixth enables the user to calculate skyshine radiation for photons and neutrons. The SCR program has been successfully validated, precisely reproducing all of the calculated examples presented in NCRP report no. 151 in a simple and fast manner; it also performed the same calculations for a test design that was calculated manually and produced the same results. The program includes a new and important feature, namely the ability to calculate the required treatment room thickness for IMRT and TBI. It is characterised by simplicity, precision, and data saving, printing, and retrieval, and it provides a means for uploading and testing any proposed treatment room shielding design. The SCR program provides comprehensive, simple, fast, and accurate room shielding calculations in radiotherapy.
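The primary-barrier arithmetic from NCRP report no. 151 that such a program automates can be sketched compactly: the required transmission B fixes the number of tenth-value layers, which fixes the thickness. The design goal, workload, and TVL values below are illustrative placeholders, not values from the paper.

import math

def primary_barrier_tvls(P, d, W, U, T):
    """Number of tenth-value layers for a primary barrier following
    the NCRP 151 formalism: transmission B = P*d^2/(W*U*T), then
    n = -log10(B). P in Sv/wk, d in m, W in Gy/wk at 1 m."""
    B = P * d ** 2 / (W * U * T)
    return -math.log10(B)

def barrier_thickness(n_tvl, tvl1, tvl_e):
    """Thickness = TVL1 + (n - 1)*TVLe, using the first and
    equilibrium tenth-value layers of the barrier material."""
    return tvl1 + (n_tvl - 1.0) * tvl_e

# illustrative 6 MV example; TVLs are placeholder concrete values (m)
n = primary_barrier_tvls(P=1e-4, d=6.0, W=500.0, U=0.25, T=1.0)
print(f"{barrier_thickness(n, tvl1=0.37, tvl_e=0.33):.2f} m of concrete")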
EuroFIR Guideline on calculation of nutrient content of foods for food business operators.
Machackova, Marie; Giertlova, Anna; Porubska, Janka; Roe, Mark; Ramos, Carlos; Finglas, Paul
2018-01-01
This paper presents a Guideline on calculating the nutrient content of foods for food business operators, and presents data on the compliance between calculated and analytically determined values. In the EU, calculation methods are legally valid for determining the nutrient values of foods for nutrition labelling (Regulation (EU) No 1169/2011). However, neither a specific calculation method nor rules for the use of retention factors are defined. EuroFIR AISBL (European Food Information Resource) has introduced a Recipe Calculation Guideline based on the EuroFIR harmonized procedure for recipe calculation. The aim is to provide food businesses with a step-by-step tool for calculating the nutrient content of foods for the purpose of nutrition declaration. The development of this Guideline and its use in the Czech Republic are described, and future application in other Member States is discussed. The limitations of calculation methods and the importance of high-quality food composition data are also discussed. Copyright © 2017. Published by Elsevier Ltd.
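A minimal sketch of the general recipe-calculation logic (ingredient nutrient contributions, nutrient retention factors, and weight yield); the actual EuroFIR procedure specifies these steps in much more detail, and all values below are illustrative.

def nutrient_per_100g(ingredients, cooked_weight_g):
    """Recipe-level nutrient content following the general logic of a
    recipe calculation procedure: sum ingredient contributions,
    apply nutrient retention factors, and normalise by the cooked
    recipe weight (yield). Values are illustrative placeholders."""
    total = sum(grams / 100.0 * per100 * retention
                for grams, per100, retention in ingredients)
    return 100.0 * total / cooked_weight_g

# (grams used, vitamin C mg/100 g raw, retention factor)
recipe = [(500.0, 17.0, 0.7),   # potatoes, boiled
          (100.0, 5.9, 0.8)]    # onion
print(f"{nutrient_per_100g(recipe, cooked_weight_g=540.0):.1f} mg/100 g")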
Venkataraman, Aishwarya; Siu, Emily; Sadasivam, Kalaimaran
2016-11-01
Medication errors, including infusion prescription errors, are a major public health concern, especially in paediatric patients. There is some evidence that electronic or web-based calculators can minimise these errors. To evaluate the impact of an electronic infusion calculator on the frequency of infusion errors in the Paediatric Critical Care Unit of The Royal London Hospital, London, United Kingdom, we devised an electronic infusion calculator that computes the appropriate concentration, rate, and dose for the selected medication based on the recorded weight and age of the child and then prints a valid prescription chart. The electronic infusion calculator was implemented in the Paediatric Critical Care Unit from April 2015. A prospective study covering five months before and five months after implementation was conducted. Data on the following variables were collected onto a proforma: medication dose, infusion rate, volume, concentration, diluent, legibility, and missing or incorrect patient details. A total of 132 handwritten prescriptions were reviewed before implementation and 119 electronic infusion calculator prescriptions after implementation. Handwritten prescriptions had a higher error rate (32.6%) than electronic infusion calculator prescriptions (<1%; p < 0.001). Electronic infusion calculator prescriptions had no errors in dose, volume, or rate calculation, in contrast to handwritten prescriptions, hence warranting very few pharmacy interventions. Use of the electronic infusion calculator for infusion prescription significantly reduced the total number of infusion prescribing errors in the Paediatric Critical Care Unit and has enabled more efficient use of medical and pharmacy time resources.
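The arithmetic such an infusion calculator automates is the standard dose-to-rate conversion; a minimal sketch with illustrative values (not the hospital's actual tool or drug library):

def infusion_rate_ml_per_h(dose_mcg_kg_min, weight_kg, conc_mcg_per_ml):
    """Standard continuous-infusion conversion: dose (mcg/kg/min)
    times weight, times 60 min/h, divided by the syringe
    concentration (mcg/mL). This mirrors the arithmetic such a
    calculator automates; it is not the hospital's actual tool."""
    return dose_mcg_kg_min * weight_kg * 60.0 / conc_mcg_per_ml

# e.g. a 5 mcg/kg/min infusion for a 12 kg child, 1600 mcg/mL syringe
print(f"{infusion_rate_ml_per_h(5.0, 12.0, 1600.0):.2f} mL/h")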
Learning with Calculators: Doing More with Less
ERIC Educational Resources Information Center
Kissane, Barry
2017-01-01
It seems that calculators continue to be misunderstood as devices solely for calculation, although the likely contributions to learning mathematics with modern calculators arise from other characteristics. A four-part model to understand the educational significance of calculators underpins this paper. Each of the four components (representation,…
NASA Astrophysics Data System (ADS)
Bunge, H.; Hagelberg, C.; Travis, B.
2002-12-01
EarthScope will deliver data on the structure and dynamics of continental North America and the underlying mantle on an unprecedented scale. Indeed, the scope of EarthScope makes its mission comparable to the large remote sensing efforts that are transforming the oceanographic and atmospheric sciences today. Arguably the main impact of new solid Earth observing systems is to move our use of geodynamic models from conditions that are data poor to an environment that is data rich. Oceanographers and meteorologists have already made substantial progress in adapting to this environment by developing new approaches for interpreting oceanographic and atmospheric data objectively through data assimilation methods in their models. However, a similarly rigorous theoretical framework for merging EarthScope-derived solid Earth data with geodynamic models has yet to be devised. Here we explore the feasibility of data assimilation in mantle convection studies in an attempt to fit global geodynamic model calculations explicitly to tomographic and tectonic constraints. This is an inverse problem not unlike the inverse problem of finding optimal seismic velocity structures faced by seismologists. We derive the generalized inverse of mantle convection from a variational approach and present the adjoint equations of mantle flow. The substantial computational burden associated with solutions to the generalized inverse problem of mantle convection is made feasible by a highly efficient finite element approach based on the 3-D spherical, fully parallelized mantle dynamics code TERRA, implemented on a cost-effective PC cluster (geowulf) dedicated specifically to large-scale geophysical simulations. This dedicated geophysical modeling computer allows us to investigate global inverse convection problems with a spatial discretization of less than 50 km throughout the mantle. We present a synthetic high-resolution modeling experiment demonstrating that mid-Cretaceous mantle structure can be inferred accurately from our inverse approach assuming present-day mantle structure is well known, even if the initial first guess for the mid-Cretaceous mantle involved only a simple 1-D radial temperature profile. We suggest that geodynamic inverse modeling should make it possible to infer a number of flow parameters from observational constraints of the mantle.
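As a toy illustration of the adjoint machinery sketched above, the code below recovers the initial condition of a 1-D diffusion equation from its observed final state by adjoint-based gradient descent; the operator, grid, and step sizes are arbitrary demonstration choices and bear no relation to the TERRA implementation.

import numpy as np

def laplacian(u, dx):
    """1-D Laplacian with fixed (zero-flux-free, Dirichlet) boundaries."""
    out = np.zeros_like(u)
    out[1:-1] = (u[2:] - 2.0 * u[1:-1] + u[:-2]) / dx ** 2
    return out

def forward(u0, kappa, dx, dt, nsteps):
    """Explicit-Euler diffusion for nsteps."""
    u = u0.copy()
    for _ in range(nsteps):
        u = u + dt * kappa * laplacian(u, dx)
    return u

def adjoint_gradient(u0, u_obs, kappa, dx, dt, nsteps):
    """Gradient of J = 0.5*||u(T) - u_obs||^2 with respect to the
    initial condition u0. Because the explicit diffusion operator is
    self-adjoint on the interior nodes, the adjoint field is simply
    diffused over the same number of steps -- a toy version of the
    variational machinery in the abstract."""
    lam = forward(u0, kappa, dx, dt, nsteps) - u_obs  # terminal misfit
    for _ in range(nsteps):
        lam = lam + dt * kappa * laplacian(lam, dx)
    return lam

n, dx, dt, kappa, nsteps = 101, 1.0 / 100, 2e-5, 1.0, 400
x = np.linspace(0.0, 1.0, n)
true_u0 = np.sin(np.pi * x)
u_obs = forward(true_u0, kappa, dx, dt, nsteps)

# a few steps of gradient descent recover the earlier state
guess = np.zeros(n)
for _ in range(200):
    guess -= 1.0 * adjoint_gradient(guess, u_obs, kappa, dx, dt, nsteps)
print(np.abs(guess - true_u0).max())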
Clark, T D; Ryan, T; Ingram, B A; Woakes, A J; Butler, P J; Frappell, P B
2005-01-01
Several previous reports, often from studies utilising heavily instrumented animals, have indicated that for teleosts the increase in cardiac output (Vb) during exercise is mainly the result of an increase in cardiac stroke volume (VS) rather than in heart rate (fH). More recently, this contention has been questioned following studies on animals carrying less instrumentation, though the debate continues. In an attempt to shed more light on the situation, we examined the heart rates and oxygen consumption rates (MO2; normalised to a mass of 1 kg, given as MO2kg) of six Murray cod (Maccullochella peelii peelii; mean mass ± SE = 1.81 ± 0.14 kg) equipped with implanted fH and body temperature data loggers. Data were determined during exposure to varying temperatures and swimming speeds to encompass the majority of the biological scope of this species. An increase in body temperature (Tb) from 14 degrees C to 29 degrees C resulted in linear increases in MO2kg (26.67-41.78 micromol min(-1) kg(-1)) and fH (22.3-60.8 beats min(-1)) during routine exercise but a decrease in the oxygen pulse (the amount of oxygen extracted per heartbeat; 1.28-0.74 micromol beat(-1) kg(-1)). During maximum exercise, the factorial increase in MO2kg was calculated to be 3.7 at all temperatures and was the result of temperature-independent 2.2- and 1.7-fold increases in fH and oxygen pulse, respectively. The constant factorial increases in fH and oxygen pulse suggest that the cardiovascular variables of the Murray cod have temperature-independent maximum gains that contribute to maximal oxygen transport during exercise. At the expense of a larger factorial aerobic scope at an optimal temperature, as has been reported for species of salmon and trout, it is possible that the Murray cod has evolved a lower but temperature-independent factorial aerobic scope as an adaptation to the largely fluctuating and unpredictable thermal climate of southeastern Australia.
ENVISAT Land Surface Processes. Phase 2
NASA Technical Reports Server (NTRS)
vandenHurk, B. J. J. M.; Su, Z.; Verhoef, W.; Menenti, M.; Li, Z.-L.; Wan, Z.; Moene, A. F.; Roerink, G.; Jia, I.
2002-01-01
This is a progress report on the 2nd phase of the ENVISAT Land Surface Processes project, which has a 3-year scope. In this project, preparative research is carried out aiming at the retrieval of land surface characteristics from the ENVISAT sensors MERIS and AATSR, for assimilation into a system for Numerical Weather Prediction (NWP). Whereas in the 1st phase a number of first-shot experiments were carried out (aiming at gaining experience with the retrievals and data assimilation procedures), the current 2nd phase has put more emphasis on assessing and improving the quality of the retrieved products. The forthcoming phase will be devoted mainly to the data assimilation experiments and the assessment of the added value of the future ENVISAT products for NWP forecast skill. Regarding the retrieval of albedo, leaf area index, and atmospheric corrections, preliminary radiative transfer calculations have been carried out that should enable the retrieval of these parameters once AATSR and MERIS data become available; however, much of this work is still to be carried out. An essential part of the work in this area is the design and implementation of software that enables an efficient use of the MODTRAN4 radiative transfer code, and during the current project phase familiarization with these new components has been achieved. Significant progress has been made with the retrieval of component temperatures from directional ATSR images and the calculation of surface turbulent heat fluxes from these data. The impact of vegetation cover on the retrieved component temperatures appears manageable, and preliminary comparisons of foliage temperatures to air temperatures were encouraging. The calculation of surface fluxes using the SEBI concept, which includes a detailed model of the surface roughness ratio, appeared to give results in reasonable agreement with local scintillometer measurements. The specification of the atmospheric boundary conditions is a crucial component, and the use of first-guess estimates from the RACMO model partially explains the success. Earlier data assimilation experiments with directional surface temperatures have been analysed further and compared to results obtained from directly modelling the surface roughness ratio. The two sets of results appeared well comparable, but a full test in which the surface roughness model is allowed to play a free role during the data assimilation process has yet to be carried out. A considerable number of tasks for Phase 3 have been formulated.
WE-B-207-00: CT Lung Cancer Screening Part 1
DOE Office of Scientific and Technical Information (OSTI.GOV)
NONE
2015-06-15
The US National Lung Screening Trial (NLST) was a multi-center randomized, controlled trial comparing a low-dose CT (LDCT) to posterior-anterior (PA) chest x-ray (CXR) in screening older, current and former heavy smokers for early detection of lung cancer. Recruitment was launched in September 2002 and ended in April 2004, when 53,454 participants had been randomized at 33 screening sites in equal proportions. Funded by the National Cancer Institute, this trial demonstrated that LDCT screening reduced lung cancer mortality. The US Preventive Services Task Force (USPSTF) cited NLST findings and conclusions in its deliberations and analysis of lung cancer screening. Under the 2010 Patient Protection and Affordable Care Act, the USPSTF favorable recommendation regarding lung cancer CT screening assisted in obtaining third-party payer coverage for screening. The objective of this session is to provide an introduction to the NLST and the trial findings, in addition to a comprehensive review of the dosimetry investigations and assessments completed using individual NLST participant CT and CXR examinations. Session presentations will review and discuss the findings of two independent assessments: a CXR assessment and the findings of a CT investigation calculating individual organ dosimetry values. The CXR assessment reviewed a total of 73,733 chest x-ray exams that were performed on 92 chest imaging systems, of which 66,157 participant examinations were used. The CT organ dosimetry investigation collected scan parameters from 23,773 CT examinations, a subset of the 75,133 CT examinations performed using 97 multi-detector CT scanners. Organ dose conversion coefficients were calculated using a Monte Carlo code. An experimentally validated CT scanner simulation was coupled with 193 adult hybrid computational phantoms representing the height and weight of the current U.S. population. The dose to selected organs was calculated using the organ dose library and the abstracted scan parameters. This session will review the results and summarize the individualized doses to major organs and the mean effective dose and CTDIvol estimates for 66,157 PA chest and 23,773 CT examinations, respectively, using size-dependent computational phantoms coupled with Monte Carlo calculations. Learning Objectives: Review and summarize relevant NLST findings and conclusions. Understand the scope and scale of the NLST specific to participant dosimetry. Provide a comprehensive review of NLST participant dosimetry assessments. Summarize the results of an investigation providing individualized organ dose estimates for NLST participant cohorts.
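At the point of use, the organ dose methodology reduces to multiplying the scan's CTDIvol by a Monte Carlo-derived, size- and scanner-specific conversion coefficient; a minimal sketch with a placeholder coefficient (not a value from the NLST dose library):

def organ_dose_mgy(ctdi_vol_mgy, conversion_coefficient):
    """Organ dose estimated as CTDIvol times a Monte Carlo-derived,
    size- and scanner-specific organ dose conversion coefficient --
    the general form of the approach described above. The coefficient
    used below is a placeholder, not a value from the NLST library."""
    return ctdi_vol_mgy * conversion_coefficient

# e.g. a low-dose chest protocol with CTDIvol ~ 3 mGy
print(f"lung dose ~ {organ_dose_mgy(3.0, 1.4):.1f} mGy")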
DOE Office of Scientific and Technical Information (OSTI.GOV)
S. Arthur
The Monitored Geologic Repository (MGR) Waste Package Department of the Civilian Radioactive Waste Management System Management & Operating contractor (CRWMS M&O) performed calculations to provide input for disposal of spent nuclear fuel (SNF) from the Shippingport Light Water Breeder Reactor (LWBR) (Ref. 1). The Shippingport LWBR SNF has been considered for disposal at the potential Yucca Mountain site. Because of the high content of fissile material in the SNF, the waste package (WP) design requires special consideration of the amount and placement of neutron absorbers and the possible loss of absorbers and SNF materials over geologic time. For some WPs, the outer shell corrosion-resistant material (CRM) and the corrosion-allowance inner shell may breach (Refs. 2 and 3), allowing the influx of water. Water in the WP will moderate neutrons, increasing the likelihood of a criticality event within the WP; and the water may, in time, gradually leach the fissile components and neutron absorbers from the WP, further affecting the neutronics of the system. This study presents calculations of the long-term geochemical behavior of WPs containing a Shippingport LWBR SNF seed assembly and high-level waste (HLW) glass canisters arranged according to the codisposal concept (Ref. 4). The specific study objectives were to determine: (1) the extent to which criticality control material, suggested for this WP design, will remain in the WP after corrosion/dissolution of the initial WP configuration (such that it can be effective in preventing criticality); (2) the extent to which fissile uranium and fertile thorium will be carried out of the degraded WP by infiltrating water (such that internal criticality is no longer possible, but the possibility of external criticality may be enhanced); and (3) the nominal chemical composition for the criticality evaluations of the WP design, and the range of parametric variations for additional evaluations. The scope of this calculation (the chemical compositions and subsequent criticality evaluations) is limited to simulated time periods of up to 3.17 x 10^5 years. This longer time frame is closer to the one-million-year time horizon recently recommended by the National Academy of Sciences to the Environmental Protection Agency for performance assessment related to a nuclear repository (Ref. 5). However, it is important to note that after 100,000 years, most of the materials of interest (fissile and absorber materials) will have either been removed from the WP, reached a steady state, or been transmuted. The calculation included elements with high neutron-absorption cross sections, notably gadolinium (Gd), as well as the fissile materials. The results of this analysis will be used to ensure that the type and amount of criticality control material used in the WP design will prevent criticality.
Calculators in the Mathematics Curriculum: Effects and Changes.
ERIC Educational Resources Information Center
Rabe, Rebecca Moore
The purpose of this paper was to determine the effects of calculators in mathematics classes and to assess proposed curriculum revisions related to calculators. Twenty-six calculator studies and other selected sources were reviewed and annotated. Major conclusions of the study include: (1) calculator use has produced significant gains in…
Alternative Fuels Data Center: Vehicle Cost Calculator
CTIO Infrared Imager Exposure Time Calculator
A web-form exposure time calculator for the CTIO infrared imager ISPI (throughput values updated 12 March 2005). It computes the S/N for a specified total integration time, or the total integration time required to reach a desired S/N.
Using Financial Calculators in a Business Mathematics Course.
ERIC Educational Resources Information Center
Heller, William H.; Taylor, Monty B.
2000-01-01
Discusses the authors' experiences with integrating financial calculators into a business mathematics course. Presents a brief overview of the operation of financial calculators, reviews some of the more common models, discusses how to use the equation solver utility on other calculators to emulate a financial calculator, and explores the…
42 CFR 102.80 - Calculation of medical benefits.
Code of Federal Regulations, 2011 CFR
2011-10-01
... 42 Public Health 1 2011-10-01 2011-10-01 false Calculation of medical benefits. 102.80 Section 102... COMPENSATION PROGRAM Calculation and Payment of Benefits § 102.80 Calculation of medical benefits. In calculating medical benefits, the Secretary will take into consideration all reasonable costs for those...
42 CFR 110.80 - Calculation of medical benefits.
Code of Federal Regulations, 2011 CFR
2011-10-01
... 42 Public Health 1 2011-10-01 2011-10-01 false Calculation of medical benefits. 110.80 Section 110... COUNTERMEASURES INJURY COMPENSATION PROGRAM Calculation and Payment of Benefits § 110.80 Calculation of medical benefits. In calculating medical benefits, the Secretary will take into consideration all reasonable costs...
42 CFR 110.82 - Calculation of death benefits.
Code of Federal Regulations, 2014 CFR
2014-10-01
... 42 Public Health 1 2014-10-01 2014-10-01 false Calculation of death benefits. 110.82 Section 110... COUNTERMEASURES INJURY COMPENSATION PROGRAM Calculation and Payment of Benefits § 110.82 Calculation of death... file a written selection to receive death benefits under the alternative calculation, as described in...
42 CFR 110.82 - Calculation of death benefits.
Code of Federal Regulations, 2011 CFR
2011-10-01
... 42 Public Health 1 2011-10-01 2011-10-01 false Calculation of death benefits. 110.82 Section 110... COUNTERMEASURES INJURY COMPENSATION PROGRAM Calculation and Payment of Benefits § 110.82 Calculation of death... file a written selection to receive death benefits under the alternative calculation, as described in...
42 CFR 110.82 - Calculation of death benefits.
Code of Federal Regulations, 2013 CFR
2013-10-01
... 42 Public Health 1 2013-10-01 2013-10-01 false Calculation of death benefits. 110.82 Section 110... COUNTERMEASURES INJURY COMPENSATION PROGRAM Calculation and Payment of Benefits § 110.82 Calculation of death... file a written selection to receive death benefits under the alternative calculation, as described in...
42 CFR 110.82 - Calculation of death benefits.
Code of Federal Regulations, 2012 CFR
2012-10-01
... 42 Public Health 1 2012-10-01 2012-10-01 false Calculation of death benefits. 110.82 Section 110... COUNTERMEASURES INJURY COMPENSATION PROGRAM Calculation and Payment of Benefits § 110.82 Calculation of death... file a written selection to receive death benefits under the alternative calculation, as described in...
The Multi-Step CADIS method for shutdown dose rate calculations and uncertainty propagation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ibrahim, Ahmad M.; Peplow, Douglas E.; Grove, Robert E.
2015-12-01
Shutdown dose rate (SDDR) analysis requires (a) a neutron transport calculation to estimate neutron flux fields, (b) an activation calculation to compute radionuclide inventories and associated photon sources, and (c) a photon transport calculation to estimate the final SDDR. In some applications, accurate full-scale Monte Carlo (MC) SDDR simulations are needed for very large systems with massive amounts of shielding materials. However, these simulations are impractical because calculation of space- and energy-dependent neutron fluxes throughout the structural materials is needed to estimate the distribution of radioisotopes causing the SDDR. Biasing the neutron MC calculation using an importance function is not simple because it is difficult to explicitly express the response function, which depends on subsequent computational steps. Furthermore, typical SDDR calculations do not consider how uncertainties in the MC neutron calculation impact SDDR uncertainty, even though MC neutron calculation uncertainties usually dominate the SDDR uncertainty.
A New Method for Setting Calculation Sequence of Directional Relay Protection in Multi-Loop Networks
NASA Astrophysics Data System (ADS)
Haijun, Xiong; Qi, Zhang
2016-08-01
The workload of relay protection setting calculation in multi-loop networks can be reduced effectively by optimizing the setting calculation sequence. A new method for sequencing the setting calculations of directional distance relay protection in multi-loop networks, based on the minimum broken-nodes cost vector (MBNCV), is proposed to address the shortcomings of current methods. Existing methods based on the minimum breakpoint set (MBPS) produce more broken edges when untying the loops in the dependency relationships of relays, potentially leading to larger iterative calculation workloads in setting calculations. A model-driven approach based on behavior trees (BT) is presented to improve adaptability to similar problems. After extending the BT model with real-time system characteristics, a timed BT is derived and used to model the dependency relationships in multi-loop networks. The model is translated into communicating sequential processes (CSP) models, and an optimized setting calculation sequence for the multi-loop network is finally computed by tools. A five-node multi-loop network is used as an example to demonstrate the effectiveness of the modeling and calculation method. Several further examples were then calculated, with results indicating that the method effectively reduces the number of forced broken edges for protection setting calculation in multi-loop networks.
Separation behavior of boundary layers on three-dimensional wings
NASA Technical Reports Server (NTRS)
Stock, H. W.
1981-01-01
An inverse boundary-layer procedure for calculating separated, turbulent boundary layers on infinitely long yawed wings was developed. The procedure, originally developed for three-dimensional, incompressible turbulent boundary layers, was extended to adiabatic, compressible flows. Example calculations for transonic wings were made, including viscous effects. An approximate calculation method is described for regions of separated, turbulent boundary layers, permitting calculation of the displacement thickness there. The laminar boundary-layer development was calculated for inclined ellipsoids.
Kim, Myoung Soo; Park, Jung Ha; Park, Kyung Yeon
2012-10-01
This study was done to develop and evaluate a drug dosage calculation training program based on a smartphone application and cognitive load theory. Calculation ability, dosage-calculation-related self-efficacy, and anxiety were measured. A nonequivalent control group design was used. A smartphone application and a handout for self-study were developed and administered to the experimental group, while only the handout was provided to the control group. The intervention period was 4 weeks. Data were analyzed using descriptive analysis, χ²-test, t-test, and ANCOVA with SPSS 18.0. The experimental group showed greater self-efficacy for drug dosage calculation than the control group (t=3.82, p<.001). Experimental group students had higher ability to perform drug dosage calculations than control group students (t=3.98, p<.001), with regard to metric conversion (t=2.25, p=.027), tablet dosage calculation (t=2.20, p=.031) and drop rate calculation (t=4.60, p<.001). There was no difference in improvement in anxiety about drug dosage calculation. The mean satisfaction score for the program was 86.1. These results indicate that this drug dosage calculation training program using a smartphone application is effective in improving dosage-calculation-related self-efficacy and calculation ability. Further study should be done to develop additional interventions for reducing anxiety.
Environment-based pin-power reconstruction method for homogeneous core calculations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Leroyer, H.; Brosselard, C.; Girardi, E.
2012-07-01
Core calculation schemes are usually based on a classical two-step approach associated with assembly and core calculations. During the first step, infinite lattice assembly calculations relying on a fundamental mode approach are used to generate cross-section libraries for PWR core calculations. This fundamental mode hypothesis may be questioned when dealing with loading patterns involving several types of assemblies (UOX, MOX), burnable poisons, control rods and burn-up gradients. This paper proposes a calculation method able to take into account the heterogeneous environment of the assemblies when using homogeneous core calculations and an appropriate pin-power reconstruction. This methodology is applied to MOX assemblies, computed within an environment of UOX assemblies. The new environment-based pin-power reconstruction is then used on various clusters of 3x3 assemblies showing burn-up gradients and UOX/MOX interfaces, and compared to reference calculations performed with APOLLO-2. The results show that UOX/MOX interfaces are much better calculated with the environment-based calculation scheme when compared to the usual pin-power reconstruction method. The power peak is always better located and calculated with the environment-based pin-power reconstruction method on every cluster configuration studied. This study shows that taking into account the environment in transport calculations can significantly improve the pin-power reconstruction so far as it is consistent with the core loading pattern. (authors)
OECD Maximum Residue Limit Calculator
With the goal of harmonizing the calculation of maximum residue limits (MRLs) across the Organisation for Economic Cooperation and Development, the OECD has developed an MRL Calculator. View the calculator.
26 CFR 1.414(q)-1T - Highly compensated employee (temporary).
Code of Federal Regulations, 2014 CFR
2014-04-01
... look-back year calculation and/or determination year calculation for such determination year. See A-14 for rules relating to the periods for which the look-back year calculation and determination year calculation are to be made. (1) Look-back year calculation. (i) 5-percent owner. The employee is a 5-percent...
26 CFR 1.414(q)-1T - Highly compensated employee (temporary).
Code of Federal Regulations, 2012 CFR
2012-04-01
... look-back year calculation and/or determination year calculation for such determination year. See A-14 for rules relating to the periods for which the look-back year calculation and determination year calculation are to be made. (1) Look-back year calculation. (i) 5-percent owner. The employee is a 5-percent...
26 CFR 1.414(q)-1T - Highly compensated employee (temporary).
Code of Federal Regulations, 2011 CFR
2011-04-01
... look-back year calculation and/or determination year calculation for such determination year. See A-14 for rules relating to the periods for which the look-back year calculation and determination year calculation are to be made. (1) Look-back year calculation. (i) 5-percent owner. The employee is a 5-percent...
26 CFR 1.414(q)-1T - Highly compensated employee (temporary).
Code of Federal Regulations, 2013 CFR
2013-04-01
... look-back year calculation and/or determination year calculation for such determination year. See A-14 for rules relating to the periods for which the look-back year calculation and determination year calculation are to be made. (1) Look-back year calculation. (i) 5-percent owner. The employee is a 5-percent...
Obliged to Calculate: "My School", Markets, and Equipping Parents for Calculativeness
ERIC Educational Resources Information Center
Gobby, Brad
2016-01-01
This paper argues neoliberal programs of government in education are equipping parents for calculativeness. Regimes of testing and the publication of these results and other organizational data are contributing to a public economy of numbers that increasingly oblige citizens to calculate. Using the notions of calculative and market devices, this…
77 FR 59683 - Northern Trust Investments, Inc., et al.; Notice of Application
Federal Register 2010, 2011, 2012, 2013, 2014
2012-09-28
... handled. Such changes would not take effect until the Index Provider has given (a) the Calculation Agent... Provider will enter into an agreement (``Calculation Agent Agreement'') with a third party to act as ``Calculation Agent.'' The Calculation Agent will be solely responsible for the calculation and maintenance of...
Calculating Time-Integral Quantities in Depletion Calculations
Isotalo, Aarno
2016-06-02
A method referred to as tally nuclides is presented for accurately and efficiently calculating the time-step averages and integrals of any quantities that are weighted sums of atomic densities with constant weights during the step. The method allows all such quantities to be calculated simultaneously as a part of a single depletion solution with existing depletion algorithms. Some examples of the results that can be extracted include step-average atomic densities and macroscopic reaction rates, the total number of fissions during the step, and the amount of energy released during the step. Furthermore, the method should be applicable with several depletion algorithms, and the integrals or averages should be calculated with an accuracy comparable to that reached by the selected algorithm for end-of-step atomic densities. The accuracy of the method is demonstrated in depletion calculations using the Chebyshev rational approximation method. Here, we demonstrate how the ability to calculate energy release in depletion calculations can be used to determine the accuracy of the normalization in a constant-power burnup calculation during the calculation without a need for a reference solution.
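The core idea of appending running integrals to the depletion state vector can be sketched in a few lines: the integral of the densities obeys the same linear ODE system once the burnup matrix is augmented. The sketch below uses a dense matrix exponential in place of the Chebyshev rational approximation, and the toy matrix, initial densities, and step length are illustrative assumptions, not the paper's code.

```python
import numpy as np
from scipy.linalg import expm  # stand-in for CRAM; illustrative only

def depletion_step_with_integrals(A, n0, dt):
    """Solve dn/dt = A n over one step and also return the step average of n.

    Augment the state with I(t) = integral_0^t n dt', so that
    d/dt [n; I] = [[A, 0], [Id, 0]] [n; I].  Any weighted sum w.n then has
    step average w.(I/dt): the 'tally nuclide' idea in miniature.
    """
    m = len(n0)
    M = np.zeros((2 * m, 2 * m))
    M[:m, :m] = A
    M[m:, :m] = np.eye(m)
    y = expm(M * dt) @ np.concatenate([n0, np.zeros(m)])
    n_end, integral = y[:m], y[m:]
    return n_end, integral / dt  # end-of-step and step-average densities

# Toy two-nuclide chain: nuclide 1 decays into stable nuclide 2.
A = np.array([[-0.1, 0.0],
              [ 0.1, 0.0]])
n_end, n_avg = depletion_step_with_integrals(A, np.array([1.0, 0.0]), dt=10.0)
```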
One-electron oxidation of individual DNA bases and DNA base stacks.
Close, David M
2010-02-04
In calculations performed with DFT there is a tendency of the purine cation to be delocalized over several bases in the stack. Attempts have been made to see if methods other than DFT can be used to calculate localized cations in stacks of purines, and to relate the calculated hyperfine couplings with known experimental results. To calculate reliable hyperfine couplings it is necessary to have an adequate description of spin polarization which means that electron correlation must be treated properly. UMP2 theory has been shown to be unreliable in estimating spin densities due to overestimates of the doubles correction. Therefore attempts have been made to use quadratic configuration interaction (UQCISD) methods to treat electron correlation. Calculations on the individual DNA bases are presented to show that with UQCISD methods it is possible to calculate hyperfine couplings in good agreement with the experimental results. However these UQCISD calculations are far more time-consuming than DFT calculations. Calculations are then extended to two stacked guanine bases. Preliminary calculations with UMP2 or UQCISD theory on two stacked guanines lead to a cation localized on a single guanine base.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pace, J.V. III; Bartine, D.E.; Mynatt, F.R.
1976-01-01
Two-dimensional neutron and secondary gamma-ray transport calculations and cross-section sensitivity analyses have been performed to determine the effects of varying source heights and cross sections on calculated doses. The air-over-ground calculations demonstrate the existence of an optimal height of burst for a specific ground range and indicate under what conditions they are conservative with respect to infinite-air calculations. The air-over-seawater calculations showed the importance of hydrogen and chlorine in gamma production. Additional sensitivity analyses indicated the importance of water in the ground, the amount of reduction in ground thickness for calculational purposes, and the effect of the degree of Legendre angular expansion of the scattering cross sections (P_l) on the calculated dose.
CALCULATIONAL TOOL FOR SKIN CONTAMINATION DOSE ESTIMATE
DOE Office of Scientific and Technical Information (OSTI.GOV)
HILL, R.L.
2005-03-31
A spreadsheet calculational tool was developed to automate the calculations performed for estimating dose from skin contamination. This document reports on the design and testing of the spreadsheet calculational tool.
McMullan, Miriam; Jones, Ray; Lea, Susan
2010-04-01
This paper is a report of a correlational study of the relations of age, status, experience and drug calculation ability to numerical ability of nursing students and Registered Nurses. Competent numerical and drug calculation skills are essential for nurses as mistakes can put patients' lives at risk. A cross-sectional study was carried out in 2006 in one United Kingdom university. Validated numerical and drug calculation tests were given to 229 second year nursing students and 44 Registered Nurses attending a non-medical prescribing programme. The numeracy test was failed by 55% of students and 45% of Registered Nurses, while 92% of students and 89% of nurses failed the drug calculation test. Independent of status or experience, older participants (≥35 years) were statistically significantly more able to perform numerical calculations. There was no statistically significant difference between nursing students and Registered Nurses in their overall drug calculation ability, but nurses were statistically significantly more able than students to perform basic numerical calculations and calculations for solids, oral liquids and injections. Both nursing students and Registered Nurses were statistically significantly more able to perform calculations for solids, oral liquids and injections than calculations for drug percentages, drip and infusion rates. To prevent deskilling, Registered Nurses should continue to practise and refresh all the different types of drug calculations as often as possible with regular (self)-testing of their ability. Time should be set aside in curricula for nursing students to learn how to perform basic numerical and drug calculations. This learning should be reinforced through regular practice and assessment.
Cardiovascular Disease Risk Score: Results from the Filipino-American Women Cardiovascular Study.
Ancheta, Irma B; Battie, Cynthia A; Volgman, Annabelle S; Ancheta, Christine V; Palaniappan, Latha
2017-02-01
Although cardiovascular disease (CVD) is a leading cause of morbidity and mortality of Filipino-Americans, conventional CVD risk calculators may not be accurate for this population. CVD risk scores of a group of Filipino-American women (FAW) were measured using the major risk calculators. Secondly, the sensitivity of the various calculators to obesity was determined. This is a cross-sectional descriptive study that enrolled 40-65-year-old FAW (n = 236), during a community-based health screening study. Ten-year CVD risk was calculated using the Framingham Risk Score (FRS), Reynolds Risk Score (RRS), and Atherosclerotic Cardiovascular Disease (ASCVD) calculators. The 30-year risk FRS and the lifetime ASCVD calculators were also determined. Levels of predicted CVD risk varied as a function of the calculator. The 10-year ASCVD calculator classified 12 % of participants with ≥10 % risk, but the 10-year FRS and RRS calculators classified all participants with ≤10 % risk. The 30-year "Hard" Lipid and BMI FRS calculators classified 32 and 43 % of participants with high (≥20 %) risk, respectively, while 95 % of participants were classified with ≥20 % risk by the lifetime ASCVD calculator. The percent of participants with elevated CVD risk increased as a function of waist circumference for most risk score calculators. Differences in risk score as a function of the risk score calculator indicate the need for outcome studies in this population. Increased waist circumference was associated with increased CVD risk scores underscoring the need for obesity control as a primary prevention of CVD in FAW.
NASA Technical Reports Server (NTRS)
Tuma, Margaret L.; Beheim, Glenn
1995-01-01
The effective-index method and Marcatili's technique were utilized independently to calculate the electric field profile of a rib channel waveguide. Using the electric field profile calculated from each method, the theoretical coupling efficiency between a single-mode optical fiber and a rib waveguide was calculated using the overlap integral. Perfect alignment was assumed and the coupling efficiency calculated. The coupling efficiency calculation was then repeated for a range of transverse offsets.
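The overlap integral mentioned here is straightforward to evaluate numerically. A minimal sketch follows, with Gaussian mode profiles standing in for the fiber and rib-waveguide fields; the mode radii, grid, and offset are illustrative assumptions, not the fields computed in the paper.

```python
import numpy as np

def coupling_efficiency(E1, E2, dx, dy):
    """Overlap-integral coupling efficiency between two scalar field profiles:
    eta = |integral E1* E2 dA|^2 / (integral |E1|^2 dA * integral |E2|^2 dA)."""
    num = np.abs(np.sum(np.conj(E1) * E2) * dx * dy) ** 2
    den = (np.sum(np.abs(E1) ** 2) * dx * dy) * (np.sum(np.abs(E2) ** 2) * dx * dy)
    return num / den

x = np.linspace(-10e-6, 10e-6, 201)
y = np.linspace(-10e-6, 10e-6, 201)
X, Y = np.meshgrid(x, y)
dx, dy = x[1] - x[0], y[1] - y[0]
w_fiber, wx, wy = 4e-6, 3e-6, 1.5e-6           # illustrative mode radii (m)
offset = 1e-6                                  # transverse misalignment (m)
E_fiber = np.exp(-(((X - offset) ** 2 + Y ** 2) / w_fiber ** 2))
E_rib = np.exp(-(X ** 2 / wx ** 2 + Y ** 2 / wy ** 2))
print(f"coupling efficiency: {coupling_efficiency(E_fiber, E_rib, dx, dy):.3f}")
```

Repeating the last three lines over a range of offsets reproduces the kind of misalignment sweep the abstract describes.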
NASA Technical Reports Server (NTRS)
Pickett, G. F.; Wells, R. A.; Love, R. A.
1977-01-01
A computer user's manual describing the operation and the essential features of the Modal Calculation Program is presented. The Modal Calculation Program calculates the amplitude and phase of modal structures by means of acoustic pressure measurements obtained from microphones placed at selected locations within the fan inlet duct. In addition, the program calculates the first-order errors in the modal coefficients that are due to tolerances in microphone location coordinates and inaccuracies in the acoustic pressure measurements.
Park, Jong Min; Park, So-Yeon; Kim, Jung-In; Carlson, Joel; Kim, Jin Ho
2017-03-01
To investigate the effect of the dose calculation grid on calculated dose-volumetric parameters for eye lenses and optic pathways. A total of 30 patients, treated using the volumetric modulated arc therapy (VMAT) technique, were retrospectively selected. For each patient, dose distributions were calculated with calculation grids ranging from 1 to 5 mm at 1 mm intervals. Identical structures were used for VMAT planning. The changes in dose-volumetric parameters according to the size of the calculation grid were investigated. Compared to dose calculation with a 1 mm grid, the maximum doses to the eye lens with calculation grids of 2, 3, 4 and 5 mm increased by 0.2 ± 0.2 Gy, 0.5 ± 0.5 Gy, 0.9 ± 0.8 Gy and 1.7 ± 1.5 Gy on average, respectively. The Spearman's correlation coefficient between the dose gradients near structures and the differences between the doses calculated with the 1 mm grid and those calculated with the 5 mm grid was 0.380 (p < 0.001). For accurate calculation of dose distributions, as well as efficiency, using a grid size of 2 mm appears to be the most appropriate choice.
Mouney, Meredith C; Townsend, Wendy M; Moore, George E
2012-12-01
To determine whether differences exist in the calculated intraocular lens (IOL) strengths of a population of adult horses and to assess the association between calculated IOL strength and horse height, body weight, and age, and between calculated IOL strength and corneal diameter. 28 clinically normal adult horses (56 eyes). Axial globe lengths and anterior chamber depths were measured ultrasonographically. Corneal curvatures were determined with a modified photokeratometer and brightness-mode ultrasonographic images. Data were used in the Binkhorst equation to calculate the predicted IOL strength for each eye. The calculated IOL strengths were compared with a repeated-measures ANOVA. Corneal curvature values (photokeratometer vs brightness-mode ultrasonographic images) were compared with a paired t test. Coefficients of determination were used to measure associations. Calculated IOL strengths (range, 15.4 to 30.1 diopters) differed significantly among horses. There was a significant difference in the corneal curvatures as determined via the 2 methods. Weak associations were found between calculated IOL strength and horse height and between calculated IOL strength and vertical corneal diameter. Calculated IOL strength differed significantly among horses. Because only weak associations were detected between calculated IOL strength and horse height and vertical corneal diameter, these factors would not serve as reliable indicators for selection of the IOL strength for a specific horse.
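The Binkhorst equation cited here is often written in the thin-lens form below; this sketch assumes that common statement (the study may use a variant), and the biometry values in the example are illustrative, not measurements from the study animals.

```python
def binkhorst_iol_power(axial_length_mm, corneal_radius_mm, acd_mm):
    """Binkhorst thin-lens IOL power (diopters), one common statement:
    P = 1336 (4r - a) / ((a - d)(4r - d)), with a = axial length,
    r = corneal radius of curvature, d = postoperative anterior chamber
    depth, all in millimetres (1336 = 1000 * 1.336, the aqueous index)."""
    a, r, d = axial_length_mm, corneal_radius_mm, acd_mm
    return 1336.0 * (4.0 * r - a) / ((a - d) * (4.0 * r - d))

# Illustrative equine-scale values (not study data); yields roughly 23 D.
print(binkhorst_iol_power(axial_length_mm=39.2, corneal_radius_mm=21.0, acd_mm=5.6))
```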
Levelized Cost of Energy Calculator | Energy Analysis | NREL
The levelized cost of energy (LCOE) calculator provides a simple calculator for both utility-scale and distributed generation renewable energy technologies. Factors such as financing and future replacement costs are not covered and would need to be included for a thorough analysis. To estimate a simple cost of energy, use the slider controls.
Neutron skyshine calculations with the integral line-beam method
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gui, A.A.; Shultis, J.K.; Faw, R.E.
1997-10-01
Recently developed line- and conical-beam response functions are used to calculate neutron skyshine doses for four idealized source geometries. These calculations, which can serve as benchmarks, are compared with MCNP calculations, and the excellent agreement indicates that the integral conical- and line-beam method is an effective alternative to more computationally expensive transport calculations.
RADON AND PROGENY SOURCED DOSE ASSESSMENT OF SPA EMPLOYEES IN BALNEOLOGICAL SITES.
Uzun, Sefa Kemal; Demiröz, Işık
2016-09-01
This study was conducted within the scope of the IAEA project 'Establishing a Systematic Radioactivity Survey and Total Effective Dose Assessment in Natural Balneological Sites' (TUR/9/018), at the Health Physics department of Sarayköy Nuclear Research and Training Center (SANAEM). The aim of this study is to estimate the radon and progeny sourced effective dose for people working at spa facilities by measuring the radon activity concentration (RAC) in the ambient air of indoor spa pools and dressing rooms. As is known, the source of radon gas is the radium content of the earth's crust. Thermal waters coming from the ground may therefore contain dissolved radon, and the radon can diffuse from the water to the air, so the ambient air of spa pools can contain a substantial RAC that depends on many parameters. In this regard, RAC measurements were performed at 70 spa facilities in Turkey. The measurements were done with both active and passive methods in the ambient air of spa pools and dressing rooms: active measurements were carried out using an AlphaGUARD® in diffusion mode for half an hour, and passive measurements were carried out using humidity-resistant CR-39 radon detectors over 2 months. Results show that the RAC in the ambient air of spa pools varies between 13 Bq m^-3 and 10 kBq m^-3. Because long-term measurements are more reliable, passive radon measurements (CR-39 detectors) in the ambient air of spa pools and dressing rooms were used for the dose calculations when available; otherwise, active measurement results were used. Effective dose values were calculated from the measurements using the ICRP 65 conversion coefficients and the occupational data obtained from the employees through questionnaire forms. According to the calculations, spa employees are exposed to an annual average dose of between 0.05 and 29 mSv from radon and its progeny.
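The ICRP 65 dose conversion reduces to a product of concentration, equilibrium factor, occupancy, and a conversion coefficient. A sketch of that arithmetic follows; the equilibrium factor, occupancy hours, and the coefficient value are assumptions for illustration (the 7.8e-6 figure is a commonly quoted ICRP 65-derived workplace value per unit equilibrium-equivalent exposure and should be verified against the publication).

```python
def annual_effective_dose_mSv(radon_bq_m3, equilibrium_factor,
                              hours_per_year, dcf_mSv_per_bq_h_m3):
    """Annual effective dose from radon progeny: E = C * F * t * DCF,
    where C is the radon activity concentration, F the radon/progeny
    equilibrium factor, t the annual occupancy (h), and DCF the conversion
    coefficient per unit equilibrium-equivalent concentration (EEC)."""
    eec = radon_bq_m3 * equilibrium_factor  # equilibrium-equivalent concentration
    return eec * hours_per_year * dcf_mSv_per_bq_h_m3

# Illustrative inputs only: 1 kBq/m^3 pool-hall air, F = 0.4, 2000 h/y.
print(annual_effective_dose_mSv(1000.0, 0.4, 2000.0, 7.8e-6))  # ~6.2 mSv
```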
NASA Astrophysics Data System (ADS)
Voronina, Tatyana; Romanenko, Alexey; Loskutov, Artem
2017-04-01
A key point in state-of-the-art tsunami forecasting is constructing a reliable tsunami source. In this study, we present an application of an original numerical inversion technique to modeling the source of the 16 September 2015 Chile tsunami. The problem of recovering a tsunami source from remote measurements of the incoming wave at deep-water tsunameters is considered as an inverse problem of mathematical physics in the class of ill-posed problems. The approach is based on least squares and truncated singular value decomposition techniques. Tsunami wave propagation is considered within the scope of linear shallow-water theory. As in the inverse seismic problem, numerical solutions obtained by mathematical methods become unstable due to the presence of noise in real data. The method of r-solutions makes it possible to avoid instability in the solution of the ill-posed problem under study. This method is attractive from the computational point of view, since the main effort is required only once, for calculating the matrix whose columns consist of the computed waveforms for each harmonic taken as a source (the unknown tsunami source is represented as a series of spatial harmonics over the source area). Furthermore, by analyzing the singular spectra of the matrix obtained in the course of the numerical calculations, one can estimate the future inversion by a given observational system, which allows a more effective disposition of the tsunameters to be proposed with the help of precomputations. In other words, the results obtained allow finding a way to improve the inversion by selecting the most informative set of available recording stations. The case study of the 6 February 2013 Solomon Islands tsunami highlights the critical role of the arrangement of deep-water tsunameters in obtaining good inversion results. Implementation of the proposed methodology for the 16 September 2015 Chile tsunami successfully produced a tsunami source model. The function recovered by the proposed method can find practical applications both as an initial condition for various optimization approaches and for computer calculation of tsunami wave propagation.
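The truncated-SVD least-squares machinery behind such an r-solution can be sketched compactly: the columns of the forward matrix hold the computed waveform for each source harmonic, and the truncation rank controls stability against noise. Everything below (matrix, data, rank) is an illustrative stand-in, not the authors' implementation.

```python
import numpy as np

def r_solution(A, b, r):
    """Least-squares solution restricted to the first r singular triplets
    of A (truncated SVD), suppressing noise-dominated small-singular-value
    components of the ill-posed inversion."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    coeff = (U[:, :r].T @ b) / s[:r]   # data projections scaled by 1/sigma_i
    return Vt[:r].T @ coeff

rng = np.random.default_rng(0)
A = rng.normal(size=(200, 40))                 # waveforms per harmonic (columns)
x_true = rng.normal(size=40)                   # "true" harmonic amplitudes
b = A @ x_true + 0.05 * rng.normal(size=200)   # noisy tsunameter records
x_hat = r_solution(A, b, r=20)                 # stabilized estimate
```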
Using models for the optimization of hydrologic monitoring
Fienen, Michael N.; Hunt, Randall J.; Doherty, John E.; Reeves, Howard W.
2011-01-01
Hydrologists are often asked what kind of monitoring network can most effectively support science-based water-resources management decisions. Currently (2011), hydrologic monitoring locations often are selected by addressing observation gaps in the existing network or non-science issues such as site access. A model might then be calibrated to available data and applied to a prediction of interest (regardless of how well-suited that model is for the prediction). However, modeling tools are available that can inform which locations and types of data provide the most 'bang for the buck' for a specified prediction. Put another way, the hydrologist can determine which observation data most reduce the model uncertainty around a specified prediction. An advantage of such an approach is the maximization of limited monitoring resources because it focuses on the difference in prediction uncertainty with or without additional collection of field data. Data worth can be calculated either through the addition of new data or subtraction of existing information by reducing monitoring efforts (Beven, 1993). The latter generally is not widely requested as there is explicit recognition that the worth calculated is fundamentally dependent on the prediction specified. If a water manager needs a new prediction, the benefits of reducing the scope of a monitoring effort, based on an old prediction, may be erased by the loss of information important for the new prediction. This fact sheet focuses on the worth or value of new data collection by quantifying the reduction in prediction uncertainty achieved by adding a monitoring observation. This calculation of worth can be performed for multiple potential locations (and types) of observations, which then can be ranked for their effectiveness in reducing uncertainty around the specified prediction. This is implemented using a Bayesian approach with the PREDUNC utility in the parameter estimation software suite PEST (Doherty, 2010). The techniques briefly described here are described in detail in a U.S. Geological Survey Scientific Investigations Report available on the Internet (Fienen and others, 2010; http://pubs.usgs.gov/sir/2010/5159/). This fact sheet presents a synopsis of the techniques as applied to a synthetic model constructed using properties from the Lake Michigan Basin (Hoard, 2010).
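In the linear Bayesian setting that PREDUNC implements, the worth of a candidate observation is the drop in prediction variance when its sensitivity row is added to the Jacobian. A self-contained sketch of that calculation follows; all matrices are illustrative stand-ins, not PEST internals.

```python
import numpy as np

def prediction_variance(y, C, J, R):
    """Linear-Bayes prediction variance for s = y.p, given observations
    d = J p + e with e ~ N(0, R) and prior p ~ N(0, C):
    var(s | d) = y' (C - C J' (J C J' + R)^-1 J C) y."""
    S = J @ C @ J.T + R
    C_post = C - C @ J.T @ np.linalg.solve(S, J @ C)
    return float(y @ C_post @ y)

rng = np.random.default_rng(1)
n_par = 6
C = np.eye(n_par)                     # prior parameter covariance (illustrative)
y = rng.normal(size=n_par)            # prediction sensitivity vector
J_base = rng.normal(size=(4, n_par))  # sensitivities of existing observations
j_new = rng.normal(size=(1, n_par))   # candidate new observation

v0 = prediction_variance(y, C, J_base, 0.01 * np.eye(4))
v1 = prediction_variance(y, C, np.vstack([J_base, j_new]), 0.01 * np.eye(5))
print(f"data worth of candidate observation: {v0 - v1:.4f}")
```

Ranking candidates by this variance reduction reproduces, in miniature, the network-design workflow the fact sheet describes.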
The polyGeVero® software for fast and easy computation of 3D radiotherapy dosimetry data
NASA Astrophysics Data System (ADS)
Kozicki, Marek; Maras, Piotr
2015-01-01
The polyGeVero® software package was developed for calculations on 3D dosimetry data, such as polymer gel dosimetry. It comprises four workspaces designed for: i) calculating calibrations, ii) storing calibrations in a database, iii) calculating 3D dose distribution cubes, and iv) comparing two datasets, e.g. one measured with a 3D dosimeter against one calculated with a treatment planning system. To accomplish these calculations the software is equipped with a number of tools, such as a brachytherapy isotope database, brachytherapy dose-versus-distance calculation based on the line approximation approach, automatic spatial alignment of two 3D dose cubes for comparison purposes, 3D gamma index, 3D gamma angle, 3D dose difference, Pearson's coefficient, histogram calculations, isodose superimposition for two datasets, and profile calculations in any desired direction. This communication briefly presents the main functions of the software and reports on the speed of the calculations performed by polyGeVero®.
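Of the comparison tools listed, the gamma index is the least obvious to implement, so a 1D sketch of the standard definition (dose-difference criterion and distance-to-agreement criterion) may help; this is purely illustrative and not polyGeVero's algorithm. The toy profiles below stand in for a measured and a calculated dose distribution.

```python
import numpy as np

def gamma_index_1d(dose_ref, dose_eval, dx, dose_crit, dist_crit):
    """Pointwise gamma: gamma(i) = min_j sqrt((r_ij / dist_crit)^2 +
    ((D_eval[j] - D_ref[i]) / dose_crit)^2); gamma <= 1 means point i passes."""
    n = len(dose_ref)
    pos = np.arange(n) * dx
    gamma = np.empty(n)
    for i in range(n):
        dist2 = ((pos - pos[i]) / dist_crit) ** 2
        dd2 = ((dose_eval - dose_ref[i]) / dose_crit) ** 2
        gamma[i] = np.sqrt(np.min(dist2 + dd2))
    return gamma

x = np.linspace(0.0, 50.0, 251)                  # position (mm)
ref = np.exp(-((x - 25.0) / 10.0) ** 2)          # toy measured profile
ev = 1.02 * np.exp(-((x - 25.5) / 10.0) ** 2)    # toy calculated profile
g = gamma_index_1d(ref, ev, dx=x[1] - x[0], dose_crit=0.03, dist_crit=3.0)
print(f"3%/3mm pass rate: {np.mean(g <= 1.0):.1%}")
```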
Does Calculation or Word-Problem Instruction Provide A Stronger Route to Pre-Algebraic Knowledge?
Fuchs, Lynn S.; Powell, Sarah R.; Cirino, Paul T.; Schumacher, Robin F.; Marrin, Sarah; Hamlett, Carol L.; Fuchs, Douglas; Compton, Donald L.; Changas, Paul C.
2014-01-01
The focus of this study was connections among 3 aspects of mathematical cognition at 2nd grade: calculations, word problems, and pre-algebraic knowledge. We extended the literature, which is dominated by correlational work, by examining whether intervention conducted on calculations or word problems contributes to improved performance in the other domain and whether intervention in either or both domains contributes to pre-algebraic knowledge. Participants were 1102 children in 127 2nd-grade classrooms in 25 schools. Teachers were randomly assigned to 3 conditions: calculation intervention, word-problem intervention, and business-as-usual control. Intervention, which lasted 17 weeks, was designed to provide research-based linkages between arithmetic calculations or arithmetic word problems (depending on condition) to pre-algebraic knowledge. Multilevel modeling suggested calculation intervention improved calculation but not word-problem outcomes; word-problem intervention enhanced word-problem but not calculation outcomes; and word-problem intervention provided a stronger route than calculation intervention to pre-algebraic knowledge. PMID:25541565
Airplane stability calculations with a card programmable pocket calculator
NASA Technical Reports Server (NTRS)
Sherman, W. L.
1978-01-01
Programs are presented for calculating airplane stability characteristics with a card programmable pocket calculator. These calculations include eigenvalues of the characteristic equations of lateral and longitudinal motion as well as stability parameters such as the time to damp to one-half amplitude or the damping ratio. The effects of wind shear are included. Background information and the equations programmed are given. The programs are written for the International System of Units, the dimensional form of the stability derivatives, and stability axes. In addition to programs for stability calculations, an unusual and short program is included for the Euler transformation of coordinates used in airplane motions. The programs have been written for a Hewlett Packard HP-67 calculator. However, the use of this calculator does not constitute an endorsement of the product by the National Aeronautics and Space Administration.
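The stability parameters named here follow directly from the eigenvalues of the linearized equations of motion: for a mode with eigenvalue sigma + i*omega, the time to half amplitude is ln(2)/|sigma| and the damping ratio is -sigma/sqrt(sigma^2 + omega^2). A short sketch follows; the 2x2 state matrix is an illustrative placeholder, not the report's aircraft data.

```python
import numpy as np

def stability_metrics(eigenvalue):
    """Time to damp to half amplitude and damping ratio for one complex
    eigenvalue of the characteristic equation (stable mode: Re < 0)."""
    sigma, omega = eigenvalue.real, eigenvalue.imag
    t_half = np.log(2.0) / -sigma             # time to half amplitude (s)
    zeta = -sigma / np.hypot(sigma, omega)    # damping ratio
    return t_half, zeta

A = np.array([[-0.05, 1.0],                   # toy 2-state longitudinal model
              [-4.00, -0.7]])
for lam in np.linalg.eigvals(A):
    if lam.imag >= 0:                         # one member of each conjugate pair
        t_half, zeta = stability_metrics(lam)
        print(f"lambda = {lam:.3f}: t_half = {t_half:.2f} s, zeta = {zeta:.3f}")
```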
The calculation of viscosity of liquid n-decane and n-hexadecane by the Green-Kubo method
NASA Astrophysics Data System (ADS)
Cui, S. T.; Cummings, P. T.; Cochran, H. D.
This short commentary presents the results of long molecular dynamics simulations of the shear viscosity of liquid n-decane and n-hexadecane using the Green-Kubo integration method. The relaxation time of the stress-stress correlation function is compared with those of rotation and diffusion. The rotational and diffusional relaxation times, which are easy to calculate, provide useful guides to the simulation time required for viscosity calculations. The computational time required for viscosity calculations of these systems by the Green-Kubo method is also compared with the time required for previous non-equilibrium molecular dynamics calculations of the same systems. The method of choice for a particular calculation is determined largely by the properties of interest, since the efficiencies of the two methods are comparable for calculation of the zero-strain-rate viscosity.
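The Green-Kubo relation in question is eta = V/(kB T) times the time integral of the autocorrelation of an off-diagonal pressure-tensor component. A sketch of that post-processing step, assuming the stress time series already exists (the synthetic series and all parameters below are placeholders, not simulation output):

```python
import numpy as np

kB = 1.380649e-23  # Boltzmann constant, J/K

def green_kubo_viscosity(pxy, dt, volume, temperature, t_cut):
    """Shear viscosity from the stress autocorrelation function (ACF):
    eta = V/(kB T) * integral_0^t_cut <Pxy(0) Pxy(t)> dt, with Pxy in Pa,
    dt in s, volume in m^3.  t_cut should exceed the ACF relaxation time."""
    n = len(pxy)
    m = int(t_cut / dt)
    acf = np.array([np.mean(pxy[:n - k] * pxy[k:]) for k in range(m)])
    integral = np.trapz(acf, dx=dt)
    return volume / (kB * temperature) * integral

rng = np.random.default_rng(2)
dt = 1e-15                                   # 1 fs sampling (illustrative)
pxy = rng.normal(scale=1e6, size=50_000)     # stand-in stress series, Pa
eta = green_kubo_viscosity(pxy, dt, volume=8e-27, temperature=300.0, t_cut=2e-12)
```

In practice the running integral is plotted against t_cut and read off at its plateau, which is where the comparison with the stress relaxation time becomes important.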
NASA Astrophysics Data System (ADS)
Meier, Patrick; Oschetzki, Dominik; Pfeiffer, Florian; Rauhut, Guntram
2015-12-01
Resonating vibrational states cannot consistently be described by single-reference vibrational self-consistent field methods but request the use of multiconfigurational approaches. Strategies are presented to accelerate vibrational multiconfiguration self-consistent field theory and subsequent multireference configuration interaction calculations in order to allow for routine calculations at this enhanced level of theory. State-averaged vibrational complete active space self-consistent field calculations using mode-specific and state-tailored active spaces were found to be very fast and superior to state-specific calculations or calculations with a uniform active space. Benchmark calculations are presented for trans-diazene and bromoform, which show strong resonances in their vibrational spectra.
batman: BAsic Transit Model cAlculatioN in Python
NASA Astrophysics Data System (ADS)
Kreidberg, Laura
2015-10-01
batman provides fast calculation of exoplanet transit light curves and supports calculation of light curves for any radially symmetric stellar limb darkening law. It uses an integration algorithm for models that cannot be quickly calculated analytically, and in typical use, the batman Python package can calculate a million model light curves in well under ten minutes for any limb darkening profile.
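A typical invocation of the package, following its documented interface (all parameter values are arbitrary examples):

```python
import numpy as np
import batman

params = batman.TransitParams()
params.t0 = 0.0                 # time of inferior conjunction
params.per = 1.0                # orbital period (days)
params.rp = 0.1                 # planet radius (stellar radii)
params.a = 15.0                 # semi-major axis (stellar radii)
params.inc = 87.0               # orbital inclination (degrees)
params.ecc = 0.0                # eccentricity
params.w = 90.0                 # longitude of periastron (degrees)
params.limb_dark = "quadratic"  # limb darkening model
params.u = [0.1, 0.3]           # limb darkening coefficients

t = np.linspace(-0.025, 0.025, 1000)  # times at which to evaluate (days)
m = batman.TransitModel(params, t)    # initializes the model
flux = m.light_curve(params)          # relative flux at each time
```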
20 CFR 10.216 - How is the pay rate for COP calculated?
Code of Federal Regulations, 2012 CFR
2012-04-01
... 20 Employees' Benefits 1 2012-04-01 2012-04-01 false How is the pay rate for COP calculated? 10..., AS AMENDED Continuation of Pay Calculation of Cop § 10.216 How is the pay rate for COP calculated? The employer shall calculate COP using the period of time and the weekly pay rate. (a) The pay rate...
Ogata, Koji; Hatakeyama, Makoto; Nakamura, Shinichiro
2018-02-15
The octanol-water partition coefficient (log Pow) is an important index for measuring solubility, membrane permeability, and bioavailability in the drug discovery field. In this paper, the log Pow values of 58 compounds were predicted by alchemical free energy calculation using molecular dynamics simulation. In free energy calculations, the atomic charges of the compounds are always fixed. However, they must be recalculated for each solvent. Therefore, three different sets of atomic charges were tested using quantum chemical calculations, taking into account vacuum, octanol, and water environments. The calculated atomic charges in the different environments do not necessarily influence the correlation between calculated and experimentally measured ΔG(water) values. The largest correlation coefficient values of the solvation free energy in water and octanol were 0.93 and 0.90, respectively. On the other hand, the correlation coefficient of the log Pow values calculated from the free energies, the largest of which was 0.92, was sensitive to the combination of the solvation free energies calculated from the calculated atomic charges. These results reveal that the solvent assumed in the atomic charge calculation is an important factor determining the accuracy of predicted log Pow values.
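The conversion from the two solvation free energies to log Pow is a one-liner worth making explicit: log Pow = (dG_water - dG_octanol) / (RT ln 10), so a compound that is better solvated in octanol partitions into it. A sketch with placeholder values (not the paper's results):

```python
import math

R = 8.314462618e-3  # gas constant, kJ/(mol K)

def log_pow(dG_water_kJmol, dG_octanol_kJmol, T=298.15):
    """log Pow from solvation free energies (more negative dG means more
    favorable solvation): log Pow = (dG_water - dG_octanol) / (RT ln 10)."""
    return (dG_water_kJmol - dG_octanol_kJmol) / (R * T * math.log(10.0))

print(log_pow(dG_water_kJmol=-20.0, dG_octanol_kJmol=-30.0))  # ~1.75
```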
Medication calculation skills of graduating nursing students in Finland.
Grandell-Niemi, H; Hupli, M; Leino-Kilpi, H
2001-01-01
The aim of this study was to describe the basic mathematical proficiency and the medication calculation skills of graduating nursing students in Finland. A further concern was with how students experienced the teaching of medication calculation. We wanted to find out whether these experiences were associated with various background factors and the students' medication calculation skills. In spring 1997 the population of graduating nursing students in Finland numbered around 1280; the figure for the whole year was 2640. A convenience sample of 204 students completed a questionnaire specially developed for this study. The instrument included structured questions, statements and a medication calculation test. The response rate was 88%. Data analysis was based on descriptive statistics. The students found it hard to learn mathematics and medication calculation skills. Those who evaluated their mathematical and medication calculation skills as sufficient successfully solved the problems included in the questionnaire. It was felt that the introductory course on medication calculation was uninteresting and poorly organised. Overall the students' mathematical skills were inadequate. One-fifth of the students failed to pass the medication calculation test. A positive correlation was shown between the student's grade in mathematics (Sixth Form College) and her skills in medication calculation.
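For readers outside nursing, the calculations tested in such instruments are of the kind sketched below (standard dose-by-stock-strength and drip-rate formulas; the numbers are invented examples, not test items from the questionnaire):

```python
def dose_volume_ml(prescribed_mg, stock_mg, stock_volume_ml):
    """Volume to administer: (desired dose / stock strength) * stock volume."""
    return prescribed_mg / stock_mg * stock_volume_ml

def drip_rate_drops_per_min(volume_ml, hours, drop_factor_per_ml):
    """IV drip rate in drops/min: volume * drop factor / time in minutes."""
    return volume_ml * drop_factor_per_ml / (hours * 60.0)

print(dose_volume_ml(250.0, 125.0, 5.0))            # 10.0 mL
print(drip_rate_drops_per_min(1000.0, 8.0, 20.0))   # ~41.7 drops/min
```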
Ferreira, Tiago B; Ribeiro, Paulo; Ribeiro, Filomena J; O'Neill, João G
2017-12-01
To compare the prediction error in the calculation of toric intraocular lenses (IOLs) associated with methods that estimate the power of the posterior corneal surface (i.e., the Barrett toric calculator and the Abulafia-Koch formula) with that of methods that use real measurements obtained with Scheimpflug imaging: software that uses vectorial calculation (Panacea toric calculator: http://www.panaceaiolandtoriccalculator.com) and ray-tracing software (PhacoOptics, Aarhus Nord, Denmark). In 107 eyes of 107 patients undergoing cataract surgery with toric IOL implantation (Acrysof IQ Toric; Alcon Laboratories, Inc., Fort Worth, TX), the residual astigmatism predicted by each calculation method was compared with the manifest refractive astigmatism. The prediction error in residual astigmatism was calculated using vector analysis. All calculation methods resulted in overcorrection of with-the-rule astigmatism and undercorrection of against-the-rule astigmatism. Both estimation methods resulted in lower mean and centroid astigmatic prediction errors, and a larger number of eyes within 0.50 diopters (D) of absolute prediction error, than the methods using real measurements (P < .001). The centroid prediction error (CPE) was 0.07 D at 172° for the Barrett toric calculator and 0.13 D at 174° for the Abulafia-Koch formula (combined with the Holladay calculator). For the methods using real posterior corneal surface measurements, the CPE was 0.25 D at 173° for the Panacea calculator and 0.29 D at 171° for the ray-tracing software. The Barrett toric calculator and Abulafia-Koch formula yielded the lowest astigmatic prediction errors. Directly evaluating total corneal power for toric IOL calculation was not superior to estimating it. [J Refract Surg. 2017;33(12):794-800.]
Wright, Kerri
2008-10-01
Student nurses need to develop and retain drug calculation skills in order to calculate drug dosages accurately in clinical practice. If student nurses are to qualify and be fit to practise accurate drug calculation, then educational strategies need to show not only that students' skills have improved but also that these skills have been retained over a period of time. A quasi-experimental approach was used to test the effectiveness of a range of strategies in improving retention of drug calculation skills. The results from an IV additive drug calculation test were used to compare the drug calculation skills of two groups of students who had received different approaches to the teaching of these skills. The sample group received specific teaching and learning strategies for drug calculation skills, and the second group received only lectures on drug calculation. All test results were anonymous. The results for both groups were statistically analysed using the Mann-Whitney test to ascertain whether the range of strategies improved the results of the IV additive test. The results were further analysed and compared to ascertain the types and numbers of errors made in each sample group. The results showed a highly significant difference between the two samples using a two-tailed test (U=39.5, p<0.001). The strategies implemented therefore did make a difference to the retention of drug calculation skills in the intervention group. Further research is required into the retention of drug calculation skills by students and nurses, but there does appear to be evidence that sound teaching and learning strategies result in better retention of drug calculation skills.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yang, Y M; Bush, K; Han, B
Purpose: Accurate and fast dose calculation is a prerequisite of precision radiation therapy in modern photon and particle therapy. While Monte Carlo (MC) dose calculation provides high dosimetric accuracy, the drastically increased computational time hinders its routine use. Deterministic dose calculation methods are fast, but problematic in the presence of tissue density inhomogeneity. We leverage the useful features of deterministic methods and MC to develop a hybrid dose calculation platform with autonomous utilization of MC and deterministic calculation depending on the local geometry, for optimal accuracy and speed. Methods: Our platform utilizes a Geant4 based “localized Monte Carlo” (LMC) method that isolates MC dose calculations only to volumes that have potential for dosimetric inaccuracy. In our approach, additional structures are created encompassing heterogeneous volumes. Deterministic methods calculate dose and energy fluence up to the volume surfaces, where the energy fluence distribution is sampled into discrete histories and transported using MC. Histories exiting the volume are converted back into energy fluence, and transported deterministically. By matching boundary conditions at both interfaces, the deterministic dose calculation accounts for dose perturbations “downstream” of localized heterogeneities. Hybrid dose calculation was performed for water and anthropomorphic phantoms. Results: We achieved <1% agreement between deterministic and MC calculations in the water benchmark for photon and proton beams, and dose differences of 2%–15% could be observed in heterogeneous phantoms. The saving in computational time (a factor of ∼4–7 compared to a full Monte Carlo dose calculation) was found to be approximately proportional to the volume of the heterogeneous region. Conclusion: Our hybrid dose calculation approach takes advantage of the computational efficiency of deterministic methods and the accuracy of MC, providing a practical tool for high-performance dose calculation in modern RT. The approach is generalizable to all modalities where heterogeneities play a large role, notably particle therapy.
Symbolic-Graphical Calculators: Teaching Tools for Mathematics.
ERIC Educational Resources Information Center
Dick, Thomas P.
1992-01-01
Explores the role that symbolic-graphical calculators can play in the current calls for reform in the mathematics curriculum. Discusses symbolic calculators and graphing calculators in relation to problem solving, computational skills, and mathematics instruction. (MDH)
Ezenduka, Charles C; Falleiros, Daniel Resende; Godman, Brian B
2017-09-01
Accurate information on the facility costs of treatment is essential to enhance decision making and funding for malaria control. The objective of this study was to estimate the costs of providing treatment for uncomplicated malaria through a public health facility in Nigeria. Hospital costs were estimated from a provider perspective, applying a standard costing procedure. Capital and recurrent expenditures were estimated using an ingredient approach combined with step-down methodology. Costs attributable to malaria treatment were calculated based on the proportion of malaria cases to total outpatient visits. The costs were calculated in local currency [Naira (N)] and converted to US dollars at the 2013 exchange rate. Total annual costs of N28.723 million (US$182,953.65) were spent by the facility on the treatment of uncomplicated malaria, at a rate of US$31.49 per case, representing approximately 25% of the hospital's total expenditure in the study year. Personnel accounted for over 82.5% of total expenditure, followed by antimalarial medicines at 6.6%. More than 45% of outpatients visits were for uncomplicated malaria. Changes in personnel costs, drug prices and malaria prevalence significantly impacted on the study results, indicating the need for improved efficiency in the use of hospital resources. Malaria treatment currently consumes a considerable amount of resources in the facility, driven mainly by personnel cost and a high proportion of malaria cases. There is scope for enhanced efficiency to prevent waste and reduce costs to the provider and ultimately the consumer.
Sampling for Air Chemical Emissions from the Life Sciences Laboratory II
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ballinger, Marcel Y.; Lindberg, Michael J.
Sampling for air chemical emissions from the Life Sciences Laboratory II (LSL-II) ventilation stack was performed in an effort to determine the potential exposure of maintenance staff to laboratory exhaust on the building roof. The concern about worker exposure was raised in December 2015, and several activities were performed to assist in estimating exposure concentrations. Data quality objectives were developed to determine the need for, and the scope and parameters of, a sampling campaign to measure chemical emissions from research and development activities to the outside air. The activities provided data on the temporal variation of air chemical concentrations and a basis for evaluating calculated emissions. Sampling for air chemical emissions was performed in the LSL-II ventilation stack over the 6-week period from July 26 to September 1, 2016. A total of 12 sampling events were carried out using 16 sample media. The resulting analysis provided concentration data on 49 analytes. All results were below occupational exposure limits, and most results were below detection limits. When compared to calculated emissions, only 5 of the 49 chemicals had measured concentrations greater than predicted. This sampling effort will inform other study components to develop a more complete picture of a worker's potential exposure from LSL-II rooftop activities. Mixing studies were conducted to inform spatial variation in concentrations at other rooftop locations and can be used in conjunction with these results to provide temporal variations in concentrations for estimating the potential exposure to workers working in and around the LSL-II stack.
Severe Weather Environments in Atmospheric Reanalyses
NASA Astrophysics Data System (ADS)
King, A. T.; Kennedy, A. D.
2017-12-01
Atmospheric reanalyses combine historical observation data using a fixed assimilation scheme to achieve a dynamically coherent representation of the atmosphere. How well these reanalyses represent severe weather environments via proxies is poorly defined. To quantify the performance of reanalyses, a database of proximity soundings near severe storms from the Rapid Update Cycle 2 (RUC-2) model will be compared to a suite of reanalyses including: North American Reanalysis (NARR), European Interim Reanalysis (ERA-Interim), 2nd Modern-Era Retrospective Reanalysis for Research and Applications (MERRA-2), Japanese 55-year Reanalysis (JRA-55), 20th Century Reanalysis (20CR), and Climate Forecast System Reanalysis (CFSR). A variety of severe weather parameters will be calculated from these soundings including: convective available potential energy (CAPE), storm relative helicity (SRH), supercell composite parameter (SCP), and significant tornado parameter (STP). These soundings will be generated using the SHARPpy python module, which is an open source tool used to calculate severe weather parameters. Preliminary results indicate that the NARR and JRA-55 are significantly more skilled at producing accurate severe weather environments than the other reanalyses. The primary difference between these two reanalyses and the remaining reanalyses is a significant negative bias for thermodynamic parameters. To facilitate climatological studies, the scope of work will be expanded to compute these parameters for the entire domain and duration of select reanalyses. Preliminary results from this effort will be presented and compared to observations at select locations. This dataset will be made publicly available to the larger scientific community, and details of this product will be provided.
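The abstract relies on SHARPpy for the parameter calculations; as a stand-alone illustration of one of them, the storm-relative helicity integral can be evaluated from a discrete wind profile with the standard layer-summed cross-product form. This is a hedged numpy sketch with a synthetic hodograph and an assumed storm motion, not the SHARPpy implementation.

```python
import numpy as np

def storm_relative_helicity(u, v, z, storm_u, storm_v, depth=3000.0):
    """0-depth storm-relative helicity (m^2/s^2) from a discrete profile:
    SRH ~ sum_k [(u_{k+1}-cu)(v_k-cv) - (u_k-cu)(v_{k+1}-cv)]."""
    keep = z <= depth
    su, sv = u[keep] - storm_u, v[keep] - storm_v
    return float(np.sum(su[1:] * sv[:-1] - su[:-1] * sv[1:]))

# Idealized quarter-circle hodograph; winds in m/s, heights in m AGL
z = np.linspace(0.0, 3000.0, 31)
u = 10.0 * np.sin(np.pi * z / 6000.0)
v = 10.0 * (1.0 - np.cos(np.pi * z / 6000.0))
print(storm_relative_helicity(u, v, z, storm_u=5.0, storm_v=-2.0))
```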
Euler solutions to nonlinear acoustics of non-lifting rotor blades
NASA Technical Reports Server (NTRS)
Baeder, J. D.
1991-01-01
For the first time a computational fluid dynamics (CFD) method is used to calculate directly the high-speed impulsive (HSI) noise of a non-lifting hovering rotor blade out to a distance of over three rotor radii. In order to accurately propagate the acoustic wave in a stable and efficient manner, an implicit upwind-biased Euler method is solved on a grid with points clustered along the line of propagation. A detailed validation of the code is performed for a rectangular rotor blade at tip Mach numbers ranging from 0.88 to 0.92. The agreement with experiment is excellent at both the sonic cylinder and at 2.18 rotor radii. The agreement at 3.09 rotor radii is still very good, showing improvements over the results from the best previous method. Grid sensitivity studies indicate that with special attention to the location of the boundaries a grid with approximately 60,000 points is adequate. This results in a computational time of approximately 40 minutes on a Cray-XMP. The practicality of the method to calculate HSI noise is demonstrated by expanding the scope of the investigation to examine the rectangular blade as well as a highly swept and tapered blade over a tip Mach number range of 0.80 to 0.95. Comparisons with experimental data are excellent and the advantages of planform modifications are clearly evident. New insight is gained into the mechanisms of nonlinear propagation and the minimum distance at which a valid comparison of different rotors can be made: approximately two rotor radii from the center of rotation.
Wall, Martin; Casswell, Sally; Callinan, Sarah; Chaiyasong, Surasak; Viet Cuong, Pham; Gray-Phillip, Gaile; Parry, Charles
2017-11-22
Taxation is increasingly being used as an effective means of influencing behaviour in relation to harmful products. In this paper we use data from six participating countries of the International Alcohol Control Study to examine and evaluate their comparative prices and tax regimes. We calculate taxes and prices for three high-income and three middle-income countries. The data are drawn from the International Alcohol Control survey and from the Alcohol Environment Protocol. Tax systems are described and then the rates of tax on key products presented. Comparisons are made using the Purchasing Power Parity rates. The price and purchase data from each country's International Alcohol Control survey are then used to calculate the mean percentage of retail price paid in tax weighted by actual consumption. Both ad valorem and specific per unit of alcohol taxation systems are represented among the six countries. The prices differ widely between countries even though presented in terms of Purchasing Power Parity. The percentage of tax in the final price also varies widely but is much lower than the 75% set by the World Health Organization as a goal for tobacco tax. There is considerable variation in tax systems and prices across countries. There is scope to increase taxation and this analysis provides comparable data, including the percentage of tax in final price, from some middle and high-income countries for consideration in policy discussion. © 2017 The Authors Drug and Alcohol Review published by John Wiley & Sons Australia, Ltd on behalf of Australasian Professional Society on Alcohol and other Drugs.
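A minimal sketch of the consumption-weighted tax-share calculation described above, mixing a specific (per litre of pure alcohol) and an ad valorem component. All rates, prices and purchase quantities are hypothetical, not the survey's data.

```python
def tax_share(price, litres_alcohol, specific_rate, ad_valorem_rate):
    """Fraction of the retail price paid in tax for one product."""
    tax = specific_rate * litres_alcohol + ad_valorem_rate * price
    return tax / price

purchases = [           # (price paid, litres of pure alcohol, volume weight)
    (12.0, 0.40, 30.0),  # e.g. beer
    (25.0, 0.09, 5.0),   # e.g. wine
]
shares = [tax_share(p, la, specific_rate=10.0, ad_valorem_rate=0.10)
          for p, la, _ in purchases]
weights = [w for *_, w in purchases]
mean_share = sum(w * s for w, s in zip(weights, shares)) / sum(weights)
print(f"consumption-weighted tax share: {100 * mean_share:.1f}%")
```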
Approaches to Children’s Exposure Assessment: Case Study with Diethylhexylphthalate (DEHP)
Ginsberg, Gary; Ginsberg, Justine; Foos, Brenda
2016-01-01
Children’s exposure assessment is a key input into epidemiology studies, risk assessment and source apportionment. The goals of this article are to describe a methodology for children’s exposure assessment that can be used for these purposes and to apply the methodology to source apportionment for the case study chemical, diethylhexylphthalate (DEHP). A key feature is the comparison of total (aggregate) exposure calculated via a pathways approach to that derived from a biomonitoring approach. The 4-step methodology and its results for DEHP are: (1) Prioritization of life stages and exposure pathways, with pregnancy, breast-fed infants, and toddlers the focus of the case study and pathways selected that are relevant to these groups; (2) Estimation of pathway-specific exposures by life stage, wherein diet was found to be the largest contributor for pregnant women, breast milk and mouthing behavior for the nursing infant, and diet, house dust, and mouthing for toddlers; (3) Comparison of aggregate exposure by pathway-based vs biomonitoring-based approaches, wherein good concordance was found for toddlers and pregnant women, providing confidence in the exposure assessment; (4) Source apportionment, in which DEHP presence in foods, children’s products, consumer products and the built environment is discussed with respect to early life mouthing, house dust and dietary exposure. A potential fifth step of the method involves the calculation of exposure doses for risk assessment, which is described but outside the scope for the current case study. In summary, the methodology has been used to synthesize the available information to identify key sources of early life exposure to DEHP. PMID:27376320
Cappon, Giacomo; Marturano, Francesca; Vettoretti, Martina; Facchinetti, Andrea; Sparacino, Giovanni
2018-05-01
The standard formula (SF) used in bolus calculators (BCs) determines the meal insulin bolus using a "static" measurement of blood glucose concentration (BG) obtained by a self-monitoring of blood glucose (SMBG) fingerprick device. Some methods have been proposed to improve the efficacy of the SF using "dynamic" information provided by continuous glucose monitoring (CGM), in particular the glucose rate of change (ROC). This article compares, in silico and in an ideal framework limiting exposure to possible confounding factors (such as CGM noise), the performance of three popular techniques devised for this purpose: the methods of Buckingham et al (BU), Scheiner (SC), and Pettus and Edelman (PE). Using the UVa/Padova Type 1 diabetes simulator we generated data of 100 virtual subjects in noise-free, single-meal scenarios having different preprandial BG and ROC values. The meal insulin bolus was computed using SF, BU, SC, and PE. Performance was assessed with the blood glucose risk index (BGRI) over the 9 hours after the meal. On average, BU, SC, and PE improve BGRI compared to SF. When BG is rapidly decreasing, PE obtains the best performance. In the other ROC scenarios, none of the considered methods prevails in all the preprandial BG conditions tested. Our study showed that, at least in the considered ideal framework, none of the methods to correct SF according to ROC is globally better than the others. Critical analysis of the results also suggests that further investigations are needed to develop more effective formulas to account for ROC information in BCs.
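For orientation, a hedged sketch of the generic standard formula and one possible ROC adjustment. The published BU, SC, and PE corrections differ in their exact adjustments; the projection-based correction below is illustrative only, and all patient parameters are hypothetical.

```python
def standard_formula(cho_g, bg, bg_target, cr, cf, iob=0.0):
    """Generic meal bolus (U): carb dose + correction - insulin on board.
    cr: carb ratio (g/U); cf: correction factor (mg/dL per U)."""
    return cho_g / cr + (bg - bg_target) / cf - iob

def roc_adjusted(bolus, roc, cf, horizon_min=30.0):
    """Illustrative rate-of-change correction: project BG forward over a
    short horizon and correct for the projected excursion."""
    return bolus + (roc * horizon_min) / cf

b = standard_formula(cho_g=60, bg=180, bg_target=110, cr=10, cf=40)
print(round(b, 2))                                  # 7.75 U static bolus
print(round(roc_adjusted(b, roc=2.0, cf=40), 2))    # rising BG adds insulin
```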
NASA Astrophysics Data System (ADS)
Rusek, Janusz; Kocot, Wojciech
2017-10-01
The article presents a method for assessing the dynamic resistance of existing industrial portal frame building structures subjected to mining tremors. The study was performed on two industrial halls, one of reinforced concrete and one of steel construction. In order to determine the dynamic resistance of these objects, static and dynamic numerical analyses were carried out in an FEA environment. The scope of the numerical calculations was adapted to the guidelines contained in the former and current design standards. This made it possible to formulate criteria from which the maximum permissible value of horizontal ground acceleration, constituting the resistance of the analyzed objects, was obtained. The permissible range of structural behaviour was determined by comparing the effects of the load combinations adopted at the design stage with the seismic combination recognized in Eurocode 8. The response spectrum method was used for the dynamic analysis, taking into account the guidelines contained in Eurocode 8 and the national guidelines. Finally, in accordance with the established procedure, calculations were carried out and the results for the two model portal frame buildings of reinforced concrete and steel construction were presented. The results allowed for a comparison of the dynamic resistance of two different types of material and design, and a sensitivity analysis with respect to their constituent bearing elements. The conclusions drawn from these analyses helped to formulate the thesis for the next stage of the research, in which a greater number of objects is expected to be analyzed using a parametric approach with respect to geometry and material properties.
Application of infrared uncooled cameras in surveillance systems
NASA Astrophysics Data System (ADS)
Dulski, R.; Bareła, J.; Trzaskawka, P.; Piątkowski, T.
2013-10-01
The recent necessity to protect military bases, convoys and patrols has given a serious impetus to the development of multisensor security systems for perimeter protection. Among the most important devices used in such systems are IR cameras. The paper discusses the technical possibilities and limitations of using an uncooled IR camera in a multi-sensor surveillance system for perimeter protection. Effective detection ranges depend on the class of the sensor used and on the observed scene itself. Application of an IR camera increases the probability of intruder detection regardless of the time of day or weather conditions, while simultaneously decreasing the false alarm rate produced by the surveillance system. The role of IR cameras in the system is discussed, as well as the technical possibilities for detecting a human being. Commercially available IR cameras capable of achieving the desired ranges are compared. The spatial resolution required for detection, recognition and identification was calculated. The simulation of detection ranges was done using a new model for predicting target acquisition performance based on the Targeting Task Performance (TTP) metric. Like its predecessor, the Johnson criteria, the new model ties range performance to image quality. The scope of the presented analysis is limited to the estimation of detection, recognition and identification ranges for typical thermal cameras with uncooled microbolometer focal plane arrays. This type of camera is most widely used in security systems because of its competitive price-to-performance ratio. Detection, recognition and identification range calculations were made, and the results for devices with selected technical specifications were compared and discussed.
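Reproducing the TTP-metric model is beyond a short sketch, but the underlying pixels-on-target geometry behind such range estimates can be illustrated. The cycle criteria below are assumed Johnson-like values, and the target and camera parameters are purely illustrative.

```python
def range_for_task(target_size_m, cycles, focal_mm, pitch_um):
    """Range (m) at which the required number of resolvable cycles fits
    across the target's critical dimension, assuming 2 pixels per cycle
    (Nyquist) and simple pinhole geometry."""
    ifov = (pitch_um * 1e-6) / (focal_mm * 1e-3)   # rad per pixel
    return target_size_m / (2.0 * cycles * ifov)

# 1.8 m human target, 50 mm optics, 17 um microbolometer pitch;
# cycle criteria assumed for illustration
for task, n in (("detection", 1.0), ("recognition", 4.0), ("identification", 8.0)):
    print(f"{task}: {range_for_task(1.8, n, 50.0, 17.0):.0f} m")
```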
Estimating cross-slope exchange from drifter tracks and from glider sections
NASA Astrophysics Data System (ADS)
Huthnance, John M.
2017-04-01
In areas of complex topography, it can be difficult to define the "along-slope" or "cross-slope" direction, yet transport estimates are sensitive to these definitions, especially as along-slope flow is favoured by geostrophy. However, if drifter positions and hence underlying water depths are recorded regularly, we know where and when depth contours are crossed by the drifters, and hence by the water, assuming that the drifters follow the water. An approach is discussed for deriving statistics of contour-crossing speed, via depth changes experienced by the drifters and an effective slope. The transport equation for (e.g.) salinity S can be reduced to an explicit equation for effective diffusivity K if we assume steady along-slope flow with known total transport Q, a salinity maximum at its "core" and effective diffusion to less saline waters to either side. Salinity gradients along the flow and to either side are needed to calculate K. Gliders provide a means of measuring salinity gradients in this context. Measurements at the continental shelf edge south-west of England and west of Scotland illustrate the calculation. Both approaches give overall rather than process-related estimates. There is limited scope for process discrimination according to (i) how often drifter locations are recorded and (ii) the time-intervals into which estimates are "binned". (i) Frequent recording may capture more crossings owing to processes of short time scale, although these are less significant for slowly-evolving water contents. (ii) Sufficient samples for statistically significant estimates of exchange entail "bins" spanning some weeks or months for typically-limited numbers of drifters or gliders.
Scocco, Paola; Brusaferro, Andrea; Catorci, Andrea
2012-07-01
Although the Geographical Information System (GIS), which integrates computerized drawing (computer-assisted design, CAD) and relational databases (database management systems, DBMS), is best known for applications in geographical and planning cartography, it can also use many other kinds of information concerning the territory. Five years ago a multidisciplinary study was initiated to use GIS to integrate environmental and ecological data with findings on animal health, ethology, and anatomy. This study is chiefly aimed at comparing two different methods for measuring the absorptive surface of rumen papillae. To this end, 21 female sheep (Ovis aries) on different alimentary regimes (e.g., milk and forage mixed diet, early herbaceous diet, dry hay diet, and fresh hay diet at the maximum of pasture flowering and at the maximum of pasture dryness) were used; after slaughtering, 20 papillae were randomly removed from each sample collected from four indicator regions of the rumen wall, placed near a metric reference and digitally photographed. The images were processed with the ArcGIS™ software to calculate the area of rumen papillae by means of GIS and to measure their mid-level width and length to calculate the papillae area as previously performed with a different method. Spatial measurements were analyzed using univariate and multivariate methods. This work demonstrates that the GIS methodology can be efficiently used for measuring the absorptive surface of rumen papillae. In addition, GIS proved to be a rapid, precise, and objective tool when compared with the previously used method. Copyright © 2012 Wiley Periodicals, Inc.
Origin of the pressure-dependent Tc valley in superconducting simple cubic phosphorus
NASA Astrophysics Data System (ADS)
Wu, Xianxin; Jeschke, Harald O.; Di Sante, Domenico; von Rohr, Fabian O.; Cava, Robert J.; Thomale, Ronny
2018-03-01
Motivated by recent experiments, we investigate the pressure-dependent electronic structure and electron-phonon (e-ph) coupling for simple cubic phosphorus by performing first-principles calculations within the full potential linearized augmented plane-wave method. As a function of increasing pressure, our calculations show a valley feature in Tc, followed by an eventual decrease for higher pressures. We demonstrate that this Tc valley at low pressures is due to two nearby Lifshitz transitions, as we analyze the band-resolved contributions to the e-ph coupling. Below the first Lifshitz transition, the phonon hardening and shrinking of the γ Fermi surface with s-orbital character results in a decreased Tc with increasing pressure. After the second Lifshitz transition, the appearance of δ Fermi surfaces with 3d-orbital character generates strong e-ph interband couplings in the αδ and βδ channels, and hence leads to an increase of Tc. For higher pressures, the phonon hardening finally dominates, and Tc decreases again. Our study reveals that the intriguing Tc valley discovered in experiment can be attributed to Lifshitz transitions, while the plateau of Tc detected at intermediate pressures appears to be beyond the scope of our analysis. This strongly suggests that aside from e-ph coupling, electronic correlations along with plasmonic contributions may be relevant for simple cubic phosphorus. Our findings hint at the notion that increasing pressure can shift the low-energy orbital weight towards d character, and as such even trigger an enhanced importance of orbital-selective electronic correlations despite an increase of the overall bandwidth.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Freeze, R.A.; McWhorter, D.B.
Many emerging remediation technologies are designed to remove contaminant mass from source zones at DNAPL sites in response to regulatory requirements. There is often concern in the regulated community as to whether mass removal actually reduces risk, or whether the small risk reductions achieved warrant the large costs incurred. This paper sets out a proposed framework for quantifying the degree to which risk is reduced as mass is removed from DNAPL source areas in shallow, saturated, low-permeability media. Risk is defined in terms of meeting an alternate concentration limit (ACL) at a compliance well in an aquifer underlying the source zone. The ACL is back-calculated from a carcinogenic health-risk characterization at a downgradient water-supply well. Source-zone mass-removal efficiencies are heavily dependent on the distribution of mass between media (fractures, matrix) and phase (aqueous, sorbed, NAPL). Due to the uncertainties in currently available technology performance data, the scope of the paper is limited to developing a framework for generic technologies rather than making specific risk-reduction calculations for individual technologies. Despite the qualitative nature of the exercise, results imply that very high total mass-removal efficiencies are required to achieve significant long-term risk reduction with technology applications of finite duration. This paper is not an argument for no action at contaminated sites. Rather, it provides support for the conclusions of Cherry et al. (1992) that the primary goal of current remediation should be short-term risk reduction through containment, with the aim to pass on to future generations site conditions that are well-suited to the future applications of emerging technologies with improved mass-removal capabilities.
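A hedged sketch of the back-calculation step: starting from a target carcinogenic risk at the supply well, invert the standard intake equation for the allowable water concentration, then scale by an assumed dilution factor between the compliance well and the supply well. All parameter values are generic defaults, not the paper's.

```python
def acl_mg_per_L(target_risk, slope_factor, dilution_to_compliance_well,
                 ir=2.0, ef=350.0, ed=30.0, bw=70.0, at=25550.0):
    """ACL at the compliance well (mg/L). Risk = C*ir*ef*ed/(bw*at)*SF at
    the supply well; ir: L/day intake, ef: days/yr, ed: yr, bw: kg,
    at: days averaging time, slope_factor: (mg/kg-day)^-1."""
    c_supply = target_risk * bw * at / (slope_factor * ir * ef * ed)
    return c_supply * dilution_to_compliance_well

# target risk 1e-6, assumed slope factor and 10x dilution to the compliance well
print(f"{acl_mg_per_L(1e-6, slope_factor=0.091, dilution_to_compliance_well=10.0):.2e} mg/L")
```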
Concept of Heat Recovery from Exhaust Gases
NASA Astrophysics Data System (ADS)
Bukowska, Maria; Nowak, Krzysztof; Proszak-Miąsik, Danuta; Rabczak, Sławomir
2017-10-01
The aim of the article is to determine the potential for waste heat recovery and its use for hot water preparation. The scope includes a description of an existing coal-fired boiler plant, an analysis of its operating conditions, and heat recovery proposals. For this purpose, a series of calculations was performed to quantify the energy effect of lowering the exhaust gas temperature and transferring the recovered heat to hot water preparation. Heat recovery solutions in the flue gas channel between the boiler and the chimney were proposed, and the cost-effectiveness of such a solution was estimated. All calculations and analyses were performed for typical Polish conditions for a coal-fired boiler plant. Typical of such plants is the variability of the load during the year, caused by the distribution of heat between space heating and hot water, which determines the load variation during the day. The analysed system of three boilers allows operational flexibility under load variation and adaptation of the boiler load to the current heat demand. This adaptation requires changes in the operating conditions of the boilers and, in particular, assurance of proper fuel combustion conditions. These conditions affect the thermal losses and the overall efficiency of the boiler plant. Boiler plant efficiency is particularly affected by the exhaust gas temperature and the excess air factor. The efficiency of the boiler plant can be increased by the following actions: limiting the excess air factor in the coal combustion process and installing an additional heat exchanger (economizer) in the exhaust gas channel downstream of the boilers to preheat the hot water.
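A minimal sketch of the energy effect described above, assuming illustrative flue-gas parameters and ignoring condensation and acid-dew-point limits. None of the values are the article's data.

```python
def recovered_heat_kw(m_gas, cp_gas, t_in, t_out):
    """Q = m * cp * dT for the exhaust stream (kg/s, kJ/kg.K, deg C)."""
    return m_gas * cp_gas * (t_in - t_out)

def hot_water_flow_kg_s(q_kw, t_cold=10.0, t_hot=55.0, cp_w=4.19):
    """Hot-water flow the recovered heat can support."""
    return q_kw / (cp_w * (t_hot - t_cold))

q = recovered_heat_kw(m_gas=3.0, cp_gas=1.05, t_in=180.0, t_out=120.0)
print(f"{q:.0f} kW recovered -> {hot_water_flow_kg_s(q):.2f} kg/s of hot water")
```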
Luyssaert, Sebastiaan; Sulkava, Mika; Raitio, Hannu; Hollmén, Jaakko
2004-02-01
This paper introduces the use of nutrition profiles as a first step in the development of a concept that is suitable for evaluating forest nutrition on the basis of large-scale foliar surveys. Nutrition profiles of a tree or stand were defined as the nutrient status, which accounts for all element concentrations, contents and interactions between two or more elements. A nutrition profile therefore overcomes the shortcomings associated with the commonly used concepts for evaluating forest nutrition. Nutrition profiles can be calculated by means of a neural network, i.e. a self-organizing map, and an agglomerative clustering algorithm with pruning. As an example, nutrition profiles were calculated to describe the temporal variation in the mineral composition of Scots pine and Norway spruce needles in Finland between 1987 and 2000. The temporal trends in the frequency distribution of the nutrition profiles of Scots pine indicated that, between 1987 and 2000, the N, S, P, K, Ca, Mg and Al concentrations decreased, whereas the needle mass (NM) increased or remained unchanged. As there were no temporal trends in the frequency distribution of the nutrition profiles of Norway spruce, the mineral composition of Norway spruce needles did not change over this period. Interpretation of the (lack of) temporal trends was outside the scope of this example. However, nutrition profiles prove to be a new and better concept for the evaluation of the mineral composition of large-scale surveys only when a biological interpretation of the nutrition profiles can be provided.
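A hedged sketch of the profile-building step: the paper trains a self-organizing map before clustering, a stage omitted here for brevity, so plain agglomerative clustering on standardized element concentrations stands in for the full pipeline. Data are synthetic.

```python
import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 8))               # 200 samples x 8 elements (N, S, P, K, ...)
Z = (X - X.mean(axis=0)) / X.std(axis=0)    # z-score each element

tree = linkage(Z, method="ward")            # agglomerative clustering
profiles = fcluster(tree, t=5, criterion="maxclust")  # prune the tree to 5 profiles
print("samples per nutrition profile:", np.bincount(profiles)[1:])
```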
Optronic System Imaging Simulator (OSIS): imager simulation tool of the ECOMOS project
NASA Astrophysics Data System (ADS)
Wegner, D.; Repasi, E.
2018-04-01
ECOMOS is a multinational effort within the framework of an EDA Project Arrangement. Its aim is to provide a generally accepted and harmonized European computer model for computing nominal Target Acquisition (TA) ranges of optronic imagers operating in the Visible or thermal Infrared (IR). The project involves close co-operation of defense and security industry and public research institutes from France, Germany, Italy, The Netherlands and Sweden. ECOMOS uses two approaches to calculate Target Acquisition (TA) ranges, the analytical TRM4 model and the image-based Triangle Orientation Discrimination model (TOD). In this paper the IR imager simulation tool, Optronic System Imaging Simulator (OSIS), is presented. It produces the virtual camera imagery required by the TOD approach. Pristine imagery is degraded by various effects caused by atmospheric attenuation, optics, detector footprint, sampling, fixed pattern noise, temporal noise and digital signal processing. Resulting images might be presented to observers or could be further processed for automatic image quality calculations. For convenience OSIS incorporates camera descriptions and intermediate results provided by TRM4. For input OSIS uses pristine imagery together with meta-information about scene content, its physical dimensions, and gray level interpretation. These images represent planar targets placed at specified distances to the imager. Furthermore, OSIS is extended by a plugin functionality that enables integration of advanced digital signal processing techniques in ECOMOS such as compression, local contrast enhancement, and digital turbulence mitigation, to name but a few. By means of this image-based approach image degradations and image enhancements can be investigated, which goes beyond the scope of the analytical TRM4 model.
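A toy version of the degradation chain such a simulator applies, not the actual OSIS implementation: Gaussian blur as a proxy for the optics MTF, detector-footprint averaging onto a coarser grid, then fixed-pattern and temporal noise. All parameters are illustrative.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def simulate_frame(scene, blur_sigma=1.2, oversample=4,
                   fpn_sigma=0.01, noise_sigma=0.02, rng=None):
    rng = rng or np.random.default_rng()
    blurred = gaussian_filter(scene, blur_sigma * oversample)  # optics MTF proxy
    h, w = blurred.shape
    det = blurred.reshape(h // oversample, oversample,
                          w // oversample, oversample).mean(axis=(1, 3))
    det *= 1.0 + fpn_sigma * rng.standard_normal(det.shape)    # fixed-pattern noise
    det += noise_sigma * rng.standard_normal(det.shape)        # temporal noise
    return det

# Pristine 256x256 scene with a bright square target at its centre
frame = simulate_frame(np.pad(np.ones((64, 64)), 96))
print(frame.shape)   # (64, 64) detector-resolution frame
```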
Reduced order modelling in searches for continuous gravitational waves - I. Barycentring time delays
NASA Astrophysics Data System (ADS)
Pitkin, M.; Doolan, S.; McMenamin, L.; Wette, K.
2018-06-01
The frequencies and phases of emission from extra-solar sources measured by Earth-bound observers are modulated by the motions of the observer with respect to the source, and through relativistic effects. These modulations depend critically on the source's sky-location. Precise knowledge of the modulations is required to coherently track the source's phase over long observations, for example, in pulsar timing, or searches for continuous gravitational waves. The modulations can be modelled as sky-location- and time-dependent time delays that convert arrival times at the observer to the inertial frame of the source, which can often be the Solar system barycentre. We study the use of reduced order modelling for speeding up the calculation of this time delay for any sky-location. We find that the time delay model can be decomposed into just four basis vectors, and with these the delay for any sky-location can be reconstructed to sub-nanosecond accuracy. When compared to standard routines for time delay calculation in gravitational wave searches, using the reduced basis can lead to speed-ups of 30 times. We have also studied components of time delays for sources in binary systems. Assuming eccentricities <0.25, we can reconstruct the delays to within hundreds of nanoseconds, with best case speed-ups of a factor of 10, or factors of two when interpolating the basis for different orbital periods or time stamps. In long-duration phase-coherent searches for sources with sky-position uncertainties, or binary parameter uncertainties, these speed-ups could allow enhancements in their scopes without large additional computational burdens.
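The reduced-basis idea can be illustrated with a toy delay model, a single Earth-orbit cosine term rather than the actual barycentring routines: stack training delays over many sky positions, take an SVD, and keep the leading basis vectors (the paper finds four suffice for the full model; the toy model is even lower rank, so four trivially reconstruct it).

```python
import numpy as np

t = np.linspace(0.0, 3.15e7, 2000)                     # one year of time stamps (s)
omega = 2.0 * np.pi / 3.15e7
training = [499.0 * np.cos(dec) * np.cos(omega * t - ra)  # toy Earth-orbit delay term
            for ra in np.linspace(0, 2 * np.pi, 24)
            for dec in np.linspace(-1.4, 1.4, 12)]

U, s, Vt = np.linalg.svd(np.array(training), full_matrices=False)
basis = Vt[:4]                       # reduced basis: four delay "templates"
coeffs = training[0] @ basis.T       # project one sky-location's delay
residual = np.max(np.abs(training[0] - coeffs @ basis))
print(f"4-vector basis, max reconstruction error: {residual:.2e} s")
```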
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wight, L.; Zaslawsky, M.
Two approaches for calculating soil-structure interaction (SSI) are compared: finite element and lumped mass. Results indicate that calculations with the lumped mass method are generally conservative compared to those obtained by the finite element method. They also suggest that closer agreement between the two sets of calculations is possible, depending on the use of frequency-dependent soil springs and dashpots in the lumped mass calculations. There is a total lack of suitable guidelines for implementing the lumped mass method of calculating SSI, which leads to the conclusion that the finite element method is generally superior for calculation purposes.
Computational study of some fluoroquinolones: Structural, spectral and docking investigations
NASA Astrophysics Data System (ADS)
Sayin, Koray; Karakaş, Duran; Kariper, Sultan Erkan; Sayin, Tuba Alagöz
2018-03-01
Quantum chemical calculations are performed on norfloxacin, tosufloxacin and levofloxacin. The most stable structure of each molecule is determined from thermodynamic parameters, and the best level of theory for the calculations is determined by benchmark analysis; the M062X/6-31+G(d) level is used. IR, UV-VIS and NMR spectra are calculated and examined in detail. Some quantum chemical parameters are calculated and the likely order of activity is suggested. Additionally, molecular docking calculations are performed between the related compounds and a protein (ID: 2J9N).
SU-E-T-226: Correction of a Standard Model-Based Dose Calculator Using Measurement Data
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chen, M; Jiang, S; Lu, W
Purpose: To propose a hybrid method that combines the advantages of the model-based and measurement-based methods for independent dose calculation. Model-based dose calculation, such as collapsed-cone convolution/superposition (CCCS) or the Monte Carlo method, models dose deposition in the patient body accurately; however, due to lack of detailed knowledge about the linear accelerator (LINAC) head, commissioning for an arbitrary machine is tedious and challenging in case of hardware changes. On the contrary, the measurement-based method characterizes the beam property accurately but lacks the capability of modeling dose deposition in heterogeneous media. Methods: We used a standard CCCS calculator, commissioned with published data, as the standard model calculator. For a given machine, water phantom measurements were acquired. A set of dose distributions was also calculated using the CCCS for the same setup. The differences between the measurements and the CCCS results were tabulated and used as the commissioning data for a measurement-based calculator, here a direct-ray-tracing calculator (ΔDRT). The proposed independent dose calculation consists of the following steps: (1) calculate D_model using CCCS; (2) calculate D_ΔDRT using ΔDRT; (3) combine them as D = D_model + D_ΔDRT. Results: The hybrid dose calculation was tested on digital phantoms and patient CT data for standard fields and an IMRT plan. The results were compared to dose calculated by the treatment planning system (TPS). The agreement between the hybrid method and the TPS was within 3%, 3 mm for over 98% of the volume for phantom studies and lung patients. Conclusion: The proposed hybrid method uses the same commissioning data as those for the measurement-based method and can be easily extended to any non-standard LINAC. The results met the accuracy, independence, and simple-commissioning criteria for an independent dose calculator.
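A schematic numeric sketch of the combination step, with synthetic stand-ins for the model dose and the water-phantom measurements (not the abstract's CCCS or ΔDRT engines): tabulate the measurement-minus-model difference on the commissioning grid, interpolate it, and add it to the model dose.

```python
import numpy as np
from scipy.interpolate import RegularGridInterpolator

depth = np.linspace(0, 30, 31)            # cm
offax = np.linspace(-10, 10, 21)          # cm
d_model = np.exp(-0.05 * depth)[:, None] * np.exp(-(offax / 8.0) ** 2)[None, :]
d_meas = d_model * 1.02                   # synthetic stand-in for measurements

# Commission the correction as the tabulated (measurement - model) difference
correction = RegularGridInterpolator((depth, offax), d_meas - d_model)

def hybrid_dose(points, model_dose_at_points):
    """D = D_model + interpolated measurement-based correction."""
    return model_dose_at_points + correction(points)

pts = np.array([[5.0, 0.0], [10.0, 2.5]])
print(hybrid_dose(pts, np.array([0.78, 0.55])))
```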
Numeric calculation of celestial bodies with spreadsheet analysis
NASA Astrophysics Data System (ADS)
Koch, Alexander
2016-04-01
The motion of the planets and moons in our solar system can easily be calculated for any time from the Kepler laws of planetary motion. The Kepler laws are a special case of Newton's law of gravitation, especially if you consider more than two celestial bodies. It is therefore more fundamental to calculate the motion using the law of gravitation. The problem is that with the law of gravitation the state of motion cannot be obtained in a single calculation step: the motion has to be calculated numerically over many time intervals. For this reason, spreadsheet analysis is helpful for students. Skills in programs like Excel, Calc or Gnumeric are important in professional life and can easily be learnt by students. These programs can help to calculate the complex motions over many intervals; the more intervals are used, the more exact the calculated orbits are. The students first get a quick course in Excel. After that they calculate, following instructions, the 2-D coordinates of the orbits of the Moon and Mars. Step by step the students code the formulae for calculating physical quantities like coordinates, force, acceleration and velocity. The project is limited to 4 weeks or 8 lessons, so the calculation only covers the motion of one body around a central mass like the Earth or Sun. The three-body problem can only be discussed briefly at the end of the project.
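The spreadsheet scheme translates directly into a few lines of code. A minimal sketch with illustrative Moon-like values: explicit Euler updates of velocity and position under Newtonian gravity around a fixed central mass.

```python
import math

G, M = 6.674e-11, 5.972e24          # SI units; Earth as the central mass
x, y = 3.844e8, 0.0                 # Moon-like initial position (m)
vx, vy = 0.0, 1022.0                # and velocity (m/s)
dt = 60.0                           # one-minute time interval

for step in range(24 * 60):         # one day of motion
    r = math.hypot(x, y)
    ax, ay = -G * M * x / r**3, -G * M * y / r**3   # gravitational acceleration
    vx, vy = vx + ax * dt, vy + ay * dt             # update velocity ...
    x, y = x + vx * dt, y + vy * dt                 # ... then position

print(f"after one day: x = {x:.3e} m, y = {y:.3e} m")
# A smaller dt means more intervals and a more exact orbit, as noted above.
```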
Blough, M M; Waggener, R G; Payne, W H; Terry, J A
1998-09-01
A model for calculating mammographic spectra independent of measured data and fitting parameters is presented. This model is based on first principles. Spectra were calculated using various target and filter combinations such as molybdenum/molybdenum, molybdenum/rhodium, rhodium/rhodium, and tungsten/aluminum. Once the spectra were calculated, attenuation curves were calculated and compared to measured attenuation curves. The attenuation curves were calculated and measured using aluminum alloy 1100 or high purity aluminum filtration. Percent differences were computed between the measured and calculated attenuation curves, resulting in an average difference of 5.21% for tungsten/aluminum, 2.26% for molybdenum/molybdenum, 3.35% for rhodium/rhodium, and 3.18% for molybdenum/rhodium. Calculated spectra were also compared to measured spectra from the Food and Drug Administration [Fewell and Shuping, Handbook of Mammographic X-ray Spectra (U.S. Government Printing Office, Washington, D.C., 1979)]; this comparison is also presented.
NASA Astrophysics Data System (ADS)
Sboev, A. G.; Ilyashenko, A. S.; Vetrova, O. A.
1997-02-01
The method of buckling evaluation implemented in the Monte Carlo code MCS is described. The method was applied to a computational analysis of the well-known light-water experiments TRX-1 and TRX-2. The analysis shows that there is no agreement among the Monte Carlo results obtained in different ways: MCS calculations with the given experimental bucklings; MCS calculations with bucklings evaluated from full-core MCS direct simulations; full-core MCNP and MCS direct simulations; and MCNP and MCS calculations in which the cell-calculation results are corrected by coefficients taking into account the leakage from the core. The buckling values evaluated from full-core MCS calculations also differed from the experimental ones, especially in the case of TRX-1, where the difference corresponded to a 0.5 percent increase in the Keff value.
Probability calculations for three-part mineral resource assessments
Ellefsen, Karl J.
2017-06-27
Three-part mineral resource assessment is a methodology for predicting, in a specified geographic region, both the number of undiscovered mineral deposits and the amount of mineral resources in those deposits. These predictions are based on probability calculations that are performed with newly implemented computer software. Compared to the previous implementation, the new implementation includes new features for the probability calculations themselves and for checks of those calculations. The development of the new implementation led to a new understanding of the probability calculations, namely of the assumptions inherent in them. Several assumptions strongly affect the mineral resource predictions, so it is crucial that they are checked during an assessment. The evaluation of the new implementation leads to new findings about the probability calculations, namely regarding the precision of the computations, the computation time, and the sensitivity of the calculation results to the input.
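A hedged sketch of the core probability calculation in such assessments: combine an elicited probability mass function for the number of undiscovered deposits with a per-deposit tonnage distribution by Monte Carlo. All distributions and values below are illustrative, not the software's defaults.

```python
import numpy as np

rng = np.random.default_rng(1)
n_deposits = rng.choice([0, 1, 2, 3, 4], size=20_000,
                        p=[0.2, 0.3, 0.25, 0.15, 0.1])        # elicited deposit pmf
totals = np.array([rng.lognormal(mean=2.0, sigma=1.0, size=n).sum()
                   for n in n_deposits])                       # total tonnage per trial

print("mean total tonnage:", round(float(totals.mean()), 1))
print("P10/P50/P90:", np.percentile(totals, [10, 50, 90]).round(1))
```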
Modeling Reservoir-River Networks in Support of Optimizing Seasonal-Scale Reservoir Operations
NASA Astrophysics Data System (ADS)
Villa, D. L.; Lowry, T. S.; Bier, A.; Barco, J.; Sun, A.
2011-12-01
HydroSCOPE (Hydropower Seasonal Concurrent Optimization of Power and the Environment) is a seasonal time-scale tool for scenario analysis and optimization of reservoir-river networks. Developed in MATLAB, HydroSCOPE is an object-oriented model that simulates basin-scale dynamics with an objective of optimizing reservoir operations to maximize revenue from power generation, reliability in the water supply, environmental performance, and flood control. HydroSCOPE is part of a larger toolset that is being developed through a Department of Energy multi-laboratory project. This project's goal is to provide conventional hydropower decision makers with better information to execute their day-ahead and seasonal operations and planning activities by integrating water balance and operational dynamics across a wide range of spatial and temporal scales. This presentation details the modeling approach and functionality of HydroSCOPE. HydroSCOPE consists of a river-reservoir network model and an optimization routine. The river-reservoir network model simulates the heat and water balance of river-reservoir networks for time-scales up to one year. The optimization routine software, DAKOTA (Design Analysis Kit for Optimization and Terascale Applications - dakota.sandia.gov), is seamlessly linked to the network model and is used to optimize daily volumetric releases from the reservoirs to best meet a set of user-defined constraints, such as maximizing revenue while minimizing environmental violations. The network model uses 1-D approximations for both the reservoirs and river reaches and is able to account for surface and sediment heat exchange as well as ice dynamics for both models. The reservoir model also accounts for inflow, density, and withdrawal zone mixing, and diffusive heat exchange. Routing for the river reaches is accomplished using a modified Muskingum-Cunge approach that automatically calculates the internal timestep and sub-reach lengths to match the conditions of each timestep and minimize computational overhead. Power generation for each reservoir is estimated using a 2-dimensional regression that accounts for both the available head and turbine efficiency. The object-oriented architecture makes run configuration easy to update. The dynamic model inputs include inflow and meteorological forecasts while static inputs include bathymetry data, reservoir and power generation characteristics, and topological descriptors. Ensemble forecasts of hydrological and meteorological conditions are supplied in real-time by Pacific Northwest National Laboratory and are used as a proxy for uncertainty, which is carried through the simulation and optimization process to produce output that describes the probability that different operational scenarios will be optimal. The full toolset, which includes HydroSCOPE, is currently being tested on the Feather River system in Northern California and the Upper Colorado Storage Project.
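The routing step can be illustrated with the fixed-parameter Muskingum scheme below; the Muskingum-Cunge variant described above additionally recomputes the routing parameters from channel properties each timestep. K, X, dt and the hydrograph are illustrative, not HydroSCOPE values.

```python
def muskingum_route(inflow, K=2.0, X=0.2, dt=1.0):
    """Route an inflow hydrograph (m^3/s) through one reach:
    O2 = c0*I2 + c1*I1 + c2*O1 with the standard Muskingum coefficients."""
    denom = 2 * K * (1 - X) + dt
    c0 = (dt - 2 * K * X) / denom
    c1 = (dt + 2 * K * X) / denom
    c2 = (2 * K * (1 - X) - dt) / denom   # c0 + c1 + c2 == 1
    out = [inflow[0]]
    for i_prev, i_now in zip(inflow, inflow[1:]):
        out.append(c0 * i_now + c1 * i_prev + c2 * out[-1])
    return out

hydrograph = [10, 30, 80, 60, 40, 25, 15, 10]
print([round(q, 1) for q in muskingum_route(hydrograph)])   # attenuated, lagged peak
```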
1991-03-01
[Garbled source-code fragment; recoverable content: C routines for regression statistics, including yhatf (y-hat statistics), ssrf (uncorrected SSR), sstof (uncorrected SSTO), and matmulmm (matrix multiplication), with variables for degrees of freedom and the sums of squares SSR, SSTO and SSE.]
20 CFR 10.216 - How is the pay rate for COP calculated?
Code of Federal Regulations, 2014 CFR
2014-04-01
§ 10.216 How is the pay rate for COP calculated? (Continuation of Pay; Calculation of COP.) The employer shall calculate COP using the period of time and the weekly pay rate. (a) The pay rate for COP...
20 CFR 10.216 - How is the pay rate for COP calculated?
Code of Federal Regulations, 2013 CFR
2013-04-01
§ 10.216 How is the pay rate for COP calculated? (Continuation of Pay; Calculation of COP.) The employer shall calculate COP using the period of time and the weekly pay rate. (a) The pay rate for COP...
Making sense of cancer risk calculators on the web.
Levy, Andrea Gurmankin; Sonnad, Seema S; Kurichi, Jibby E; Sherman, Melani; Armstrong, Katrina
2008-03-01
Cancer risk calculators on the internet have the potential to provide users with valuable information about their individual cancer risk. However, the lack of oversight of these sites raises concerns about low quality and inconsistent information. These concerns led us to evaluate internet cancer risk calculators. After a systematic search to find all cancer risk calculators on the internet, we reviewed the content of each site for information that users should seek to evaluate the quality of a website. We then examined the consistency of the breast cancer risk calculators by having 27 women complete 10 of the breast cancer risk calculators for themselves. We also completed the breast cancer risk calculators for a hypothetical high- and low-risk woman, and compared the output to Surveillance Epidemiology and End Results estimates for the average same-age and same-race woman. Nineteen sites were found, 13 of which calculate breast cancer risk. Most sites do not provide the information users need to evaluate the legitimacy of a website. The breast cancer calculator sites vary in the risk factors they assess to calculate breast cancer risk, how they operationalize each risk factor and in the risk estimate they provide for the same individual. Internet cancer risk calculators have the potential to provide a public health benefit by educating individuals about their risks and potentially encouraging preventive health behaviors. However, our evaluation of internet calculators revealed several problems that call into question the accuracy of the information that they provide. This may lead the users of these sites to make inappropriate medical decisions on the basis of misinformation.
Propellant Mass Fraction Calculation Methodology for Launch Vehicles
NASA Technical Reports Server (NTRS)
Holt, James B.; Monk, Timothy S.
2009-01-01
Propellant Mass Fraction (pmf) calculation methods vary throughout the aerospace industry. While typically used as a means of comparison between competing launch vehicle designs, the actual pmf calculation method varies slightly from one entity to another. It is the purpose of this paper to present various methods used to calculate the pmf of a generic launch vehicle. This includes fundamental methods of pmf calculation which consider only the loaded propellant and the inert mass of the vehicle, more involved methods which consider the residuals and any other unusable propellant remaining in the vehicle, and other calculations which exclude large mass quantities such as the installed engine mass. Finally, a historic comparison is made between launch vehicles on the basis of the differing calculation methodologies.
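Two of the pmf variants described above, in a hedged sketch: the fundamental definition using only loaded propellant and inert mass, versus one that treats residual/unusable propellant as inert. The masses (kg) are placeholders for a generic stage, not data from the paper.

```python
def pmf_fundamental(propellant_kg, inert_kg):
    """pmf = m_propellant / (m_propellant + m_inert)."""
    return propellant_kg / (propellant_kg + inert_kg)

def pmf_usable(loaded_propellant_kg, residuals_kg, inert_kg):
    """Residuals cannot contribute to delta-v, so count them as unusable."""
    usable = loaded_propellant_kg - residuals_kg
    return usable / (loaded_propellant_kg + inert_kg)

print(round(pmf_fundamental(100_000, 12_000), 4))      # 0.8929
print(round(pmf_usable(100_000, 1_500, 12_000), 4))    # 0.8795, residuals counted out
```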
LUST ON-LINE CALCULATOR INTRODUCTION
EPA has developed a suite of on-line calculators to assist in performing site assessment and modeling calculations for leaking underground storage tank sites (http://www.epa.gov/athens/onsite). The calculators are divided into four types: parameter estimation, models, scientific...
40 CFR 1065.850 - Calculations.
Code of Federal Regulations, 2012 CFR
2012-07-01
§ 1065.850 Calculations. Protection of Environment, ENVIRONMENTAL PROTECTION AGENCY (CONTINUED), AIR POLLUTION CONTROLS, ENGINE-TESTING PROCEDURES, Testing With Oxygenated Fuels. Use the calculations...
40 CFR 1065.850 - Calculations.
Code of Federal Regulations, 2014 CFR
2014-07-01
§ 1065.850 Calculations. Protection of Environment, ENVIRONMENTAL PROTECTION AGENCY (CONTINUED), AIR POLLUTION CONTROLS, ENGINE-TESTING PROCEDURES, Testing With Oxygenated Fuels. Use the calculations...
40 CFR 1065.850 - Calculations.
Code of Federal Regulations, 2011 CFR
2011-07-01
§ 1065.850 Calculations. Protection of Environment, ENVIRONMENTAL PROTECTION AGENCY (CONTINUED), AIR POLLUTION CONTROLS, ENGINE-TESTING PROCEDURES, Testing With Oxygenated Fuels. Use the calculations...
40 CFR 1065.850 - Calculations.
Code of Federal Regulations, 2013 CFR
2013-07-01
§ 1065.850 Calculations. Protection of Environment, ENVIRONMENTAL PROTECTION AGENCY (CONTINUED), AIR POLLUTION CONTROLS, ENGINE-TESTING PROCEDURES, Testing With Oxygenated Fuels. Use the calculations...
40 CFR 1065.850 - Calculations.
Code of Federal Regulations, 2010 CFR
2010-07-01
§ 1065.850 Calculations. Protection of Environment, ENVIRONMENTAL PROTECTION AGENCY (CONTINUED), AIR POLLUTION CONTROLS, ENGINE-TESTING PROCEDURES, Testing With Oxygenated Fuels. Use the calculations...
Kim, Seungjin; Kang, Seongmin; Lee, Jeongwoo; Lee, Seehyung; Kim, Ki-Hyun; Jeon, Eui-Chan
2016-10-01
In this study, in order to understand the accurate calculation of greenhouse gas emissions from urban solid waste incineration facilities, which are major waste incineration facilities, and the problems likely to arise in doing so, emissions were calculated using three types of calculation method. For the comparison of calculation methods, the waste characteristics ratio, the dry substance content by waste characteristics, the carbon content in dry substance, and the ¹²C content were analyzed; in particular, the CO2 concentration in incineration gases and the ¹²C content were analyzed together. Three types of calculation method were constructed from the assay values, and the emissions of the urban solid waste incineration facilities were calculated with each method and compared. As a result of the comparison, Calculation Method A, which used the default values presented in the IPCC guidelines, gave greenhouse gas emissions for urban solid waste incineration facilities A and B of 244.43 ton CO2/day and 322.09 ton CO2/day, respectively, differing considerably from Calculation Methods B and C, which used the assay values of this study. This is attributed to the IPCC default values being world averages that cannot reflect the characteristics of individual urban solid waste incineration facilities. Calculation Method B gave 163.31 ton CO2/day and 230.34 ton CO2/day for facilities A and B, respectively, and Calculation Method C gave 151.79 ton CO2/day and 218.99 ton CO2/day. This study compares greenhouse gas emissions calculated using the ¹²C content default value provided by the IPCC (Intergovernmental Panel on Climate Change) with emissions calculated using the ¹²C content and waste assay values that reflect the characteristics of the target facilities. The concentration and ¹²C content were also determined by directly collecting incineration gases from the target urban solid waste incineration facilities, and the resulting greenhouse gas emissions were compared with those obtained using the previously calculated solid waste assay values.
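A hedged sketch of the IPCC-style emission calculation that a default-value method like Method A follows: CO2 = MSW × Σⱼ(WFⱼ × dmⱼ × CFⱼ × FCFⱼ) × OF × 44/12, where WF is the waste-composition fraction, dm the dry-matter content, CF the carbon fraction of dry matter, FCF the fossil-carbon fraction, and OF the oxidation factor. All values below are illustrative, not the study's assays.

```python
fractions = {  # component: (WF, dm, CF, FCF)
    "paper":    (0.30, 0.90, 0.46, 0.01),
    "plastics": (0.15, 1.00, 0.75, 1.00),
    "food":     (0.40, 0.40, 0.38, 0.00),
    "other":    (0.15, 0.90, 0.03, 1.00),
}
msw_t_per_day, OF = 300.0, 1.0

fossil_c = sum(wf * dm * cf * fcf for wf, dm, cf, fcf in fractions.values())
co2_t_per_day = msw_t_per_day * fossil_c * OF * 44.0 / 12.0   # C -> CO2 mass
print(f"{co2_t_per_day:.1f} t CO2/day")
```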
Evaluation of students' knowledge about paediatric dosage calculations.
Özyazıcıoğlu, Nurcan; Aydın, Ayla İrem; Sürenler, Semra; Çinar, Hava Gökdere; Yılmaz, Dilek; Arkan, Burcu; Tunç, Gülseren Çıtak
2018-01-01
Medication errors are common and may jeopardize patient safety. As paediatric dosages are calculated based on the child's age and weight, the risk of error in dosage calculations is increased. In paediatric patients, an overdose prescribed without regard to the child's weight, age and clinical picture may lead to excessive toxicity and mortality, while low doses may delay treatment. This study was carried out to evaluate the knowledge of nursing students about paediatric dosage calculations. This retrospective study covers a population consisting of all third-year students of the bachelor's degree programme in May 2015 (148 students). Exam papers containing three open-ended questions on dosage calculation problems, addressing five variables, were distributed to the students, and their responses were evaluated by the researchers. In the evaluation of the data, figures and percentage distributions were calculated and Spearman correlation analysis was applied. The exam question on dosage calculation based on the child's age, which is the most common method in paediatrics and ensures correct dosages and drug dilution, was answered correctly by 87.1% of the students, while 9.5% answered it incorrectly and 3.4% left it blank. 69.6% of the students were successful in finding the safe dose range, and 79.1% in finding the right ratio/proportion. 65.5% of the answers with regard to the ml/dzy calculation were correct. Moreover, the students' skills in the four basic arithmetic operations were assessed, and 68.2% of the students were determined to have found the correct answer. When the relations among the questions on medication were examined, a significant correlation was determined between them. It is seen that in dosage calculations the students failed mostly in calculating ml/dzy (decimals). Because dosage calculations are based on decimal values, a calculation may be tenfold in error when the decimal point is placed wrongly. Moreover, the students were also seen to lack the mathematical knowledge required for the four basic operations and for calculating the safe dose range. The relations among the medication questions suggest that a student who wrongly calculates one dosage may also make other errors. Additional courses, exercises or the use of different teaching techniques may be suggested to eliminate the deficiencies in students' basic mathematical knowledge, problem-solving skills and correct dosage calculation. Copyright © 2017 Elsevier Ltd. All rights reserved.
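A worked sketch of the weight-based calculation and safe-dose-range check that such exam questions test. The drug concentration and dose limits are hypothetical, chosen only to illustrate the tenfold decimal-point hazard noted above.

```python
def weight_based_dose(weight_kg, mg_per_kg, safe_min_mg, safe_max_mg):
    """dose = mg/kg x weight, validated against a safe dose range."""
    dose = weight_kg * mg_per_kg
    if not safe_min_mg <= dose <= safe_max_mg:
        raise ValueError(f"{dose} mg is outside the safe range "
                         f"{safe_min_mg}-{safe_max_mg} mg")
    return dose

dose_mg = weight_based_dose(weight_kg=12.0, mg_per_kg=15.0,
                            safe_min_mg=120.0, safe_max_mg=240.0)
volume_ml = dose_mg / 50.0   # hypothetical 50 mg/mL syrup -> 3.6 mL;
                             # a misplaced decimal (0.36 or 36 mL) is a tenfold error
print(dose_mg, volume_ml)
```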
Berg, Derek H
2008-04-01
The cognitive underpinnings of arithmetic calculation in children are noted to involve working memory; however, cognitive processes related to arithmetic calculation and working memory suggest that this relationship is more complex than stated previously. The purpose of this investigation was to examine the relative contributions of processing speed, short-term memory, working memory, and reading to arithmetic calculation in children. Results suggested four important findings. First, processing speed emerged as a significant contributor of arithmetic calculation only in relation to age-related differences in the general sample. Second, processing speed and short-term memory did not eliminate the contribution of working memory to arithmetic calculation. Third, individual working memory components--verbal working memory and visual-spatial working memory--each contributed unique variance to arithmetic calculation in the presence of all other variables. Fourth, a full model indicated that chronological age remained a significant contributor to arithmetic calculation in the presence of significant contributions from all other variables. Results are discussed in terms of directions for future research on working memory in arithmetic calculation.
Henneberg, M.F.; Strause, J.L.
2002-01-01
This report presents the instructions required to use the Scour Critical Bridge Indicator (SCBI) Code and Scour Assessment Rating (SAR) calculator developed by the Pennsylvania Department of Transportation (PennDOT) and the U.S. Geological Survey to identify Pennsylvania bridges with excessive scour conditions or a high potential for scour. Use of the calculator will enable PennDOT bridge personnel to quickly recalculate these scour indices if site conditions change, new bridges are constructed, or new information needs to be included. Both indices are calculated for a bridge simultaneously because they must be used together to be interpreted accurately. The SCBI Code and SAR calculator program is run by a World Wide Web browser from a remote computer. The user can (1) add additional scenarios for bridges in the SCBI Code and SAR calculator database or (2) enter data for new bridges, and then run the program to calculate the SCBI Code and the SAR. The calculator program allows the user to print the results and to save multiple scenarios for a bridge.
Air and smear sample calculational tool for Fluor Hanford Radiological control
DOE Office of Scientific and Technical Information (OSTI.GOV)
BAUMANN, B.L.
2003-07-11
A spreadsheet calculation tool was developed to automate the calculations performed for determining the concentration of airborne radioactivity and for smear counting as outlined in HNF-13536, Section 5.2.7, ''Analyzing Air and Smear Samples''. This document reports on the design and testing of the calculation tool. Radiological Control Technicians (RCTs) will save time and reduce handwriting and calculation errors by using an electronic form for documenting and calculating workplace air samples. Currently, an RCT performs an air sample and collects the filter, or performs a smear for surface contamination, then surveys the filter for gross alpha and beta/gamma radioactivity and, from the gross counts, determines the activity on the filter either by hand calculation or with a calculator. The electronic form allows the RCT, with a few keystrokes, to document the individual's name, payroll number, gross counts and instrument identifiers, and produces an error-free record. This productivity gain is realized by the enhanced ability to perform the mathematical calculations electronically (reducing errors) while, at the same time, documenting the air sample.
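A sketch of the airborne-concentration arithmetic such a form automates: net count rate to activity to concentration over the sampled air volume. The counting efficiency, flow rate and counts below are placeholders, not Hanford procedure values.

```python
DPM_PER_UCI = 2.22e6   # disintegrations per minute per microcurie

def air_concentration_uCi_per_mL(gross_cpm, bkg_cpm, count_eff,
                                 flow_L_per_min, sample_min):
    net_dpm = (gross_cpm - bkg_cpm) / count_eff       # counts -> disintegrations
    activity_uCi = net_dpm / DPM_PER_UCI
    volume_mL = flow_L_per_min * sample_min * 1000.0  # L -> mL of air sampled
    return activity_uCi / volume_mL

c = air_concentration_uCi_per_mL(gross_cpm=150.0, bkg_cpm=30.0,
                                 count_eff=0.25, flow_L_per_min=56.6,
                                 sample_min=480.0)
print(f"{c:.3e} uCi/mL")
```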
5 CFR 1645.3 - Calculation of total net earnings for each TSP Fund.
Code of Federal Regulations, 2010 CFR
2010-01-01
... BOARD CALCULATION OF SHARE PRICES § 1645.3 Calculation of total net earnings for each TSP Fund. (a) Each... be used to calculate the share price for that business day. [70 FR 32214, June 1, 2005] ...
Analytical scheme calculations of angular momentum coupling and recoupling coefficients
NASA Astrophysics Data System (ADS)
Deveikis, A.; Kuznecovas, A.
2007-03-01
We investigate the capabilities of the Scheme programming language for analytic calculation of the Clebsch-Gordan coefficients, Wigner 6j and 9j symbols, and general recoupling coefficients that are used in the quantum theory of angular momentum. The considered coefficients are calculated by direct evaluation of the sum formulas. The calculation results for large values of quantum angular momenta were compared with analogous calculations in the FORTRAN and Java programming languages.
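For illustration, the same direct sum-formula evaluation can be written numerically. This is a minimal sketch of the Racah formula for Clebsch-Gordan coefficients, restricted to integer angular momenta for brevity (the paper's Scheme code works analytically and handles the general case).

```python
from math import factorial as f, sqrt

def clebsch_gordan(j1, m1, j2, m2, j, m):
    """<j1 m1 j2 m2 | j m> by direct evaluation of the Racah sum formula
    (integer angular momenta only)."""
    if m1 + m2 != m or not abs(j1 - j2) <= j <= j1 + j2:
        return 0.0
    pre = (2 * j + 1) * f(j1 + j2 - j) * f(j1 - j2 + j) * f(-j1 + j2 + j) \
        / f(j1 + j2 + j + 1)
    pre *= f(j1 + m1) * f(j1 - m1) * f(j2 + m2) * f(j2 - m2) * f(j + m) * f(j - m)
    total = 0.0
    for k in range(max(0, j2 - j - m1, j1 + m2 - j),
                   min(j1 + j2 - j, j1 - m1, j2 + m2) + 1):
        total += (-1) ** k / (f(k) * f(j1 + j2 - j - k) * f(j1 - m1 - k)
                              * f(j2 + m2 - k) * f(j - j2 + m1 + k)
                              * f(j - j1 - m2 + k))
    return sqrt(pre) * total

print(clebsch_gordan(1, 0, 1, 0, 2, 0))   # sqrt(2/3) ~ 0.8165
```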
Calculation of induced voltages on overhead lines caused by inclined lightning strokes
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sakakibara, A.
1989-01-01
Equations to calculate the inducing scalar and vector potentials produced by inclined return strokes are shown. Equations are also shown for calculating the induced voltages on overhead lines where horizontal components of inducing vector potential exist. The adequacy of the calculation method is demonstrated by field experiments. Using these equations, induced voltages on overhead lines are calculated for a variety of directions of return strokes.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Abou El-Maaref, A., E-mail: aahmh@hotmail.com; Allam, S.H.; El-Sherbini, Th.M.
The energy levels, oscillator strengths, line strengths, and transition probabilities for transitions among the terms belonging to the 3s²3p², 3s3p³, 3s²3p3d, 3s²3p4s, 3s²3p4p, and 3s²3p4d configurations of silicon-like ions (Zn XVII, Ga XVIII, Ge XIX, and As XX) have been calculated using the configuration-interaction code CIV3. The calculations have been carried out in the intermediate coupling scheme using the Breit–Pauli Hamiltonian. The present calculations have been compared with the available experimental data and other theoretical calculations. Most of our calculated energy levels and oscillator strengths (in length form) show good agreement with both experimental and theoretical data. Lifetimes of the excited levels have also been calculated. Highlights: • We have calculated the fine-structure energy levels of Si-like Zn, Ga, Ge, and As. • The calculations are performed using the configuration interaction method (CIV3). • We have calculated the oscillator strengths, line strengths, and transition rates. • The wavelengths of the transitions are listed in this article. • We also have made comparisons between our data and other calculations.
The number processing and calculation system: evidence from cognitive neuropsychology.
Salguero-Alcañiz, M P; Alameda-Bailén, J R
2015-04-01
Cognitive neuropsychology focuses on the concepts of dissociation and double dissociation. The performance of number processing and calculation tasks by patients with acquired brain injury can be used to characterise the way in which the healthy cognitive system manipulates number symbols and quantities. The objective of this study is to determine the components of the number processing and calculation system. Participants consisted of 6 patients with acquired brain injuries in different cerebral localisations. We used the Batería de evaluación del procesamiento numérico y el cálculo, a battery assessing number processing and calculation. Data were analysed using the difference-in-proportions test. Quantitative numerical knowledge is independent from number transcoding, qualitative numerical knowledge, and calculation. Transcoding is independent from qualitative numerical knowledge and calculation. Quantitative numerical knowledge and calculation are also independent functions. The number processing and calculation system comprises at least 4 components that operate independently: quantitative numerical knowledge, number transcoding, qualitative numerical knowledge, and calculation. Therefore, each one may be damaged selectively without affecting the functioning of the others. According to the main models of number processing and calculation, each component has different characteristics and cerebral localisations. Copyright © 2013 Sociedad Española de Neurología. Published by Elsevier España. All rights reserved.
National Stormwater Calculator: Low Impact Development ...
The National Stormwater Calculator (NSC) makes it easy to estimate runoff reduction when planning a new development or redevelopment site with low impact development (LID) stormwater controls. The Calculator is currently deployed as a Windows desktop application, organized as a wizard-style application that walks the user through the steps necessary to perform runoff calculations on a single urban sub-catchment of 10 acres or less in size. Using an interactive map, the user can select the sub-catchment location, and the Calculator automatically acquires hydrologic data for the site. A new LID cost estimation module has been developed for the Calculator. This project involved programming cost curves into the existing Calculator desktop application. The integration of cost components of LID controls into the Calculator increases functionality and will promote greater use of the Calculator as a stormwater management and evaluation tool. The addition of the cost estimation module allows planners and managers to evaluate LID controls by comparing project cost estimates and predicted LID control performance. Cost estimation is based on user-identified size (or auto-sizing based on achieving volume control or treatment of a defined design storm), configuration of the LID control infrastructure, and other key project and site-specific variables, including whether the project is being applied as part of new development or redevelopment.
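For a sense of the kind of runoff arithmetic involved, here is a generic SCS curve-number estimate of storm runoff before and after LID controls. This is an illustrative sketch only; the Calculator itself runs on the SWMM engine rather than this formula, and the curve numbers below are assumptions.

```python
# Generic SCS curve-number runoff estimate (illustrative; the National
# Stormwater Calculator itself uses the SWMM engine, not this formula).

def scs_runoff_inches(rainfall_in, curve_number):
    """Runoff depth (inches) from a storm via the SCS curve-number method."""
    s = 1000.0 / curve_number - 10.0      # potential maximum retention (in)
    ia = 0.2 * s                          # initial abstraction
    if rainfall_in <= ia:
        return 0.0
    return (rainfall_in - ia) ** 2 / (rainfall_in + 0.8 * s)

# A 2-inch storm on a mostly impervious urban lot (CN ~ 90) vs. the same
# lot after LID controls lower the effective curve number (CN ~ 75):
print(scs_runoff_inches(2.0, 90))   # ~1.1 in of runoff
print(scs_runoff_inches(2.0, 75))   # ~0.38 in of runoff
```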
Inadvertent Intruder Calculations for F Tank Farm
DOE Office of Scientific and Technical Information (OSTI.GOV)
Koffman, L
2005-09-12
Savannah River National Laboratory (SRNL) has been providing radiological performance assessment analysis for Savannah River Site (SRS) solid waste disposal facilities (McDowell-Boyer 2000). The performance assessment considers numerous potential exposure pathways that could occur in the future. One set of exposure scenarios, known as inadvertent intruder analysis, considers the impact on hypothetical individuals who are assumed to inadvertently intrude onto the waste disposal site. An Automated Intruder Analysis application was developed by SRNL (Koffman 2004) that simplifies the inadvertent intruder analysis into a routine, automated calculation. Based on SRNL's experience, personnel from Planning Integration & Technology of the Closure Business Unit asked SRNL to assist with inadvertent intruder calculations for F Tank Farm to support the development of the Tank Closure Waste Determination Document. Meetings were held to discuss the scenarios to be calculated and the assumptions to be used in the calculations. As a result of the meetings, SRNL was asked to perform four scenario calculations. Two of the scenarios are the same as those calculated by the Automated Intruder Analysis application, and these can be calculated directly by providing appropriate inputs. The other two scenarios involve use of groundwater by the intruder, and the Automated Intruder Analysis application was adapted to perform these calculations. The four calculations to be performed are: (1) a post-drilling scenario in which the drilling penetrates a transfer line; (2) a calculation of internal exposure due to drinking water from a well located near a waste tank; (3) a post-drilling calculation in which waste is introduced by irrigation of the garden with water from a well located near a waste tank; and (4) a resident scenario where a house is built above transfer lines. Note that calculations 1 and 4 use sources from the waste inventory in the transfer line (given in Table 1), whereas calculations 2 and 3 use sources from groundwater beneath the waste tank (given in Appendix B). It is important to recognize that there are two different sources in the calculations. In these calculations, assumptions are made for parameter values. Three key parameters are the size of the garden, the amount of vegetables eaten, and the distance of the well from the waste tank. For these three parameters, different values are considered in the calculations to determine the impact of changes in these parameters. Another key parameter is the length of time of institutional control, which determines when an inadvertent intruder could first be exposed. The standard length of institutional control is 100 years from the time of closure. In this analysis, waste inventory values are used from year 2005, but tanks will not be closed until year 2020. Thus, the effective length of institutional control used in the calculations is 115 years from year 2005, which is taken to be time zero for radiological decay calculations. All calculations are carried out for a period of 10,000 years.
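The role of the 115-year institutional-control period and the 10,000-year horizon can be illustrated with the basic decay relation A(t) = A0·exp(-ln 2·t/T½). The sketch below is generic; the nuclide and starting activity are placeholders, not the F Tank Farm inventory.

```python
# Radiological decay over the analysis timeline (generic sketch; the
# nuclide choice and inventory value are illustrative placeholders).
import math

def decay(activity_t0, half_life_years, years):
    """Activity remaining after `years`: A(t) = A0 * exp(-ln2 * t / T_half)."""
    return activity_t0 * math.exp(-math.log(2.0) * years / half_life_years)

# Time zero is 2005; institutional control ends 115 years later, and the
# calculation is carried out to 10,000 years.
a0 = 1.0  # Ci of a hypothetical nuclide in a transfer-line inventory
for t in (115, 1000, 10000):
    print(t, decay(a0, half_life_years=432.2, years=t))  # 432.2 y ~ Am-241
```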
47 CFR 1.1623 - Probability calculation.
Code of Federal Regulations, 2010 CFR
2010-10-01
... 47 Telecommunication 1 2010-10-01 2010-10-01 false Probability calculation. 1.1623 Section 1.1623 Telecommunication FEDERAL COMMUNICATIONS COMMISSION GENERAL PRACTICE AND PROCEDURE Random Selection Procedures for Mass Media Services General Procedures § 1.1623 Probability calculation. (a) All calculations shall be...
A Serum miR Signature Specific to Low-Risk Prostate Cancer
2017-09-01
There are several useful pre-treatment risk calculators that use clinical parameters (age, biopsy grade, PSA). These calculators accurately identify high-risk patients...aggressive disease.
7 CFR 1416.304 - Payment calculations.
Code of Federal Regulations, 2010 CFR
2010-01-01
... 7 Agriculture 10 2010-01-01 2010-01-01 false Payment calculations. 1416.304 Section 1416.304 Agriculture Regulations of the Department of Agriculture (Continued) COMMODITY CREDIT CORPORATION, DEPARTMENT... PROGRAMS Citrus Disaster Program § 1416.304 Payment calculations. (a) Payments will be calculated by...
Monte Carlo calculation of "skyshine" neutron dose from ALS (Advanced Light Source)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Moin-Vasiri, M.
1990-06-01
This report discusses the following topics on "skyshine" neutron dose from ALS: sources of radiation; ALS modeling for skyshine calculations; the MORSE Monte Carlo code; implementation of MORSE; results of skyshine calculations from the storage ring; and comparison of MORSE shielding calculations.
A cost analysis for the implementation of commonality in the family of commuter airplanes, revised
NASA Technical Reports Server (NTRS)
Creighton, Tom; Haddad, Rafael; Hendrich, Louis; Hensley, Doug; Morgan, Louise; Russell, Mark; Swift, Jerry
1987-01-01
The acquisition costs determined for the NASA family of commuter airplanes are presented. The costs of the baseline designs are presented along with the calculated savings due to commonality in the family. A sensitivity study is also presented to show the major drivers in the acquisition cost calculations. The baseline costs are calculated with the Nicolai method. A comparison is presented of the estimated costs for the commuter family with the actual prices of existing commuters. The cost calculations for the engines and counter-rotating propellers are reported. The effects of commonality on acquisition costs are calculated. The sensitivity of the cost to various costing parameters is shown. The calculations of the direct operating costs, with and without commonality, are presented.
Ray-tracing in three dimensions for radiation-dose calculations. Master's thesis
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kennedy, D.R.
1986-05-27
This thesis addresses several methods of calculating the radiation-dose distribution for use by technicians or clinicians in radiation-therapy treatment planning. It specifically covers the calculation of the effective pathlength of the radiation beam for use in beam models representing the dose distribution. A two-dimensional method by Bentley and Milan is compared to the method of strip trees developed by Duda and Hart, and a three-dimensional algorithm is then built to perform the calculations. The use of prisms conforms easily to the obtained CT scans and provides a means of performing only two-dimensional ray-tracing while carrying out three-dimensional dose calculations. This method is already being applied and used in actual calculations.
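The effective-pathlength idea reduces to integrating relative density along the beam. A minimal fixed-step sketch is shown below; it is illustrative only, and is not the Bentley-Milan or strip-tree method of the thesis (production codes typically use exact voxel intersections, e.g. Siddon's algorithm).

```python
# Minimal effective-pathlength (radiological depth) sketch: step a ray
# through a 2D density grid and accumulate density-weighted length.
import numpy as np

def effective_pathlength(density, start, end, step=0.1):
    """Integrate relative density along the segment start -> end (grid units)."""
    start, end = np.asarray(start, float), np.asarray(end, float)
    length = np.linalg.norm(end - start)
    n = max(int(length / step), 1)
    total = 0.0
    for i in range(n):
        p = start + (end - start) * (i + 0.5) / n   # midpoint of sub-step
        ix, iy = int(p[0]), int(p[1])
        total += density[iy, ix] * (length / n)
    return total

density = np.ones((50, 50))      # water-equivalent background
density[20:30, :] = 0.3          # a low-density (lung-like) slab
print(effective_pathlength(density, (5.0, 5.0), (45.0, 45.0)))
```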
Reporting of sample size calculations in analgesic clinical trials: ACTTION systematic review.
McKeown, Andrew; Gewandter, Jennifer S; McDermott, Michael P; Pawlowski, Joseph R; Poli, Joseph J; Rothstein, Daniel; Farrar, John T; Gilron, Ian; Katz, Nathaniel P; Lin, Allison H; Rappaport, Bob A; Rowbotham, Michael C; Turk, Dennis C; Dworkin, Robert H; Smith, Shannon M
2015-03-01
Sample size calculations determine the number of participants required to have sufficiently high power to detect a given treatment effect. In this review, we examined the reporting quality of sample size calculations in 172 publications of double-blind randomized controlled trials of noninvasive pharmacologic or interventional (ie, invasive) pain treatments published in European Journal of Pain, Journal of Pain, and Pain from January 2006 through June 2013. Sixty-five percent of publications reported a sample size calculation but only 38% provided all elements required to replicate the calculated sample size. In publications reporting at least 1 element, 54% provided a justification for the treatment effect used to calculate sample size, and 24% of studies with continuous outcome variables justified the variability estimate. Publications of clinical pain condition trials reported a sample size calculation more frequently than experimental pain model trials (77% vs 33%, P < .001) but did not differ in the frequency of reporting all required elements. No significant differences in reporting of any or all elements were detected between publications of trials with industry and nonindustry sponsorship. Twenty-eight percent included a discrepancy between the reported number of planned and randomized participants. This study suggests that sample size calculation reporting in analgesic trial publications is usually incomplete. Investigators should provide detailed accounts of sample size calculations in publications of clinical trials of pain treatments, which is necessary for reporting transparency and communication of pre-trial design decisions. In this systematic review of analgesic clinical trials, sample size calculations and the required elements (eg, treatment effect to be detected; power level) were incompletely reported. A lack of transparency regarding sample size calculations may raise questions about the appropriateness of the calculated sample size. Copyright © 2015 American Pain Society. All rights reserved.
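The elements the review looks for (treatment effect, variability estimate, alpha, power) are exactly the inputs of the standard calculation. A minimal sketch for a two-group comparison of a continuous outcome, with illustrative pain-scale numbers:

```python
# Standard two-sample sample-size calculation for a continuous outcome.
# The inputs are the elements the review checks for: treatment effect,
# variability, significance level, and power.
from scipy.stats import norm

def n_per_group(delta, sd, alpha=0.05, power=0.80):
    """Participants per group to detect mean difference `delta` (two-sided)."""
    z_a = norm.ppf(1 - alpha / 2)
    z_b = norm.ppf(power)
    return 2 * ((z_a + z_b) * sd / delta) ** 2

# E.g., detecting a 1-point difference on a 0-10 pain scale with SD 2.5:
print(n_per_group(delta=1.0, sd=2.5))   # ~98 per group before attrition
```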
Stumpp, M; Wren, J; Melzner, F; Thorndyke, M C; Dupont, S T
2011-11-01
Anthropogenic CO2 emissions are acidifying the world's oceans. A growing body of evidence shows that ocean acidification impacts growth and developmental rates of marine invertebrates. Here we test the impact of elevated seawater pCO2 (129 Pa, 1271 μatm) on early development and on larval metabolic and feeding rates in a marine model organism, the sea urchin Strongylocentrotus purpuratus. Growth and development were assessed by measuring total body length, body rod length, postoral rod length, and posterolateral rod length. Comparing these parameters between treatments suggests that larvae suffer from a developmental delay (of ca. 8%) rather than from the previously postulated reductions in size at comparable developmental stages. Further, we found maximum increases in respiration rates of +100% under elevated pCO2, while body-length-corrected feeding rates did not differ between larvae from the two treatments. Calculating scope for growth illustrates that larvae raised under high pCO2 spent an average of 39 to 45% of the available energy on somatic growth, while control larvae could allocate between 78 and 80% of the available energy to growth processes. Our results highlight the importance of defining a standard frame of reference when comparing a given parameter between treatments, as observed differences can easily be due to comparison of different larval ages with their specific sets of biological characters. Copyright © 2011 Elsevier Inc. All rights reserved.
Potential gains from hospital mergers in Denmark.
Kristensen, Troels; Bogetoft, Peter; Pedersen, Kjeld Moeller
2010-12-01
The Danish hospital sector faces a major rebuilding program to centralize activity in fewer and larger hospitals. We aim to conduct an efficiency analysis of hospitals and to estimate the potential cost savings from the planned hospital mergers. We use Data Envelopment Analysis (DEA) to estimate a cost frontier. Based on this analysis, we calculate an efficiency score for each hospital and estimate the potential gains from the proposed mergers by comparing individual efficiencies with the efficiency of the combined hospitals. Furthermore, we apply a decomposition algorithm to split merger gains into technical efficiency, size (scale) and harmony (mix) gains. The motivation for this decomposition is that some of the apparent merger gains may actually be available with less than a full-scale merger, e.g., by sharing best practices and reallocating certain resources and tasks. Our results suggest that many hospitals are technically inefficient, and the expected "best practice" hospitals are quite efficient. Also, some mergers do not seem to lower costs. This finding indicates that some merged hospitals become too large and therefore experience diseconomies of scale. Other mergers lead to considerable cost reductions; we find potential gains resulting from learning better practices and the exploitation of economies of scope. To ensure robustness, we conduct a sensitivity analysis using two alternative returns-to-scale assumptions and two alternative estimation approaches. We consistently find potential gains from improving the technical efficiency and the exploitation of economies of scope from mergers.
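For readers unfamiliar with DEA, the sketch below solves the basic input-oriented CCR efficiency problem with a linear program. It is a simplified illustration with made-up data, not the authors' cost-frontier model or their merger-decomposition algorithm.

```python
# Minimal input-oriented CCR DEA sketch (illustrative data, not the
# authors' model): efficiency of unit k given inputs X and outputs Y.
import numpy as np
from scipy.optimize import linprog

def ccr_efficiency(X, Y, k):
    """X: (m, n) inputs, Y: (s, n) outputs for n units; returns theta for unit k."""
    m, n = X.shape
    s = Y.shape[0]
    # Decision variables: [theta, lambda_1 .. lambda_n]; minimize theta.
    c = np.r_[1.0, np.zeros(n)]
    # Input constraints: sum_j lambda_j x_ij - theta x_ik <= 0
    A_in = np.hstack([-X[:, [k]], X])
    # Output constraints: -sum_j lambda_j y_rj <= -y_rk
    A_out = np.hstack([np.zeros((s, 1)), -Y])
    res = linprog(c,
                  A_ub=np.vstack([A_in, A_out]),
                  b_ub=np.r_[np.zeros(m), -Y[:, k]],
                  bounds=[(0, None)] * (n + 1))
    return res.fun

# Three hospitals, one input (cost) and two outputs (treatments, visits):
X = np.array([[100.0, 120.0, 150.0]])
Y = np.array([[80.0, 90.0, 85.0],
              [300.0, 280.0, 310.0]])
for k in range(3):
    print(k, round(ccr_efficiency(X, Y, k), 3))
```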
Goudra, B; Singh, P M; Borle, A; Gouda, G
2016-01-01
Use of electronic medical record systems has increased in recent years. Epic is one such system gaining popularity in the USA; Epic is a private company whose electronic documentation system has been adopted in our hospital. In spite of many presumed advantages, its use has not been critically analyzed. Some of the perceived advantages are increased efficiency and protection against litigation as a result of accurate documentation. In this study, retrospective data of 305 patients who underwent endoscopic retrograde cholangiopancreatography with electronic charting (the "Epic group") were compared with 288 patients who underwent the same procedure with documentation saved on a paper chart (the "paper group"). The times of various events involved in the procedure, such as anesthesia start, endoscope insertion, endoscope removal, and transfer to the postanesthesia care unit, were routinely documented, and the various durations were calculated from these data. Both "anesthesia start to scope insertion" and "scope removal to transfer" times were significantly shorter in the Epic group than in the paper group. Use of the Epic system led to a saving of 4 min of procedure time per patient. However, the mean oxygen saturation was significantly lower in the Epic group. In spite of the perceived advantages of the Epic documentation system, significant hurdles remain with its use. Although the system allows seamless flow of patients, failure to remove all artifacts can lead to errors and become a source of potential litigation hazard.
NASA Astrophysics Data System (ADS)
Fedin, M. A.; Kuvaldin, A. B.; Kuleshov, A. O.; Zhmurko, I. Y.; Akhmetyanov, S. V.
2018-01-01
Calculation methods for induction crucible furnaces with a conductive crucible have been reviewed and compared. A method for calculating the electrical and energy characteristics of furnaces with a conductive crucible has been developed, and an example calculation is presented. The calculation results are compared with experimental data. Dependences of the electrical and power characteristics of the furnace on frequency, inductor current, geometric dimensions, and temperature have been obtained.
Alternative Fuels Data Center: Tools
Tools include the Vehicle Cost Calculator (compare cost of ownership and emissions for most vehicle models), a calculator of ROI and payback period for natural gas vehicles and infrastructure, the AFLEET Tool, and the GREET Fleet Footprint Calculator (calculate your fleet's petroleum use).
10 CFR 434.510 - Standard calculation procedure.
Code of Federal Regulations, 2013 CFR
2013-01-01
... 10 Energy 3 2013-01-01 2013-01-01 false Standard calculation procedure. 434.510 Section 434.510... HIGH RISE RESIDENTIAL BUILDINGS Building Energy Cost Compliance Alternative § 434.510 Standard calculation procedure. 510.1The Standard Calculation Procedure consists of methods and assumptions for...
Electric Calculators; Business Education: 7718.06.
ERIC Educational Resources Information Center
McShane, Jane
The course was developed to instruct students in the use of mechanical and/or electronic printing calculators, electronic display calculators, and rotary calculators to solve special business problems with occupational proficiency. Included in the document are a list of performance objectives, a course content outline, suggested learning…
ORIGEN2 calculations supporting TRIGA irradiated fuel data package
DOE Office of Scientific and Technical Information (OSTI.GOV)
Schmittroth, F.A.
ORIGEN2 calculations were performed for TRIGA spent fuel elements from the Hanford Neutron Radiography Facility. The calculations support storage and disposal, and the results include mass, activity, and decay heat. Comparisons with underwater dose-rate measurements were used to confirm and adjust the calculations.
Calculation of evapotranspiration: Recursive and explicit methods
USDA-ARS?s Scientific Manuscript database
Crop yield is proportional to crop evapotranspiration (ETc) and it is important to calculate ETc correctly. Methods to calculate ETc have combined empirical and theoretical approaches. The combination method was used to calculate potential ETp. It is a combination method because it combined the ener...
2011-05-10
A comprehensive review of dosage calculation for nursing staff, this text covers accurate calculation skills and the interpretation of units of measurement in the context of safe medication-administration practice.
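The core dose-volume arithmetic such texts drill can be stated in a few lines; this generic sketch is not drawn from the reviewed book.

```python
# Basic dose-volume arithmetic for liquid medications (generic example).

def volume_to_draw(prescribed_mg, stock_mg, stock_ml):
    """Volume (mL) to administer: desired dose / stock strength x stock volume."""
    return prescribed_mg / stock_mg * stock_ml

# Prescription: 150 mg; stock on hand: 250 mg in 5 mL.
print(volume_to_draw(150, 250, 5))   # 3.0 mL
```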
Simplified Calculation Model and Experimental Study of Latticed Concrete-Gypsum Composite Panels
Jiang, Nan; Ma, Shaochun
2015-01-01
In order to address the performance complexity of the various constituent materials of (dense-column) latticed concrete-gypsum composite panels and the difficulty in the determination of the various elastic constants, this paper presented a detailed structural analysis of the (dense-column) latticed concrete-gypsum composite panel and proposed a feasible technical solution to simplified calculation. In conformity with mechanical rules, a typical panel element was selected and divided into two homogenous composite sub-elements and a secondary homogenous element, respectively for solution, thus establishing an equivalence of the composite panel to a simple homogenous panel and obtaining the effective formulas for calculating the various elastic constants. Finally, the calculation results and the experimental results were compared, which revealed that the calculation method was correct and reliable and could meet the calculation needs of practical engineering and provide a theoretical basis for simplified calculation for studies on composite panel elements and structures as well as a reference for calculations of other panels. PMID:28793631
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jannik, Tim; Stagich, Brooke
The U.S. Environmental Protection Agency (EPA) requested an external, independent verification study of their updated "Preliminary Remediation Goals for Radionuclides" (PRG) electronic calculator. The calculator provides PRGs for radionuclides that are used as a screening tool at Comprehensive Environmental Response, Compensation, and Liability Act (CERCLA) and Resource Conservation and Recovery Act (RCRA) sites. These risk-based PRGs establish concentration limits under specific exposure scenarios. The purpose of this verification study is to determine that the calculator has no inherent numerical problems with obtaining solutions and to ensure that the equations are programmed correctly. There are 167 equations used in the calculator. To verify the calculator, all equations for each of seven receptor types (resident, construction worker, outdoor and indoor worker, recreator, farmer, and composite worker) were hand-calculated using the default parameters. The same four radionuclides (Am-241, Co-60, H-3, and Pu-238) were used for each calculation for consistency throughout.
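The verification pattern itself (recompute every programmed equation independently and compare within tolerance) is easy to sketch. The equation and parameter values below are hypothetical placeholders, not any of the 167 EPA PRG equations.

```python
# Schematic verification-by-recomputation harness; the "equation" and the
# test values are hypothetical placeholders, not the EPA PRG equations.

def calculator_result(params):
    # Stand-in for the value returned by the electronic calculator.
    return params["a"] * params["b"] / params["c"]

def hand_calculation(params):
    # Independent re-derivation of the same equation.
    return (params["a"] / params["c"]) * params["b"]

cases = [{"a": 2.0, "b": 3.5, "c": 0.7}, {"a": 1e-3, "b": 4.2e6, "c": 9.0}]
for p in cases:
    x, y = calculator_result(p), hand_calculation(p)
    assert abs(x - y) <= 1e-9 * max(abs(x), abs(y)), (p, x, y)
print("all equations verified")
```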
DOE Office of Scientific and Technical Information (OSTI.GOV)
Greebler, P.; Goldman, E.
1962-12-19
Doppler calculations for large fast ceramic reactors (FCR), using recent cross section information and improved methods, are described. Cross sections of U-238, Pu-239, and Pu-210 with the fuel temperature variations needed for perturbation calculations of Doppler reactivity changes are tabulated as a function of potential scattering cross section per absorber isotope at energies below 400 keV. These may be used in Doppler calculations for any fast reactor. Results of Doppler calculations on a large fast ceramic reactor are given to show the effects of the improved calculation methods and of recent cross section data on the calculated Doppler coefficient. The updated methods and cross sections yield a somewhat harder spectrum, and accordingly a somewhat smaller Doppler coefficient for a given FCR core size and composition than calculated in earlier work, but they support the essential conclusion derived earlier that the Doppler effect provides an important safety advantage in a large FCR. 28 references. (auth)
“Live” Formulations of the International Association for the Properties of Water and Steam (IAPWS)
NASA Astrophysics Data System (ADS)
Ochkov, V. F.; Orlov, K. A.; Gurke, S.
2017-11-01
Online publication of IAPWS formulations for calculation of the properties of water and steam is reviewed. The advantages of electronic delivery via the Internet over traditional publication on paper are examined. Online calculation can be used with or without the formulas or equations printed in traditional publications. Online calculations should preferably be free of charge and compatible across multiple platforms (Windows, Android, Linux). Other requirements include availability of a multilingual interface, traditional math operators and functions, 2D and 3D graphics capabilities, animation, numerical and symbolic math, tools for solving equation systems, local functions, etc. The use of online visualization tools for verification of functions for calculating thermophysical properties of substances is reviewed. Specific examples are provided of tools for modeling the properties of chemical substances, including desktop and online calculation software, downloadable online calculations, and calculations that use server technologies such as Mathcad Calculation Server (see the site of the National Research University “Moscow Power Engineering Institute”) and SMath (see the site of Knovel, an Elsevier company).
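As a local counterpart to such "live" calculations, IAPWS-IF97 properties can be evaluated programmatically; the sketch below assumes the third-party Python package iapws, which is not one of the online tools discussed in the article.

```python
# IAPWS-IF97 water/steam properties via the third-party `iapws` package
# (assumed installed; units: P in MPa, T in K).
from iapws import IAPWS97

steam = IAPWS97(P=1.0, T=623.15)     # superheated steam at 1 MPa, 350 degC
print(steam.h, steam.s, steam.rho)   # enthalpy (kJ/kg), entropy, density

sat = IAPWS97(P=1.0, x=1.0)          # saturated vapor at 1 MPa
print(sat.T, sat.h)                  # saturation temperature and enthalpy
```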
Collisional Ionization Equilibrium for Optically Thin Plasmas
NASA Technical Reports Server (NTRS)
Bryans, P.; Mitthumsiri, W.; Savin, D. W.; Badnell, N. R.; Gorczyca, T. W.; Laming, J. M.
2006-01-01
Reliably interpreting spectra from electron-ionized cosmic plasmas requires accurate ionization balance calculations for the plasma in question. However, much of the atomic data needed for these calculations have not been generated using modern theoretical methods, and their reliability is often highly suspect. We have utilized state-of-the-art calculations of dielectronic recombination (DR) rate coefficients for the hydrogenic through Na-like ions of all elements from He to Zn, and state-of-the-art radiative recombination (RR) rate coefficient calculations for the bare through Na-like ions of all elements from H to Zn. Using these data and the recommended electron impact ionization data of Mazzotta et al. (1998), we have produced improved collisional ionization equilibrium calculations. We compare our calculated fractional ionic abundances with those presented by Mazzotta et al. (1998) for all elements from H to Ni, and with the fractional abundances derived from the modern DR and RR calculations of Gu (2003a,b, 2004) for Mg, Si, S, Ar, Ca, Fe, and Ni.
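In collisional ionization equilibrium, successive ion-stage fractions satisfy n_{i+1}/n_i = C_i/α_{i+1} (electron-impact ionization balanced by DR plus RR). A schematic solver, with hypothetical rate coefficients rather than the state-of-the-art data described above:

```python
# Schematic ionization-balance solver for collisional ionization
# equilibrium: n_{i+1}/n_i = C_i / alpha_{i+1}. Rates are placeholders.
import numpy as np

def equilibrium_fractions(ionization_rates, recombination_rates):
    """Fractional abundances for all stages given per-transition rates (cm^3/s)."""
    ratios = np.asarray(ionization_rates) / np.asarray(recombination_rates)
    fracs = np.cumprod(np.r_[1.0, ratios])   # abundances relative to stage 0
    return fracs / fracs.sum()               # normalize to sum to 1

# Three ionization stages (two stage-to-stage transitions):
C = [5e-9, 1e-10]        # electron-impact ionization rate coefficients
alpha = [2e-11, 8e-12]   # DR + RR recombination rate coefficients
print(equilibrium_fractions(C, alpha))
```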
NASA Astrophysics Data System (ADS)
Duan, B.; Bari, M. A.; Wu, Z. Q.; Jun, Y.; Li, Y. M.; Wang, J. G.
2012-11-01
Aims: We present relativistic quantum mechanical calculations of electron-impact broadening of the singlet and triplet transition 2s3s ← 2s3p in four Be-like ions from N IV to Ne VII. Methods: In our theoretical calculations, the K-matrix and related symmetry information determined by the colliding systems are generated by the DARC codes. Results: A careful comparison between our calculations and experimental results shows good agreement. Our calculated widths of spectral lines also agree with earlier theoretical results. Our investigations provide new methods of calculating electron-impact broadening parameters for plasma diagnostics.
Wei, Guocui; Zhan, Tingting; Zhan, Xiancheng; Yu, Lan; Wang, Xiaolan; Tan, Xiaoying; Li, Chengrong
2016-09-01
The osmotic pressure of glucose solution at a wide concentration range was calculated using ASOG model and experimentally determined by our newly reported air humidity osmometry. The measurements from air humidity osmometry were compared with the well-established freezing point osmometry and ASOG model calculations at low concentrations and with only ASOG model calculations at high concentrations where no standard experimental method could serve as a reference for comparison. Results indicate that air humidity osmometry measurements are comparable to ASOG model calculations at a wide concentration range, while at low concentrations freezing point osmometry measurements provide better comparability with ASOG model calculations.
Peng, Peng; Namkung, Jessica M.; Fuchs, Douglas; Fuchs, Lynn S.; Patton, Samuel; Yen, Loulee; Compton, Donald L.; Zhang, Wenjuan; Miller, Amanda; Hamlett, Carol
2016-01-01
The purpose of this study was to explore domain-general cognitive skills, domain-specific academic skills, and demographic characteristics that are associated with calculation development from first through third grade among young children with learning difficulties. Participants were 176 children identified with reading and mathematics difficulties at the beginning of first grade. Data were collected on working memory, language, nonverbal reasoning, processing speed, decoding, numerical competence, incoming calculations, socioeconomic status, and gender at the beginning of first grade and on calculation performance at 4 time points: the beginning of first grade, the end of first grade, the end of second grade, and the end of third grade. Latent growth modelling analysis showed that numerical competence, incoming calculation, processing speed, and decoding skills significantly explained the variance of calculation performance at the beginning of first grade. Numerical competence and processing speed significantly explained the variance of calculation performance at the end of third grade. However, numerical competence was the only significant predictor of calculation development from the beginning of first grade to the end of third grade. Implications of these findings for early calculation instructions among young at-risk children are discussed. PMID:27572520
NASA Astrophysics Data System (ADS)
Shimobaba, Tomoyoshi; Nagahama, Yuki; Kakue, Takashi; Takada, Naoki; Okada, Naohisa; Endo, Yutaka; Hirayama, Ryuji; Hiyama, Daisuke; Ito, Tomoyoshi
2014-02-01
A calculation reduction method for color digital holography (DH) and computer-generated holograms (CGHs) using color space conversion is reported. Color DH and color CGHs are generally calculated in RGB space; we instead calculate them in other color spaces, such as YCbCr, to accelerate the calculation. In YCbCr color space, an RGB image or RGB hologram is converted to a luminance component (Y), a blue-difference chroma component (Cb), and a red-difference chroma component (Cr). The human eye readily perceives small differences in the luminance component but is far less sensitive to differences in the chroma components. In this method, the luminance component is therefore sampled at full resolution while the chroma components are down-sampled. The down-sampling allows us to accelerate the calculation of the color DH and CGHs. We compute diffraction calculations from the components and then convert the diffracted results in YCbCr color space back to RGB color space. The proposed method, which in theory can accelerate the calculations by up to a factor of 3, in practice accelerates the calculation more than two times relative to RGB color space.
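The color-space step is conventional image processing and easy to make concrete. A sketch of the BT.601 RGB-to-YCbCr conversion with 2x2 chroma down-sampling follows; the diffraction calculations themselves are omitted, and the BT.601 convention is an assumption (the abstract does not specify one).

```python
# RGB -> YCbCr (ITU-R BT.601 convention, assumed) with chroma
# down-sampling, the color-space step the method exploits.
import numpy as np

def rgb_to_ycbcr(rgb):
    """rgb: float array (H, W, 3) in [0, 1]. Returns Y, Cb, Cr planes."""
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    y = 0.299 * r + 0.587 * g + 0.114 * b
    cb = 0.5 * (b - y) / (1.0 - 0.114)
    cr = 0.5 * (r - y) / (1.0 - 0.299)
    return y, cb, cr

def downsample2(plane):
    """2x2 block average: quarter the chroma samples to propagate."""
    h, w = plane.shape
    return plane.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

rgb = np.random.rand(512, 512, 3)
y, cb, cr = rgb_to_ycbcr(rgb)
cb_small, cr_small = downsample2(cb), downsample2(cr)
# Y is propagated at full resolution; Cb/Cr at 1/4 the samples each,
# which is where the (up to ~3x) arithmetic saving comes from.
```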
An accelerated hologram calculation using the wavefront recording plane method and wavelet transform
NASA Astrophysics Data System (ADS)
Arai, Daisuke; Shimobaba, Tomoyoshi; Nishitsuji, Takashi; Kakue, Takashi; Masuda, Nobuyuki; Ito, Tomoyoshi
2017-06-01
Fast hologram calculation methods are critical in real-time holography applications such as three-dimensional (3D) displays. We recently proposed a wavelet transform-based hologram calculation called WASABI. Even though WASABI can decrease the calculation time of a hologram from a point cloud, its calculation time increases with increasing propagation distance. We also proposed a wavefront recording plane (WRP) method. This is a two-step fast hologram calculation in which the first step calculates the superposition of light waves emitted from a point cloud in a virtual plane, and the second step performs a diffraction calculation from the virtual plane to the hologram plane. A drawback of the WRP method arises in the first step when the point cloud has a large number of object points and/or a long distribution in the depth direction. In this paper, we propose a method combining WASABI and the WRP method in which the drawbacks of each are complementarily overcome. Using a consumer CPU, the proposed method succeeded in performing a hologram calculation with 2048 × 2048 pixels from a 3D object with one million points in approximately 0.4 s.
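The second step of the WRP method is an ordinary plane-to-plane diffraction calculation. Below is a generic FFT-based angular-spectrum propagation sketch with illustrative sampling values; it is not the WASABI algorithm, and step 1 (superposing object points into the WRP) is reduced to a single point.

```python
# Generic second step of a WRP-style calculation: angular-spectrum
# propagation from the recording plane to the hologram plane.
import numpy as np

def angular_spectrum(u0, wavelength, pitch, distance):
    """Propagate complex field u0 (N x N, sample pitch in meters) by `distance`."""
    n = u0.shape[0]
    k = 2.0 * np.pi / wavelength
    fx = np.fft.fftfreq(n, d=pitch)
    fx2, fy2 = np.meshgrid(fx**2, fx**2)
    kz_sq = k**2 - (2.0 * np.pi) ** 2 * (fx2 + fy2)
    prop = kz_sq > 0.0                         # drop evanescent components
    kz = np.sqrt(np.where(prop, kz_sq, 0.0))
    h = np.where(prop, np.exp(1j * kz * distance), 0.0)
    return np.fft.ifft2(np.fft.fft2(u0) * h)

# Field in the WRP (step 1 would superpose nearby object points here):
u_wrp = np.zeros((1024, 1024), dtype=complex)
u_wrp[512, 512] = 1.0
u_holo = angular_spectrum(u_wrp, wavelength=520e-9, pitch=8e-6, distance=0.05)
hologram = np.real(u_holo)   # e.g., take the real part as an amplitude CGH
```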
Theoretical relation between halo current-plasma energy displacement/deformation in EAST
NASA Astrophysics Data System (ADS)
Khan, Shahab Ud-Din; Khan, Salah Ud-Din; Song, Yuntao; Dalong, Chen
2018-04-01
In this paper, a theoretical model for calculating halo current has been developed. To our knowledge, no theoretical calculations of halo current have been reported previously; this is the first use of a purely theoretical approach. The work began by calculating points for the plasma energy in terms of poloidal and toroidal magnetic field orientations, and was then extended to calculate the halo current and to develop the theoretical model. Two cases were considered, analyzing the plasma energy as it flows downward or upward to the divertor. Poloidal as well as toroidal movement of the plasma energy was investigated, and the corresponding mathematical formulations were derived. Two conducting points in (R, Z) were calculated for the halo current calculations and derivations. The halo current was first established on the outer plate in the clockwise direction, and its maximum was estimated to be about 0.4 times the plasma current. A Matlab program has been developed to calculate the halo current and the plasma energy calculation points. The main objective of the research was to connect the theoretical relation with experimental results so that plasma behavior in any tokamak can be evaluated in advance as a precaution.
2017-01-01
Binding free energy calculations that make use of alchemical pathways are becoming increasingly feasible thanks to advances in hardware and algorithms. Although relative binding free energy (RBFE) calculations are starting to find widespread use, absolute binding free energy (ABFE) calculations are still being explored mainly in academic settings due to the high computational requirements and still uncertain predictive value. However, in some drug design scenarios, RBFE calculations are not applicable and ABFE calculations could provide an alternative. Computationally cheaper end-point calculations in implicit solvent, such as molecular mechanics Poisson–Boltzmann surface area (MMPBSA) calculations, could also be used if one is primarily interested in a relative ranking of affinities. Here, we compare MMPBSA calculations to previously performed absolute alchemical free energy calculations in their ability to correlate with experimental binding free energies for three sets of bromodomain–inhibitor pairs. Different MMPBSA approaches have been considered, including a standard single-trajectory protocol, a protocol that includes a binding entropy estimate, and protocols that take into account the ligand hydration shell. Despite the improvements observed with the latter two MMPBSA approaches, ABFE calculations were found to be overall superior in obtaining correlation with experimental affinities for the test cases considered. A difference in weighted average Pearson (r) and Spearman (ρ) correlations of 0.25 and 0.31 was observed when using a standard single-trajectory MMPBSA setup (r = 0.64 and ρ = 0.66 for ABFE; r = 0.39 and ρ = 0.35 for MMPBSA). The best performing MMPBSA protocols returned weighted average Pearson and Spearman correlations that were about 0.1 inferior to ABFE calculations: r = 0.55 and ρ = 0.56 when including an entropy estimate, and r = 0.53 and ρ = 0.55 when including explicit water molecules. Overall, the study suggests that ABFE calculations are indeed the more accurate approach, yet there is also value in MMPBSA calculations considering the lower compute requirements, and if agreement to experimental affinities in absolute terms is not of interest. Moreover, for the specific protein–ligand systems considered in this study, we find that including an explicit ligand hydration shell or a binding entropy estimate in the MMPBSA calculations resulted in significant performance improvements at a negligible computational cost. PMID:28786670
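The "weighted average" correlations quoted above combine per-set correlations with weights proportional to set size. A small sketch of that bookkeeping, with randomly generated placeholder data rather than the bromodomain sets:

```python
# Weighted-average Pearson/Spearman correlations across several test sets
# (set sizes and data below are made-up placeholders).
import numpy as np
from scipy.stats import pearsonr, spearmanr

def weighted_average_correlations(sets):
    """sets: list of (experimental, calculated) arrays, one pair per test set."""
    weights = np.array([len(x) for x, _ in sets], dtype=float)
    r = np.array([pearsonr(x, y)[0] for x, y in sets])
    rho = np.array([spearmanr(x, y)[0] for x, y in sets])
    w = weights / weights.sum()
    return (w * r).sum(), (w * rho).sum()

rng = np.random.default_rng(0)
sets = []
for n in (11, 9, 12):                            # three sets, sizes made up
    x = rng.normal(-8.0, 1.5, n)                 # "experimental" affinities
    sets.append((x, x + rng.normal(0, 1.0, n)))  # noisy "calculated" values
print(weighted_average_correlations(sets))
```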
40 CFR 63.753 - Reporting requirements.
Code of Federal Regulations, 2014 CFR
2014-07-01
... the control system is calculated to be less than 81%, the initial material balance calculation, and... used, (A) each rolling period when the overall control efficiency of the control system is calculated... the overall control efficiency of the control system is calculated to be less than 81%, the initial...
40 CFR 63.753 - Reporting requirements.
Code of Federal Regulations, 2011 CFR
2011-07-01
... the control system is calculated to be less than 81%, the initial material balance calculation, and... used, (A) each rolling period when the overall control efficiency of the control system is calculated... the overall control efficiency of the control system is calculated to be less than 81%, the initial...
40 CFR 63.753 - Reporting requirements.
Code of Federal Regulations, 2012 CFR
2012-07-01
... the control system is calculated to be less than 81%, the initial material balance calculation, and... used, (A) each rolling period when the overall control efficiency of the control system is calculated... the overall control efficiency of the control system is calculated to be less than 81%, the initial...
40 CFR 63.753 - Reporting requirements.
Code of Federal Regulations, 2013 CFR
2013-07-01
... the control system is calculated to be less than 81%, the initial material balance calculation, and... used, (A) each rolling period when the overall control efficiency of the control system is calculated... the overall control efficiency of the control system is calculated to be less than 81%, the initial...
THE ONSITE ON-LINE CALCULATORS AND TRAINING FOR SUBSURFACE CONTAMINANT TRANSPORT SITE ASSESSMENT
EPA has developed a suite of on-line calculators called "OnSite" for assessing transport of environmental contaminants in the subsurface. The purpose of these calculators is to provide methods and data for common calculations used in assessing impacts from subsurface contaminatio...
Alternative Fuels Data Center: Vehicle Cost Calculator Assumptions and Methodology
10 CFR 474.3 - Petroleum-equivalent fuel economy calculation.
Code of Federal Regulations, 2013 CFR
2013-01-01
... 10 Energy 3 2013-01-01 2013-01-01 false Petroleum-equivalent fuel economy calculation. 474.3..., DEVELOPMENT, AND DEMONSTRATION PROGRAM; PETROLEUM-EQUIVALENT FUEL ECONOMY CALCULATION § 474.3 Petroleum-equivalent fuel economy calculation. (a) The petroleum-equivalent fuel economy for an electric vehicle is...
10 CFR 474.3 - Petroleum-equivalent fuel economy calculation.
Code of Federal Regulations, 2011 CFR
2011-01-01
... 10 Energy 3 2011-01-01 2011-01-01 false Petroleum-equivalent fuel economy calculation. 474.3..., DEVELOPMENT, AND DEMONSTRATION PROGRAM; PETROLEUM-EQUIVALENT FUEL ECONOMY CALCULATION § 474.3 Petroleum-equivalent fuel economy calculation. (a) The petroleum-equivalent fuel economy for an electric vehicle is...
10 CFR 474.3 - Petroleum-equivalent fuel economy calculation.
Code of Federal Regulations, 2014 CFR
2014-01-01
... 10 Energy 3 2014-01-01 2014-01-01 false Petroleum-equivalent fuel economy calculation. 474.3..., DEVELOPMENT, AND DEMONSTRATION PROGRAM; PETROLEUM-EQUIVALENT FUEL ECONOMY CALCULATION § 474.3 Petroleum-equivalent fuel economy calculation. (a) The petroleum-equivalent fuel economy for an electric vehicle is...
10 CFR 474.3 - Petroleum-equivalent fuel economy calculation.
Code of Federal Regulations, 2012 CFR
2012-01-01
... 10 Energy 3 2012-01-01 2012-01-01 false Petroleum-equivalent fuel economy calculation. 474.3..., DEVELOPMENT, AND DEMONSTRATION PROGRAM; PETROLEUM-EQUIVALENT FUEL ECONOMY CALCULATION § 474.3 Petroleum-equivalent fuel economy calculation. (a) The petroleum-equivalent fuel economy for an electric vehicle is...
40 CFR 63.753 - Reporting requirements.
Code of Federal Regulations, 2010 CFR
2010-07-01
... the control system is calculated to be less than 81%, the initial material balance calculation, and... used, (A) each rolling period when the overall control efficiency of the control system is calculated... the overall control efficiency of the control system is calculated to be less than 81%, the initial...
76 FR 39242 - Federal Acquisition Regulation; TINA Interest Calculations
Federal Register 2010, 2011, 2012, 2013, 2014
2011-07-05
... pricing data. This rule replaces the term "simple interest" as the requirement for calculating interest... -AL73 Federal Acquisition Regulation; TINA Interest Calculations. AGENCIES: Department of Defense (DoD)... interest calculations be applied to Government overpayments as a result of defective cost or pricing data...
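The practical difference between simple and compounded interest on an overpayment is plain arithmetic. The sketch below is generic; the rate, period, and daily-compounding convention are illustrative assumptions, not the rule's prescribed method.

```python
# Simple vs. daily-compounded interest on an overpayment (generic
# arithmetic with illustrative rate and period).

def simple_interest(principal, annual_rate, days):
    return principal * annual_rate * days / 365.0

def daily_compound_interest(principal, annual_rate, days):
    return principal * ((1.0 + annual_rate / 365.0) ** days - 1.0)

p, r, d = 1_000_000.0, 0.05, 730   # $1M overpayment held two years at 5%
print(simple_interest(p, r, d))          # 100000.0
print(daily_compound_interest(p, r, d))  # ~105155: compounding matters
```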
Parametric Criticality Safety Calculations for Arrays of TRU Waste Containers
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gough, Sean T.
The Nuclear Criticality Safety Division (NCSD) has performed criticality safety calculations for finite and infinite arrays of transuranic (TRU) waste containers. The results of these analyses may be applied in any technical area onsite (e.g., TA-54, TA-55, etc.), as long as the assumptions herein are met. These calculations are designed to update the existing reference calculations for waste arrays documented in Reference 1, in order to meet current guidance on calculational methodology.
A Python tool to set up relative free energy calculations in GROMACS
Klimovich, Pavel V.; Mobley, David L.
2015-01-01
Free energy calculations based on molecular dynamics (MD) simulations have seen tremendous growth in the last decade. However, it is still difficult and tedious to set them up in an automated manner, as the majority of present-day MD simulation packages lack that functionality. Relative free energy calculations are a particular challenge for several reasons, including the problem of finding a common substructure and mapping the transformation to be applied. Here we present a tool, alchemical-setup.py, that automatically generates all the input files needed to perform relative solvation and binding free energy calculations with the MD package GROMACS. When combined with the Lead Optimization Mapper (LOMAP) [14], recently developed in our group, alchemical-setup.py allows fully automated setup of relative free energy calculations in GROMACS. Taking a graph of the planned calculations and a mapping, both computed by LOMAP, our tool generates the topology and coordinate files needed to perform relative free energy calculations for a given set of molecules, and provides a set of simulation input parameters. The tool was validated by performing relative hydration free energy calculations for a handful of molecules from the SAMPL4 challenge [16]. Good agreement with previously published results and the straightforward way in which free energy calculations can be conducted make alchemical-setup.py a promising tool for automated setup of relative solvation and binding free energy calculations. PMID:26487189
Neutral Kaon Mixing from Lattice QCD
NASA Astrophysics Data System (ADS)
Bai, Ziyuan
In this work, we report the lattice calculation of two important quantities which emerge from second-order K0 - K̄0 mixing: ΔM_K and ε_K. The RBC-UKQCD collaboration has performed the first calculation of ΔM_K with unphysical kinematics [1]. We now extend this calculation to near-physical and physical ensembles. In these physical or near-physical calculations, the two-pion energies are below the kaon threshold, and we have to examine the contribution of two-pion intermediate states to ΔM_K, as well as the enhanced finite-volume corrections arising from these states. We also report the first lattice calculation of the long-distance contribution to the indirect CP violation parameter ε_K. This calculation involves the treatment of a short-distance ultraviolet divergence that is absent in the calculation of ΔM_K, and we report our techniques for correcting this divergence on the lattice. In this calculation, we used unphysical quark masses on the same ensemble that we used in [1]. Therefore, rather than providing a physical result, this calculation demonstrates the technique for calculating ε_K and provides an approximate understanding of the size of the long-distance contributions. Various new techniques are employed in this work, such as All-Mode-Averaging (AMA), All-to-All (A2A) propagators, and the super-jackknife method for analyzing the data.
Comparison of ENDF/B-VII.1 and JEFF-3.2 in VVER-1000 operational data calculation
NASA Astrophysics Data System (ADS)
Frybort, Jan
2017-09-01
Safe operation of a nuclear reactor requires extensive calculational support. Operational data are determined by full-core calculations during the design phase of a fuel loading. The loading pattern and the design of fuel assemblies are adjusted to meet safety requirements and to optimize reactor operation. The nodal diffusion code ANDREA is used for this task in the case of Czech VVER-1000 reactors. Nuclear data for this diffusion code are prepared regularly by the lattice code HELIOS; these calculations are conducted in 2D at the fuel-assembly level. The macroscopic data can also be calculated by the Monte Carlo code Serpent, which can make use of alternative evaluated libraries. All calculations are affected by inherent uncertainties in nuclear data. It is therefore useful to compare full-core calculations based on two sets of diffusion data obtained by Serpent calculations with ENDF/B-VII.1 and JEFF-3.2 nuclear data, including the corresponding decay data and fission yield libraries. The comparison is based both on the fuel-assembly-level macroscopic data and on the resulting operational data. This study illustrates the effect of the evaluated nuclear data library on full-core calculations of a large PWR core. The level of difference that results exclusively from nuclear data selection can help in understanding the inherent uncertainties of such full-core calculations.
Math anxiety, self-efficacy, and ability in British undergraduate nursing students.
McMullan, Miriam; Jones, Ray; Lea, Susan
2012-04-01
Nurses need to be able to make drug calculations competently. In this study, involving 229 second year British nursing students, we explored the influence of mathematics anxiety, self-efficacy, and numerical ability on drug calculation ability and determined which factors would best predict this skill. Strong significant relationships (p < .001) existed between anxiety, self-efficacy, and ability. Students who failed the numerical and/or drug calculation ability tests were more anxious (p < .001) and less confident (p ≤ .002) in performing calculations than those who passed. Numerical ability made the strongest unique contribution in predicting drug calculation ability (beta = 0.50, p < .001) followed by drug calculation self-efficacy (beta = 0.16, p = .04). Early testing is recommended for basic numerical skills. Faculty are advised to refresh students' numerical skills before introducing drug calculations. Copyright © 2012 Wiley Periodicals, Inc.
NASA-Lewis experiences with multigroup cross sections and shielding calculations
NASA Technical Reports Server (NTRS)
Lahti, G. P.
1972-01-01
The nuclear reactor shield analysis procedures employed at NASA-Lewis are described. Emphasis is placed on the generation, use, and testing of multigroup cross section data. Although coupled neutron and gamma ray cross section sets are useful in two dimensional Sn transport calculations, much insight has been gained from examination of uncoupled calculations. These have led to experimental and analytic studies of areas deemed to be of first order importance to reactor shield calculations. A discussion is given of problems encountered in using multigroup cross sections in the resolved resonance energy range. The addition to ENDF files of calculated and/or measured neutron-energy-dependent capture gamma ray spectra for shielding calculations is questioned for the resonance region. Anomalies inherent in two dimensional Sn transport calculations which may overwhelm any cross section discrepancies are illustrated.
Martian Radiation Environment: Model Calculations and Recent Measurements with "MARIE"
NASA Technical Reports Server (NTRS)
Saganti, P. B.; Cucinotta, F. A.; zeitlin, C. J.; Cleghorn, T. F.
2004-01-01
The Galactic Cosmic Ray spectra in Mars orbit were generated with the recently expanded HZETRN (High Z and Energy Transport) and QMSFRG (Quantum Multiple-Scattering theory of nuclear Fragmentation) model calculations. These model calculations are compared with the first eighteen months of measured data from the MARIE (Martian Radiation Environment Experiment) instrument onboard the 2001 Mars Odyssey spacecraft that is currently in Martian orbit. The dose rates observed by the MARIE instrument are within 10% of the model calculated predictions. Model calculations are compared with the MARIE measurements of dose, dose-equivalent values, along with the available particle flux distribution. Model calculated particle flux includes GCR elemental composition of atomic number, Z = 1-28 and mass number, A = 1-58. Particle flux calculations specific for the current MARIE mapping period are reviewed and presented.
BASIC Programming In Water And Wastewater Analysis
NASA Technical Reports Server (NTRS)
Dreschel, Thomas
1988-01-01
Collection of computer programs assembled for use in water-analysis laboratories. The first program calculates quality-control parameters used in routine water analysis. The second calculates the line of best fit for entered standard concentrations and absorbances. The third calculates specific conductance from a conductivity measurement and the temperature at which the measurement was taken. The fourth calculates any one of four types of residue measured in water. The fifth, sixth, and seventh calculate the results of titrations commonly performed on water samples. The eighth converts measurements to actual dissolved-oxygen concentration using oxygen-saturation values for fresh and salt water. The ninth and tenth perform the calculations of two other common titrimetric analyses. The eleventh calculates oil and grease residue from a water sample. The last two use spectrophotometric measurements of absorbance at different wavelengths and residue measurements. Programs included in the collection were written for the Hewlett-Packard 2647F in H-P BASIC.
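As an example of the kind of computation these programs perform, the third program's task (specific conductance from a raw conductivity reading) reduces to a one-line temperature compensation. The sketch below uses a common linear coefficient of 0.0191 per °C, which is an assumption (conventions vary by method), and is written in Python rather than the original H-P BASIC.

```python
# Specific conductance at 25 degC from a raw conductivity reading,
# using a common linear temperature-compensation coefficient (assumed).

def specific_conductance_25c(conductivity_us_cm, temp_c, coeff=0.0191):
    """Conductivity normalized to 25 degC (uS/cm)."""
    return conductivity_us_cm / (1.0 + coeff * (temp_c - 25.0))

print(specific_conductance_25c(480.0, 18.5))   # reading taken at 18.5 degC
```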
NASA Astrophysics Data System (ADS)
Wilde-Piorko, M.; Polkowski, M.
2016-12-01
Seismic wave travel time calculation is the most common numerical operation in seismology. The most efficient approach is travel time calculation in a 1D velocity model: for a given source depth, receiver depth, and angular distance, the time is calculated within a fraction of a second. Unfortunately, in most cases 1D is not enough to capture differentiating local and regional structures. Whenever possible, travel time through a 3D velocity model has to be calculated. This can be achieved using ray calculation or time propagation in space. While a single ray path calculation is quick, it is complicated to find the ray path that connects the source with the receiver. Time propagation in space using the Fast Marching Method seems more efficient in most cases, especially when there are multiple receivers. In this presentation the final release of the Python module pySeismicFMM is presented: a simple and very efficient tool for calculating travel time from sources to receivers. The calculation requires a regular 2D or 3D velocity grid, in either Cartesian or geographic coordinates. On a desktop-class computer the calculation speed is 200k grid cells per second. The calculation has to be performed once for every source location, and provides travel times to all receivers. pySeismicFMM is free and open source. Development of this tool is part of the author's PhD thesis. The source code of pySeismicFMM will be published before the Fall Meeting. The National Science Centre Poland provided financial support for this work via NCN grant DEC-2011/02/A/ST10/00284.
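The abstract does not show pySeismicFMM's API, but the same Fast Marching travel-time computation can be sketched with the third-party scikit-fmm package (an assumption here) on a regular 2D grid:

```python
# Fast Marching travel times on a regular 2D grid via the third-party
# scikit-fmm package (a stand-in; pySeismicFMM's API is not shown above).
import numpy as np
import skfmm

nx, nz, dx = 401, 201, 1.0             # grid cells and spacing (km)
speed = np.full((nz, nx), 4.0)         # background velocity, km/s
speed[100:, :] = 6.0                   # faster lower layer

phi = np.ones((nz, nx))                # zero contour of phi marks the source
phi[0, 200] = -1.0                     # source at the surface, mid-profile

t = skfmm.travel_time(phi, speed, dx=dx)   # first-arrival times (s)
print(t[0, 0], t[-1, -1])              # times at two "receiver" nodes
```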
10 CFR 474.3 - Petroleum-equivalent fuel economy calculation.
Code of Federal Regulations, 2010 CFR
2010-01-01
... 10 Energy 3 2010-01-01 2010-01-01 false Petroleum-equivalent fuel economy calculation. 474.3 Section 474.3 Energy DEPARTMENT OF ENERGY ENERGY CONSERVATION ELECTRIC AND HYBRID VEHICLE RESEARCH, DEVELOPMENT, AND DEMONSTRATION PROGRAM; PETROLEUM-EQUIVALENT FUEL ECONOMY CALCULATION § 474.3 Petroleum-equivalent fuel economy calculation. (a) The...
21 CFR 862.2100 - Calculator/data processing module for clinical use.
Code of Federal Regulations, 2010 CFR
2010-04-01
... 21 Food and Drugs 8 2010-04-01 2010-04-01 false Calculator/data processing module for clinical use... SERVICES (CONTINUED) MEDICAL DEVICES CLINICAL CHEMISTRY AND CLINICAL TOXICOLOGY DEVICES Clinical Laboratory Instruments § 862.2100 Calculator/data processing module for clinical use. (a) Identification. A calculator...
21 CFR 868.1890 - Predictive pulmonary-function value calculator.
Code of Federal Regulations, 2011 CFR
2011-04-01
... 21 Food and Drugs 8 2011-04-01 2011-04-01 false Predictive pulmonary-function value calculator. 868.1890 Section 868.1890 Food and Drugs FOOD AND DRUG ADMINISTRATION, DEPARTMENT OF HEALTH AND HUMAN... pulmonary-function value calculator. (a) Identification. A predictive pulmonary-function value calculator is...
21 CFR 868.1890 - Predictive pulmonary-function value calculator.
Code of Federal Regulations, 2010 CFR
2010-04-01
... 21 Food and Drugs 8 2010-04-01 2010-04-01 false Predictive pulmonary-function value calculator. 868.1890 Section 868.1890 Food and Drugs FOOD AND DRUG ADMINISTRATION, DEPARTMENT OF HEALTH AND HUMAN... pulmonary-function value calculator. (a) Identification. A predictive pulmonary-function value calculator is...
Basic and Exceptional Calculation Abilities in a Calculating Prodigy: A Case Study.
ERIC Educational Resources Information Center
Pesenti, Mauro; Seron, Xavier; Samson, Dana; Duroux, Bruno
1999-01-01
Describes the basic and exceptional calculation abilities of a calculating prodigy whose performances were investigated in single- and multi-digit number multiplication, numerical comparison, raising of powers, and short-term memory tasks. Shows how his highly efficient long-term memory storage and retrieval processes, knowledge of calculation…
Decimals, Denominators, Demons, Calculators, and Connections
ERIC Educational Resources Information Center
Sparrow, Len; Swan, Paul
2005-01-01
The authors provide activities for overcoming some fraction misconceptions using calculators specially designed for learners in primary years. The writers advocate use of the calculator as a way to engage children in thinking about mathematics. By engaging with a calculator as part of mathematics learning, children are learning about and using the…
Alternative Fuels Data Center: Vehicle Cost Calculator Widget Assumptions and Methodology
Comparative PV LCOE calculator | Photovoltaic Research | NREL
Use the Comparative Photovoltaic Levelized Cost of Energy Calculator (Comparative PV LCOE Calculator) to calculate the levelized cost of energy (LCOE) for photovoltaic (PV) systems, to determine from a proposed technology's effect on LCOE whether it is cost-effective, and to perform trade-off analyses.
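Underlying a calculator like this is the standard discounted LCOE definition: the sum of discounted lifetime costs divided by the sum of discounted lifetime energy production. A minimal sketch under assumed inputs follows; the capital cost, O&M cost, output, degradation rate, and discount rate are illustrative values, not NREL defaults.

    def lcoe(capex, annual_om, annual_kwh, degradation, discount, years):
        """Levelized cost of energy ($/kWh): discounted lifetime cost
        divided by discounted lifetime energy."""
        costs = capex      # year-0 capital outlay
        energy = 0.0
        for t in range(1, years + 1):
            d = (1.0 + discount) ** t
            costs += annual_om / d
            energy += annual_kwh * (1.0 - degradation) ** (t - 1) / d
        return costs / energy

    # Illustrative residential PV system: $15,000 installed, $150/yr O&M,
    # 14,000 kWh first-year output, 0.5%/yr degradation, 6% discount, 25 years.
    print(round(lcoe(15000.0, 150.0, 14000.0, 0.005, 0.06, 25), 3))  # $/kWh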
10 CFR Appendix to Part 474 - Sample Petroleum-Equivalent Fuel Economy Calculations
Code of Federal Regulations, 2014 CFR
2014-01-01
... 10 Energy 3 2014-01-01 2014-01-01 false Sample Petroleum-Equivalent Fuel Economy Calculations..., DEVELOPMENT, AND DEMONSTRATION PROGRAM; PETROLEUM-EQUIVALENT FUEL ECONOMY CALCULATION Pt. 474, App. Appendix to Part 474—Sample Petroleum-Equivalent Fuel Economy Calculations Example 1: An electric vehicle is...
10 CFR Appendix to Part 474 - Sample Petroleum-Equivalent Fuel Economy Calculations
Code of Federal Regulations, 2012 CFR
2012-01-01
... 10 Energy 3 2012-01-01 2012-01-01 false Sample Petroleum-Equivalent Fuel Economy Calculations..., DEVELOPMENT, AND DEMONSTRATION PROGRAM; PETROLEUM-EQUIVALENT FUEL ECONOMY CALCULATION Pt. 474, App. Appendix to Part 474—Sample Petroleum-Equivalent Fuel Economy Calculations Example 1: An electric vehicle is...
10 CFR Appendix to Part 474 - Sample Petroleum-Equivalent Fuel Economy Calculations
Code of Federal Regulations, 2011 CFR
2011-01-01
... 10 Energy 3 2011-01-01 2011-01-01 false Sample Petroleum-Equivalent Fuel Economy Calculations..., DEVELOPMENT, AND DEMONSTRATION PROGRAM; PETROLEUM-EQUIVALENT FUEL ECONOMY CALCULATION Pt. 474, App. Appendix to Part 474—Sample Petroleum-Equivalent Fuel Economy Calculations Example 1: An electric vehicle is...
10 CFR Appendix to Part 474 - Sample Petroleum-Equivalent Fuel Economy Calculations
Code of Federal Regulations, 2013 CFR
2013-01-01
... 10 Energy 3 2013-01-01 2013-01-01 false Sample Petroleum-Equivalent Fuel Economy Calculations..., DEVELOPMENT, AND DEMONSTRATION PROGRAM; PETROLEUM-EQUIVALENT FUEL ECONOMY CALCULATION Pt. 474, App. Appendix to Part 474—Sample Petroleum-Equivalent Fuel Economy Calculations Example 1: An electric vehicle is...
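These appendix examples apply the petroleum equivalency factor (PEF) from Part 474 to a vehicle's measured electrical energy consumption; with the rule's combined factors the PEF is commonly cited as 82,049 Wh per gallon, making petroleum-equivalent fuel economy simply the PEF divided by consumption in Wh/mile. A sketch under that reading, with an illustrative consumption figure rather than the appendix's own test values:

    # Petroleum-equivalent fuel economy, per the 10 CFR part 474 approach.
    # The 82,049 Wh/gal PEF is the commonly cited combined factor (an
    # assumption here); the consumption value below is illustrative.

    PEF_WH_PER_GALLON = 82049.0

    def petroleum_equivalent_mpg(wh_per_mile):
        """Convert EV energy consumption (Wh/mile) to petroleum-equivalent mpg."""
        return PEF_WH_PER_GALLON / wh_per_mile

    print(round(petroleum_equivalent_mpg(265.0), 1))  # about 309.6 mpg-equivalent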
42 CFR 102.82 - Calculation of death benefits.
Code of Federal Regulations, 2010 CFR
2010-10-01
... 42 Public Health 1 2010-10-01 2010-10-01 false Calculation of death benefits. 102.82 Section 102... COMPENSATION PROGRAM Calculation and Payment of Benefits § 102.82 Calculation of death benefits. (a... paragraph (d) of this section for the death benefit available to dependents. (2) Deceased person means an...
39 CFR 3010.21 - Calculation of annual limitation.
Code of Federal Regulations, 2010 CFR
2010-07-01
... 39 Postal Service 1 2010-07-01 2010-07-01 false Calculation of annual limitation. 3010.21 Section 3010.21 Postal Service POSTAL REGULATORY COMMISSION PERSONNEL REGULATION OF RATES FOR MARKET DOMINANT PRODUCTS Rules for Applying the Price Cap § 3010.21 Calculation of annual limitation. (a) The calculation...
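The annual limitation here is based on the CPI-U; on a common reading of the full rule text (truncated in this excerpt), it is the percentage change between the most recent 12-month moving average of the index and the moving average one year earlier. A hedged sketch of that moving-average calculation, with made-up index values for illustration:

    def annual_limitation(cpi_u):
        """Percentage change between the latest 12-month moving average of
        monthly CPI-U values and the moving average one year earlier.
        `cpi_u` holds at least 24 monthly values, oldest first."""
        recent = sum(cpi_u[-12:]) / 12.0
        prior = sum(cpi_u[-24:-12]) / 12.0
        return recent / prior - 1.0

    # Illustrative index rising roughly 2% per year.
    cpi = [230.0 * 1.02 ** (m / 12.0) for m in range(24)]
    print(f"{annual_limitation(cpi):.4%}")  # about a 2% annual limitation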
40 CFR 98.313 - Calculating GHG emissions.
Code of Federal Regulations, 2012 CFR
2012-07-01
.... You must calculate and report the annual process CO2 emissions for each chloride process line using... subpart the process CO2 emissions by operating and maintaining a CEMS according to the Tier 4 Calculation... (General Stationary Fuel Combustion Sources). (b) Calculate and report under this subpart the annual...
40 CFR 98.333 - Calculating GHG emissions.
Code of Federal Regulations, 2013 CFR
2013-07-01
... Stationary Fuel Combustion Sources). (b) Calculate and report under this subpart the process CO2 emissions by... calculate and report the annual process CO2 emissions using the procedures specified in either paragraph (a... and combustion CO2 emissions by operating and maintaining a CEMS according to the Tier 4 Calculation...