Sample records for simple calculations based

  1. Simple extrapolation method to predict the electronic structure of conjugated polymers from calculations on oligomers

    DOE PAGES

    Larsen, Ross E.

    2016-04-12

    In this study, we introduce two simple tight-binding models, which we call fragment frontier orbital extrapolations (FFOE), to extrapolate important electronic properties to the polymer limit using electronic structure calculations on only a few small oligomers. In particular, we demonstrate by comparison to explicit density functional theory calculations that for long oligomers the energies of the highest occupied molecular orbital (HOMO), the lowest unoccupied molecular orbital (LUMO), and of the first electronic excited state are accurately described as a function of the number of repeat units by a simple effective Hamiltonian parameterized from electronic structure calculations on monomers, dimers and, optionally, tetramers. For the alternating copolymer materials that currently comprise some of the most efficient polymer organic photovoltaic devices, one can use these simple but rigorous models to extrapolate computed properties to the polymer limit based on calculations on a small number of low-molecular-weight oligomers.
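
    The abstract does not reproduce the effective Hamiltonian itself; as a rough sketch of the idea (all numbers hypothetical), a nearest-neighbor tight-binding chain whose site energy comes from a monomer HOMO calculation and whose coupling comes from the monomer-dimer splitting extrapolates the band edge toward the polymer limit:

```python
import numpy as np

def chain_homo(alpha, beta, n):
    # HOMO of an n-site nearest-neighbor tight-binding chain:
    # eigenvalues are alpha + 2*beta*cos(k*pi/(n+1)); return the highest.
    h = np.diag([alpha] * n) + np.diag([beta] * (n - 1), 1) + np.diag([beta] * (n - 1), -1)
    return np.linalg.eigvalsh(h)[-1]

# Hypothetical frontier-orbital energies (eV) from monomer/dimer calculations:
e_monomer_homo = -5.2          # site energy alpha
e_dimer_homo = -4.9            # dimer HOMO = alpha + beta  =>  beta = 0.3
alpha = e_monomer_homo
beta = e_dimer_homo - e_monomer_homo

for n in (2, 4, 8, 32):
    print(n, round(chain_homo(alpha, beta, n), 4))

# Polymer (n -> infinity) limit of the band edge:
print("limit", round(alpha + 2 * abs(beta), 4))
```

    The HOMO rises monotonically with oligomer length and converges to alpha + 2|beta|, which is the kind of extrapolation-to-the-polymer-limit the paper parameterizes from small-oligomer calculations.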

  2. Derivation and use of simple relationships between aerodynamic and optical particle measurements

    USDA-ARS's Scientific Manuscript database

    A simple relationship, referred to as a mass conversion factor (MCF), is presented to convert optically based particle measurements to mass concentration. It is calculated from filter-based samples and optical particle counter (OPC) data on a daily or sample period basis. The MCF allows for greater ...

  3. Bedside risk estimation of morbidly adherent placenta using simple calculator.

    PubMed

    Maymon, R; Melcer, Y; Pekar-Zlotin, M; Shaked, O; Cuckle, H; Tovbin, J

    2018-03-01

    To construct a calculator for 'bedside' estimation of morbidly adherent placenta (MAP) risk based on ultrasound (US) findings. This retrospective study included all pregnant women with at least one previous cesarean delivery attending our US unit between December 2013 and January 2017. The examination was based on a scoring system which determines the probability of MAP. The study population included 471 pregnant women, 41 of whom (8.7%) were diagnosed with MAP. Based on the ROC curve, the most effective US criteria for detection of MAP were the presence of placental lacunae, obliteration of the utero-placental demarcation, and placenta previa. On multivariate logistic regression analysis, US findings of placental lacunae (OR = 3.5; 95% CI, 1.2-9.5; P = 0.01), obliteration of the utero-placental demarcation (OR = 12.4; 95% CI, 3.7-41.6; P < 0.0001), and placenta previa (OR = 10.5; 95% CI, 3.5-31.3; P < 0.0001) were associated with MAP. By combining these three parameters, a receiver operating characteristic curve was calculated, yielding an area under the curve of 0.93 (95% CI, 0.87-0.97). Accordingly, we have constructed a simple calculator for 'bedside' estimation of MAP risk. The calculator is mounted on the hospital's internet website ( http://www.assafh.org/Pages/PPCalc/index.html ). The risk estimate for MAP varies between 1.5% and 87%. The present calculator enables a simple 'bedside' MAP estimation, facilitating accurate and adequate antenatal risk assessment.
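
    The published calculator's exact coefficients are on the linked website; as a sketch, the three reported odds ratios can be combined in a logistic model. The intercept below is a hypothetical assumption (a 1.5% baseline risk, the lower end of the reported range); notably, with that assumption the three ORs together reproduce roughly the reported 1.5-87% span.

```python
import math

# Log-odds coefficients taken from the reported multivariate ORs:
COEF = {
    "placental_lacunae":   math.log(3.5),
    "loss_of_demarcation": math.log(12.4),
    "placenta_previa":     math.log(10.5),
}
INTERCEPT = math.log(0.015 / (1 - 0.015))  # hypothetical 1.5% baseline risk

def map_risk(lacunae: bool, demarcation_lost: bool, previa: bool) -> float:
    """Estimated MAP probability from the three ultrasound findings."""
    z = INTERCEPT
    z += COEF["placental_lacunae"] * lacunae
    z += COEF["loss_of_demarcation"] * demarcation_lost
    z += COEF["placenta_previa"] * previa
    return 1 / (1 + math.exp(-z))

print(f"{map_risk(False, False, False):.1%}")  # no findings (baseline)
print(f"{map_risk(True, True, True):.1%}")     # all three findings
```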

  4. A Physics-Based Engineering Approach to Predict the Cross Section for Advanced SRAMs

    NASA Astrophysics Data System (ADS)

    Li, Lei; Zhou, Wanting; Liu, Huihua

    2012-12-01

    This paper presents a physics-based engineering approach to estimate the heavy-ion-induced upset cross section for 6T SRAM cells from layout and technology parameters. The approach calculates the effects of radiation with a junction photocurrent derived from device physics, and handles the problem using simple SPICE simulations. First, a standard SPICE program on a typical PC is used, with the derived junction photocurrent, to predict the SPICE-simulated curve of collected charge vs. distance from the drain-body junction. Then, the SPICE-simulated curve is used to calculate the heavy-ion-induced upset cross section with a simple model, which considers that the SEU cross section of a SRAM cell is related more to a “radius of influence” around a heavy ion strike than to the physical size of a diffusion node in the layout for advanced SRAMs in nano-scale process technologies. The calculated upset cross section based on this method is in good agreement with test results for 6T SRAM cells fabricated in a 90 nm process technology.

  5. Using the Graphing Calculator--in Two-Dimensional Motion Plots.

    ERIC Educational Resources Information Center

    Brueningsen, Chris; Bower, William

    1995-01-01

    Presents a series of simple activities involving generalized two-dimensional motion topics to prepare students to study projectile motion. Uses a pair of motion detectors, each connected to a calculator-based-laboratory (CBL) unit interfaced with a standard graphics calculator, to explore two-dimensional motion. (JRH)

  6. Determination of water pH using absorption-based optical sensors: evaluation of different calculation methods

    NASA Astrophysics Data System (ADS)

    Wang, Hongliang; Liu, Baohua; Ding, Zhongjun; Wang, Xiangxin

    2017-02-01

    Absorption-based optical sensors have been developed for the determination of water pH. In this paper, based on the preparation of a transparent sol-gel thin film with a phenol red (PR) indicator, several calculation methods, including simple linear regression analysis, quadratic regression analysis and dual-wavelength absorbance ratio analysis, were used to calculate water pH. Results of MSSRR show that dual-wavelength absorbance ratio analysis can improve the calculation accuracy of water pH in long-term measurement.

  7. A Simple Sensor Model for THUNDER Actuators

    NASA Technical Reports Server (NTRS)

    Campbell, Joel F.; Bryant, Robert G.

    2009-01-01

    A quasi-static (low-frequency) model is developed for THUNDER actuators configured as displacement sensors, based on a simple Rayleigh-Ritz technique. This model is used to calculate charge as a function of displacement. Using this and the calculated capacitance, voltage vs. displacement and voltage vs. electrical load curves are generated and compared with measurements. It is shown that this model gives acceptable results and is useful for obtaining rough estimates of sensor output for various loads, laminate configurations and thicknesses.

  8. 39 CFR 3010.21 - Calculation of annual limitation.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... notice of rate adjustment and dividing the sum by 12 (Recent Average). Then, a second simple average CPI... Recent Average and dividing the sum by 12 (Base Average). Finally, the annual limitation is calculated by dividing the Recent Average by the Base Average and subtracting 1 from the quotient. The result is...
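
    The computation the rule describes (two trailing 12-month simple averages of CPI figures, then their ratio minus 1) transcribes directly into code; the CPI series below is made up for illustration:

```python
def annual_limitation(cpi: list) -> float:
    """Annual limitation from 24 monthly CPI values, oldest first.

    The last 12 entries form the Recent Average; the 12 entries before
    them form the Base Average, per the rule's description.
    """
    assert len(cpi) == 24
    base_average = sum(cpi[:12]) / 12    # 12 months preceding the recent ones
    recent_average = sum(cpi[12:]) / 12  # most recent 12 months
    return recent_average / base_average - 1

# Illustrative (made-up) CPI series rising steadily:
cpi = [250 + 0.4 * m for m in range(24)]
print(round(annual_limitation(cpi), 4))  # about a 1.9% limitation
```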

  9. 39 CFR 3010.21 - Calculation of annual limitation.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... notice of rate adjustment and dividing the sum by 12 (Recent Average). Then, a second simple average CPI... Recent Average and dividing the sum by 12 (Base Average). Finally, the annual limitation is calculated by dividing the Recent Average by the Base Average and subtracting 1 from the quotient. The result is...

  10. 39 CFR 3010.21 - Calculation of annual limitation.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... notice of rate adjustment and dividing the sum by 12 (Recent Average). Then, a second simple average CPI... Recent Average and dividing the sum by 12 (Base Average). Finally, the annual limitation is calculated by dividing the Recent Average by the Base Average and subtracting 1 from the quotient. The result is...

  11. SU-C-BRC-05: Monte Carlo Calculations to Establish a Simple Relation of Backscatter Dose Enhancement Around High-Z Dental Alloy to Its Atomic Number

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Utsunomiya, S; Kushima, N; Katsura, K

    Purpose: To establish a simple relation of the backscatter dose enhancement around a high-Z dental alloy in head and neck radiation therapy to its average atomic number, based on Monte Carlo calculations. Methods: The PHITS Monte Carlo code was used to calculate the dose enhancement, which is quantified by the backscatter dose factor (BSDF). The accuracy of the beam modeling with PHITS was verified by comparison with basic measured data, namely PDDs and dose profiles. In the simulation, a 1 cm cube of high-Z alloy was embedded in a tough water phantom irradiated by a 6-MV (nominal) X-ray beam of 10 cm × 10 cm field size from a Novalis TX (Brainlab). Ten different high-Z materials (Al, Ti, Cu, Ag, Au-Pd-Ag, I, Ba, W, Au, Pb) were considered. The accuracy of the calculated BSDF was verified by comparison with data measured by Gafchromic EBT3 films placed 0 to 10 mm away from a high-Z alloy (Au-Pd-Ag). We derived an approximate equation relating the BSDF and the range of backscatter to the average atomic number of the high-Z alloy. Results: The calculated BSDF showed excellent agreement with the values measured by Gafchromic EBT3 films 0 to 10 mm from the high-Z alloy. We found a simple linear relation of the BSDF and the range of backscatter to the average atomic number of the dental alloys. The latter relation is explained by the fact that the energy spectrum of the backscattered electrons depends strongly on the average atomic number. Conclusion: We found a simple relation of the backscatter dose enhancement around high-Z alloys to their average atomic number based on Monte Carlo calculations. This work provides a simple and useful method to estimate the backscatter dose enhancement from dental alloys and the corresponding optimal thickness of dental spacer to prevent mucositis effectively.

  12. Calculated quantum yield of photosynthesis of phytoplankton in the Marine Light-Mixed Layers (59 deg N, 21 deg W)

    NASA Technical Reports Server (NTRS)

    Carder, K. L.; Lee, Z. P.; Marra, John; Steward, R. G.; Perry, M. J.

    1995-01-01

    The quantum yield of photosynthesis (mol C/mol photons) was calculated at six depths for the waters of the Marine Light-Mixed Layer (MLML) cruise of May 1991. As there were photosynthetically available radiation (PAR) measurements but no spectral irradiance measurements for the primary production incubations, three ways are presented here to calculate the photons absorbed (AP) by phytoplankton for the purpose of calculating phi. The first is based on a simple, nonspectral model; the second is based on a nonlinear regression using measured PAR values with depth; and the third is derived through remote sensing measurements. We show that the results of phi calculated using the nonlinear regression method and those using remote sensing are in good agreement with each other, and are consistent with the values reported in other studies. In deep waters, however, the simple nonspectral model may produce quantum yield values much higher than is theoretically possible.

  13. Digital Game-Based Learning: A Supplement for Medication Calculation Drills in Nurse Education

    ERIC Educational Resources Information Center

    Foss, Brynjar; Lokken, Atle; Leland, Arne; Stordalen, Jorn; Mordt, Petter; Oftedal, Bjorg F.

    2014-01-01

    Student nurses, globally, appear to struggle with medication calculations. In order to improve these skills among student nurses, the authors developed The Medication Game--an online computer game that aims to provide simple mathematical and medical calculation drills, and help students practise standard medical units and expressions. The aim of…

  14. A simple calculation method for determination of equivalent square field.

    PubMed

    Shafiei, Seyed Ali; Hasanzadeh, Hadi; Shafiei, Seyed Ahmad

    2012-04-01

    Determination of the equivalent square fields for rectangular and shielded fields is of great importance in radiotherapy centers and treatment planning software. This is accomplished using standard tables and empirical formulas. The goal of this paper is to present a formula based on analysis of scatter reduction due to inverse square law to obtain equivalent field. Tables are published by different agencies such as ICRU (International Commission on Radiation Units and measurements), which are based on experimental data; but there exist mathematical formulas that yield the equivalent square field of an irregular rectangular field which are used extensively in computation techniques for dose determination. These processes lead to some complicated and time-consuming formulas for which the current study was designed. In this work, considering the portion of scattered radiation in absorbed dose at a point of measurement, a numerical formula was obtained based on which a simple formula was developed to calculate equivalent square field. Using polar coordinate and inverse square law will lead to a simple formula for calculation of equivalent field. The presented method is an analytical approach based on which one can estimate the equivalent square field of a rectangular field and may be used for a shielded field or an off-axis point. Besides, one can calculate equivalent field of rectangular field with the concept of decreased scatter radiation with inverse square law with a good approximation. This method may be useful in computing Percentage Depth Dose and Tissue-Phantom Ratio which are extensively used in treatment planning.
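
    The paper's own inverse-square-law formula is not reproduced in the abstract; for orientation, the conventional area-to-perimeter rule that such tables and formulas approximate (s = 4A/P = 2ab/(a+b) for an a × b rectangle) can be sketched as:

```python
def equivalent_square_side(a: float, b: float) -> float:
    """Conventional area-to-perimeter rule for an a x b rectangular field:
    side of the equivalent square s = 4A/P = 2ab/(a+b)."""
    return 2 * a * b / (a + b)

print(equivalent_square_side(10, 10))           # a square maps to itself: 10.0
print(round(equivalent_square_side(5, 20), 2))  # elongated 5 x 20 field -> 8.0
```

    Note this classic rule handles open rectangular fields only; the paper's contribution is an analytical route that also covers shielded fields and off-axis points.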

  15. Calculating the surface tension of binary solutions of simple fluids of comparable size

    NASA Astrophysics Data System (ADS)

    Zaitseva, E. S.; Tovbin, Yu. K.

    2017-11-01

    A molecular theory based on the lattice gas model (LGM) is used to calculate the surface tension of one- and two-component planar vapor-liquid interfaces of simple fluids. Interaction between nearest neighbors is considered in the calculations. LGM is applied as a tool of interpolation: the parameters of the model are corrected using experimental surface tension data. It is found that the average accuracy of describing the surface tension of pure substances (Ar, N2, O2, CH4) and their mixtures (Ar-O2, Ar-N2, Ar-CH4, N2-CH4) does not exceed 2%.

  16. Development of the Workplace Health Savings Calculator: a practical tool to measure economic impact from reduced absenteeism and staff turnover in workplace health promotion.

    PubMed

    Baxter, Siyan; Campbell, Sharon; Sanderson, Kristy; Cazaly, Carl; Venn, Alison; Owen, Carole; Palmer, Andrew J

    2015-09-18

    Workplace health promotion is focussed on improving the health and wellbeing of workers. Although quantifiable effectiveness and economic evidence is variable, workplace health promotion is recognised by both government and business stakeholders as potentially beneficial for worker health and economic advantage. Despite the current debate on whether conclusive positive outcomes exist, governments are investing, and business engagement is necessary for value to be realised. Practical tools are needed to assist decision makers in developing the business case for workplace health promotion programs. Our primary objective was to develop an evidence-based, simple and easy-to-use resource (calculator) for Australian employers interested in workplace health investment figures. Three phases were undertaken to develop the calculator. First, evidence from a literature review located appropriate effectiveness measures. Second, a review of employer-facilitated programs aimed at improving the health and wellbeing of employees was utilised to identify change estimates surrounding these measures, and third, currently available online evaluation tools and models were investigated. We present a simple web-based calculator for use by employers who wish to estimate potential annual savings associated with implementing a successful workplace health promotion program. The calculator uses effectiveness measures (absenteeism and staff turnover rates) and change estimates sourced from 55 case studies to generate the annual savings an employer may potentially gain. Australian wage statistics were used to calculate replacement costs due to staff turnover. The calculator was named the Workplace Health Savings Calculator and adapted and reproduced on the Healthy Workers web portal by the Australian Commonwealth Government Department of Health and Ageing. 
The Workplace Health Savings Calculator is a simple online business tool that aims to engage employers and to assist participation, development and implementation of workplace health promotion programs.

  17. Development of Gravity Acceleration Measurement Using Simple Harmonic Motion Pendulum Method Based on Digital Technology and Photogate Sensor

    NASA Astrophysics Data System (ADS)

    Yulkifli; Afandi, Zurian; Yohandri

    2018-04-01

    A gravitational acceleration measurement using the simple harmonic motion pendulum method, digital technology and a photogate sensor has been developed. Digital technology is more practical and optimizes experiment time. The pendulum method calculates the acceleration of gravity using a solid ball connected by a rope to a stative pole. The pendulum is swung at a small angle, resulting in simple harmonic motion. The measurement system consists of a power supply, photogate sensors, an Arduino Pro Mini and a seven-segment display. The Arduino Pro Mini receives digital data from the photogate sensor and processes it into the timing data of the pendulum oscillation. The calculated pendulum oscillation time is shown on the seven-segment display. Based on measured data, the accuracy and precision of the experiment system are 98.76% and 99.81%, respectively. Based on the experimental data, the system can be operated in physics experiments, especially in the determination of the acceleration of gravity.
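
    Once the photogate yields the oscillation period, the small-angle pendulum relation T = 2π√(L/g) gives g directly; the timing value below is a hypothetical reading:

```python
import math

def gravity_from_pendulum(length_m: float, period_s: float) -> float:
    """g = 4 * pi^2 * L / T^2 from the small-angle pendulum period."""
    return 4 * math.pi ** 2 * length_m / period_s ** 2

# A 1.0 m pendulum timed by the photogate at T = 2.006 s (hypothetical):
print(round(gravity_from_pendulum(1.0, 2.006), 3))  # m/s^2, near 9.81
```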

  18. Numerical calculation of the Fresnel transform.

    PubMed

    Kelly, Damien P

    2014-04-01

    In this paper, we address the problem of calculating Fresnel diffraction integrals using a finite number of uniformly spaced samples. General and simple sampling rules of thumb are derived that allow the user to calculate the distribution for any propagation distance. It is shown how these rules can be extended to fast-Fourier-transform-based algorithms to increase calculation efficiency. A comparison with other theoretical approaches is made.
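
    A minimal numerical sketch (not the paper's own algorithm) of FFT-based Fresnel propagation on uniformly spaced samples; the sampling of the quadratic phase factor is exactly the kind of constraint the derived rules of thumb address:

```python
import numpy as np

def fresnel_fft(u0, wavelength, z, dx):
    """One-step Fresnel propagation of a 1-D sampled field u0 via FFT
    (transfer-function form; assumes the quadratic phase is adequately
    sampled for this z, dx and wavelength)."""
    n = u0.size
    fx = np.fft.fftfreq(n, d=dx)                       # spatial frequencies
    h = np.exp(-1j * np.pi * wavelength * z * fx ** 2) # Fresnel transfer function
    return np.fft.ifft(np.fft.fft(u0) * h)

# Propagate a simple 1 mm slit aperture 5 cm at 633 nm:
n, dx = 1024, 10e-6
x = (np.arange(n) - n // 2) * dx
u0 = (np.abs(x) < 0.5e-3).astype(complex)
u1 = fresnel_fft(u0, wavelength=633e-9, z=0.05, dx=dx)
ratio = float(np.sum(np.abs(u1) ** 2) / np.sum(np.abs(u0) ** 2))
print(round(ratio, 6))  # unit-modulus transfer function conserves energy: 1.0
```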

  19. Jet engine performance enhancement through use of a wave-rotor topping cycle

    NASA Technical Reports Server (NTRS)

    Wilson, Jack; Paxson, Daniel E.

    1993-01-01

    A simple model is used to calculate the thermal efficiency and specific power of simple jet engines and jet engines with a wave-rotor topping cycle. The performance of the wave rotor is based on measurements from a previous experiment. Applied to the case of an aircraft flying at Mach 0.8, the calculations show that an engine with a wave rotor topping cycle may have gains in thermal efficiency of approximately 1 to 2 percent and gains in specific power of approximately 10 to 16 percent over a simple jet engine with the same overall compression ratio. Even greater gains are possible if the wave rotor's performance can be improved.
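
    The paper's model is more detailed (it uses measured wave-rotor performance); as a crude back-of-envelope illustration of why topping helps, one can treat the wave rotor as raising the effective cycle pressure ratio in an ideal Brayton cycle. The 20% boost below is a hypothetical number, not taken from the paper:

```python
def brayton_efficiency(pressure_ratio: float, gamma: float = 1.4) -> float:
    """Ideal Brayton-cycle thermal efficiency: 1 - r^((1 - gamma)/gamma)."""
    return 1 - pressure_ratio ** ((1 - gamma) / gamma)

base = brayton_efficiency(20)          # simple engine, overall pressure ratio 20
topped = brayton_efficiency(20 * 1.2)  # hypothetical 20% effective-ratio boost
print(round(base, 3), round(topped, 3))
```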

  20. Harmonics analysis of the ITER poloidal field converter based on a piecewise method

    NASA Astrophysics Data System (ADS)

    Xudong, WANG; Liuwei, XU; Peng, FU; Ji, LI; Yanan, WU

    2017-12-01

    Poloidal field (PF) converters provide controlled DC voltage and current to PF coils. The many harmonics generated by the PF converter flow into the power grid and seriously affect power systems and electric equipment. Due to the complexity of the system, the traditional integral operation in Fourier analysis is complicated and inaccurate. This paper presents a piecewise method to calculate the harmonics of the ITER PF converter. The relationship between the grid input current and the DC output current of the ITER PF converter is deduced. The grid current is decomposed into the sum of some simple functions. By calculating simple function harmonics based on the piecewise method, the harmonics of the PF converter under different operation modes are obtained. In order to examine the validity of the method, a simulation model is established based on Matlab/Simulink and a relevant experiment is implemented in the ITER PF integration test platform. Comparative results are given. The calculated results are found to be consistent with simulation and experiment. The piecewise method is proved correct and valid for calculating the system harmonics.

  1. PVWatts® Calculator: India (Fact Sheet)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    None, None

    The PVWatts® Calculator for India was released by the National Renewable Energy Laboratory in 2013. The online tool estimates the electricity production, and the monetary value of that production, of grid-connected roof- or ground-mounted crystalline silicon photovoltaic systems based on a few simple inputs. This fact sheet provides a broad overview of the PVWatts® Calculator for India.

  2. Accurate Energy Transaction Allocation using Path Integration and Interpolation

    NASA Astrophysics Data System (ADS)

    Bhide, Mandar Mohan

    This thesis investigates many of the popular cost allocation methods that are based on actual usage of the transmission network. The Energy Transaction Allocation (ETA) method originally proposed by A. Fradi, S. Brigonne and B. Wollenberg, which has the unique advantage of accurately allocating transmission network usage, is discussed subsequently. A modified calculation of ETA based on a simple interpolation technique is then proposed. The proposed methodology not only increases the accuracy of the calculation but also reduces the number of calculations to less than half of that required by the original ETA method.

  3. Sharing Teaching Ideas.

    ERIC Educational Resources Information Center

    Mathematics Teacher, 1981

    1981-01-01

    The following ideas are presented: plans for constructing a calculator bin rack that provides a place for a school to store and charge calculators; a lesson in geometry based on a news article about salt containers; and a very simple approach to the concept of infinite geometric series. (MP)

  4. Electro-mechanical Properties of Carbon Nanotubes

    NASA Technical Reports Server (NTRS)

    Anantram, M. P.; Yang, Liu; Han, Jie; Liu, J. P.; Saubum Subhash (Technical Monitor)

    1998-01-01

    We present a simple picture to understand the bandgap variation of carbon nanotubes with small tensile and torsional strains, independent of chirality. Using this picture, we are able to predict a simple dependence of d(Bandgap)/d(strain) on the value of $(N_x - N_y) \bmod 3$ for semiconducting tubes. We also predict a novel change in sign of d(Bandgap)/d(strain) as a function of tensile strain, arising from a change in the value of $q$ corresponding to the minimum bandgap. These calculations are complemented by calculations of the change in bandgap using energy-minimized structures, and some important differences are discussed. The calculations are based on the $\pi$-electron approximation.

  5. A simple calculation method for determination of equivalent square field

    PubMed Central

    Shafiei, Seyed Ali; Hasanzadeh, Hadi; Shafiei, Seyed Ahmad

    2012-01-01

    Determination of the equivalent square fields for rectangular and shielded fields is of great importance in radiotherapy centers and treatment planning software. This is accomplished using standard tables and empirical formulas. The goal of this paper is to present a formula based on analysis of scatter reduction due to inverse square law to obtain equivalent field. Tables are published by different agencies such as ICRU (International Commission on Radiation Units and measurements), which are based on experimental data; but there exist mathematical formulas that yield the equivalent square field of an irregular rectangular field which are used extensively in computation techniques for dose determination. These processes lead to some complicated and time-consuming formulas for which the current study was designed. In this work, considering the portion of scattered radiation in absorbed dose at a point of measurement, a numerical formula was obtained based on which a simple formula was developed to calculate equivalent square field. Using polar coordinate and inverse square law will lead to a simple formula for calculation of equivalent field. The presented method is an analytical approach based on which one can estimate the equivalent square field of a rectangular field and may be used for a shielded field or an off-axis point. Besides, one can calculate equivalent field of rectangular field with the concept of decreased scatter radiation with inverse square law with a good approximation. This method may be useful in computing Percentage Depth Dose and Tissue-Phantom Ratio which are extensively used in treatment planning. PMID:22557801

  6. Magnetic susceptibilities of actinide 3d-metal intermetallic compounds

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Muniz, R.B.; d'Albuquerque e Castro, J.; Troper, A.

    1988-04-15

    We have numerically calculated the magnetic susceptibilities which appear in the Hartree-Fock instability criterion for actinide 3d transition-metal intermetallic compounds. This calculation is based on a previous tight-binding description of these actinide-based compounds (A. Troper and A. A. Gomes, Phys. Rev. B 34, 6487 (1986)). The parameters of the calculation, which starts from simple tight-binding d and f bands, are (i) occupation numbers, (ii) the ratio of d-f hybridization to d bandwidth, and (iii) electron-electron Coulomb-type interactions.

  7. Simple and universal model for electron-impact ionization of complex biomolecules

    NASA Astrophysics Data System (ADS)

    Tan, Hong Qi; Mi, Zhaohong; Bettiol, Andrew A.

    2018-03-01

    We present a simple and universal approach to calculate the total ionization cross section (TICS) for electron impact ionization in DNA bases and other biomaterials in the condensed phase. Evaluating the electron impact TICS plays a vital role in ion-beam radiobiology simulation at the cellular level, as secondary electrons are the main cause of DNA damage in particle cancer therapy. Our method is based on extending the dielectric formalism. The calculated results agree well with experimental data and show a good comparison with other theoretical calculations. This method only requires information of the chemical composition and density and an estimate of the mean binding energy to produce reasonably accurate TICS of complex biomolecules. Because of its simplicity and great predictive effectiveness, this method could be helpful in situations where the experimental TICS data are absent or scarce, such as in particle cancer therapy.

  8. Computer supplies insulation recipe for Cookie Company Roof

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Not Available

    Roofing contractors no longer have to rely on complicated calculations and educated guesses to determine cost-efficient levels of roof insulation. A simple hand-held calculator and printer offers seven different programs for quickly figuring insulation thickness based on job type, roof size, tax rates, and heating and cooling cost factors.

  9. Simple Levelized Cost of Energy (LCOE) Calculator Documentation | Energy Analysis | NREL

    Science.gov Websites

    This NREL Energy Analysis page documents the Simple Levelized Cost of Energy (LCOE) Calculator (with a Transparent Cost Database button). The documentation begins: 1) Cost and Performance: adjust the sliders to suitable values for each of the cost and performance inputs.
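
    The page's exact formula is not reproduced in this snippet; a common simple-LCOE form (annualized capital via a capital recovery factor, plus fixed O&M, divided by annual generation) can be sketched as follows, with hypothetical PV-like inputs:

```python
def crf(rate: float, years: int) -> float:
    """Capital recovery factor for discount rate `rate` over `years`."""
    return rate * (1 + rate) ** years / ((1 + rate) ** years - 1)

def simple_lcoe(capital_per_kw, fixed_om_per_kw_yr, capacity_factor,
                rate=0.07, years=20, variable_om_per_mwh=0.0):
    """Levelized cost of energy in $/MWh."""
    annual_mwh_per_kw = 8.76 * capacity_factor  # 8760 h/yr -> MWh per kW
    annualized_fixed = capital_per_kw * crf(rate, years) + fixed_om_per_kw_yr
    return annualized_fixed / annual_mwh_per_kw + variable_om_per_mwh

# Hypothetical inputs: $1200/kW capital, $20/kW-yr fixed O&M, 25% capacity factor
print(round(simple_lcoe(1200, 20, 0.25), 2))  # $/MWh
```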

  10. Principles of Stagewise Separation Process Calculations: A Simple Algebraic Approach Using Solvent Extraction.

    ERIC Educational Resources Information Center

    Crittenden, Barry D.

    1991-01-01

    A simple liquid-liquid equilibrium (LLE) system involving a constant partition coefficient based on solute ratios is used to develop an algebraic understanding of multistage contacting in a first-year separation processes course. This algebraic approach to the LLE system is shown to be operable for the introduction of graphical techniques…

  11. A new algorithm to handle finite nuclear mass effects in electronic calculations: the ISOTOPE program.

    PubMed

    Gonçalves, Cristina P; Mohallem, José R

    2004-11-15

    We report the development of a simple algorithm to modify quantum chemistry codes based on the LCAO procedure, to account for the isotope problem in electronic structure calculations. No extra computations are required compared to standard Born-Oppenheimer calculations. An upgrade of the Gamess package called ISOTOPE is presented, and its applicability is demonstrated in some examples.

  12. A unitary convolution approximation for the impact-parameter dependent electronic energy loss

    NASA Astrophysics Data System (ADS)

    Schiwietz, G.; Grande, P. L.

    1999-06-01

    In this work, we propose a simple method to calculate the impact-parameter dependence of the electronic energy loss of bare ions for all impact parameters. This perturbative convolution approximation (PCA) is based on first-order perturbation theory, and thus, it is only valid for fast particles with low projectile charges. Using Bloch's stopping-power result and a simple scaling, we get rid of the restriction to low charge states and derive the unitary convolution approximation (UCA). Results of the UCA are then compared with full quantum-mechanical coupled-channel calculations for the impact-parameter dependent electronic energy loss.

  13. A simple formula for predicting claw volume of cattle.

    PubMed

    Scott, T D; Naylor, J M; Greenough, P R

    1999-11-01

    The object of this study was to develop a simple method for accurately calculating the volume of bovine claws under field conditions. The digits of 30 slaughterhouse beef cattle were examined and the following four linear measurements taken from each pair of claws: (1) the length of the dorsal surface of the claw (Toe); (2) the length of the coronary band (CorBand); (3) the length of the bearing surface (Base); and (4) the height of the claw at the abaxial groove (AbaxGr). Measurements of claw volume using a simple hydrometer were highly repeatable (r^2 = 0.999), and volume could be calculated from the linear measurements using the formula: Claw Volume (cm^3) = (17.192 x Base) + (7.467 x AbaxGr) + (45.270 x CorBand) - 798.5. This formula was found to be accurate (r^2 = 0.88) when compared to volume data derived from a hydrometer displacement procedure. The front claws occupied 54% of the total volume compared to 46% for the hind claws. Copyright 1999 Harcourt Publishers Ltd.
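
    The regression formula transcribes directly into code; the measurements below are hypothetical (note the Toe length was recorded in the study but does not appear in the final formula):

```python
def claw_volume_cm3(base_cm: float, abax_gr_cm: float, cor_band_cm: float) -> float:
    """Bovine claw volume from three linear measurements (regression formula)."""
    return 17.192 * base_cm + 7.467 * abax_gr_cm + 45.270 * cor_band_cm - 798.5

# Hypothetical field measurements (cm):
print(round(claw_volume_cm3(base_cm=25.0, abax_gr_cm=8.0, cor_band_cm=18.0), 1))
```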

  14. General Procedure for the Easy Calculation of pH in an Introductory Course of General or Analytical Chemistry

    ERIC Educational Resources Information Center

    Cepriá, Gemma; Salvatella, Luis

    2014-01-01

    All pH calculations for simple acid-base systems used in introductory courses on general or analytical chemistry can be carried out by using a general procedure requiring the use of predominance diagrams. In particular, the pH is calculated as the sum of an independent term equaling the average pK_a values of the acids involved in the…
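
    The full procedure requires the predominance diagrams described in the paper; one familiar special case of "averaging pK_a values" is the classic shortcut for an amphiprotic species, sketched here with carbonic acid constants:

```python
def amphiprotic_ph(pka1: float, pka2: float) -> float:
    """Classic shortcut for an amphiprotic species: pH ~ (pKa1 + pKa2) / 2."""
    return (pka1 + pka2) / 2

# Sodium bicarbonate solution (carbonic acid pKa1 = 6.35, pKa2 = 10.33):
print(round(amphiprotic_ph(6.35, 10.33), 2))  # ~8.34
```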

  15. A common base method for analysis of qPCR data and the application of simple blocking in qPCR experiments.

    PubMed

    Ganger, Michael T; Dietz, Geoffrey D; Ewing, Sarah J

    2017-12-01

    qPCR has established itself as the technique of choice for the quantification of gene expression. Procedures for conducting qPCR have received significant attention; however, more rigorous approaches to the statistical analysis of qPCR data are needed. Here we develop a mathematical model, termed the Common Base Method, for analysis of qPCR data based on threshold cycle values (Cq) and efficiencies of reactions (E). The Common Base Method keeps all calculations in the log scale as long as possible by working with log10(E) · Cq, which we call the efficiency-weighted Cq value; subsequent statistical analyses are then applied in the log scale. We show how efficiency-weighted Cq values may be analyzed using a simple paired or unpaired experimental design and develop blocking methods to help reduce unexplained variation. The Common Base Method has several advantages. It allows for the incorporation of well-specific efficiencies and multiple reference genes. The method does not necessitate the pairing of samples that must be performed using traditional analysis methods in order to calculate relative expression ratios. Our method is also simple enough to be implemented in any spreadsheet or statistical software without additional scripts or proprietary components.
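    A minimal sketch of the core transform, assuming a single target gene with a known amplification efficiency E; the Cq values below are invented for illustration, not from the paper:

```python
import math

def weighted_cq(efficiency, cq):
    """Efficiency-weighted Cq: log10(E) * Cq, keeping the value in log scale."""
    return math.log10(efficiency) * cq

# Hypothetical paired samples: control vs. treatment, perfect doubling (E = 2).
cq_control, cq_treatment = 25.0, 24.0
delta_log = weighted_cq(2.0, cq_control) - weighted_cq(2.0, cq_treatment)

# Only at the very end leave log scale to report a fold change.
fold_change = 10 ** delta_log
print(round(fold_change, 3))  # 2.0
```

A one-cycle advantage at E = 2 corresponds, as expected, to a two-fold expression difference.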

  16. [FQA: A method for floristic quality assessment based on conservatism of plant species].

    PubMed

    Cao, Li Juan; He, Ping; Wang, Mi; Xui, Jie; Ren, Ying

    2018-04-01

    FQA, which uses the conservatism of plant species for particular habitats and the species richness of plant communities, is a rapid method for the assessment of habitat quality. The method is based on the species composition of quadrats and on coefficients of conservatism that are assigned to species by experts. A Floristic Quality Index (FQI), which reflects the vegetation integrity and degradation of a site, can be calculated by a simple formula and used for space-time comparison of habitat quality. It has been widely used in more than ten countries, including the United States and Canada. This paper presents the principle, calculation formulas and application cases of the method, with the aim of providing a simple, repeatable and comparable method of assessing habitat quality for ecological managers and researchers.
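    In its commonly cited form (an assumption here, since the abstract does not spell out the formula), the FQI multiplies the mean coefficient of conservatism by the square root of species richness:

```python
import math

def floristic_quality_index(conservatism_coefficients):
    """FQI = mean(C) * sqrt(N), where C are per-species coefficients of
    conservatism (typically 0-10, assigned by experts) and N is species
    richness. This is the commonly cited form, assumed here."""
    n = len(conservatism_coefficients)
    mean_c = sum(conservatism_coefficients) / n
    return mean_c * math.sqrt(n)

# Hypothetical quadrat with four species.
print(floristic_quality_index([5, 3, 7, 1]))  # 8.0
```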

  17. Gas flow calculation method of a ramjet engine

    NASA Astrophysics Data System (ADS)

    Kostyushin, Kirill; Kagenov, Anuar; Eremin, Ivan; Zhiltsov, Konstantin; Shuvarikov, Vladimir

    2017-11-01

    In the present study, a calculation methodology for the gas dynamics equations in a ramjet engine is presented. The algorithm is based on Godunov's scheme. For the realization of the calculation algorithm, a data storage system is offered; the system does not depend on mesh topology and allows the use of computational meshes with an arbitrary number of cell faces. An algorithm for building a block-structured grid is given. The calculation algorithm is implemented in the software package "FlashFlow". The software package is verified on calculations of simple air intake configurations and scramjet models.

  18. A simple web-based risk calculator (www.anastomoticleak.com) is superior to the surgeon's estimate of anastomotic leak after colon cancer resection.

    PubMed

    Sammour, T; Lewis, M; Thomas, M L; Lawrence, M J; Hunter, A; Moore, J W

    2017-01-01

    Anastomotic leak can be a devastating complication, and early prediction is difficult. The aim of this study is to prospectively validate a simple anastomotic leak risk calculator and compare its predictive value with the estimate of the primary operating surgeon. Consecutive patients undergoing elective or emergency colon cancer surgery with a primary anastomosis over a 1-year period were prospectively included. A recently published anastomotic leak risk nomogram was converted to an online calculator ( www.anastomoticleak.com ). The calculator-derived risk of anastomotic leak and the risk estimated by the primary operating surgeon were recorded at the completion of surgery. The primary outcome was anastomotic leak within 90 days as defined by previously published criteria. Area under receiver operating characteristic curve analysis (AUROC) was performed for both risk estimates. A total of 105 patients were screened for inclusion during the study period, of whom 83 met the inclusion criteria. The overall anastomotic leak rate was 9.6%. The anastomotic leak calculator was highly predictive of anastomotic leak (AUROC 0.84, P = 0.002), whereas the surgeon estimate was not predictive (AUROC 0.40, P = 0.243). A simple anastomotic leak risk calculator is significantly better at predicting anastomotic leak than the estimate of the primary surgeon. Further external validation on a larger data set is required.

  19. The Effectiveness of the Component Impact Test Method for the Side Impact Injury Assessment of the Door Trim

    NASA Astrophysics Data System (ADS)

    Youn, Younghan; Koo, Jeong-Seo

    The complete evaluation of the side vehicle structure and occupant protection is only possible by means of a full-scale side impact crash test. However, auto parts manufacturers such as door trim makers cannot conduct such a test, especially while the vehicle is still under development. The main objective of this study is to obtain design guidelines from a simple component-level impact test. The relationship between the target absorption energy and the impactor speed was examined using the energy absorbed by the door trim, since each vehicle type requires a different energy level on the door trim. A simple impact test method was developed to estimate abdominal injury by measuring the reaction force of the impactor; the reaction force is converted to an energy level by the proposed formula. The target absorption energy for the door trim alone and the impact speed of the simple impactor are derived theoretically from the conservation of energy. With the calculated speed of the dummy and the effective mass of the abdomen, the energy allocated to the abdomen area of the door trim was calculated. The impactor speed can then be calculated from the equivalent energy absorbed by the door trim during the full crash test. The proposed design procedure for the door trim using the simple impact test method was demonstrated by evaluating abdominal injury. This paper also describes a study conducted to determine the sensitivity of several design factors for reducing abdominal injury values using an orthogonal array matrix. In conclusion, based on theoretical considerations and empirical test data, the main objective, standardization of door trim design using the simple impact test method, was achieved.

  20. The vulnerability of electric equipment to carbon fibers of mixed lengths: An analysis

    NASA Technical Reports Server (NTRS)

    Elber, W.

    1980-01-01

    The susceptibility of a stereo amplifier to damage from a spectrum of lengths of graphite fibers was calculated. A simple analysis was developed by which such calculations can be based on test results with fibers of uniform lengths. A statistical analysis was applied for the conversion of data for various logical failure criteria.

  1. An infinite-order two-component relativistic Hamiltonian by a simple one-step transformation.

    PubMed

    Ilias, Miroslav; Saue, Trond

    2007-02-14

    The authors report the implementation of a simple one-step method for obtaining an infinite-order two-component (IOTC) relativistic Hamiltonian using matrix algebra. They apply the IOTC Hamiltonian to calculations of excitation and ionization energies as well as electric and magnetic properties of the radon atom. The results are compared to corresponding calculations using identical basis sets and based on the four-component Dirac-Coulomb Hamiltonian as well as Douglas-Kroll-Hess and zeroth-order regular approximation Hamiltonians, all implemented in the DIRAC program package, thus allowing a comprehensive comparison of relativistic Hamiltonians within the finite basis approximation.

  2. Cosmic microwave background radiation anisotropies in brane worlds.

    PubMed

    Koyama, Kazuya

    2003-11-28

    We propose a new formulation to calculate the cosmic microwave background (CMB) spectrum in the Randall-Sundrum two-brane model based on recent progress in solving the bulk geometry using a low energy approximation. The evolution of the anisotropic stress imprinted on the brane by the 5D Weyl tensor is calculated. An impact of the dark radiation perturbation on the CMB spectrum is investigated in a simple model assuming an initially scale-invariant adiabatic perturbation. The dark radiation perturbation induces isocurvature perturbations, but the resultant spectrum can be quite different from the prediction of simple mixtures of adiabatic and isocurvature perturbations due to Weyl anisotropic stress.

  3. Design and implementation of a simple nuclear power plant simulator

    NASA Astrophysics Data System (ADS)

    Miller, William H.

    1983-02-01

    A simple PWR nuclear power plant simulator has been designed and implemented on a minicomputer system. The system is intended for student use in understanding the power operation of a nuclear power plant. A PDP-11 minicomputer calculates reactor parameters in real time, using a graphics terminal to display the results and a keyboard and joystick for control functions. Plant parameters calculated by the model include the core reactivity (based upon control rod positions, soluble boron concentration and reactivity feedback effects), the total core power, the axial core power distribution, the temperature and pressure in the primary and secondary coolant loops, etc.

  4. The induced electric field due to a current transient

    NASA Astrophysics Data System (ADS)

    Beck, Y.; Braunstein, A.; Frankental, S.

    2007-05-01

    Calculations and measurements of the electric fields induced by a lightning strike are important for understanding the phenomenon and developing effective protection systems. In this paper, a novel approach to the calculation of the electric fields due to lightning strikes, using a relativistic approach, is presented. The approach is based on a known current wave-pair model representing the lightning current wave. The model describes the lightning current wave either at the first stage of the descending charge wave from the cloud or at the later stage of the return stroke. The electric fields computed are cylindrically symmetric. A simplified method for the calculation of the electric field is achieved by using special relativity theory and relativistic considerations. The proposed approach is based on simple expressions (applying Coulomb's law) rather than the much more complicated partial differential equations based on Maxwell's equations. A straightforward method of calculating the electric field due to a lightning strike, modelled as a negative-positive (NP) wave-pair, is obtained by using special relativity to calculate the 'velocity field' and relativistic concepts to calculate the 'acceleration field'. These fields are the basic elements required for calculating the total field resulting from the current wave-pair model. Moreover, a simpler modified method using sub-models is presented. The sub-models are filaments of either static charges or charges moving at constant velocity; combining these simple sub-models yields the total wave-pair model. The results fully agree with those obtained by solving Maxwell's equations for the problem discussed.

  5. The Digital Shoreline Analysis System (DSAS) Version 4.0 - An ArcGIS extension for calculating shoreline change

    USGS Publications Warehouse

    Thieler, E. Robert; Himmelstoss, Emily A.; Zichichi, Jessica L.; Ergul, Ayhan

    2009-01-01

    The Digital Shoreline Analysis System (DSAS) version 4.0 is a software extension to ESRI ArcGIS v.9.2 and above that enables a user to calculate shoreline rate-of-change statistics from multiple historic shoreline positions. A user-friendly interface of simple buttons and menus guides the user through the major steps of shoreline change analysis. Components of the extension and user guide include (1) instruction on the proper way to define a reference baseline for measurements, (2) automated and manual generation of measurement transects and metadata based on user-specified parameters, and (3) output of calculated rates of shoreline change and other statistical information. DSAS computes shoreline rates of change using four different methods: (1) endpoint rate, (2) simple linear regression, (3) weighted linear regression, and (4) least median of squares. The standard error, correlation coefficient, and confidence interval are also computed for the simple and weighted linear-regression methods. The results of all rate calculations are output to a table that can be linked to the transect file by a common attribute field. DSAS is intended to facilitate the shoreline change-calculation process and to provide rate-of-change information and the statistical data necessary to establish the reliability of the calculated results. The software is also suitable for any generic application that calculates positional change over time, such as assessing rates of change of glacier limits in sequential aerial photos, river edge boundaries, land-cover changes, and so on.
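    Two of the four DSAS rate methods are easy to sketch. The shoreline positions below (meters seaward of an arbitrary baseline, per survey year) are invented for illustration; this is not DSAS code:

```python
def endpoint_rate(years, positions):
    """End point rate: net shoreline movement between the oldest and most
    recent surveys divided by the elapsed time."""
    return (positions[-1] - positions[0]) / (years[-1] - years[0])

def linear_regression_rate(years, positions):
    """Simple (ordinary least squares) linear regression rate using all
    shoreline positions, not just the two endpoints."""
    n = len(years)
    mean_t = sum(years) / n
    mean_x = sum(positions) / n
    num = sum((t - mean_t) * (x - mean_x) for t, x in zip(years, positions))
    den = sum((t - mean_t) ** 2 for t in years)
    return num / den

years = [1950, 1970, 1990, 2010]
positions = [0.0, 9.0, 21.0, 30.0]  # hypothetical transect measurements
print(endpoint_rate(years, positions))                    # 0.5 m/yr
print(round(linear_regression_rate(years, positions), 3))  # 0.51 m/yr
```

The endpoint rate ignores the interior surveys; the regression rate uses all of them, which is why DSAS also reports the regression's standard error and confidence interval.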

  6. Calculation of Hugoniot properties for shocked nitromethane based on the improved Tsien's EOS

    NASA Astrophysics Data System (ADS)

    Zhao, Bo; Cui, Ji-Ping; Fan, Jing

    2010-06-01

    We have calculated the Hugoniot properties of shocked nitromethane based on the improved Tsien's equation of state (EOS), which is optimized using "exact" numerical molecular dynamics data at high temperatures and pressures. Comparisons of the results calculated with the improved Tsien's EOS against the existing experimental data and direct simulations show that the improved Tsien's EOS behaves very well in many respects. Because of its simple analytical form, the improved Tsien's EOS can prospectively be used to study condensed explosive detonation coupled with chemical reaction.

  7. Icing Branch Current Research Activities in Icing Physics

    NASA Technical Reports Server (NTRS)

    Vargas, Mario

    2009-01-01

    Current developments: A grid block transformation scheme, which allows the input of grids in arbitrary reference frames, the use of mirror planes, and grids with relative velocities, has been developed. A simple ice crystal and sand particle bouncing scheme has been included. An SLD splashing model based on that developed by William Wright for the LEWICE 3.2.2 software has been added. A new area-based collection efficiency algorithm will be incorporated, which calculates trajectories from inflow block boundaries to outflow block boundaries. This method will be used for calculating and passing collection efficiency data between blade rows for turbomachinery calculations.

  8. Exposure assessment in health assessments for hand-arm vibration syndrome.

    PubMed

    Mason, H J; Poole, K; Young, C

    2011-08-01

    Assessing past cumulative vibration exposure is part of assessing the risk of hand-arm vibration syndrome (HAVS) in workers exposed to hand-arm vibration and invariably forms part of a medical assessment of such workers. To investigate the strength of relationships between the presence and severity of HAVS and different cumulative exposure metrics obtained from a self-reporting questionnaire. Cumulative exposure metrics were constructed from a tool-based questionnaire applied in a group of HAVS referrals and workplace field studies. These metrics included simple years of vibration exposure, cumulative total hours of all tool use, and differing combinations of acceleration magnitudes for specific tools and their daily use, including the current frequency-weighting method contained in ISO 5349-1:2001. Use of simple years of exposure is a weak predictor of HAVS or its increasing severity. The calculation of cumulative hours across all vibrating tools used is a more powerful predictor. More complex calculations involving likely acceleration data for specific classes of tools, either frequency weighted or not, did not offer a clear further advantage in this dataset. This may be due to the uncertainty associated with workers' recall of their past tool usage or the variability between tools in the magnitude of their vibration emission. Assessing years of exposure or 'latency' in a worker should be replaced by cumulative hours of tool use. This can be readily obtained using a tool-pictogram-based self-reporting questionnaire and a simple spreadsheet calculation.
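    The preferred metric, cumulative hours of tool use, is a straightforward sum over the questionnaire entries. The tool names and figures below are hypothetical, and this is a sketch of the spreadsheet-style calculation rather than the study's instrument:

```python
# Each entry: (tool, hours of use per day, days per year, years used),
# as might be recalled via a tool-pictogram questionnaire.
tool_history = [
    ("angle grinder", 1.5, 100, 8),
    ("impact wrench", 0.5, 200, 5),
    ("chainsaw",      2.0,  50, 3),
]

def cumulative_hours(history):
    """Total lifetime hours of vibrating-tool use summed across all tools."""
    return sum(h_per_day * days * years for _, h_per_day, days, years in history)

print(cumulative_hours(tool_history))  # 2000.0
```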

  9. On the vertical resolution for near-nadir looking spaceborne rain radar

    NASA Astrophysics Data System (ADS)

    Kozu, Toshiaki

    A definition of radar resolution for an arbitrary direction is proposed and used to calculate the vertical resolution for a near-nadir looking spaceborne rain radar. Based on the calculation result, a scanning strategy is proposed which efficiently distributes the measurement time to each angle bin and thus increases the number of independent samples compared with a simple linear scanning.

  10. 5 CFR 1315.17 - Formulas.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ...) Daily simple interest formula. (1) To calculate daily simple interest the following formula may be used... a payment is due on April 1 and the payment is not made until April 11, a simple interest... equation calculates simple interest on any additional days beyond a monthly increment. (3) For example, if...
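    As a generic illustration of daily simple interest (this is not the regulation's own text; the 365-day year and the figures below are assumptions for the example, and the applicable rule's rate and day-count conventions should be used):

```python
def daily_simple_interest(principal, annual_rate, days_late, days_in_year=365):
    """Simple (non-compounding) interest accrued over days_late days.
    The 365-day year is an assumption; substitute the prescribed convention."""
    return principal * annual_rate * days_late / days_in_year

# Example timeline from the entry: payment due April 1, made April 11 -> 10 days.
# Principal and rate here are hypothetical.
print(round(daily_simple_interest(10_000, 0.05, 10), 2))  # 13.7
```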

  11. Dosimetry in x-ray-based breast imaging

    PubMed Central

    Dance, David R; Sechopoulos, Ioannis

    2016-01-01

    The estimation of the mean glandular dose to the breast (MGD) for x-ray based imaging modalities forms an essential part of quality control and is needed for risk estimation and for system design and optimisation. This review considers the development of methods for estimating the MGD for mammography, digital breast tomosynthesis (DBT) and dedicated breast CT (DBCT). Almost all of the methodology used employs Monte Carlo calculated conversion factors to relate the measurable quantity, generally the incident air kerma, to the MGD. After a review of the size and composition of the female breast, the various mathematical models used are discussed, with particular emphasis on models for mammography. These range from simple geometrical shapes, to the more recent complex models based on patient DBCT examinations. The possibility of patient-specific dose estimates is considered as well as special diagnostic views and the effect of breast implants. Calculations using the complex models show that the MGD for mammography is overestimated by about 30% when the simple models are used. The design and uses of breast-simulating test phantoms for measuring incident air kerma are outlined and comparisons made between patient and phantom-based dose estimates. The most widely used national and international dosimetry protocols for mammography are based on different simple geometrical models of the breast, and harmonisation of these protocols using more complex breast models is desirable. PMID:27617767

  13. Comments on the variational modified-hypernetted-chain theory for simple fluids

    NASA Astrophysics Data System (ADS)

    Rosenfeld, Yaakov

    1986-02-01

    The variational modified-hypernetted-chain (VMHNC) theory, based on the approximation of universality of the bridge functions, is reformulated. The new formulation includes recent calculations by Lado and by Lado, Foiles, and Ashcroft as two stages in a systematic approach, which is analyzed. A variational iterative procedure for solving the exact (diagrammatic) equations for the fluid structure, formally identical to the VMHNC, is described, featuring the theory of simple classical fluids as a one-iteration theory. An accurate method for calculating the pair structure for a given potential, and for inverting structure-factor data to obtain the potential and the thermodynamic functions, follows from our analysis.

  14. A Very Simple Method to Calculate the (Positive) Largest Lyapunov Exponent Using Interval Extensions

    NASA Astrophysics Data System (ADS)

    Mendes, Eduardo M. A. M.; Nepomuceno, Erivelton G.

    2016-12-01

    In this letter, a very simple method to calculate the positive Largest Lyapunov Exponent (LLE) based on the concept of interval extensions and using the original equations of motion is presented. The exponent is estimated from the slope of the line derived from the lower bound error when considering two interval extensions of the original system. It is shown that the algorithm is robust, fast and easy to implement and can be considered as alternative to other algorithms available in the literature. The method has been successfully tested in five well-known systems: Logistic, Hénon, Lorenz and Rössler equations and the Mackey-Glass system.

  15. Comparison of ACCENT 2000 Shuttle Plume Data with SIMPLE Model Predictions

    NASA Astrophysics Data System (ADS)

    Swaminathan, P. K.; Taylor, J. C.; Ross, M. N.; Zittel, P. F.; Lloyd, S. A.

    2001-12-01

    The JHU/APL Stratospheric IMpact of PLume Effluents (SIMPLE) model was employed to analyze the trace-species in situ composition data collected during the ACCENT 2000 intercepts of the space shuttle Space Transportation System (STS) rocket plume as a function of time and radial location within the cold plume. The SIMPLE model is initialized using predictions for species deposition calculated with an afterburning model based on standard TDK/SPP nozzle and SPF plume flowfield codes with an expanded chemical kinetics scheme. The time-dependent ambient stratospheric chemistry is fully coupled to the plume species evolution, whose transport is based on empirically derived diffusion. Model/data comparisons are encouraging, capturing the observed local ozone recovery times as well as the overall morphology of the chlorine chemistry.

  16. Determination of stress intensity factors for interface cracks under mixed-mode loading

    NASA Technical Reports Server (NTRS)

    Naik, Rajiv A.; Crews, John H., Jr.

    1992-01-01

    A simple technique was developed using conventional finite element analysis to determine stress intensity factors, K1 and K2, for interface cracks under mixed-mode loading. The technique involves the calculation of crack-tip stresses using non-singular finite elements. These stresses are then combined and used in a linear regression procedure to calculate K1 and K2. The technique was demonstrated by calculating K1 and K2 for three different bimaterial combinations. For the normal loading case, the K's were within 2.6 percent of an exact solution. The normalized K's under shear loading were shown to be related to the normalized K's under normal loading. Based on these relations, a simple equation was derived for calculating K1 and K2 under mixed-mode loading from knowledge of the K's under normal loading. The equation was verified by computing the K's for a mixed-mode case with equal normal and shear loading. The agreement between the exact and finite element solutions is within 3.7 percent. This study provides a simple procedure to compute the K2/K1 ratio, which has been used to characterize the stress state at the crack tip for various combinations of materials and loadings. Tests conducted over a range of K2/K1 ratios could be used to fully characterize interface fracture toughness.

  17. A study on the sensitivity of self-powered neutron detectors (SPNDs)

    NASA Astrophysics Data System (ADS)

    Lee, Wanno; Cho, Gyuseong; Kim, Kwanghyun; Kim, Hee Joon; Choi, Yuseon; Park, Moon Chu; Kim, Soongpyung

    2001-08-01

    Self-powered neutron detectors (SPNDs) are widely used in reactors to monitor neutron flux. While they have several advantages, such as small size and the relatively simple electronics required for their use, they have some intrinsic problems (a low output current, a slow response time, and a rapid change of sensitivity) that make them difficult to use over the long term. Monte Carlo simulation was used to calculate the escape probability as a function of the birth position of the emitted beta particle for the geometry of rhodium-based SPNDs. A simple numerical method calculated the initial generation rate of beta particles and the change of the generation rate due to rhodium burnup. Using the results of the simulation and the simple numerical method, the burnup profile of the rhodium number density and the neutron sensitivity were calculated as functions of burnup time in reactors. This method was verified by comparison with other papers and with data from YGN 3,4 (Young Gwang Nuclear plants 3 and 4) on the initial sensitivity. In addition, to improve some properties of the rhodium-based SPNDs currently in use, a modified geometry is proposed. The proposed tube-type geometry is able to increase the initial sensitivity by increasing the escape probability. The escape probability was calculated for varying insulator thicknesses, and the solid type was compared with the tube type at each thickness. The method used here can be applied to the analysis and design of other types of SPNDs.

  18. Conformational equilibria of alkanes in aqueous solution: relationship to water structure near hydrophobic solutes.

    PubMed Central

    Ashbaugh, H S; Garde, S; Hummer, G; Kaler, E W; Paulaitis, M E

    1999-01-01

    Conformational free energies of butane, pentane, and hexane in water are calculated from molecular simulations with explicit waters and from a simple molecular theory in which the local hydration structure is estimated based on a proximity approximation. This proximity approximation uses only the two nearest carbon atoms on the alkane to predict the local water density at a given point in space. Conformational free energies of hydration are subsequently calculated using a free energy perturbation method. Quantitative agreement is found between the free energies obtained from simulations and theory. Moreover, free energy calculations using this proximity approximation are approximately four orders of magnitude faster than those based on explicit water simulations. Our results demonstrate the accuracy and utility of the proximity approximation for predicting water structure as the basis for a quantitative description of n-alkane conformational equilibria in water. In addition, the proximity approximation provides a molecular foundation for extending predictions of water structure and hydration thermodynamic properties of simple hydrophobic solutes to larger clusters or assemblies of hydrophobic solutes. PMID:10423414

  19. Intra-individual reaction time variability and all-cause mortality over 17 years: a community-based cohort study.

    PubMed

    Batterham, Philip J; Bunce, David; Mackinnon, Andrew J; Christensen, Helen

    2014-01-01

    Very few studies have examined the association between intra-individual reaction time variability and subsequent mortality. Furthermore, the ability of simple measures of variability to predict mortality has not been compared with more complex measures. In a prospective cohort study, 896 community-based Australian adults aged 70+ were interviewed up to four times from 1990 to 2002, with vital status assessed until June 2007. From this cohort, 770-790 participants were included in Cox proportional hazards regression models of survival. Vital status and time in study were used to conduct survival analyses. The mean reaction time and three measures of intra-individual reaction time variability were calculated separately across 20 trials of simple and choice reaction time tasks. Models were adjusted for a range of demographic, physical health and mental health measures. Greater intra-individual simple reaction time variability, as assessed by the raw standard deviation (raw SD), coefficient of variation (CV) or the intra-individual standard deviation (ISD), was strongly associated with an increased hazard of all-cause mortality in adjusted Cox regression models. The mean reaction time had no significant association with mortality. Intra-individual variability in simple reaction time appears to have a robust association with mortality over 17 years. Health professionals such as neuropsychologists may benefit in their detection of neuropathology by supplementing neuropsychiatric testing with the straightforward process of testing simple reaction time and calculating the raw SD or CV.
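    Two of the simple variability measures are easy to compute from a set of trials; the reaction times below (in ms, three made-up trials rather than the study's 20) are invented for illustration:

```python
import statistics

def raw_sd(rts):
    """Raw intra-individual standard deviation across trials (sample SD)."""
    return statistics.stdev(rts)

def coefficient_of_variation(rts):
    """CV: the raw SD scaled by the individual's mean reaction time."""
    return statistics.stdev(rts) / statistics.mean(rts)

trials_ms = [200, 220, 240]  # hypothetical simple reaction times
print(raw_sd(trials_ms))                              # 20.0
print(round(coefficient_of_variation(trials_ms), 4))  # 0.0909
```

The CV corrects for the finding that slower responders also tend to be more variable, which is why both scaled and unscaled measures were examined.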

  20. Levelized Cost of Energy Calculator | Energy Analysis | NREL

    Science.gov Websites

    The levelized cost of energy (LCOE) calculator provides a simple calculator for utility-scale projects; additional factors need to be included for a thorough analysis. To estimate a simple cost of energy, use the slider controls.
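    The standard discounted-cash-flow definition behind any LCOE calculator is total discounted cost divided by total discounted energy. This is a generic sketch with invented numbers, not NREL's implementation:

```python
def lcoe(costs, energies, discount_rate):
    """LCOE = sum_t cost_t / (1+r)^t  /  sum_t energy_t / (1+r)^t,
    with costs in $ and energies in kWh, indexed by year t = 0, 1, ..."""
    disc_cost = sum(c / (1 + discount_rate) ** t for t, c in enumerate(costs))
    disc_energy = sum(e / (1 + discount_rate) ** t for t, e in enumerate(energies))
    return disc_cost / disc_energy

# Hypothetical plant: $1000 capital in year 0, $50/yr O&M for two years,
# producing 400 kWh/yr starting in year 1, at a 5% discount rate.
print(round(lcoe([1000, 50, 50], [0, 400, 400], 0.05), 4))  # $/kWh
```

Discounting both the cost and the energy streams is what makes the metric comparable across technologies with very different capital and operating profiles.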

  1. Program helps quickly calculate deviated well path

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gardner, M.P.

    1993-11-22

    A BASIC computer program quickly calculates the angle and measured depth of a simple directional well given only the true vertical depth and total displacement of the target. Many petroleum engineers and geologists need a quick, easy method to calculate the angle and measured depth necessary to reach a target in a proposed deviated well bore. Too many of the existing programs are large and require much input data. The drilling literature is full of equations and methods to calculate the course of well paths from surveys taken after a well is drilled. Very little information, however, covers how to calculate well bore trajectories for proposed wells from limited data. Furthermore, many of the equations are quite complex and difficult to use. A figure lists a computer program with the equations to calculate the well bore trajectory necessary to reach a given displacement and true vertical depth (TVD) for a simple build plan. It can be run on an IBM-compatible computer with MS-DOS version 5 or higher and QBasic, or any BASIC that does not require line numbers. The QBasic 4.5 compiler will also run the program. The equations are based on conventional geometry and trigonometry.
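    For the simplest case, a straight slant path from surface to the target, the angle and measured depth follow from basic trigonometry. This sketch ignores the curved build section that the BASIC program's build plan handles, and the figures are hypothetical:

```python
import math

def slant_well(tvd_ft, displacement_ft):
    """Angle from vertical (degrees) and measured depth (ft) of a straight
    slant path to the target; simplified, with no build section."""
    angle = math.degrees(math.atan2(displacement_ft, tvd_ft))
    measured_depth = math.hypot(tvd_ft, displacement_ft)
    return angle, measured_depth

angle, md = slant_well(tvd_ft=1000.0, displacement_ft=1000.0)
print(round(angle, 1), round(md, 1))  # 45.0 1414.2
```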

  2. The reduced transition probabilities for excited states of rare-earths and actinide even-even nuclei

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ghumman, S. S.

The theoretical B(E2) ratios have been calculated using the DF, DR and Krutov models. A simple method based on the work of Arima and Iachello is used to calculate the reduced transition probabilities within the SU(3) limit of the IBA-I framework. The reduced E2 transition probabilities from the second excited states of rare-earth and actinide even-even nuclei, calculated from experimental energies and intensities from recent data, have been found to compare better with those calculated using the Krutov model and the SU(3) limit of the IBA than with the DR and DF models.

  3. SU-E-T-24: A Simple Correction-Based Method for Independent Monitor Unit (MU) Verification in Monte Carlo (MC) Lung SBRT Plans

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pokhrel, D; Badkul, R; Jiang, H

    2014-06-01

Purpose: Lung SBRT uses hypofractionated doses in small non-IMRT fields with tissue-heterogeneity-corrected plans. An independent MU verification is mandatory for safe and effective delivery of the treatment plan. This report compares planned MUs obtained from the iPlan XVMC algorithm against a spreadsheet-based hand calculation using the most commonly used simple TMR-based method. Methods: Treatment plans of 15 patients who underwent MC-based lung SBRT to 50 Gy in 5 fractions for PTV V100% = 95% were studied. The ITV was delineated on MIP images based on 4D-CT scans. PTVs (ITV + 5 mm margins) ranged from 10.1 to 106.5 cc (average = 48.6 cc). MC SBRT plans were generated using a combination of non-coplanar conformal arcs/beams with the iPlan XVMC algorithm (BrainLAB iPlan ver. 4.1.2) for a Novalis-TX consisting of micro-MLCs and a 6 MV-SRS (1000 MU/min) beam. These plans were re-computed using the heterogeneity-corrected Pencil-Beam (PB-hete) algorithm without changing any beam parameters, such as MLCs/MUs. The dose ratio PB-hete/MC gave beam-by-beam inhomogeneity correction factors (ICFs): the individual correction. For the independent second check, MC MUs were verified using the TMR-based hand calculation, yielding an average ICF (the average correction); the TMR-based hand calculation systematically underestimated MC MUs by ∼5%. Also, the first 10 MC plans were verified with ion-chamber measurements in a homogeneous phantom. Results: For both beams/arcs, the mean PB-hete dose was systematically overestimated by 5.5±2.6% and the mean hand-calculated MUs were systematically underestimated by 5.5±2.5% compared to XVMC. With individual corrections, mean hand-calculated MUs matched XVMC within -0.3±1.4%/0.4±1.4% for beams/arcs, respectively. After the average 5% correction, hand-calculated MUs matched XVMC within 0.5±2.5%/0.6±2.0% for beams/arcs, respectively. A small dependence on tumor volume (TV)/field size (FS) was also observed. Ion-chamber measurements were within ±3.0%.
Conclusion: PB-hete overestimates dose to lung tumors relative to XVMC. The XVMC algorithm is much more complex and accurate with tissue heterogeneities. Measurement at the machine is time-consuming and needs extra resources; moreover, direct measurement of dose for heterogeneous treatment plans is not yet clinically practiced. This simple correction-based method was very helpful for the independent second check of MC lung SBRT plans and is routinely used in our clinic. A look-up table can be generated to include TV/FS dependence in the ICFs.
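The average-correction step described above amounts to scaling the TMR hand-calculated MUs by a single inhomogeneity correction factor; a minimal sketch, with the ~5% figure taken from the abstract (illustrative only, not the clinic's spreadsheet):

```python
def corrected_mu(mu_tmr, icf=1.05):
    """Scale a TMR-based hand-calculated MU by an inhomogeneity
    correction factor (ICF). The abstract's average correction is ~5%
    (ICF = 1.05, since the TMR hand calculation underestimated the
    Monte Carlo MUs by ~5%); beam-by-beam ICFs may be used instead."""
    return mu_tmr * icf

def percent_diff(mu_check, mu_mc):
    """Percent difference of the independent check MU vs. the MC MU."""
    return 100.0 * (mu_check - mu_mc) / mu_mc
```

A second check would flag any beam whose corrected percent difference falls outside the clinic's tolerance (the abstract reports residuals of roughly ±2-2.5% after the average correction).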

  4. Calculation of the octanol-water partition coefficient of armchair polyhex BN nanotubes

    NASA Astrophysics Data System (ADS)

    Mohammadinasab, E.; Pérez-Sánchez, H.; Goodarzi, M.

    2017-12-01

A predictive model for determining the octanol-water partition coefficient (log P) of armchair polyhex BN nanotubes using simple descriptors was built. The relationships between log P and quantum chemical descriptors, electric moments, and topological indices of armchair polyhex BN nanotubes with various lengths and fixed circumference are presented. The electric moments and physico-chemical properties of these nanotubes are calculated using density functional theory.

  5. Estimation of surface temperature in remote pollution measurement experiments

    NASA Technical Reports Server (NTRS)

    Gupta, S. K.; Tiwari, S. N.

    1978-01-01

    A simple algorithm has been developed for estimating the actual surface temperature by applying corrections to the effective brightness temperature measured by radiometers mounted on remote sensing platforms. Corrections to effective brightness temperature are computed using an accurate radiative transfer model for the 'basic atmosphere' and several modifications of this caused by deviations of the various atmospheric and surface parameters from their base model values. Model calculations are employed to establish simple analytical relations between the deviations of these parameters and the additional temperature corrections required to compensate for them. Effects of simultaneous variation of two parameters are also examined. Use of these analytical relations instead of detailed radiative transfer calculations for routine data analysis results in a severalfold reduction in computation costs.

  6. On the tsunami wave-submerged breakwater interaction

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Filianoti, P.; Piscopo, R.

Tsunami wave loads on a submerged rigid breakwater are inertial. This is the result arising from the simple calculation method proposed here, and it is confirmed by comparison with results obtained by other researchers. The method is based on an estimate of the speed drop of the tsunami wave passing over the breakwater. The calculation is rigorous for a sinusoidal wave interacting with a rigid submerged obstacle, in the framework of linear wave theory. This new approach gives a useful and simple tool for estimating tsunami loads on submerged breakwaters. An unexpected finding came out of a worked example: assuming the same wave height, storm waves are more dangerous than tsunami waves for the safety against sliding of submerged breakwaters.

  7. A simple accurate chest-compression depth gauge using magnetic coils during cardiopulmonary resuscitation

    NASA Astrophysics Data System (ADS)

    Kandori, Akihiko; Sano, Yuko; Zhang, Yuhua; Tsuji, Toshio

    2015-12-01

This paper describes a new method for calculating chest compression depth and a simple chest-compression gauge for validating the accuracy of the method. The chest-compression gauge has two plates incorporating two magnetic coils, a spring, and an accelerometer. The coils are located at both ends of the spring, and the accelerometer is set on the bottom plate. Waveforms obtained using the magnetic coils (hereafter, "magnetic waveforms"), which are proportional to the compression-force waveforms, and the acceleration waveforms were measured at the same time. The weight factor expressing the relationship between the second derivatives of the magnetic waveforms and the measured acceleration waveforms was calculated. An estimated compression-displacement (depth) waveform was obtained by multiplying the weight factor and the magnetic waveforms. Displacements of two large springs (with similar spring constants) within a thorax and displacements of a cardiopulmonary resuscitation training manikin were measured using the gauge to validate the accuracy of the calculated waveform. A laser-displacement detection system was used to compare the real displacement waveform and the estimated waveform. Intraclass correlation coefficients (ICCs) between the real displacement from the laser system and the estimated displacement waveforms were calculated. The estimated error of the compression depth was within 2 mm (<1 standard deviation). All ICCs (two springs and a manikin) were above 0.85 (0.99 in the case of one of the springs). The developed simple chest-compression gauge, based on a new calculation method, provides an accurate compression depth (estimation error < 2 mm).
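The calculation scheme described above can be sketched as a least-squares fit of a scalar weight factor; this is an illustrative reconstruction under simplifying assumptions, not the authors' exact algorithm:

```python
import numpy as np

def estimate_depth(magnetic, accel, dt):
    """Estimate the compression-displacement waveform from the coil
    ("magnetic") waveform and a simultaneously measured acceleration
    waveform: fit a scalar weight factor w so that w * d2(magnetic)/dt2
    best matches accel in the least-squares sense, then return
    depth = w * magnetic."""
    magnetic = np.asarray(magnetic, dtype=float)
    accel = np.asarray(accel, dtype=float)
    d2m = np.gradient(np.gradient(magnetic, dt), dt)  # second derivative
    w = np.dot(d2m, accel) / np.dot(d2m, d2m)         # least-squares scalar
    return w * magnetic
```

Because the magnetic waveform is proportional to displacement, a single scalar fit against the accelerometer is enough to recover the absolute depth scale without drift-prone double integration.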

  8. Boundary condition computational procedures for inviscid, supersonic steady flow field calculations

    NASA Technical Reports Server (NTRS)

    Abbett, M. J.

    1971-01-01

Results are given of a comparative study of numerical procedures for computing solid-wall boundary points in supersonic inviscid flow calculations. Twenty-five different calculation procedures were tested on two sample problems: a simple expansion wave and a simple compression (two-dimensional steady flow). A simple calculation procedure was developed. The merits and shortcomings of the various procedures are discussed, along with complications for three-dimensional and time-dependent flows.

  9. Energy Expansion for the Period of Anharmonic Oscillators by the Method of Lindstedt-Poincare

    ERIC Educational Resources Information Center

    Fernandez, Francisco M.

    2004-01-01

    A simple, straightforward and efficient method is proposed for the calculation of the period of anharmonic oscillators as an energy series. The approach is based on perturbation theory and the method of Lindstedt-Poincare.
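As a hedged illustration of the kind of series this method produces, the textbook first-order Lindstedt-Poincare result for the quartic (Duffing) oscillator is shown below; this is the standard worked case, not a result taken from the paper itself:

```latex
% Duffing oscillator with small anharmonicity \epsilon and amplitude a:
%   \ddot{x} + x + \epsilon x^{3} = 0
% Lindstedt-Poincare gives, to first order,
\omega = 1 + \tfrac{3}{8}\,\epsilon a^{2} + O(\epsilon^{2}),
\qquad
T = \frac{2\pi}{\omega} \approx 2\pi\left(1 - \tfrac{3}{8}\,\epsilon a^{2}\right).
% Since E \approx a^{2}/2 to leading order, the period as an energy series:
%   T(E) \approx 2\pi\left(1 - \tfrac{3}{4}\,\epsilon E\right) + O(\epsilon^{2})
```

The essential move is expanding both the solution and the frequency in powers of the small parameter and choosing each frequency correction to cancel secular (unbounded) terms.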

  10. A simple method used to evaluate phase-change materials based on focused-ion beam technique

    NASA Astrophysics Data System (ADS)

    Peng, Cheng; Wu, Liangcai; Rao, Feng; Song, Zhitang; Lv, Shilong; Zhou, Xilin; Du, Xiaofeng; Cheng, Yan; Yang, Pingxiong; Chu, Junhao

    2013-05-01

    A nanoscale phase-change line cell based on focused-ion beam (FIB) technique has been proposed to evaluate the electrical property of the phase-change material. Thanks to the FIB-deposited SiO2 hardmask, only one etching step has been used during the fabrication process of the cell. Reversible phase-change behaviors are observed in the line cells based on Al-Sb-Te and Ge-Sb-Te films. The low power consumption of the Al-Sb-Te based cell has been explained by theoretical calculation accompanying with thermal simulation. This line cell is considered to be a simple and reliable method in evaluating the application prospect of a certain phase-change material.

  11. 75 FR 57719 - Federal Acquisition Regulation; TINA Interest Calculations

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-09-22

    ... the term ``simple interest'' as the requirement for calculating interest for TINA cost impacts with.... Revising the date of the clause; and b. Removing from paragraph (e)(1) ``Simple interest'' and adding...) ``Simple interest'' and adding ``Interest compounded daily, as required by 26 U.S.C. 6622,'' in its place...

  12. Comparison of simple additive weighting (SAW) and composite performance index (CPI) methods in employee remuneration determination

    NASA Astrophysics Data System (ADS)

    Karlitasari, L.; Suhartini, D.; Benny

    2017-01-01

The process of determining employee remuneration at PT Sepatu Mas Idaman currently still uses a Microsoft Excel-based spreadsheet containing the criterion values that must be calculated for every employee. This can introduce doubt during the assessment process and therefore makes the process take much longer. Employee remuneration is determined by an assessment team based on predetermined criteria, namely ability to work, human relations, job responsibility, discipline, creativity, work, achievement of targets, and absence. To make the determination of employee remuneration more efficient and effective, the Simple Additive Weighting (SAW) method is used. The SAW method can support decision making for such a case: the alternative whose calculation generates the greatest value is chosen as the best. Besides SAW, the CPI method, a decision-making calculation method based on a performance index, was also used; the SAW method was faster by 89-93% compared with the CPI method. It is therefore expected that this application can serve as evaluation material for employee training and development needs, so that employee performance becomes more optimal.
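The SAW calculation itself is simple enough to sketch in a few lines; the criteria, weights, and scores below are hypothetical:

```python
import numpy as np

def saw_rank(scores, weights, benefit):
    """Simple Additive Weighting: normalize each criterion column
    (benefit criteria as x/max, cost criteria as min/x), then take the
    weighted sum per alternative; the highest total wins."""
    x = np.asarray(scores, dtype=float)
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()                                   # normalize weights
    norm = np.where(benefit, x / x.max(axis=0), x.min(axis=0) / x)
    return norm @ w

# three employees scored on two hypothetical benefit criteria
# (e.g. discipline and achievement of targets)
totals = saw_rank([[80, 70], [90, 80], [70, 90]],
                  weights=[0.6, 0.4], benefit=[True, True])
best = int(np.argmax(totals))   # index of the top-ranked employee
```

Normalizing each column before weighting is what lets criteria measured on different scales be combined into one comparable total.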

  13. Novel shortcut estimation method for regeneration energy of amine solvents in an absorption-based carbon capture process.

    PubMed

    Kim, Huiyong; Hwang, Sung June; Lee, Kwang Soon

    2015-02-03

Among various CO2 capture processes, the aqueous amine-based absorption process is considered the most promising for near-term deployment. However, the performance evaluation of newly developed solvents still requires complex and time-consuming procedures, such as pilot plant tests or the development of a rigorous simulator. The absence of accurate and simple methods for calculating energy performance at an early stage of process development has lengthened, and increased the expense of, the development of economically feasible CO2 capture processes. In this paper, a novel but simple method to reliably calculate the regeneration energy of a standard amine-based carbon capture process is proposed. Careful examination of stripper behavior and exploitation of the energy balance equations around the stripper allow the regeneration energy to be calculated using only vapor-liquid equilibrium and caloric data. The reliability of the proposed method was confirmed by comparison with rigorous simulations for two well-known solvents, monoethanolamine (MEA) and piperazine (PZ). The proposed method can predict the regeneration energy at various operating conditions with greater simplicity, greater speed, and higher accuracy than methods proposed in previous studies. This enables faster and more precise screening of solvents and faster optimization of process variables, and can ultimately accelerate the development of economically deployable CO2 capture processes.

  14. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kieselmann, J; Bartzsch, S; Oelfke, U

Purpose: Microbeam Radiation Therapy is a preclinical method in radiation oncology that modulates radiation fields on a micrometre scale. Dose calculation is challenging due to the arising dose gradients and therapeutically important dose ranges. Monte Carlo (MC) simulations, often used as the gold standard, are computationally expensive and hence too slow for the optimisation of treatment parameters in future clinical applications. On the other hand, conventional kernel-based dose calculation leads to inaccurate results close to material interfaces. The purpose of this work is to overcome these inaccuracies while keeping computation times low. Methods: A point kernel superposition algorithm is modified to account for tissue inhomogeneities. Instead of conventional ray tracing approaches, methods from differential geometry are applied and the space around the primary photon interaction is locally warped. The performance of this approach is compared to MC simulations and a simple convolution algorithm (CA) for two different phantoms and photon spectra. Results: While the peak doses of all dose calculation methods agreed within less than 4% deviation, the proposed approach surpassed the simple convolution algorithm in scatter-dose accuracy by a factor of up to 3. In a treatment geometry similar to possible future clinical situations, differences between Monte Carlo and the differential geometry algorithm were less than 3%. At the same time the calculation time did not exceed 15 minutes. Conclusion: With the developed method it was possible to improve dose calculation relative to the CA method in accuracy, especially at sharp tissue boundaries. While the calculation is more extensive than for the CA method and depends on field size, the typical calculation time for a 20×20 mm² field on a 3.4 GHz processor with 8 GB of RAM remained below 15 minutes.
Parallelisation and optimisation of the algorithm could lead to further significant reductions in calculation time.

  15. A simple method for evaluating the wavefront compensation error of diffractive liquid-crystal wavefront correctors.

    PubMed

    Cao, Zhaoliang; Mu, Quanquan; Hu, Lifa; Lu, Xinghai; Xuan, Li

    2009-09-28

A simple method for evaluating the wavefront compensation error of diffractive liquid-crystal wavefront correctors (DLCWFCs) for atmospheric turbulence correction is reported. A simple formula describing the relationship between pixel number, DLCWFC aperture, quantization level, and atmospheric coherence length was derived based on atmospheric turbulence wavefronts calculated using Kolmogorov atmospheric turbulence theory. It was found that the pixel number across the DLCWFC aperture is a linear function of the telescope aperture and the quantization level, and an exponential function of the atmospheric coherence length. These results are useful for people using DLCWFCs in atmospheric turbulence correction for large-aperture telescopes.

  16. An approximate methods approach to probabilistic structural analysis

    NASA Technical Reports Server (NTRS)

    Mcclung, R. C.; Millwater, H. R.; Wu, Y.-T.; Thacker, B. H.; Burnside, O. H.

    1989-01-01

A major research and technology program in Probabilistic Structural Analysis Methods (PSAM) is currently being sponsored by the NASA Lewis Research Center with Southwest Research Institute as the prime contractor. This program is motivated by the need to accurately predict structural response in an environment where the loadings, the material properties, and even the structure may be considered random. The heart of PSAM is a software package which combines advanced structural analysis codes with a fast probability integration (FPI) algorithm for the efficient calculation of stochastic structural response. The basic idea of PSAM is simple: make an approximate calculation of system response, including calculation of the associated probabilities, with minimal computation time and cost, based on a simplified representation of the geometry, loads, and material. The resulting deterministic solution should give a reasonable and realistic description of performance-limiting system responses, although some error will be inevitable. If the simple model has correctly captured the basic mechanics of the system, however, including the proper functional dependence of stress, frequency, etc. on design parameters, then the response sensitivities calculated may be of significantly higher accuracy.

CALCULATION OF GAMMA SPECTRA IN A PLASTIC SCINTILLATOR FOR ENERGY CALIBRATION AND DOSE COMPUTATION.

    PubMed

    Kim, Chankyu; Yoo, Hyunjun; Kim, Yewon; Moon, Myungkook; Kim, Jong Yul; Kang, Dong Uk; Lee, Daehee; Kim, Myung Soo; Cho, Minsik; Lee, Eunjoong; Cho, Gyuseong

    2016-09-01

Plastic scintillation detectors have practical advantages in the field of dosimetry. Energy calibration of measured gamma spectra is important for dose computation, but it is not simple in plastic scintillators because of their distinctive characteristics and finite resolution. In this study, the gamma spectra in a polystyrene scintillator were calculated for energy calibration and dose computation. Based on the relationship between the energy resolution and the estimated energy broadening effect in the calculated spectra, the gamma spectra were calculated simply, without many iterations. The calculated spectra were in agreement with calculations by an existing method and with measurements.

  18. Readily Accessible and Highly Efficient Ferrocene-Based Amino-Phosphine-Alcohol (f-Amphol) Ligands for Iridium-Catalyzed Asymmetric Hydrogenation of Simple Ketones.

    PubMed

    Yu, Jianfei; Duan, Meng; Wu, Weilong; Qi, Xiaotian; Xue, Peng; Lan, Yu; Dong, Xiu-Qin; Zhang, Xumu

    2017-01-18

We have successfully developed a series of novel and modular ferrocene-based amino-phosphine-alcohol (f-Amphol) ligands and applied them to the iridium-catalyzed asymmetric hydrogenation of various simple ketones, affording the corresponding chiral alcohols with excellent enantioselectivities and conversions (98-99.9% ee, >99% conversion, turnover number up to 200 000). Control experiments and density functional theory (DFT) calculations have shown that the hydroxyl group of the f-Amphol ligands plays a key role in this asymmetric hydrogenation.

  19. Neutron skyshine calculations for the PDX tokamak

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wheeler, F.J.; Nigg, D.W.

    1979-01-01

The Poloidal Divertor Experiment (PDX) at Princeton will be the first operating tokamak to require a substantial radiation shield. The PDX shielding includes a water-filled roof shield over the machine to reduce the air-scattering skyshine dose in the PDX control room and at the site boundary. During the design of this roof shield, a unique method was developed to compute the neutron source emerging from the top of the roof shield for use in Monte Carlo skyshine calculations. The method is based on simple, one-dimensional calculations rather than multidimensional calculations, resulting in considerable savings in computer time and input preparation effort. This method is described.

  20. Validation of DNA-based identification software by computation of pedigree likelihood ratios.

    PubMed

    Slooten, K

    2011-08-01

Disaster victim identification (DVI) can be aided by DNA evidence, by comparing the DNA profiles of unidentified individuals with those of surviving relatives. The DNA evidence is used optimally when such a comparison is done by calculating the appropriate likelihood ratios. Though conceptually simple, the calculations can be quite involved, especially with large pedigrees, precise mutation models, etc. In this article we describe a series of test cases designed to check whether software designed to calculate such likelihood ratios computes them correctly. The cases include both simple and more complicated pedigrees, including inbred ones. We show how to calculate the likelihood ratio numerically and algebraically, including a general mutation model and the possibility of allelic dropout. In Appendix A we show how to derive such algebraic expressions mathematically. We have set up these cases to validate new software, called Bonaparte, which performs pedigree likelihood ratio calculations in a DVI context. Bonaparte has been developed by SNN Nijmegen (The Netherlands) for the Netherlands Forensic Institute (NFI). It is available free of charge for non-commercial purposes (see www.dnadvi.nl for details). Commercial licenses can also be obtained. The software uses Bayesian networks and the junction tree algorithm to perform its calculations.
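As a toy illustration of how such likelihood ratios arise, a single-locus paternity index for an alleged-father/child duo can be written down under idealized assumptions (no mutation, no dropout, Hardy-Weinberg proportions); the software validated in the article handles far more general pedigrees and models:

```python
def duo_paternity_index(child, alleged_father, freq):
    """Single-locus paternity index for an alleged-father/child duo:
    LR = P(child genotype | alleged father is the father)
       / P(child genotype | an unrelated random man is the father).
    Genotypes are tuples of allele labels; freq maps allele -> frequency."""
    def transmit(genotype, allele):
        # P(a man with this genotype passes the allele to his child)
        return genotype.count(allele) / 2.0

    a, b = child
    if a == b:
        # homozygous child: the paternal allele must be a
        return transmit(alleged_father, a) / freq[a]
    # heterozygous child: either allele may be paternal, the other maternal
    num = transmit(alleged_father, a) * freq[b] \
        + transmit(alleged_father, b) * freq[a]
    return num / (2.0 * freq[a] * freq[b])
```

For a heterozygous child sharing one allele with a homozygous alleged father this reduces to the familiar 1/(2p) form; multi-locus LRs are products of such per-locus terms under independence.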

  1. Preparation and Characterization of a Small Library of Thermally-Labile End-Caps for Variable-Temperature Triggering of Self-Immolative Polymers.

    PubMed

    Taimoory, S Maryamdokht; Sadraei, S Iraj; Fayoumi, Rose Anne; Nasri, Sarah; Revington, Matthew; Trant, John F

    2018-04-20

The reaction between furans and maleimides has increasingly become a method of interest, as its reversibility makes it a useful tool for applications ranging from self-healing materials to self-immolative polymers, hydrogels for cell culture, and materials for bone repair. However, most of these applications have relied on simple monosubstituted furans and simple maleimides and have not extensively evaluated the thermal variability inherent in the process that is achievable through simple substrate modification. A small library of cycloadducts suitable for the above applications was prepared, and the temperature dependence of the retro-Diels-Alder processes was determined through in situ ¹H NMR analyses complemented by computational calculations. The practical temperature range of the reported systems extends from 40 to >110 °C. The cycloreversion reactions are more complex than would be expected from simple trends based on frontier molecular orbital analyses of the materials.

  2. Comparative study of the pentamodal property of four potential pentamode microstructures

    NASA Astrophysics Data System (ADS)

    Huang, Yan; Lu, Xuegang; Liang, Gongying; Xu, Zhuo

    2017-03-01

In this paper, a numerical comparative study is presented on the pentamodal property of four potential pentamode microstructures (three based on simple cubic and one on body-centered cubic structures), based on phonon band calculations. The finite-element method is employed to calculate the band structures, and two essential factors, the ratio of bulk modulus B to shear modulus G and the single-mode band gap (SBG), are analyzed to quantitatively evaluate the pentamodal property. The results show that all four structures possess a higher B/G ratio than traditional materials. One of the simple cubic structures exhibits an incomplete SBG, while the other three structures exhibit complete SBGs that decouple the compression and shear waves in all propagation directions. Further parametric analyses are presented, investigating the effects of geometrical and material parameters on the pentamodal property of these structures. This study provides guidelines for the future design of novel pentamode microstructures possessing a high B/G ratio and a low-frequency broadband SBG.

  3. Fault identification of rotor-bearing system based on ensemble empirical mode decomposition and self-zero space projection analysis

    NASA Astrophysics Data System (ADS)

    Jiang, Fan; Zhu, Zhencai; Li, Wei; Zhou, Gongbo; Chen, Guoan

    2014-07-01

    Accurately identifying faults in rotor-bearing systems by analyzing vibration signals, which are nonlinear and nonstationary, is challenging. To address this issue, a new approach based on ensemble empirical mode decomposition (EEMD) and self-zero space projection analysis is proposed in this paper. This method seeks to identify faults appearing in a rotor-bearing system using simple algebraic calculations and projection analyses. First, EEMD is applied to decompose the collected vibration signals into a set of intrinsic mode functions (IMFs) for features. Second, these extracted features under various mechanical health conditions are used to design a self-zero space matrix according to space projection analysis. Finally, the so-called projection indicators are calculated to identify the rotor-bearing system's faults with simple decision logic. Experiments are implemented to test the reliability and effectiveness of the proposed approach. The results show that this approach can accurately identify faults in rotor-bearing systems.

  4. Analysis and calculation by integral methods of laminar compressible boundary-layer with heat transfer and with and without pressure gradient

    NASA Technical Reports Server (NTRS)

    Morduchow, Morris

    1955-01-01

    A survey of integral methods in laminar-boundary-layer analysis is first given. A simple and sufficiently accurate method for practical purposes of calculating the properties (including stability) of the laminar compressible boundary layer in an axial pressure gradient with heat transfer at the wall is presented. For flow over a flat plate, the method is applicable for an arbitrarily prescribed distribution of temperature along the surface and for any given constant Prandtl number close to unity. For flow in a pressure gradient, the method is based on a Prandtl number of unity and a uniform wall temperature. A simple and accurate method of determining the separation point in a compressible flow with an adverse pressure gradient over a surface at a given uniform wall temperature is developed. The analysis is based on an extension of the Karman-Pohlhausen method to the momentum and the thermal energy equations in conjunction with fourth- and especially higher degree velocity and stagnation-enthalpy profiles.

  5. Nailfold capillaroscopy for day-to-day clinical use: construction of a simple scoring modality as a clinical prognostic index for digital trophic lesions.

    PubMed

    Smith, Vanessa; De Keyser, Filip; Pizzorni, Carmen; Van Praet, Jens T; Decuman, Saskia; Sulli, Alberto; Deschepper, Ellen; Cutolo, Maurizio

    2011-01-01

Construction of a simple nailfold videocapillaroscopic (NVC) scoring modality as a prognostic index for digital trophic lesions for day-to-day clinical use. The association with a single, simple, (semi-)quantitatively scored NVC parameter, the mean score of capillary loss, was explored in 71 consecutive patients with systemic sclerosis (SSc), together with the reliability of reducing the number of investigated fields (F32-F16-F8-F4). The cut-off value of the prognostic index (mean score of capillary loss calculated over a reduced number of fields) for present/future digital trophic lesions was selected by receiver operating characteristic (ROC) curve analysis. Reduction in the number of fields for the mean score of capillary loss was reliable from F32 to F8 (intraclass correlation coefficient of F16/F32: 0.97; F8/F32: 0.90). Based on ROC analysis, a prognostic index (mean score of capillary loss as calculated over F8) with a cut-off value of 1.67 is proposed. This value has a sensitivity of 72.22/70.00, a specificity of 70.59/69.77, a positive likelihood ratio of 2.46/2.32 and a negative likelihood ratio of 0.39/0.43 for present/future digital trophic lesions. A simple prognostic index for digital trophic lesions for daily use in SSc clinics is proposed, limited to the mean score of capillary loss as calculated over eight fields (8 fingers, 1 field per finger).
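The proposed index is simple enough to compute directly at the bedside; a sketch using the cut-off reported above (illustrative code with hypothetical per-finger scores, not the authors' implementation):

```python
def digital_lesion_risk(capillary_loss_scores, cutoff=1.67):
    """Prognostic index from the article: the mean semi-quantitative
    capillary-loss score over 8 nailfold fields (8 fingers, 1 field per
    finger); means above the 1.67 cut-off flag risk of present/future
    digital trophic lesions."""
    if len(capillary_loss_scores) != 8:
        raise ValueError("expected one score per finger (8 fields)")
    mean_score = sum(capillary_loss_scores) / 8.0
    return mean_score, mean_score > cutoff

# hypothetical per-finger scores on the semi-quantitative scale
mean_score, at_risk = digital_lesion_risk([2, 2, 2, 2, 1, 2, 2, 2])
```

A flagged result would prompt closer follow-up, consistent with the positive likelihood ratio of roughly 2.3-2.5 reported for the cut-off.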

  6. Shape-based reconstruction for transrectal diffuse optical tomography monitoring of photothermal focal therapy of prostate cancer: simulation studies

    NASA Astrophysics Data System (ADS)

    Weersink, Robert A.; Chaudhary, Sahil; Mayo, Kenwrick; He, Jie; Wilson, Brian C.

    2017-04-01

    We develop and demonstrate a simple shape-based approach for diffuse optical tomographic reconstruction of coagulative lesions generated during interstitial photothermal therapy (PTT) of the prostate. The shape-based reconstruction assumes a simple ellipsoid shape, matching the general dimensions of a cylindrical diffusing fiber used for light delivery in current clinical studies of PTT in focal prostate cancer. The specific requirement is to accurately define the border between the photothermal lesion and native tissue as the photothermal lesion grows, with an accuracy of ≤1 mm, so treatment can be terminated before there is damage to the rectal wall. To demonstrate the feasibility of the shape-based diffuse optical tomography reconstruction, simulated data were generated based on forward calculations in known geometries that include the prostate, rectum, and lesions of varying dimensions. The only source of optical contrast between the lesion and prostate was increased scattering in the lesion, as is typically observed with coagulation. With noise added to these forward calculations, lesion dimensions were reconstructed using the shape-based method. This approach for reconstruction is shown to be feasible and sufficiently accurate for lesions that are within 4 mm from the rectal wall. The method was also robust for irregularly shaped lesions.

  7. PVWatts Version 1 Technical Reference

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dobos, A. P.

    2013-10-01

    The NREL PVWatts(TM) calculator is a web application developed by the National Renewable Energy Laboratory (NREL) that estimates the electricity production of a grid-connected photovoltaic system based on a few simple inputs. PVWatts combines a number of sub-models to predict overall system performance, and makes several hidden assumptions about performance parameters. This technical reference details the individual sub-models, documents assumptions and hidden parameters, and explains the sequence of calculations that yield the final system performance estimation.
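A first-order PVWatts-style estimate can be sketched as a single product; the real calculator uses hourly weather data and several sub-models, and the inputs below are illustrative:

```python
def pv_annual_energy(dc_rating_kw, derate, annual_insolation_kwh_m2):
    """First-order estimate of annual AC energy (kWh) from a
    grid-connected PV system: DC rating x overall derate factor x
    annual plane-of-array insolation expressed as equivalent full-sun
    hours (1 kWh/m2 at 1 kW/m2 ~ 1 sun-hour). The derate lumps
    inverter, wiring, soiling and similar losses into one factor."""
    return dc_rating_kw * derate * annual_insolation_kwh_m2

# illustrative inputs: 4 kW array, 0.77 overall derate, 1700 sun-hours/yr
energy_kwh = pv_annual_energy(4.0, 0.77, 1700.0)
```

The lumped derate is exactly the kind of hidden assumption the technical reference documents; the hourly model replaces it with explicit temperature- and irradiance-dependent sub-models.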

  8. Two-band analysis of hole mobility and Hall factor for heavily carbon-doped p-type GaAs

    NASA Astrophysics Data System (ADS)

    Kim, B. W.; Majerfeld, A.

    1996-02-01

    We first solve a pair of Boltzmann transport equations, based on an interacting two-isotropic-band model, in a general way to obtain transport parameters in terms of the relaxation time. We present a simple method to calculate effective relaxation times, separately for each band, which compensates for the inherent deficiencies of the relaxation-time concept for polar optical-phonon scattering. Formulas for calculating momentum relaxation times in the two-band model are presented for all the major scattering mechanisms of p-type GaAs, enabling simple, practical mobility calculations. In the proposed theoretical framework, first-principles calculations of the Hall mobility and Hall factor of p-type GaAs at room temperature are carried out with no adjustable parameters, allowing direct comparison between theory and recently available experimental results. In the calculations, the light-hole-band nonparabolicity is taken into account on average through an energy-dependent effective mass obtained from the k·p method, and valence-band anisotropy is partly taken into account through Wiley's overlap function. The calculated Hall mobilities show good agreement with our experimental data for carbon-doped p-GaAs samples in the range of degenerate hole densities. The calculated Hall factors give r_H = 1.25-1.75 over hole densities of 2×10^17-1×10^20 cm^-3.
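    The flavor of two-band mixing can be illustrated with the textbook formulas for two bands of same-sign carriers, a simplification that sets each band's individual Hall factor to 1 (unlike the paper's full treatment with energy-dependent relaxation times):

```python
def two_band_hall(p_heavy, mu_heavy, p_light, mu_light):
    """Textbook two-band mixing for same-sign carriers: drift mobility
    is the density-weighted mean, Hall mobility weights each band by
    mobility squared, and their ratio gives an effective Hall factor.
    Individual band Hall factors are set to 1 here for simplicity."""
    mu_drift = (p_heavy * mu_heavy + p_light * mu_light) / (p_heavy + p_light)
    mu_hall = ((p_heavy * mu_heavy**2 + p_light * mu_light**2)
               / (p_heavy * mu_heavy + p_light * mu_light))
    return mu_drift, mu_hall, mu_hall / mu_drift

# Illustrative numbers only: light holes are few but fast
print(two_band_hall(p_heavy=9e18, mu_heavy=80.0, p_light=1e18, mu_light=400.0))
```

    Even with these crude inputs the effective Hall factor exceeds 1, which is the qualitative behavior the paper quantifies for carbon-doped p-GaAs.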

  9. An Informal Overview of the Unitary Group Approach

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sonnad, V.; Escher, J.; Kruse, M.

    The Unitary Group Approach (UGA) is an elegant and conceptually unified approach to quantum structure calculations. It has been widely used in molecular structure calculations and holds the promise of a single computational approach to structure calculations in a variety of different fields. We explore the possibility of extending the UGA to computations in atomic and nuclear structure as a simpler alternative to traditional Racah algebra-based approaches. We provide a simple introduction to the basic UGA and consider some of the issues in using the UGA with spin-dependent, multi-body Hamiltonians requiring multi-shell bases adapted to additional symmetries. While the UGA is perfectly capable of dealing with such problems, the complexity rises dramatically, and the UGA is not, at this time, a simpler alternative to Racah algebra-based approaches.

  10. The Diffusion Simulator - Teaching Geomorphic and Geologic Problems Visually.

    ERIC Educational Resources Information Center

    Gilbert, R.

    1979-01-01

    Describes a simple hydraulic simulator based on more complex models long used by engineers to develop approximate solutions. It allows students to visualize non-steady transfer, to apply a model to solve a problem, and to compare experimentally simulated information with calculated values. (Author/MA)

  11. Wronskian Method for Bound States

    ERIC Educational Resources Information Center

    Fernandez, Francisco M.

    2011-01-01

    We propose a simple and straightforward method based on Wronskians for the calculation of bound-state energies and wavefunctions of one-dimensional quantum-mechanical problems. We explicitly discuss the asymptotic behaviour of the wavefunction and show that the allowed energies make the divergent part vanish. As illustrative examples we consider…

  12. Drell-Yan Lepton pair production at NNLO QCD with parton showers

    DOE PAGES

    Hoeche, Stefan; Li, Ye; Prestel, Stefan

    2015-04-13

    We present a simple approach to combine NNLO QCD calculations and parton showers, based on the UNLOPS technique. We apply the method to the computation of Drell-Yan lepton-pair production at the Large Hadron Collider. We comment on possible improvements and intrinsic uncertainties.

  13. Statistics Using Just One Formula

    ERIC Educational Resources Information Center

    Rosenthal, Jeffrey S.

    2018-01-01

    This article advocates that introductory statistics be taught by basing all calculations on a single simple margin-of-error formula and deriving all of the standard introductory statistical concepts (confidence intervals, significance tests, comparisons of means and proportions, etc.) from that one formula. It is argued that this approach will…
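    As a sketch of the single-formula idea, here is one common minimal choice for the proportions case, MOE ≈ 1/√n at roughly 95% confidence; the article's exact formula may differ:

```python
import math

def simple_margin_of_error(n):
    """Simplest classroom margin-of-error rule for a sample proportion
    at ~95% confidence: MOE ≈ 1/sqrt(n). (One common choice; an
    assumption here, not necessarily the article's exact formula.)"""
    return 1.0 / math.sqrt(n)

def proportion_ci(p_hat, n):
    """95% confidence interval: estimate +/- margin of error.
    A significance test follows by checking whether a hypothesized
    value falls outside this interval."""
    moe = simple_margin_of_error(n)
    return (p_hat - moe, p_hat + moe)

print(proportion_ci(0.52, 400))  # poll of 400 with 52% support
```

    The pedagogical point is that the same interval construction, reused with different estimates and sample sizes, covers most of the introductory syllabus.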

  14. Calculation of the Schottky barrier and current–voltage characteristics of metal–alloy structures based on silicon carbide

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Altuhov, V. I., E-mail: altukhovv@mail.ru; Kasyanenko, I. S.; Sankin, A. V.

    2016-09-15

    A simple but nonlinear model of the defect density at a metal–semiconductor interface, in which a Schottky barrier is formed by surface defect states localized at the interface, is developed. It is shown that taking the nonlinear dependence of the Fermi level on the defect density into account leads to a Schottky barrier increase of 15–25%. The calculated barrier heights are used to analyze the current–voltage characteristics of n-M/p-(SiC)1-x(AlN)x structures. The results of calculations are compared to experimental data.

  15. Experiment-specific cosmic microwave background calculations made easier - Approximation formula for smoothed delta T/T windows

    NASA Technical Reports Server (NTRS)

    Gorski, Krzysztof M.

    1993-01-01

    Simple and easy-to-implement elementary-function approximations are introduced for the spectral window functions needed in calculations of model predictions of the cosmic microwave background (CMB) anisotropy. These approximations allow the investigator to obtain model delta T/T predictions in terms of single integrals over the power spectrum of cosmological perturbations and to avoid the necessity of performing additional integrations. The high accuracy of these approximations is demonstrated here for CDM theory-based calculations of the expected delta T/T signal in several experiments searching for the CMB anisotropy.

  16. A simple rain attenuation model for earth-space radio links operating at 10-35 GHz

    NASA Technical Reports Server (NTRS)

    Stutzman, W. L.; Yon, K. M.

    1986-01-01

    The simple attenuation model has been improved from an earlier version and now includes the effect of wave polarization. The model is for the prediction of rain attenuation statistics on earth-space communication links operating in the 10-35 GHz band. Simple calculations produce attenuation values as a function of average rain rate. These together with rain rate statistics (either measured or predicted) can be used to predict annual rain attenuation statistics. In this paper model predictions are compared to measured data from a data base of 62 experiments performed in the U.S., Europe, and Japan. Comparisons are also made to predictions from other models.
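    The "simple calculations" referred to are essentially a power law in rain rate multiplied by an effective path length. The coefficients below are assumed, order-of-magnitude values for Ku-band, not the model's fitted parameters:

```python
def rain_specific_attenuation(rain_rate_mm_h, a, b):
    """Power-law specific attenuation gamma = a * R**b in dB/km, the
    core of most simple rain attenuation models; the coefficients a, b
    depend on frequency and (as this model adds) wave polarization."""
    return a * rain_rate_mm_h ** b

def path_attenuation(rain_rate_mm_h, a, b, effective_path_km):
    """Total link attenuation: specific attenuation times an effective
    path length through rain (which absorbs the slant-path geometry)."""
    return rain_specific_attenuation(rain_rate_mm_h, a, b) * effective_path_km

# Assumed illustrative coefficients of the order used near 12 GHz
print(path_attenuation(25.0, a=0.0188, b=1.217, effective_path_km=5.0))
```

    Feeding measured or predicted rain-rate statistics through this relation, point by point, is what converts an annual rain-rate distribution into an annual attenuation distribution.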

  17. Computing local edge probability in natural scenes from a population of oriented simple cells

    PubMed Central

    Ramachandra, Chaithanya A.; Mel, Bartlett W.

    2013-01-01

    A key computation in visual cortex is the extraction of object contours, where the first stage of processing is commonly attributed to V1 simple cells. The standard model of a simple cell—an oriented linear filter followed by a divisive normalization—fits a wide variety of physiological data, but is a poor performing local edge detector when applied to natural images. The brain's ability to finely discriminate edges from nonedges therefore likely depends on information encoded by local simple cell populations. To gain insight into the corresponding decoding problem, we used Bayes's rule to calculate edge probability at a given location/orientation in an image based on a surrounding filter population. Beginning with a set of ∼ 100 filters, we culled out a subset that were maximally informative about edges, and minimally correlated to allow factorization of the joint on- and off-edge likelihood functions. Key features of our approach include a new, efficient method for ground-truth edge labeling, an emphasis on achieving filter independence, including a focus on filters in the region orthogonal rather than tangential to an edge, and the use of a customized parametric model to represent the individual filter likelihood functions. The resulting population-based edge detector has zero parameters, calculates edge probability based on a sum of surrounding filter influences, is much more sharply tuned than the underlying linear filters, and effectively captures fine-scale edge structure in natural scenes. Our findings predict nonmonotonic interactions between cells in visual cortex, wherein a cell may for certain stimuli excite and for other stimuli inhibit the same neighboring cell, depending on the two cells' relative offsets in position and orientation, and their relative activation levels. PMID:24381295

  18. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Townsend, D.W.; Linnhoff, B.

    In Part I, criteria for heat engine and heat pump placement in chemical process networks were derived, based on the "temperature interval" (T.I.) analysis of the heat exchanger network problem. Using these criteria, this paper gives a method for identifying the best outline design for any combined system of chemical process, heat engines, and heat pumps. The method eliminates inferior alternatives early and leads positively to the most appropriate solution. A graphical procedure based on the T.I. analysis forms the heart of the approach, and the calculations involved are simple enough to be carried out on, say, a programmable calculator. Application to a case study is demonstrated. Optimization methods based on this procedure are currently under research.
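    The core T.I. cascade is indeed simple enough for a programmable calculator. A minimal sketch of the problem-table cascade, with hypothetical interval heat balances:

```python
def heat_cascade(interval_balances):
    """Temperature-interval (problem table) cascade: cascade heat
    from the hottest interval downward; the most negative running
    total sets the minimum hot utility, and the pinch sits where the
    corrected cascade touches zero. Positive entries are interval
    surpluses, negative entries are deficits, listed hottest first."""
    running, most_negative = 0.0, 0.0
    for balance in interval_balances:
        running += balance
        most_negative = min(most_negative, running)
    q_hot_min = -most_negative                      # minimum hot utility
    q_cold_min = q_hot_min + sum(interval_balances)  # minimum cold utility
    return q_hot_min, q_cold_min

# Hypothetical interval heat balances (kW), hottest interval first
print(heat_cascade([-2.0, 3.0, -4.0, 1.5]))  # → (3.0, 1.5)
```

    Engine and heat-pump placement criteria then reduce to checking on which side of the pinch the machine rejects and absorbs heat.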

  19. A simple derivation of the formula to calculate synthetic long-period seismograms in a heterogeneous Earth by normal mode summation

    NASA Technical Reports Server (NTRS)

    Tanimoto, T.

    1983-01-01

    A simple modification of Gilbert's formula to account for slight lateral heterogeneity of the Earth leads to a convenient formula to calculate synthetic long period seismograms. Partial derivatives are easily calculated, thus the formula is suitable for direct inversion of seismograms for lateral heterogeneity of the Earth.

  20. Novel method of vulnerability assessment of simple landfills area using the multimedia, multipathway and multireceptor risk assessment (3MRA) model, China.

    PubMed

    Yuan, Ying; He, Xiao-Song; Xi, Bei-Dou; Wei, Zi-Min; Tan, Wen-Bing; Gao, Ru-Tai

    2016-11-01

    Vulnerability assessment of simple landfills was conducted using the multimedia, multipathway and multireceptor risk assessment (3MRA) model for the first time in China. The minimum safe threshold of six contaminants (benzene, arsenic (As), cadmium (Cd), hexavalent chromium [Cr(VI)], divalent mercury [Hg(II)] and divalent nickel [Ni(II)]) in landfill and waste pile models were calculated by the 3MRA model. Furthermore, the vulnerability indexes of the six contaminants were predicted based on the model calculation. The results showed that the order of health risk vulnerability index was As > Hg(II) > Cr(VI) > benzene > Cd > Ni(II) in the landfill model, whereas the ecology risk vulnerability index was in the order of As > Hg(II) > Cr(VI) > Cd > benzene > Ni(II). In the waste pile model, the order of health risk vulnerability index was benzene > Hg(II) > Cr(VI) > As > Cd and Ni(II), whereas the ecology risk vulnerability index was in the order of Hg(II) > Cd > Cr(VI) > As > benzene > Ni(II). These results indicated that As, Hg(II) and Cr(VI) were the high risk contaminants for the case of a simple landfill in China; the concentration of these in soil and groundwater around the simple landfill should be strictly monitored, and proper remediation is also recommended for simple landfills with a high concentration of contaminants. © The Author(s) 2016.

  1. A simple and fast physics-based analytical method to calculate therapeutic and stray doses from external beam, megavoltage x-ray therapy

    PubMed Central

    Wilson, Lydia J; Newhauser, Wayne D

    2015-01-01

    State-of-the-art radiotherapy treatment planning systems provide reliable estimates of the therapeutic radiation but are known to underestimate or neglect the stray radiation exposures. Most commonly, stray radiation exposures are reconstructed using empirical formulas or lookup tables. The purpose of this study was to develop the basic physics of a model capable of calculating the total absorbed dose both inside and outside of the therapeutic radiation beam for external beam photon therapy. The model was developed using measurements of total absorbed dose in a water-box phantom from a 6 MV medical linear accelerator to calculate dose profiles in both the in-plane and cross-plane direction for a variety of square field sizes and depths in water. The water-box phantom facilitated development of the basic physical aspects of the model. RMS discrepancies between measured and calculated total absorbed dose values in water were less than 9.3% for all fields studied. Computation times for 10 million dose points within a homogeneous phantom were approximately 4 minutes. These results suggest that the basic physics of the model are sufficiently simple, fast, and accurate to serve as a foundation for a variety of clinical and research applications, some of which may require that the model be extended or simplified based on the needs of the user. A potentially important advantage of a physics-based approach is that the model is more readily adaptable to a wide variety of treatment units and treatment techniques than with empirical models. PMID:26040833

  2. A simple and fast physics-based analytical method to calculate therapeutic and stray doses from external beam, megavoltage x-ray therapy.

    PubMed

    Jagetic, Lydia J; Newhauser, Wayne D

    2015-06-21

    State-of-the-art radiotherapy treatment planning systems provide reliable estimates of the therapeutic radiation but are known to underestimate or neglect the stray radiation exposures. Most commonly, stray radiation exposures are reconstructed using empirical formulas or lookup tables. The purpose of this study was to develop the basic physics of a model capable of calculating the total absorbed dose both inside and outside of the therapeutic radiation beam for external beam photon therapy. The model was developed using measurements of total absorbed dose in a water-box phantom from a 6 MV medical linear accelerator to calculate dose profiles in both the in-plane and cross-plane direction for a variety of square field sizes and depths in water. The water-box phantom facilitated development of the basic physical aspects of the model. RMS discrepancies between measured and calculated total absorbed dose values in water were less than 9.3% for all fields studied. Computation times for 10 million dose points within a homogeneous phantom were approximately 4 min. These results suggest that the basic physics of the model are sufficiently simple, fast, and accurate to serve as a foundation for a variety of clinical and research applications, some of which may require that the model be extended or simplified based on the needs of the user. A potentially important advantage of a physics-based approach is that the model is more readily adaptable to a wide variety of treatment units and treatment techniques than with empirical models.

  3. The Productivity Dilemma in Workplace Health Promotion.

    PubMed

    Cherniack, Martin

    2015-01-01

    Worksite-based programs to improve workforce health and well-being (Workplace Health Promotion (WHP)) have been advanced as conduits for improved worker productivity and decreased health care costs. There has been a countervailing health economics contention that return on investment (ROI) does not merit preventive health investment. METHODS/PROCEDURES: Pertinent studies were reviewed and results reconsidered. A simple economic model is presented based on conventional and alternate assumptions used in cost benefit analysis (CBA), such as discounting and negative value. The issues are presented in the format of 3 conceptual dilemmas. In some occupations such as nursing, the utility of patient survival and staff health is undervalued. WHP may miss important components of work related health risk. Altering assumptions on discounting and eliminating the drag of negative value radically change the CBA value. Simple monetization of a work life and calculation of return on workforce health investment as a simple alternate opportunity involve highly selective interpretations of productivity and utility.

  4. Rotationally and vibrationally inelastic scattering in the rotational IOS approximation. Ultrasimple calculation of total (differential, integral, and transport) cross sections for nonspherical molecules

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Parker, G.A.; Pack, R.T.

    1978-02-15

    A simple, direct derivation of the rotational infinite order sudden (IOS) approximation in molecular scattering theory is given. Connections between simple scattering amplitude formulas, choice of average partial wave parameter, and magnetic transitions are reviewed. Simple procedures for calculating cross sections for specific transitions are discussed, and many older model formulas are given clear derivations. Total (summed over rotation) differential, integral, and transport cross sections, useful in the analysis of many experiments involving nonspherical molecules, are shown to be exceedingly simple: They are just averages over the potential angle of cross sections calculated using simple structureless spherical particle formulas and programs. In the case of vibrationally inelastic scattering, the IOSA, without further approximation, provides a well-defined way to get fully three dimensional cross sections from calculations no more difficult than collinear calculations. Integral, differential, viscosity, and diffusion cross sections for He-CO2 obtained from the IOSA and a realistic intermolecular potential are calculated as an example and compared with experiment. Agreement is good for the complete potential but poor when only its spherical part is used, so one should never attempt to treat this system with a spherical model. The simplicity and accuracy of the IOSA make it a viable method for routine analysis of experiments involving collisions of nonspherical molecules.

  5. Collector modulation in high-voltage bipolar transistor in the saturation mode: Analytical approach

    NASA Astrophysics Data System (ADS)

    Dmitriev, A. P.; Gert, A. V.; Levinshtein, M. E.; Yuferev, V. S.

    2018-04-01

    A simple analytical model is developed, capable of replacing the numerical solution of a system of nonlinear partial differential equations by solving a simple algebraic equation when analyzing the collector resistance modulation of a bipolar transistor in the saturation mode. In this approach, the leakage of the base current into the emitter and the recombination of non-equilibrium carriers in the base are taken into account. The data obtained are in good agreement with the results of numerical calculations and make it possible to describe both the motion of the front of the minority carriers and the steady state distribution of minority carriers across the collector in the saturation mode.

  6. Multiple Contact Dates and SARS Incubation Periods

    PubMed Central

    2004-01-01

    Many severe acute respiratory syndrome (SARS) patients have multiple possible incubation periods due to multiple contact dates. Multiple contact dates cannot be used in standard statistical analytic techniques, however. I present a simple spreadsheet-based method that uses multiple contact dates to calculate the possible incubation periods of SARS. PMID:15030684
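    The spreadsheet logic reduces to subtracting each candidate contact date from the symptom-onset date. A minimal sketch with made-up dates:

```python
from datetime import date

def incubation_range(contact_dates, onset):
    """With several possible exposure dates, each implies a possible
    incubation period; the shortest and longest bracket the true value.
    (The spreadsheet method tabulates all of them per patient.)"""
    periods = sorted((onset - c).days for c in contact_dates)
    return periods[0], periods[-1]

# Hypothetical patient: three possible contact dates, one onset date
contacts = [date(2003, 3, 1), date(2003, 3, 4), date(2003, 3, 7)]
onset = date(2003, 3, 12)
print(incubation_range(contacts, onset))  # → (5, 11)
```

    Pooling these per-patient intervals across many patients is what lets interval-aware methods recover the incubation distribution that single-date techniques cannot use.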

  7. Using a Simple Parcel Model to Investigate the Haines Index

    Treesearch

    Mary Ann Jenkins; Steven K. Krueger; Ruiyu Sun

    2003-01-01

    The Haines Index (Haines 1988) is a fire-weather index, based on stability and moisture conditions of the lower atmosphere, that rates the potential for large fire growth or extreme fire behavior. The Haines Index is calculated by adding a temperature (stability) term A to a moisture term B.
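    A minimal sketch of the A + B construction, using the commonly quoted thresholds for the mid-level variant (an assumption; the low- and high-elevation variants use different pressure levels and cutoffs, and the original paper should be checked):

```python
def haines_mid_level(t850_c, t700_c, td850_c):
    """Haines Index = stability term A + moisture term B, each scored
    1-3, so the index runs from 2 (low potential) to 6 (high potential).
    Thresholds below are the commonly quoted mid-level variant
    (an assumption made for illustration)."""
    dt = t850_c - t700_c          # lapse between 850 and 700 hPa
    dd = t850_c - td850_c         # 850 hPa dewpoint depression
    a = 1 if dt <= 5 else (2 if dt <= 10 else 3)
    b = 1 if dd <= 5 else (2 if dd <= 12 else 3)
    return a + b

# Steep lapse rate and dry air: maximum index
print(haines_mid_level(t850_c=20.0, t700_c=8.0, td850_c=2.0))  # → 6
```

    The parcel-model study referenced in the title probes how well this two-term categorization actually tracks lower-atmosphere behavior.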

  8. Simple calculation of ab initio melting curves: Application to aluminum.

    PubMed

    Robert, Grégory; Legrand, Philippe; Arnault, Philippe; Desbiens, Nicolas; Clérouin, Jean

    2015-03-01

    We present a simple, fast, and promising method to compute the melting curves of materials with ab initio molecular dynamics. It is based on the two-phase thermodynamic model of Lin et al. [J. Chem. Phys. 119, 11792 (2003)] and its improved version given by Desjarlais [Phys. Rev. E 88, 062145 (2013)]. In this model, the velocity autocorrelation function is used to calculate the contribution of the nuclear motion to the entropy of the solid and liquid phases. It is then possible to find the thermodynamic conditions of equal Gibbs free energy between these phases, which define the melting curve. A first benchmark on the face-centered cubic melting curve of aluminum from 0 to 300 GPa demonstrates how to obtain an accuracy of 5%-10%, comparable to the most sophisticated methods, for a much lower computational cost.
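    The quantity the two-phase model extracts from the trajectory is the velocity autocorrelation function. A minimal sketch on a synthetic toy trajectory; real use would take MD velocities and Fourier-transform the result into a vibrational density of states:

```python
import numpy as np

def vacf(velocities):
    """Normalized velocity autocorrelation function from a trajectory
    of shape (n_steps, n_atoms, 3); the two-phase model turns this
    into a density of states and from it the nuclear entropy."""
    n = velocities.shape[0]
    c = np.array([
        np.mean(np.sum(velocities[:n - lag] * velocities[lag:], axis=2))
        for lag in range(n // 2)
    ])
    return c / c[0]

rng = np.random.default_rng(0)
# Toy data: a damped oscillation plus noise stands in for real MD velocities
t = np.arange(200)[:, None, None]
v = np.cos(0.3 * t) * np.exp(-0.01 * t) + 0.1 * rng.standard_normal((200, 4, 3))
print(vacf(v)[0])  # normalized, so the zero-lag value is exactly 1.0
```

    Running this once for a solid and once for a liquid configuration at the same (P, T), and comparing the resulting entropies, is the step that locates the equal-Gibbs-energy melting point.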

  9. BRIEF COMMUNICATION: The negative ion flux across a double sheath at the formation of a virtual cathode

    NASA Astrophysics Data System (ADS)

    McAdams, R.; Bacal, M.

    2010-08-01

    For the case of negative ions from a cathode entering a plasma, the maximum negative ion flux and the positive ion flux before the formation of a virtual cathode have been calculated for particular plasma conditions. The calculation is based on a simple modification of an analysis of electron emission into a plasma containing negative ions. The results are in good agreement with a 1d3v PIC code model.

  10. SU-E-T-226: Correction of a Standard Model-Based Dose Calculator Using Measurement Data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chen, M; Jiang, S; Lu, W

    Purpose: To propose a hybrid method that combines the advantages of the model-based and the measurement-based method for independent dose calculation. Model-based dose calculation, such as collapsed-cone-convolution/superposition (CCCS) or the Monte Carlo method, models dose deposition in the patient body accurately; however, due to lack of detailed knowledge about the linear accelerator (LINAC) head, commissioning for an arbitrary machine is tedious and challenging in case of hardware changes. On the contrary, the measurement-based method characterizes the beam property accurately but lacks the capability of dose deposition modeling in heterogeneous media. Methods: We used a standard CCCS calculator, commissioned with published data, as the standard model calculator. For a given machine, water phantom measurements were acquired. A set of dose distributions was also calculated using the CCCS for the same setup. The differences between the measurements and the CCCS results were tabulated and used as the commissioning data for a measurement-based calculator; here we used a direct-ray-tracing calculator (ΔDRT). The proposed independent dose calculation consists of the following steps: 1. calculate D_model using CCCS; 2. calculate D_ΔDRT using ΔDRT; 3. combine D = D_model + D_ΔDRT. Results: The hybrid dose calculation was tested on digital phantoms and patient CT data for standard fields and an IMRT plan. The results were compared to dose calculated by the treatment planning system (TPS). The agreement between the hybrid method and the TPS was within 3%, 3 mm for over 98% of the volume in phantom studies and lung patients. Conclusion: The proposed hybrid method uses the same commissioning data as the measurement-based method and can be easily extended to any non-standard LINAC. The results met the accuracy, independence, and simple-commissioning criteria for an independent dose calculator.
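    The combination step itself is just a pointwise sum of the model dose and the tabulated correction. A toy sketch with made-up depth-dose numbers:

```python
def hybrid_dose(d_model, delta_drt):
    """Hybrid independent dose check: model-based dose (e.g. from a
    CCCS calculator commissioned with published data) plus a ray-traced
    correction commissioned from measurement-minus-model differences."""
    return [m + d for m, d in zip(d_model, delta_drt)]

# Toy depth-dose points (Gy): the model slightly underestimates at depth
d_cccs = [1.00, 0.85, 0.72]
delta = [0.00, 0.01, 0.02]   # tabulated measurement-minus-model correction
print(hybrid_dose(d_cccs, delta))
```

    The design point is that the correction term, not the model, carries all machine-specific information, so re-commissioning for a new LINAC touches only the measured table.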

  11. Zombie states for description of structure and dynamics of multi-electron systems

    NASA Astrophysics Data System (ADS)

    Shalashilin, Dmitrii V.

    2018-05-01

    Canonical Coherent States (CSs) of the harmonic oscillator have been extensively used as a basis in a number of computational methods of quantum dynamics. However, generalising such techniques to fermionic systems is difficult because Fermionic Coherent States (FCSs) require the complicated algebra of Grassmann numbers, which is not well suited to numerical calculations. This paper introduces a coherent antisymmetrised superposition of "dead" and "alive" electronic states, called here a Zombie State (ZS), which can be used in the manner of FCSs but without Grassmann algebra. Instead, for Zombie States, a very simple sign-changing rule is used in the definition of the creation and annihilation operators. Calculation of electronic structure Hamiltonian matrix elements between two ZSs then becomes very simple, and a straightforward technique for time propagation of fermionic wave functions can be developed. By analogy with the existing methods based on Canonical Coherent States of the harmonic oscillator, fermionic wave functions can be propagated using a set of randomly selected Zombie States as a basis. As a proof of principle, the proposed Coupled Zombie States approach is tested on a simple example, showing that the technique is exact.

  12. A simple model of hysteresis behavior using spreadsheet analysis

    NASA Astrophysics Data System (ADS)

    Ehrmann, A.; Blachowicz, T.

    2015-01-01

    Hysteresis loops occur in many scientific and technical problems: most prominently as the field-dependent magnetization of ferromagnetic materials, but also in stress-strain curves measured by tensile tests with thermal effects, in liquid-solid phase transitions, in cell biology, and in economics. While several mathematical models exist that aim to calculate hysteresis energies and other parameters, here we offer a simple model of a general hysteretic system, showing different hysteresis loops depending on the defined parameters. The calculation, which is based on basic spreadsheet analysis plus an easy macro code, can be used by students to understand how these systems work and how the parameters influence the reaction of the system to an external field. Importantly, in the step-by-step mode, each change of the system state relative to the previous step becomes visible. The simple program can be developed further through several changes and additions, enabling the building of a tool capable of answering real physical questions in the broad field of magnetism as well as in other scientific areas in which similar hysteresis loops occur.
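    A minimal hysteretic system of the kind such a spreadsheet model generalizes is the relay (bistable) operator, whose rising and falling branches differ:

```python
def relay_hysteresis(field_values, h_up=1.0, h_down=-1.0, state=-1):
    """Minimal relay hysteresis model: the output switches to +1 only
    above h_up and back to -1 only below h_down, so the response to a
    field between the thresholds depends on history -- the essence of
    any hysteresis loop. (A sketch, not the article's spreadsheet model.)"""
    out = []
    for h in field_values:
        if h >= h_up:
            state = 1
        elif h <= h_down:
            state = -1
        out.append(state)
    return out

# Sweep the field up and back down: at +/-0.5 the output lags the input
sweep = [-2, -0.5, 0.5, 2, 0.5, -0.5, -2]
print(relay_hysteresis(sweep))  # → [-1, -1, -1, 1, 1, 1, -1]
```

    Superposing many such relays with distributed thresholds (the Preisach construction) already yields realistic loop shapes, which is why even a step-by-step spreadsheet version is instructive.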

  13. Geometrical correlations in the nucleosomal DNA conformation and the role of the covalent bonds rigidity

    PubMed Central

    Ghorbani, Maryam; Mohammad-Rafiee, Farshid

    2011-01-01

    We develop a simple elastic model to study the conformation of DNA in the nucleosome core particle. In this model, the changes in the energy of the covalent bonds that connect the base pairs of each strand of the DNA double helix, as well as the lateral displacements and the rotation of adjacent base pairs are considered. We show that because of the rigidity of the covalent bonds in the sugar-phosphate backbones, the base pair parameters are highly correlated, especially, strong twist-roll-slide correlation in the conformation of the nucleosomal DNA is vividly observed in the calculated results. This simple model succeeds to account for the detailed features of the structure of the nucleosomal DNA, particularly, its more important base pair parameters, roll and slide, in good agreement with the experimental results. PMID:20972223

  14. Load Carrying Capacity of Metal Dowel Type Connections of Timber Structures

    NASA Astrophysics Data System (ADS)

    Gocál, Jozef

    2014-12-01

    This paper deals with the load-carrying capacity calculation of laterally loaded metal dowel-type connections according to Eurocode 5. The calculation is based on analytically derived, relatively complicated mathematical relationships and can therefore be quite laborious for practical use. The aim is to propose a possible simplification of the calculation. Because of the great variability of fastener types and connection arrangements, attention is paid to the most commonly used nailed connections. An extensive parametric study was performed on the load-carrying capacity of single shear and double shear plane nail connections joining two or three timber parts of softwood or hardwood. Based on the study results, simplifying recommendations for practical design are presented in the conclusion.
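    For flavor, the simplest ingredients of the EC5 check can be sketched: the embedment strength for nails without predrilled holes and the first Johansen failure mode. The full code check takes the minimum over all failure modes (including dowel bending and the rope effect), so this is an upper-bound sketch only:

```python
def nail_embedment_strength(rho_k, d_mm):
    """Characteristic embedment strength for nails without predrilled
    holes, as commonly quoted from EC5: f_h,k = 0.082 * rho_k * d^-0.3
    in N/mm^2, with rho_k the characteristic timber density in kg/m^3."""
    return 0.082 * rho_k * d_mm ** -0.3

def capacity_mode_a(rho_k, t1_mm, d_mm):
    """Simplest Johansen failure mode (pure embedment of member 1):
    F = f_h * t1 * d. A real EC5 check takes the minimum over all
    modes, so this single-mode value is only an upper bound."""
    return nail_embedment_strength(rho_k, d_mm) * t1_mm * d_mm

# C24-type softwood (rho_k ~ 350 kg/m^3), 4 mm nail, 40 mm side member
print(capacity_mode_a(rho_k=350.0, t1_mm=40.0, d_mm=4.0))  # N, single mode only
```

    It is the need to evaluate and minimize over all such mode expressions, per fastener and per shear plane, that makes the full calculation laborious and motivates the paper's simplification.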

  15. An evaluation of rise time characterization and prediction methods

    NASA Technical Reports Server (NTRS)

    Robinson, Leick D.

    1994-01-01

    One common method of extrapolating sonic boom waveforms from aircraft to ground is to calculate the nonlinear distortion and then add a rise time to each shock by a simple empirical rule. One common rule is the '3 over P' rule, which calculates the rise time in milliseconds as three divided by the shock amplitude in psf. This rule was compared with the results of ZEPHYRUS, a comprehensive algorithm that calculates sonic boom propagation and extrapolation with the combined effects of nonlinearity, attenuation, dispersion, geometric spreading, and refraction in a stratified atmosphere. It is shown that the simple empirical rule considerably overestimates the rise time. In addition, the empirical rule does not account for variations in the rise time due to humidity variation or propagation history. It is also demonstrated that the rise time is only an approximate indicator of perceived loudness: three waveforms with identical characteristics (shock placement, amplitude, and rise time) but with different shock shapes are shown to give different calculated loudness. This paper is based in part on work performed at the Applied Research Laboratories, the University of Texas at Austin, and supported by NASA Langley.
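    The empirical rule under test is trivial to state in code:

```python
def rise_time_3_over_p(shock_amplitude_psf):
    """The '3 over P' empirical rule: rise time in milliseconds equals
    3 divided by the shock overpressure in psf. (The paper argues this
    considerably overestimates rise times computed by ZEPHYRUS.)"""
    return 3.0 / shock_amplitude_psf

print(rise_time_3_over_p(1.5))  # 1.5 psf shock → 2.0 ms
```

    Its appeal is obvious (one division per shock), which is exactly why the paper bothers to quantify how far it strays from the full propagation calculation.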

  16. General formulation of characteristic time for persistent chemicals in a multimedia environment

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bennett, D.H.; McKone, T.E.; Kastenberg, W.E.

    1999-02-01

    A simple yet representative method for determining the characteristic time a persistent organic pollutant remains in a multimedia environment is presented. The characteristic time is an important attribute for assessing long-term health and ecological impacts of a chemical. Calculating the characteristic time requires information on decay rates in multiple environmental media as well as the proportion of mass in each environmental medium. The authors explore the premise that using a steady-state distribution of the mass in the environment provides a means to calculate a representative estimate of the characteristic time while maintaining a simple formulation. Calculating the steady-state mass distribution incorporates the effect of advective transport and nonequilibrium effects resulting from the source terms. Using several chemicals, they calculate and compare the characteristic time in a representative multimedia environment for dynamic, steady-state, and equilibrium multimedia models, and also for a single medium model. They demonstrate that formulating the characteristic time based on the steady-state mass distribution in the environment closely approximates the dynamic characteristic time for a range of chemicals and thus can be used in decisions regarding chemical use in the environment.
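    In the steady-state formulation, the characteristic time amounts to dividing the total mass by the total first-order loss rate, i.e. a mass-weighted mean residence time. A sketch with hypothetical compartment values:

```python
def characteristic_time(masses, loss_rate_constants):
    """Overall persistence from a steady-state multimedia model:
    total mass divided by the total first-order loss rate (decay plus
    advection), i.e. a mass-weighted mean residence time. Inputs are
    per-compartment steady-state masses and loss rate constants."""
    total_mass = sum(masses)
    total_loss = sum(m * k for m, k in zip(masses, loss_rate_constants))
    return total_mass / total_loss

# Hypothetical steady-state masses (kg) and loss rates (1/day)
# for air, soil, and water compartments (illustrative values only)
masses = [100.0, 5000.0, 400.0]
k_loss = [0.7, 0.001, 0.05]
print(characteristic_time(masses, k_loss))  # days
```

    The paper's point is that the masses entering this weighting should come from the steady-state solution, which folds in advection and source-driven nonequilibrium, rather than from an equilibrium partitioning assumption.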

  17. Analysis of the power flow in nonlinear oscillators driven by random excitation using the first Wiener kernel

    NASA Astrophysics Data System (ADS)

    Hawes, D. H.; Langley, R. S.

    2018-01-01

    Random excitation of mechanical systems occurs in a wide variety of structures and, in some applications, calculation of the power dissipated by such a system will be of interest. In this paper, using the Wiener series, a general methodology is developed for calculating the power dissipated by a general nonlinear multi-degree-of-freedom oscillatory system excited by random Gaussian base motion of any spectrum. The Wiener series method is most commonly applied to systems with white noise inputs, but can be extended to encompass a general non-white input. From the extended series a simple expression for the power dissipated can be derived in terms of the first term, or kernel, of the series and the spectrum of the input. Calculation of the first kernel can be performed either via numerical simulations or from experimental data and a useful property of the kernel, namely that the integral over its frequency domain representation is proportional to the oscillating mass, is derived. The resulting equations offer a simple conceptual analysis of the power flow in nonlinear randomly excited systems and hence assist the design of any system where power dissipation is a consideration. The results are validated both numerically and experimentally using a base-excited cantilever beam with a nonlinear restoring force produced by magnets.

  18. A new accounting system for financial balance based on personnel cost after the introduction of a DPC/DRG system.

    PubMed

    Nakagawa, Yoshiaki; Takemura, Tadamasa; Yoshihara, Hiroyuki; Nakagawa, Yoshinobu

    2011-04-01

    A hospital director must estimate the revenues and expenses not only in a hospital but also in each clinical division to determine the proper management strategy. A new prospective payment system based on the Diagnosis Procedure Combination (DPC/PPS) introduced in 2003 has made the attribution of revenues and expenses for each clinical department very complicated because of the intricate involvement between the overall or blanket component and a fee-for-service (FFS) component. Few reports have so far presented a programmatic method for the calculation of medical costs and financial balance. A simple method has been devised, based on personnel cost, for calculating medical costs and financial balance. Using this method, one individual was able to complete the calculations for a hospital which contains 535 beds and 16 clinics, without using the central hospital computer system.
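A minimal sketch of the kind of personnel-cost-based attribution the abstract describes: a shared (blanket) amount is split across departments in proportion to each department's personnel cost (department names and figures are illustrative, not the hospital's data):

```python
def allocate_by_personnel_cost(shared_amount, personnel_costs):
    """Split a shared (blanket) revenue or expense across departments
    in proportion to each department's personnel cost."""
    total = sum(personnel_costs.values())
    return {dept: shared_amount * cost / total
            for dept, cost in personnel_costs.items()}

shares = allocate_by_personnel_cost(
    1_000_000,
    {"surgery": 300_000, "medicine": 500_000, "pediatrics": 200_000})
```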

  19. Burden Calculator: a simple and open analytical tool for estimating the population burden of injuries.

    PubMed

    Bhalla, Kavi; Harrison, James E

    2016-04-01

    Burden of disease and injury methods can be used to summarise and compare the effects of conditions in terms of disability-adjusted life years (DALYs). Burden estimation methods are not inherently complex. However, as commonly implemented, the methods include complex modelling and estimation. The aim was to provide a simple and open-source software tool that allows estimation of incidence-based DALYs due to injury, given data on incidence of deaths and non-fatal injuries. The tool includes a default set of estimation parameters, which can be replaced by users. The tool was written in Microsoft Excel. All calculations and values can be seen and altered by users. The parameter sets currently used in the tool are based on published sources. The tool is available without charge online at http://calculator.globalburdenofinjuries.org. To use the tool with the supplied parameter sets, users need only paste a table of population and injury case data organised by age, sex and external cause of injury into a specified location in the tool. Estimated DALYs can be read or copied from tables and figures in another part of the tool. In some contexts, a simple and user-modifiable burden calculator may be preferable to undertaking a more complex study to estimate the burden of disease. The tool and the parameter sets required for its use can be improved by user innovation, by studies comparing DALYs estimates calculated in this way and in other ways, and by shared experience of its use. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://www.bmj.com/company/products-services/rights-and-licensing/
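Incidence-based DALY arithmetic of the kind such a calculator implements can be sketched as years of life lost (YLL) plus years lived with disability (YLD); a simplified illustration without discounting or age weighting, with all numbers invented:

```python
def dalys(deaths, incident_cases, disability_weight,
          duration_years, residual_life_expectancy_years):
    """Incidence-based DALYs = YLL + YLD (no discounting, no age weights).
    YLL: years of life lost to deaths.
    YLD: years lived with disability from non-fatal cases."""
    yll = deaths * residual_life_expectancy_years
    yld = incident_cases * disability_weight * duration_years
    return yll + yld

# Illustrative: 10 deaths (40 residual years each) plus 500 non-fatal
# injuries with disability weight 0.1 lasting half a year.
total_dalys = dalys(10, 500, 0.1, 0.5, 40)
```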

  20. Fast modeling of flux trapping cascaded explosively driven magnetic flux compression generators.

    PubMed

    Wang, Yuwei; Zhang, Jiande; Chen, Dongqun; Cao, Shengguang; Li, Da; Liu, Chebo

    2013-01-01

    To predict the performance of flux trapping cascaded flux compression generators, a calculation model based on an equivalent circuit is investigated. The system circuit is analyzed according to its operation characteristics in different steps. Flux conservation coefficients are added to the driving terms of the circuit differential equations to account for intrinsic flux losses. To calculate the currents in the circuit by solving the circuit equations, a simple zero-dimensional model is used to calculate the time-varying inductance and dc resistance of the generator. A fast computer code is then programmed based on this calculation model. As an example, a two-stage flux trapping generator is simulated using this computer code. Good agreement is achieved when comparing the simulation results with the measurements. Furthermore, this fast calculation model can be easily applied to predict the performance of other flux trapping cascaded flux compression generators with complex structures, such as conical stator or conical armature sections, for design purposes.

  1. A simple method for calculating the characteristics of the Dutch roll motion of an airplane

    NASA Technical Reports Server (NTRS)

    Klawans, Bernard B

    1956-01-01

    A simple method for calculating the characteristics of the Dutch roll motion of an airplane is obtained by arranging the lateral equations of motion in such form and order that an iterative process is quickly convergent.

  2. Ab initio excited states from the in-medium similarity renormalization group

    NASA Astrophysics Data System (ADS)

    Parzuchowski, N. M.; Morris, T. D.; Bogner, S. K.

    2017-04-01

    We present two new methods for performing ab initio calculations of excited states for closed-shell systems within the in-medium similarity renormalization group (IMSRG) framework. Both are based on combining the IMSRG with simple many-body methods commonly used to target excited states, such as the Tamm-Dancoff approximation (TDA) and equations-of-motion (EOM) techniques. In the first approach, a two-step sequential IMSRG transformation is used to drive the Hamiltonian to a form where a simple TDA calculation (i.e., diagonalization in the space of 1p1h excitations) becomes exact for a subset of eigenvalues. In the second approach, EOM techniques are applied to the IMSRG ground-state-decoupled Hamiltonian to access excited states. We perform proof-of-principle calculations for parabolic quantum dots in two dimensions and the closed-shell nuclei 16O and 22O. We find that the TDA-IMSRG approach gives better accuracy than the EOM-IMSRG when calculations converge, but it is otherwise lacking the versatility and numerical stability of the latter. Our calculated spectra are in reasonable agreement with analogous EOM-coupled-cluster calculations. This work paves the way for more interesting applications of the EOM-IMSRG approach to calculations of consistently evolved observables such as electromagnetic strength functions and nuclear matrix elements, and extensions to nuclei within one or two nucleons of a closed shell by generalizing the EOM ladder operator to include particle-number nonconserving terms.

  3. A simple derivation of the formula to calculate synthetic long-period seismograms in a heterogeneous earth by normal mode summation

    NASA Technical Reports Server (NTRS)

    Tanimoto, T.

    1984-01-01

    A simple modification of Gilbert's formula to account for slight lateral heterogeneity of the earth leads to a convenient formula to calculate synthetic long period seismograms. Partial derivatives are easily calculated, thus the formula is suitable for direct inversion of seismograms for lateral heterogeneity of the earth. Previously announced in STAR as N83-29893

  4. Time-driven activity-based costing to identify opportunities for cost reduction in pediatric appendectomy.

    PubMed

    Yu, Yangyang R; Abbas, Paulette I; Smith, Carolyn M; Carberry, Kathleen E; Ren, Hui; Patel, Binita; Nuchtern, Jed G; Lopez, Monica E

    2016-12-01

    As reimbursement programs shift to value-based payment models emphasizing quality and efficient healthcare delivery, there exists a need to better understand process management to unearth true costs of patient care. We sought to identify cost-reduction opportunities in simple appendicitis management by applying a time-driven activity-based costing (TDABC) methodology to this high-volume surgical condition. Process maps were created using medical record time stamps. Labor capacity cost rates were calculated using national median physician salaries, weighted nurse-patient ratios, and hospital cost data. Consumable costs for supplies, pharmacy, laboratory, and food were derived from the hospital general ledger. Time-driven activity-based costing resulted in precise per-minute calculation of personnel costs. Highest costs were in the operating room ($747.07), hospital floor ($388.20), and emergency department ($296.21). Major contributors to length of stay were emergency department evaluation (270 min), operating room availability (395 min), and post-operative monitoring (1128 min). The TDABC model led to $1712.16 in personnel costs and $1041.23 in consumable costs for a total appendicitis cost of $2753.39. Inefficiencies in healthcare delivery can be identified through TDABC. Triage-based standing delegation orders, advanced practice providers, and same day discharge protocols are proposed cost-reducing interventions to optimize value-based care for simple appendicitis. II. Copyright © 2016 Elsevier Inc. All rights reserved.
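The per-minute TDABC arithmetic reduces to multiplying each process step's duration by a capacity cost rate and summing; a minimal sketch (the step durations echo the abstract, but the per-minute rates are hypothetical, not the study's figures):

```python
def tdabc_cost(process_steps):
    """Time-driven activity-based cost: sum over process steps of
    (minutes consumed) x (capacity cost rate in $/minute)."""
    return sum(minutes * rate_per_min for minutes, rate_per_min in process_steps)

# Hypothetical appendectomy pathway as (minutes, $/minute) pairs.
steps = [(270, 1.10),    # emergency department evaluation
         (90, 8.30),     # operating room time
         (1128, 0.34)]   # post-operative monitoring
total_cost = tdabc_cost(steps)
```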

  5. Calculation of the bending stresses in helicopter rotor blades

    NASA Technical Reports Server (NTRS)

    De Guillenchmidt, P

    1951-01-01

    A comparatively rapid method is presented for determining theoretically the bending stresses of helicopter rotor blades in forward flight. The method is based on the analysis of the properties of a vibrating beam, and its uniqueness lies in the simple solution of the differential equation which governs the motion of the bent blades.

  6. A simple quasi-diabatization scheme suitable for spectroscopic problems based on one-electron properties of interacting states

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cave, Robert J., E-mail: Robert-Cave@hmc.edu; Stanton, John F., E-mail: JFStanton@gmail.com

    We present a simple quasi-diabatization scheme applicable to spectroscopic studies that can be applied using any wavefunction for which one-electron properties and transition properties can be calculated. The method is based on rotation of a pair (or set) of adiabatic states to minimize the difference between the given transition property at a reference geometry of high symmetry (where the quasi-diabatic states and adiabatic states coincide) and points of lower symmetry where quasi-diabatic quantities are desired. Compared to other quasi-diabatization techniques, the method requires no special coding, facilitates direct comparison between quasi-diabatic quantities calculated using different types of wavefunctions, and is free of any selection of configurations in the definition of the quasi-diabatic states. On the other hand, the method appears to be sensitive to multi-state issues, unlike recent methods we have developed that use a configurational definition of quasi-diabatic states. Results are presented and compared with two other recently developed quasi-diabatization techniques.

  7. Redox-iodometry: a new potentiometric method.

    PubMed

    Gottardi, Waldemar; Pfleiderer, Jörg

    2005-07-01

    A new iodometric method for quantifying aqueous solutions of iodide-oxidizing and iodine-reducing substances, as well as plain iodine/iodide solutions, is presented. It is based on the redox potential of said solutions after reaction with iodide (or iodine) of known initial concentration. Calibration of the system and calculation of unknown concentrations were performed on the basis of developed algorithms and simple GW-BASIC programs. The method is distinguished by a short analysis time (2-3 min) and a simple instrumentation consisting of pH/mV meter, platinum and reference electrodes. In general the feasible concentration range encompasses 0.1 to 10^-6 mol/L, although it goes down to 10^-8 mol/L (0.001 mg Cl2/L) for oxidants like active chlorine compounds. The calculated imprecision and inaccuracy of the method were found to be 0.4-0.9% and 0.3-0.8%, respectively, resulting in a total error of 0.5-1.2%. Based on the experiments, average imprecisions of 1.0-1.5% at c(Ox) > 10^-5 M, 1.5-3% at 10^-5 to 10^-7 M, and 4-7% at <10^-7 M were found. Redox-iodometry is a simple, precise, and time-saving substitute for the more laborious and expensive iodometric titration method, which, like other well-established colorimetric procedures, is clearly outbalanced at low concentrations; this underlines the practical importance of redox-iodometry.
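The potential-to-concentration mapping underlying redox-iodometry follows the Nernst equation for the I2 + 2e- ⇌ 2I- couple; a hedged sketch (the standard potential of 0.621 V and the neglect of activity corrections are our assumptions, not the paper's calibration):

```python
import math

R = 8.314      # gas constant, J/(mol*K)
F = 96485.0    # Faraday constant, C/mol

def iodine_iodide_potential(c_i2, c_iodide, temp_k=298.15, e_standard=0.621):
    """Nernst potential (V) for I2 + 2e- <-> 2I-:
    E = E0 + (R*T / 2F) * ln([I2] / [I-]^2), concentrations in mol/L,
    activity coefficients neglected."""
    return e_standard + (R * temp_k) / (2 * F) * math.log(c_i2 / c_iodide**2)
```

At unit concentrations the log term vanishes and the electrode sits at the standard potential; diluting iodide at fixed iodine raises the measured potential.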

  8. Relative validity of a web-based food frequency questionnaire for patients with type 1 and type 2 diabetes in Denmark

    PubMed Central

    Bentzen, S M R; Knudsen, V K; Christiensen, T; Ewers, B

    2016-01-01

    Background: Diet has an important role in the management of diabetes. However, little is known about dietary intake in Danish diabetes patients. A food frequency questionnaire (FFQ) focusing on the most relevant nutrients in diabetes, including carbohydrates, dietary fibres and simple sugars, was developed and validated. Objectives: To examine the relative validity of nutrients calculated by a web-based food frequency questionnaire for patients with diabetes. Design: The FFQ was validated against a 4-day pre-coded food diary (FD). Intakes of nutrients were calculated. Means of intake were compared and cross-classifications of individuals according to intake were performed. To assess the agreement between the two methods, Pearson and Spearman's correlation coefficients and weighted kappa coefficients were calculated. Subjects: Ninety patients (64 with type 1 diabetes and 26 with type 2 diabetes) accepted to participate in the study. Twenty-six were excluded from the final study population. Setting: 64 volunteer diabetes patients at the Steno Diabetes Center. Results: Intakes of carbohydrates, simple sugars, dietary fibres and total energy were higher according to the FFQ compared with the FD. However, when comparing the two methods, intakes were classified in the same or adjacent quartiles for an average of 82% of the selected nutrients. In general, moderate agreement between the two methods was found. Conclusion: The FFQ was validated for assessment of a range of nutrients. Comparing the intakes of selected nutrients (carbohydrates, dietary fibres and simple sugars), patients were classified correctly according to low and high intakes. The FFQ is a reliable dietary assessment tool to use in research and evaluation of patient education for patients with diabetes. PMID:27669176

  9. Comparison of Two Methods for Calculating the Frictional Properties of Articular Cartilage Using a Simple Pendulum and Intact Mouse Knee Joints

    PubMed Central

    Drewniak, Elizabeth I.; Jay, Gregory D.; Fleming, Braden C.; Crisco, Joseph J.

    2009-01-01

    In attempts to better understand the etiology of osteoarthritis, a debilitating joint disease that results in the degeneration of articular cartilage in synovial joints, researchers have focused on joint tribology, the study of joint friction, lubrication, and wear. Several different approaches have been used to investigate the frictional properties of articular cartilage. In this study, we examined two analysis methods for calculating the coefficient of friction (μ) using a simple pendulum system and BL6 murine knee joints (n=10) as the fulcrum. A Stanton linear decay model (Lin μ) and an exponential model that accounts for viscous damping (Exp μ) were fit to the decaying pendulum oscillations. Root mean square error (RMSE), asymptotic standard error (ASE), and coefficient of variation (CV) were calculated to evaluate the fit and measurement precision of each model. This investigation demonstrated that while Lin μ was more repeatable, based on CV (5.0% for Lin μ; 18% for Exp μ), Exp μ provided a better fitting model, based on RMSE (0.165° for Exp μ; 0.391° for Lin μ) and ASE (0.033 for Exp μ; 0.185 for Lin μ), and had a significantly lower coefficient of friction value (0.022±0.007 for Exp μ; 0.042±0.016 for Lin μ) (p=0.001). This study details the use of a simple pendulum for examining cartilage properties in situ that will have applications investigating cartilage mechanics in a variety of species. The Exp μ model provided a more accurate fit to the experimental data for predicting the frictional properties of intact joints in pendulum systems. PMID:19632680
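The two amplitude-envelope models compared in the study can be sketched as follows (a minimal illustration of the functional forms; parameter names are ours, and the fitting itself is omitted):

```python
import math

def stanton_linear(theta0_deg, delta_per_cycle, n):
    """Stanton model: Coulomb-like friction removes a fixed amplitude
    increment each cycle, giving a linearly decaying envelope."""
    return theta0_deg - delta_per_cycle * n

def viscous_exponential(theta0_deg, lam, n):
    """Viscous-damping model: the amplitude envelope decays
    exponentially with cycle count."""
    return theta0_deg * math.exp(-lam * n)
```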

  10. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Meeks, Kelsey; Pantoya, Michelle L.; Green, Micah

    For dispersions containing a single type of particle, it has been observed that the onset of percolation coincides with a critical value of volume fraction. When the volume fraction is calculated based on excluded volume, this critical percolation threshold is nearly invariant to particle shape. The critical threshold has been calculated to high precision for simple geometries using Monte Carlo simulations, but this method is slow at best, and infeasible for complex geometries. This article explores an analytical approach to the prediction of percolation threshold in polydisperse mixtures. Specifically, this paper suggests an extension of the concept of excluded volume, and applies that extension to the 2D binary disk system. The simple analytical expression obtained is compared to Monte Carlo results from the literature. In conclusion, the result may be computed extremely rapidly and matches key parameters closely enough to be useful for composite material design.
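Assuming the near-invariance of total excluded volume at percolation that the abstract builds on, the resulting arithmetic for fully penetrable particles can be sketched as follows (the invariant's numerical value here is a placeholder, not the paper's result):

```python
import math

def critical_number_density(invariant_total_excluded_volume, particle_excluded_volume):
    """If n_c * V_ex is (nearly) shape-invariant at percolation,
    the critical number density follows by division."""
    return invariant_total_excluded_volume / particle_excluded_volume

def volume_fraction(number_density, particle_volume):
    """Volume fraction occupied by fully penetrable (freely
    overlapping) particles: phi = 1 - exp(-n * v)."""
    return 1.0 - math.exp(-number_density * particle_volume)
```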

  11. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yu, Liang; Abild-Pedersen, Frank

    On the basis of an extensive set of density functional theory calculations, it is shown that a simple scheme provides a fundamental understanding of variations in the transition state energies and structures of reaction intermediates on transition metal surfaces across the periodic table. The scheme is built on the bond order conservation principle and requires a limited set of input data, still achieving transition state energies as a function of simple descriptors with an error smaller than those of approaches based on linear fits to a set of calculated transition state energies. Here, we have applied this approach together with linear scaling of adsorption energies to obtain the energetics of the NH3 decomposition reaction on a series of stepped fcc(211) transition metal surfaces. Moreover, this information is used to establish a microkinetic model for the formation of N2 and H2, thus providing insight into the components of the reaction that determines the activity.

  12. Learning molecular energies using localized graph kernels.

    PubMed

    Ferré, Grégoire; Haut, Terry; Barros, Kipton

    2017-03-21

    Recent machine learning methods make it possible to model potential energy of atomic configurations with chemical-level accuracy (as calculated from ab initio calculations) and at speeds suitable for molecular dynamics simulation. Best performance is achieved when the known physical constraints are encoded in the machine learning models. For example, the atomic energy is invariant under global translations and rotations; it is also invariant to permutations of same-species atoms. Although simple to state, these symmetries are complicated to encode into machine learning algorithms. In this paper, we present a machine learning approach based on graph theory that naturally incorporates translation, rotation, and permutation symmetries. Specifically, we use a random walk graph kernel to measure the similarity of two adjacency matrices, each of which represents a local atomic environment. This Graph Approximated Energy (GRAPE) approach is flexible and admits many possible extensions. We benchmark a simple version of GRAPE by predicting atomization energies on a standard dataset of organic molecules.

  13. Learning molecular energies using localized graph kernels

    NASA Astrophysics Data System (ADS)

    Ferré, Grégoire; Haut, Terry; Barros, Kipton

    2017-03-01

    Recent machine learning methods make it possible to model potential energy of atomic configurations with chemical-level accuracy (as calculated from ab initio calculations) and at speeds suitable for molecular dynamics simulation. Best performance is achieved when the known physical constraints are encoded in the machine learning models. For example, the atomic energy is invariant under global translations and rotations; it is also invariant to permutations of same-species atoms. Although simple to state, these symmetries are complicated to encode into machine learning algorithms. In this paper, we present a machine learning approach based on graph theory that naturally incorporates translation, rotation, and permutation symmetries. Specifically, we use a random walk graph kernel to measure the similarity of two adjacency matrices, each of which represents a local atomic environment. This Graph Approximated Energy (GRAPE) approach is flexible and admits many possible extensions. We benchmark a simple version of GRAPE by predicting atomization energies on a standard dataset of organic molecules.

  14. A gauge-independent zeroth-order regular approximation to the exact relativistic Hamiltonian—Formulation and applications

    NASA Astrophysics Data System (ADS)

    Filatov, Michael; Cremer, Dieter

    2005-01-01

    A simple modification of the zeroth-order regular approximation (ZORA) in relativistic theory is suggested to suppress its erroneous gauge dependence to a high level of approximation. The method, coined gauge-independent ZORA (ZORA-GI), can be easily installed in any existing nonrelativistic quantum chemical package by programming simple one-electron matrix elements for the quasirelativistic Hamiltonian. Results of benchmark calculations obtained with ZORA-GI at the Hartree-Fock (HF) and second-order Møller-Plesset perturbation theory (MP2) level for dihalogens X2 (X=F,Cl,Br,I,At) are in good agreement with the results of four-component relativistic calculations (HF level) and experimental data (MP2 level). ZORA-GI calculations based on MP2 or coupled-cluster theory with single and double excitations and a perturbative treatment of triple excitations [CCSD(T)] lead to accurate atomization energies and molecular geometries for the tetroxides of group VIII elements. With ZORA-GI/CCSD(T), an improved estimate for the atomization energy of hassium (Z=108) tetroxide is obtained.

  15. Dynamic Characteristics of a Simple Brayton Cryocycle

    NASA Astrophysics Data System (ADS)

    Kutzschbach, A.; Kauschke, M.; Haberstroh, Ch.; Quack, H.

    2006-04-01

    The goal of the overall program is to develop a dynamic numerical model of helium refrigerators and the associated cooling systems based on commercial simulation software. The aim is to give system designers a tool to search for optimum control strategies during the construction phase of the refrigerator with the help of a plant "simulator". In a first step, a simple Brayton refrigerator has been investigated, which consists of a compressor, an after-cooler, a counter-current heat exchanger, a turboexpander and a heat source. Operating modes are "refrigeration" and "liquefaction". Whereas for the steady state design only component efficiencies are needed and mass and energy balances have to be calculated, for the dynamic calculation one needs also the thermal masses and the helium inventory. Transient mass and energy balances have to be formulated for many small elements and then solved simultaneously for all elements. Starting point of the simulation of the Brayton cycle is the steady state operation at design conditions. The response of the system to step and cyclic changes of the refrigeration or liquefaction rate are calculated and characterized.

  16. Calculation of stochastic broadening due to noise and field errors in the simple map in action-angle coordinates

    NASA Astrophysics Data System (ADS)

    Hinton, Courtney; Punjabi, Alkesh; Ali, Halima

    2008-11-01

    The simple map is the simplest map that has the topology of divertor tokamaks [1]. Recently, the action-angle coordinates for the simple map were calculated analytically, and the simple map was constructed in action-angle coordinates [2]. Action-angle coordinates for the simple map cannot be inverted to real space coordinates (R,Z). Because there is a logarithmic singularity on the ideal separatrix, trajectories cannot cross the separatrix [2]. The simple map in action-angle coordinates is applied to calculate stochastic broadening due to magnetic noise and field errors. Mode numbers for noise + field errors from the DIII-D tokamak are used. Mode numbers are (m,n) = (3,1), (4,1), (6,2), (7,2), (8,2), (9,3), (10,3), (11,3), (12,3) [3]. The common amplitude δ is varied from 0.8×10^-5 to 2.0×10^-5. For this noise and field errors, the width of the stochastic layer in the simple map is calculated. This work is supported by US Department of Energy grants DE-FG02-07ER54937, DE-FG02-01ER54624 and DE-FG02-04ER54793. 1. A. Punjabi, H. Ali, T. Evans, and A. Boozer, Phys. Lett. A 364, 140-145 (2007). 2. O. Kerwin, A. Punjabi, and H. Ali, to appear in Physics of Plasmas. 3. A. Punjabi and H. Ali, P1.012, 35th EPS Conference on Plasma Physics, June 9-13, 2008, Hersonissos, Crete, Greece.

  17. Hyperheat: a thermal signature model for super- and hypersonic missiles

    NASA Astrophysics Data System (ADS)

    van Binsbergen, S. A.; van Zelderen, B.; Veraar, R. G.; Bouquet, F.; Halswijk, W. H. C.; Schleijpen, H. M. A.

    2017-10-01

    In performance prediction of IR sensor systems for missile detection, apart from the sensor specifications, target signatures are essential variables. Very often, for velocities up to Mach 2-2.5, a simple model based on the aerodynamic heating of a perfect gas was used to calculate the temperatures of missile targets. This typically results in an overestimate of the target temperature with correspondingly large infrared signatures and detection ranges. Especially for even higher velocities, this approach is no longer accurate. Alternatives like CFD calculations typically require more complex sets of inputs and significantly more computing power. The MATLAB code Hyperheat was developed to calculate the time-resolved skin temperature of axisymmetric high speed missiles during flight, taking into account the behaviour of non-perfect gas and proper heat transfer to the missile surface. Allowing for variations in parameters like missile shape, altitude, atmospheric profile, angle of attack, flight duration and super- and hypersonic velocities up to Mach 30 enables more accurate calculations of the actual target temperature. The model calculates a map of the skin temperature of the missile, which is updated over the flight time of the missile. The sets of skin temperature maps are calculated within minutes, even for >100 km trajectories, and can be easily converted into thermal infrared signatures for further processing. This paper discusses the approach taken in Hyperheat. Then, the thermal signature of a set of typical missile threats is calculated using both the simple aerodynamic heating model and the Hyperheat code. The respective infrared signatures are compared, as well as the difference in the corresponding calculated detection ranges.
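A simple perfect-gas aerodynamic-heating model of the kind referred to above is commonly the adiabatic recovery (wall) temperature; a minimal sketch (the recovery-factor values are standard approximations for air, not Hyperheat's internals):

```python
def recovery_temperature(t_static_k, mach, gamma=1.4, recovery_factor=0.89):
    """Adiabatic-wall (recovery) temperature for a perfect gas:
    T_r = T_inf * (1 + r * (gamma - 1)/2 * M^2).
    For air, r is roughly 0.85 (laminar) to 0.89 (turbulent)."""
    return t_static_k * (1.0 + recovery_factor * (gamma - 1.0) / 2.0 * mach**2)

# Mach 2 flight at 220 K static temperature.
t_wall = recovery_temperature(220.0, 2.0)
```

The quadratic Mach dependence is why this simple model, reasonable near Mach 2, drifts badly at hypersonic speeds where real-gas effects take over.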

  18. Dynamic modeling of parallel robots for computed-torque control implementation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Codourey, A.

    1998-12-01

    In recent years, increased interest in parallel robots has been observed. Their control with modern theory, such as the computed-torque method, has, however, been restrained, essentially due to the difficulty in establishing a simple dynamic model that can be calculated in real time. In this paper, a simple method based on the virtual work principle is proposed for modeling parallel robots. The mass matrix of the robot, needed for decoupling control strategies, does not explicitly appear in the formulation; however, it can be computed separately, based on kinetic energy considerations. The method is applied to the DELTA parallel robot, leading to a very efficient model that has been implemented in a real-time computed-torque control algorithm.

  19. Student understanding of pH: "I don't know what the log actually is, I only know where the button is on my calculator".

    PubMed

    Watters, Dianne J; Watters, James J

    2006-07-01

    In foundation biochemistry and biological chemistry courses, a major problem area that has been identified is students' lack of understanding of pH, acids, bases, and buffers and their inability to apply their knowledge in solving acid/base problems. The aim of this study was to explore students' conceptions of pH and their ability to solve problems associated with the behavior of biological acids to understand the source of student difficulties. The responses given by most students are characteristic of an atomistic approach in which they pay no attention to the structure of the problem and concentrate only on juggling the elements together until they get a solution. Many students reported difficulty in understanding what the question was asking and were unable to interpret a simple graph showing the pH activity profile of an enzyme. The most startling finding was the lack of basic understanding of logarithms and the inability of all except one student to perform a simple calculation on logs without a calculator. This deficiency in high school mathematical skills severely hampered their understanding of pH. This study has highlighted a widespread deficiency in basic mathematical skills among first year undergraduates and a fragmented understanding of acids and bases. Implications for the way in which the concepts of pH and buffers are taught are discussed. Copyright © 2006 International Union of Biochemistry and Molecular Biology, Inc.
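The log manipulation the students could not perform without a calculator is, in code, a one-liner each way:

```python
import math

def ph_from_concentration(h_molar):
    """pH = -log10([H+]), with [H+] in mol/L."""
    return -math.log10(h_molar)

def concentration_from_ph(ph):
    """[H+] = 10**(-pH), in mol/L."""
    return 10.0 ** (-ph)

# Neutral water: [H+] = 1e-7 M gives pH 7.
ph_neutral = ph_from_concentration(1e-7)
```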

  20. Rendering the "Not-So-Simple" Pendulum Experimentally Accessible.

    ERIC Educational Resources Information Center

    Jackson, David P.

    1996-01-01

    Presents three methods for obtaining experimental data related to acceleration of a simple pendulum. Two of the methods involve angular position measurements and the subsequent calculation of the acceleration while the third method involves a direct measurement of the acceleration. Compares these results with theoretical calculations and…

  1. SU-F-T-267: A Clarkson-Based Independent Dose Verification for the Helical Tomotherapy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nagata, H; Juntendo University, Hongo, Tokyo; Hongo, H

    2016-06-15

Purpose: There have been few reports of independent dose verification for Tomotherapy. We evaluated the accuracy and effectiveness of an independent dose verification system for Tomotherapy. Methods: Simple MU Analysis (SMU, Triangle Product, Ishikawa, Japan) was used as the independent verification system; it implements a Clarkson-based dose calculation algorithm using the CT image dataset. For dose calculation in the SMU, the Tomotherapy machine-specific dosimetric parameters (TMR, Scp, OAR and MLC transmission factor) were registered as the machine beam data. Dose calculation was performed after the Tomotherapy sinogram from the DICOM-RT plan information was converted to MU and MLC-location information at more finely segmented control points. The performance of the SMU was assessed by point dose measurement in non-IMRT and IMRT plans (a simple target and a mock prostate plan). Subsequently, 30 patients' treatment plans for prostate were compared. Results: Dose differences between the SMU and the measurement were within 3% for all cases in the non-IMRT plans. In the IMRT plan for the simple target, the differences (average ± 1 SD) were −0.70 ± 1.10% (SMU vs. TPS), −0.40 ± 0.10% (measurement vs. TPS) and −1.20 ± 1.00% (measurement vs. SMU), respectively. For the mock prostate, the differences were −0.40 ± 0.60% (SMU vs. TPS), −0.50 ± 0.90% (measurement vs. TPS) and −0.90 ± 0.60% (measurement vs. SMU), respectively. For the patients' plans, the difference was −0.50 ± 2.10% (SMU vs. TPS). Conclusion: A Clarkson-based independent dose verification for Tomotherapy is clinically usable as a secondary check, with a tolerance level similar to that of AAPM Task Group 114. This research is partially supported by the Japan Agency for Medical Research and Development (AMED).

  2. Accurate reporting of adherence to inhaled therapies in adults with cystic fibrosis: methods to calculate “normative adherence”

    PubMed Central

    Hoo, Zhe Hui; Curley, Rachael; Campbell, Michael J; Walters, Stephen J; Hind, Daniel; Wildman, Martin J

    2016-01-01

    Background Preventative inhaled treatments in cystic fibrosis will only be effective in maintaining lung health if used appropriately. An accurate adherence index should therefore reflect treatment effectiveness, but the standard method of reporting adherence, that is, as a percentage of the agreed regimen between clinicians and people with cystic fibrosis, does not account for the appropriateness of the treatment regimen. We describe two different indices of inhaled therapy adherence for adults with cystic fibrosis which take into account effectiveness, that is, “simple” and “sophisticated” normative adherence. Methods to calculate normative adherence Denominator adjustment involves fixing a minimum appropriate value based on the recommended therapy given a person’s characteristics. For simple normative adherence, the denominator is determined by the person’s Pseudomonas status. For sophisticated normative adherence, the denominator is determined by the person’s Pseudomonas status and history of pulmonary exacerbations over the previous year. Numerator adjustment involves capping the daily maximum inhaled therapy use at 100% so that medication overuse does not artificially inflate the adherence level. Three illustrative cases Case A is an example of inhaled therapy under prescription based on Pseudomonas status resulting in lower simple normative adherence compared to unadjusted adherence. Case B is an example of inhaled therapy under-prescription based on previous exacerbation history resulting in lower sophisticated normative adherence compared to unadjusted adherence and simple normative adherence. Case C is an example of nebulizer overuse exaggerating the magnitude of unadjusted adherence. Conclusion Different methods of reporting adherence can result in different magnitudes of adherence. We have proposed two methods of standardizing the calculation of adherence which should better reflect treatment effectiveness. 
The value of these indices can be tested empirically in clinical trials in which there is careful definition of treatment regimens related to key patient characteristics, alongside accurate measurement of health outcomes. PMID:27284242
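The two adjustments described above (denominator fixed at the recommended regimen, daily use capped at 100%) can be sketched in a few lines. This is an illustrative simplification consistent with the abstract, not the authors' published algorithm; the function and variable names are mine:

```python
def unadjusted_adherence(daily_doses, agreed_per_day):
    """Standard adherence: doses taken as a % of the agreed regimen."""
    return 100.0 * sum(daily_doses) / (agreed_per_day * len(daily_doses))

def normative_adherence(daily_doses, agreed_per_day, recommended_per_day):
    """Normative adherence: the denominator is fixed at the recommended
    regimen (never below what was agreed), and each day's use is capped
    at 100% of that denominator so overuse cannot inflate the score."""
    denom_per_day = max(agreed_per_day, recommended_per_day)
    capped = sum(min(d, denom_per_day) for d in daily_doses)
    return 100.0 * capped / (denom_per_day * len(daily_doses))

# A Case A-style week: 1 dose/day agreed, but Pseudomonas status implies
# 2 doses/day are recommended; 4 doses were actually taken.
week = [2, 2, 0, 0, 0, 0, 0]
```

Unadjusted adherence for this week is about 57%, while simple normative adherence is about 29%, illustrating how under-prescription lowers the normative figure.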

  3. Double simple-harmonic-oscillator formulation of the thermal equilibrium of a fluid interacting with a coherent source of phonons

    NASA Technical Reports Server (NTRS)

    Defacio, B.; Vannevel, Alan; Brander, O.

    1993-01-01

A formulation is given for a collection of phonons (sound) in a fluid at non-zero temperature which uses the simple harmonic oscillator twice: once to give a stochastic thermal 'noise' process and once to generate a coherent Glauber state of phonons. Simple thermodynamic observables are calculated and the acoustic two-point function ('contrast') is presented. The role of coherence in an equilibrium system is clarified by these results, and the simple harmonic oscillator is a key structure in both the formulation and the calculations.

  4. Highly selective and sensitive determination of Cu2+ in drink and water samples based on a 1,8-diaminonaphthalene derived fluorescent sensor

    NASA Astrophysics Data System (ADS)

    Sun, Tao; Li, Yang; Niu, Qingfen; Li, Tianduo; Liu, Yan

    2018-04-01

A new, simple and efficient fluorescent sensor L, based on a 1,8-diaminonaphthalene Schiff base, has been developed for highly sensitive and selective determination of Cu2+ in drinks and water. Detection of Cu2+ over the other tested metal ions is accompanied by an obvious color change from blue to colorless, easily seen by the naked eye. The detection limit is as low as 13.2 nM and the response is fast, within 30 s. The 1:1 binding mechanism was confirmed by fluorescence measurements, IR analysis and DFT calculations. Importantly, sensor L was employed for quick detection of Cu2+ in drink and environmental water samples with satisfactory results, providing a simple, rapid, reliable and feasible Cu2+-sensing method.

  5. PVWatts Version 5 Manual

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dobos, A. P.

    2014-09-01

The NREL PVWatts calculator is a web application developed by the National Renewable Energy Laboratory (NREL) that estimates the electricity production of a grid-connected photovoltaic system based on a few simple inputs. PVWatts combines a number of sub-models to predict overall system performance, and includes several built-in parameters that are hidden from the user. This technical reference describes the sub-models, documents assumptions and hidden parameters, and explains the sequence of calculations that yield the final system performance estimate. This reference is applicable to the significantly revised version of PVWatts released by NREL in 2014.
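The spirit of a "few simple inputs" estimator can be shown with a first-order annual energy sketch. This is an illustrative approximation only, not the PVWatts model itself, which handles irradiance transposition, temperature effects and inverter efficiency in separate sub-models; names and default loss value are my assumptions:

```python
def annual_energy_kwh(dc_rating_kw: float, peak_sun_hours_per_day: float,
                      system_losses: float = 0.14) -> float:
    """First-order annual AC energy estimate for a grid-tied PV array:
    rated DC power x equivalent full-sun hours per day x 365 x (1 - losses)."""
    return dc_rating_kw * peak_sun_hours_per_day * 365.0 * (1.0 - system_losses)

# Example: a 4 kW array at 4.5 sun-hours/day with 14% combined losses
# yields roughly 5,650 kWh per year.
```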

  6. Effect of Boundary Conditions on the Axial Compression Buckling of Homogeneous Orthotropic Composite Cylinders in the Long Column Range

    NASA Technical Reports Server (NTRS)

    Mikulas, Martin M., Jr.; Nemeth, Michael P.; Oremont, Leonard; Jegley, Dawn C.

    2011-01-01

    Buckling loads for long isotropic and laminated cylinders are calculated based on Euler, Fluegge and Donnell's equations. Results from these methods are presented using simple parameters useful for fundamental design work. Buckling loads for two types of simply supported boundary conditions are calculated using finite element methods for comparison to select cases of the closed form solution. Results indicate that relying on Donnell theory can result in an over-prediction of buckling loads by as much as 40% in isotropic materials.
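In the long-column range the baseline is the Euler critical load. A minimal sketch (isotropic, pinned ends; not the paper's orthotropic analysis, and the example numbers are mine):

```python
import math

def euler_buckling_load(E: float, I: float, L: float, K: float = 1.0) -> float:
    """Critical axial load (N) of a long column: P_cr = pi^2 * E * I / (K*L)^2.
    K = 1.0 corresponds to pinned-pinned (simply supported) ends."""
    return math.pi ** 2 * E * I / (K * L) ** 2

def thin_tube_moment_of_inertia(r: float, t: float) -> float:
    """Area moment of inertia (m^4) of a thin-walled circular tube: I ~= pi * r^3 * t."""
    return math.pi * r ** 3 * t

# Example: aluminium tube, E = 70 GPa, r = 50 mm, t = 2 mm, L = 3 m, pinned
# ends: P_cr is on the order of 60 kN.
```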

  7. Nonlinear analysis of NPP safety against the aircraft attack

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Králik, Juraj, E-mail: juraj.kralik@stuba.sk; Králik, Juraj, E-mail: kralik@fa.stuba.sk

The paper presents a nonlinear probabilistic analysis of the reinforced concrete buildings of a nuclear power plant under aircraft attack. The dynamic load is defined in time on the basis of airplane impact simulations considering the real stiffness, masses, direction and velocity of the flight. The dynamic response is calculated in ANSYS using the transient nonlinear analysis solution method. The damage of the concrete wall is evaluated in accordance with the NDRC standard, considering spalling, scabbing and perforation effects. Simple and detailed calculations of the wall damage are compared.

  8. A Simple Method for Calculating Clebsch-Gordan Coefficients

    ERIC Educational Resources Information Center

    Klink, W. H.; Wickramasekara, S.

    2010-01-01

    This paper presents a simple method for calculating Clebsch-Gordan coefficients for the tensor product of two unitary irreducible representations (UIRs) of the rotation group. The method also works for multiplicity-free irreducible representations appearing in the tensor product of any number of UIRs of the rotation group. The generalization to…
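The coefficients in question also have a standard closed-form (Racah) sum, which makes a compact reference implementation possible. This sketch is the textbook formula, not the method of the paper above:

```python
import math

def _fact(x) -> int:
    """Factorial of a value that is integral up to rounding error."""
    return math.factorial(round(x))

def clebsch_gordan(j1, m1, j2, m2, J, M) -> float:
    """<j1 m1; j2 m2 | J M> from the standard closed-form (Racah) sum.
    Angular momenta may be half-integral, passed as e.g. 0.5."""
    if abs(m1 + m2 - M) > 1e-9 or J < abs(j1 - j2) or J > j1 + j2 or abs(M) > J:
        return 0.0
    pref = math.sqrt(
        (2 * J + 1)
        * _fact(j1 + j2 - J) * _fact(j1 - j2 + J) * _fact(-j1 + j2 + J)
        / _fact(j1 + j2 + J + 1)
        * _fact(J + M) * _fact(J - M)
        * _fact(j1 - m1) * _fact(j1 + m1) * _fact(j2 - m2) * _fact(j2 + m2)
    )
    # sum over all k for which every factorial argument is non-negative
    kmin = max(0, round(j2 - J - m1), round(j1 + m2 - J))
    kmax = min(round(j1 + j2 - J), round(j1 - m1), round(j2 + m2))
    s = sum(
        (-1) ** k / (_fact(k) * _fact(j1 + j2 - J - k) * _fact(j1 - m1 - k)
                     * _fact(j2 + m2 - k) * _fact(J - j2 + m1 + k)
                     * _fact(J - j1 - m2 + k))
        for k in range(kmin, kmax + 1)
    )
    return pref * s
```

For two spin-1/2 particles this reproduces the familiar triplet and singlet coefficients ±1/√2.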

  9. Associations between Verbal Learning Slope and Neuroimaging Markers across the Cognitive Aging Spectrum.

    PubMed

    Gifford, Katherine A; Phillips, Jeffrey S; Samuels, Lauren R; Lane, Elizabeth M; Bell, Susan P; Liu, Dandan; Hohman, Timothy J; Romano, Raymond R; Fritzsche, Laura R; Lu, Zengqi; Jefferson, Angela L

    2015-07-01

    A symptom of mild cognitive impairment (MCI) and Alzheimer's disease (AD) is a flat learning profile. Learning slope calculation methods vary, and the optimal method for capturing neuroanatomical changes associated with MCI and early AD pathology is unclear. This study cross-sectionally compared four different learning slope measures from the Rey Auditory Verbal Learning Test (simple slope, regression-based slope, two-slope method, peak slope) to structural neuroimaging markers of early AD neurodegeneration (hippocampal volume, cortical thickness in parahippocampal gyrus, precuneus, and lateral prefrontal cortex) across the cognitive aging spectrum [normal control (NC); (n=198; age=76±5), MCI (n=370; age=75±7), and AD (n=171; age=76±7)] in ADNI. Within diagnostic group, general linear models related slope methods individually to neuroimaging variables, adjusting for age, sex, education, and APOE4 status. Among MCI, better learning performance on simple slope, regression-based slope, and late slope (Trial 2-5) from the two-slope method related to larger parahippocampal thickness (all p-values<.01) and hippocampal volume (p<.01). Better regression-based slope (p<.01) and late slope (p<.01) were related to larger ventrolateral prefrontal cortex in MCI. No significant associations emerged between any slope and neuroimaging variables for NC (p-values ≥.05) or AD (p-values ≥.02). Better learning performances related to larger medial temporal lobe (i.e., hippocampal volume, parahippocampal gyrus thickness) and ventrolateral prefrontal cortex in MCI only. Regression-based and late slope were most highly correlated with neuroimaging markers and explained more variance above and beyond other common memory indices, such as total learning. Simple slope may offer an acceptable alternative given its ease of calculation.
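Three of the slope definitions compared above can be sketched directly. These are illustrative operationalizations consistent with common usage, not necessarily the study's exact formulas (the two-slope method is omitted):

```python
def simple_slope(trials):
    """Simple slope: (last trial - first trial) / (number of trials - 1)."""
    return (trials[-1] - trials[0]) / (len(trials) - 1)

def regression_slope(trials):
    """Regression-based slope: OLS slope of score on trial number."""
    n = len(trials)
    xs = range(1, n + 1)
    mean_x = sum(xs) / n
    mean_y = sum(trials) / n
    num = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, trials))
    den = sum((x - mean_x) ** 2 for x in xs)
    return num / den

def peak_slope(trials):
    """Peak slope: the largest single-trial gain."""
    return max(b - a for a, b in zip(trials, trials[1:]))

# Five learning trials with a typical negatively accelerated curve:
scores = [5, 7, 8, 9, 9]
```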

  10. HENRY'S LAW CALCULATOR

    EPA Science Inventory

    On-Site was developed to provide modelers and model reviewers with prepackaged tools ("calculators") for performing site assessment calculations. The philosophy behind OnSite is that the convenience of the prepackaged calculators helps provide consistency for simple calculations,...
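A typical calculation behind such a tool is converting a Henry's law constant to its dimensionless air/water form. A minimal sketch of that standard conversion (assumed form H' = H/RT; this is not the EPA calculator's code, and the benzene value is an illustrative literature figure):

```python
R_ATM_M3 = 8.20574e-5  # gas constant, atm*m^3/(mol*K)

def dimensionless_henry(h_atm_m3_per_mol: float, temp_k: float = 298.15) -> float:
    """Convert a Henry's law constant H (atm*m^3/mol) to the dimensionless
    air/water partition form H' = H / (R * T)."""
    return h_atm_m3_per_mol / (R_ATM_M3 * temp_k)

# Benzene at 25 C: H ~ 5.55e-3 atm*m^3/mol, giving H' ~ 0.23.
```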

  11. Accuracy of two simple methods for estimation of thyroidal {sup 131}I kinetics for dosimetry-based treatment of Graves' disease

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Traino, A. C.; Xhafa, B.; Sezione di Fisica Medica, U.O. Fisica Sanitaria, Azienda Ospedaliero-Universitaria Pisana, via Roma n. 67, Pisa 56125

    2009-04-15

One of the major challenges to the more widespread use of individualized, dosimetry-based radioiodine treatment of Graves' disease is the development of a reasonably fast, simple, and cost-effective method to measure thyroidal {sup 131}I kinetics in patients. Even though the fixed-activity administration method does not optimize the therapy, often giving too high or too low a dose to the gland, it provides effective treatment for almost 80% of patients without consuming excessive time and resources. In this article two simple methods for the evaluation of the kinetics of {sup 131}I in the thyroid gland are presented and discussed. The first is based on two measurements 4 and 24 h after a diagnostic {sup 131}I administration, and the second on one measurement 4 h after such an administration together with a linear correlation between this measurement and the maximum uptake in the thyroid. The thyroid absorbed dose calculated by each of the two methods is compared to that calculated by a more complete {sup 131}I kinetics evaluation, based on seven thyroid uptake measurements for 35 patients at various times after the therapy administration. There are differences between the thyroid absorbed doses derived by each of the two simpler methods and the "reference" value (derived from the more complete uptake measurements following the therapeutic {sup 131}I administration), with 20% median and 40% 90th-percentile differences for the first method (i.e., based on two thyroid uptake measurements at 4 and 24 h after {sup 131}I administration) and 25% median and 45% 90th-percentile differences for the second method (i.e., based on one measurement at 4 h post-administration). Predictably, although relatively fast and convenient, neither of these simpler methods appears to be as accurate as thyroid dose estimates based on more complete kinetic data.

  12. igun - A program for the simulation of positive ion extraction including magnetic fields

    NASA Astrophysics Data System (ADS)

    Becker, R.; Herrmannsfeldt, W. B.

    1992-04-01

igun is a program for the simulation of positive ion extraction from plasmas. It is based on the well-known program egun for the calculation of electron and ion trajectories in electron guns and lenses. The mathematical treatment of the plasma sheath is based on a simple analytical model, which provides a numerically stable calculation of the sheath potentials. In contrast to other ion extraction programs, igun is able to determine the extracted ion current by itself in succeeding cycles of iteration. However, it is also possible to set values of current, plasma density, or ion current density. Either axisymmetric or rectangular coordinates can be used, including axisymmetric or transverse magnetic fields.

  13. Air pollution from future giant jetports

    NASA Technical Reports Server (NTRS)

    Fay, J. A.

    1970-01-01

    Because aircraft arrive and depart in a generally upwind direction, the pollutants are deposited in a narrow corridor extending downwind of the airport. Vertical mixing in the turbulent atmosphere will not dilute such a trail, since the pollutants are distributed vertically during the landing and take-off operations. As a consequence, airport pollution may persist twenty to forty miles downwind without much attenuation. Based on this simple meteorological model, calculations of the ambient levels of nitric oxide and particulates to be expected downwind of a giant jetport show them to be about equal to those in present urban environments. These calculations are based on measured emission rates from jet engines and estimates of aircraft performance and traffic for future jetports.
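The narrow-corridor argument amounts to a steady-state box model. A minimal sketch of that kind of estimate (an assumption-laden illustration, not the article's actual calculation; the example emission and geometry figures are mine):

```python
def corridor_concentration(emission_rate_g_s: float, wind_speed_m_s: float,
                           corridor_width_m: float, mixing_height_m: float) -> float:
    """Steady-state box-model concentration (g/m^3) in a downwind corridor:
    C = Q / (u * W * H). Constant with distance, consistent with the claim
    that the pollutant trail persists far downwind without much attenuation."""
    return emission_rate_g_s / (wind_speed_m_s * corridor_width_m * mixing_height_m)

# Example: 100 g/s of NOx into a 2 km-wide, 500 m-deep corridor at 5 m/s
# gives 2e-5 g/m^3, i.e. 20 ug/m^3, comparable to urban ambient levels.
```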

  14. Vapor-phase deposition of polymers as a simple and versatile technique to generate paper-based microfluidic platforms for bioassay applications.

    PubMed

    Demirel, Gokhan; Babur, Esra

    2014-05-21

    Given their simplicity and functionality, paper-based microfluidic systems are considered to be ideal and promising bioassay platforms for use in less developed countries or in point-of-care services. Although a series of innovative techniques have recently been demonstrated for the fabrication of such platforms, development of simple, inexpensive and versatile new strategies are still needed in order to reach their full potential. In this communication, we describe a simple yet facile approach to fabricate paper-based sensor platforms with a desired design through a vapor-phase polymer deposition technique. We also show that the fabricated platforms could be readily employed for the detection of various biological target molecules including glucose, protein, ALP, ALT, and uric acid. The limit of detection for each target molecule was calculated to be 25 mg dL(-1) for glucose, 1.04 g L(-1) for protein, 7.81 unit per L for ALP, 1.6 nmol L(-1) for ALT, and 0.13 mmol L(-1) for uric acid.

  15. Earth Impact Effects Program: Estimating the Regional Environmental Consequences of Impacts On Earth

    NASA Astrophysics Data System (ADS)

    Collins, G. S.; Melosh, H. J.; Marcus, R. A.

    2009-12-01

    The Earth Impact Effects Program (www.lpl.arizona.edu/impacteffects) is a popular web-based calculator for estimating the regional environmental consequences of a comet or asteroid impact on Earth. It is widely used, both by inquisitive members of the public as an educational device and by scientists as a simple research tool. It applies a variety of scaling laws, based on theory, nuclear explosion test data, observations from terrestrial and extraterrestrial craters and the results of small-scale impact experiments and numerical modelling, to quantify the principal hazards that might affect the people, buildings and landscape in the vicinity of an impact. The program requires six inputs: impactor diameter, impactor density, impact velocity prior to atmospheric entry, impact angle, and the target type (sedimentary rock, crystalline rock, or a water layer above rock), as well as the distance from the impact at which the environmental effects are to be calculated. The program includes simple algorithms for estimating the fate of the impactor during atmospheric traverse, the thermal radiation emitted by the impact plume (fireball) and the intensity of seismic shaking. The program also approximates various dimensions of the impact crater and ejecta deposit, as well as estimating the severity of the air blast in both crater-forming and airburst impacts. We illustrate the strengths and limitations of the program by comparing its predictions (where possible) against known impacts, such as Carancas, Peru (2007); Tunguska, Siberia (1908); Barringer (Meteor) crater, Arizona (ca 49 ka). These tests demonstrate that, while adequate for large impactors, the simple approximation of atmospheric entry in the original program does not properly account for the disruption and dispersal of small impactors as they traverse Earth's atmosphere. 
We describe recent improvements to the calculator to better describe atmospheric entry of small meteors, the consequences of oceanic impacts, and the recurrence interval between impacts of a given size. In addition, we assess the potential regional hazard of hypothetical impact scenarios of different scales. Our simple calculator suggests that the most wide-reaching regional hazard is seismic shaking: both ejecta-deposit thickness and airblast pressure decay much more rapidly with distance than seismic ground motion. Close to the impact site the most severe hazard is from thermal radiation; however, the curvature of the Earth implies that distant localities are shielded from direct thermal radiation because the fireball is below the horizon.
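One representative scaling law of the kind the program applies is pi-group scaling for the transient crater diameter. The exponents and coefficient below follow the commonly cited form from the program's accompanying paper, quoted here as an assumption; this sketch is not the calculator's code:

```python
import math

def transient_crater_diameter(impactor_d_m: float, impactor_density: float,
                              target_density: float, velocity_m_s: float,
                              angle_deg: float, g: float = 9.81) -> float:
    """Pi-group scaling estimate of transient crater diameter (m):
    D_tc = 1.161 * (rho_i/rho_t)^(1/3) * L^0.78 * v^0.44 * g^-0.22 * sin(theta)^(1/3)."""
    return (1.161 * (impactor_density / target_density) ** (1 / 3)
            * impactor_d_m ** 0.78 * velocity_m_s ** 0.44 * g ** -0.22
            * math.sin(math.radians(angle_deg)) ** (1 / 3))

# A 50 m iron impactor (7,800 kg/m^3) striking sedimentary rock (2,500 kg/m^3)
# at 12.8 km/s and 45 degrees, roughly a Barringer-like scenario, gives a
# transient crater on the order of a kilometre across.
```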

  16. Shear, principal, and equivalent strains in equal-channel angular deformation

    NASA Astrophysics Data System (ADS)

    Xia, K.; Wang, J.

    2001-10-01

    The shear and principal strains involved in equal channel angular deformation (ECAD) were analyzed using a variety of methods. A general expression for the total shear strain calculated by integrating infinitesimal strain increments gave the same result as that from simple geometric considerations. The magnitude and direction of the accumulated principal strains were calculated based on a geometric and a matrix algebra method, respectively. For an intersecting angle of π/2, the maximum normal strain is 0.881 in the direction at π/8 (22.5 deg) from the longitudinal direction of the material in the exit channel. The direction of the maximum principal strain should be used as the direction of grain elongation. Since the principal direction of strain rotates during ECAD, the total shear strain and principal strains so calculated do not have the same meaning as those in a strain tensor. Consequently, the “equivalent” strain based on the second invariant of a strain tensor is no longer an invariant. Indeed, the equivalent strains calculated using the total shear strain and that using the total principal strains differed as the intensity of deformation increased. The method based on matrix algebra is potentially useful in mathematical analysis and computer calculation of ECAD.
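The quoted numbers (0.881 at 22.5 deg for an intersecting angle of pi/2) follow from treating one pass as simple shear with gamma = 2 cot(phi/2). A minimal sketch of that calculation, using the left Cauchy-Green tensor rather than the paper's matrix-algebra derivation:

```python
import math

def ecad_principal_strain(phi: float):
    """For one ECAD pass through channels intersecting at angle phi, the shear
    strain is gamma = 2 / tan(phi/2). Treating the pass as simple shear with
    deformation gradient F = [[1, gamma], [0, 1]], the maximum principal
    logarithmic strain and its direction follow from B = F F^T."""
    gamma = 2.0 / math.tan(phi / 2.0)
    # larger eigenvalue of B = [[1 + g^2, g], [g, 1]] is the squared stretch
    lam_sq = (2.0 + gamma ** 2 + gamma * math.sqrt(4.0 + gamma ** 2)) / 2.0
    principal_strain = 0.5 * math.log(lam_sq)  # ln(principal stretch)
    # principal direction: tan(2*theta) = 2*B12 / (B11 - B22) = 2/gamma
    theta = 0.5 * math.atan2(2.0 * gamma, gamma ** 2)
    return gamma, principal_strain, theta
```

For phi = pi/2 this gives gamma = 2, a principal strain of ln(1 + sqrt(2)) ~ 0.881, directed at pi/8 (22.5 deg) from the exit-channel axis, matching the values in the abstract.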

  17. Comparing energy payback and simple payback period for solar photovoltaic systems

    NASA Astrophysics Data System (ADS)

    Kessler, Will

    2017-11-01

Installing a solar photovoltaic (PV) array is both an environmental and a financial decision. The financial arguments often take priority over the environmental ones because installing solar is capital-intensive. The Simple Payback period (SPB) is often assessed prior to the adoption of solar PV at a residence or a business. Although it better describes the value of solar PV electricity in terms of sustainability, the Energy Payback period (EPB) is seldom used to gauge the merits of an installation. Using published estimates of embodied energies, EPB was calculated for four solar PV plants utilizing crystalline-Si technology: three actual commercial installations located in the northeastern U.S., and a fourth based on a simulated 20-kilowatt roof-mounted system in Wrocław, Poland. Simple Payback was calculated based on the initial capital cost and on the avoided electricity costs available under net-metering tariffs, which at present give a 1:1 credit ratio in the U.S. and a 1:0.7 credit ratio in Poland. For all projects, the EPB time was estimated at between 1.9 and 2.6 years. In contrast, the SPB for installed systems in the northeastern U.S. ranged from 13.3 to 14.6 years, and was estimated at 13.5 years for the example system in Lower Silesia, Poland. The comparison between SPB and EPB shows a disparity between motivational time frames, in which the wait for financial return is considerably longer than the wait for net energy harvest and the start of sustainable power production.
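Both payback definitions are single divisions. A minimal sketch with illustrative numbers chosen to fall in the article's reported ranges (the specific cost, price and embodied-energy figures are mine, not the article's):

```python
def simple_payback_years(capital_cost: float, annual_kwh: float,
                         electricity_price: float, credit_ratio: float = 1.0) -> float:
    """Simple payback: capital cost / annual avoided electricity cost, with
    net-metered energy credited at credit_ratio (1.0 in the U.S., 0.7 in Poland)."""
    return capital_cost / (annual_kwh * electricity_price * credit_ratio)

def energy_payback_years(embodied_energy_kwh: float, annual_kwh: float) -> float:
    """Energy payback: embodied energy of the system / annual energy output."""
    return embodied_energy_kwh / annual_kwh

# Illustrative: a $30,000 system producing 12,000 kWh/yr at $0.18/kWh pays
# back financially in ~13.9 yr, while ~26,400 kWh of embodied energy pays
# back energetically in 2.2 yr: the disparity the article highlights.
```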

  18. Influence of tidal fluctuations in the water table and methods applied in the calculation of hydrogeological parameters. The case of Motril-Salobreña coastal aquifer

    NASA Astrophysics Data System (ADS)

    Sánchez Úbeda, Juan Pedro; Calvache Quesada, María Luisa; Duque Calvache, Carlos; López Chicano, Manuel; Martín Rosales, Wenceslao

    2013-04-01

The hydraulic properties of coastal aquifers are essential for any estimation of groundwater flow, whether by simple calculations or by modelling techniques. Slug tests or tracer tests are usually the techniques selected for resolving the uncertainties. Other methods are based on the information associated with the changes induced by tidal fluctuations in coastal zones. The Tidal Response Method is a simple technique based on two different factors: the tidal efficiency factor and the time lag of the tidal oscillation relative to the hydraulic head oscillation induced in the aquifer. The method was described for a homogeneous and isotropic confined aquifer; however, it is applicable to unconfined aquifers when the ratio of the maximum water table fluctuation to the saturated aquifer thickness is less than 0.02. Moreover, the tidal equations assume that the tidal signal follows a sinusoidal wave, whereas in reality the tidal wave is a set of simple harmonic components. For this reason, other methods based on Fourier series have been applied in earlier studies to describe the tidal wave. Nevertheless, the Tidal Response Method remains an acceptable and useful technique in the Motril-Salobreña coastal aquifer. From recent hydraulic head data sets at the discharge zone of the Motril-Salobreña aquifer, transmissivity values have been calculated using different methods based on the tidal fluctuations and their effects on the hydraulic head. The effects of the tidal oscillation are detected in two boreholes, 132 m and 38 m deep, located 300 m from the coastline. The main difficulties in applying the method were the assumption of a confined aquifer and the variation of the effect with depth (which is not included in the tidal equations), but both were resolved.
On the one hand, the storage coefficient (S) in this unconfined aquifer was assumed to be close to confined-aquifer values because of the hydrogeological conditions at depth, where saturation does not change. On the other hand, hydraulic head fluctuations due to tidal oscillations were monitored in several shallow boreholes close to the shoreline and compared with those in the deep ones. The transmissivities calculated with the tidal efficiency factor in the deep boreholes are about one order of magnitude lower than those obtained with the time-lag method. Nevertheless, the application of these calculation methods based on tidal response in unconfined aquifers provides knowledge about the characteristics of the discharge zone and groundwater flow patterns, and it may be an easy and profitable alternative to traditional pumping tests.
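The two tidal estimators can be written down from the classical Ferris/Jacob solution for a sinusoidal boundary. A minimal sketch, assuming that standard solution applies (this is not the authors' code, and symbols follow the usual convention: x = distance to the coast, S = storage coefficient, t0 = tidal period):

```python
import math

def transmissivity_from_efficiency(x: float, S: float, t0: float,
                                   tidal_efficiency: float) -> float:
    """Amplitude-ratio form: TE = exp(-x * sqrt(pi*S/(t0*T)))
    inverted for transmissivity: T = pi * S * x**2 / (t0 * ln(TE)**2)."""
    return math.pi * S * x ** 2 / (t0 * math.log(tidal_efficiency) ** 2)

def transmissivity_from_time_lag(x: float, S: float, t0: float,
                                 time_lag: float) -> float:
    """Time-lag form: t_lag = (x*t0/(2*pi)) * sqrt(pi*S/(t0*T))
    inverted for transmissivity: T = S * x**2 * t0 / (4*pi * t_lag**2)."""
    return S * x ** 2 * t0 / (4.0 * math.pi * time_lag ** 2)

# A borehole 300 m from the coast, S = 1e-3, semidiurnal period t0 ~ 44,700 s:
# an efficiency of ~0.57 or a lag of ~4,000 s both imply T ~ 0.02 m^2/s.
```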

  19. Computational modeling of properties

    NASA Technical Reports Server (NTRS)

    Franz, Judy R.

    1994-01-01

A simple model was developed to calculate the electronic transport parameters in disordered semiconductors in the strongly scattered regime. The calculation is based on a Green function solution to the Kubo equation for the energy-dependent conductivity. This solution, together with a rigorous calculation of the temperature-dependent chemical potential, allows the determination of the dc conductivity and the thermopower. For wide-gap semiconductors with single defect bands, these transport properties are investigated as a function of defect concentration, defect energy, Fermi level, and temperature. Under certain conditions the calculated conductivity is quite similar to the measured conductivity in liquid II-VI semiconductors in that two distinct temperature regimes are found. Under different conditions the conductivity is found to decrease with temperature; this result agrees with measurements in amorphous Si. Finally, the calculated thermopower can be positive or negative and may change sign with temperature or defect concentration.


1. Evaluating BTEX concentration in soil using a simple one-dimensional vadose zone model: application to a new fuel station in Valencia (Spain)

    NASA Astrophysics Data System (ADS)

    Rodrigo-Ilarri, Javier; Rodrigo-Clavero, María-Elena

    2017-04-01

Specific studies of the impact of fuel spills on the vadose zone are currently required when applying for the environmental permits for new fuel stations. One-dimensional mathematical models of the fate and transport of BTEX in the vadose zone can therefore be used to understand the behavior of the pollutants under different scenarios. VLEACH, a simple one-dimensional finite-difference vadose zone leaching model, uses a numerical approximation of the Millington equation, a theoretically based model for gaseous diffusion in porous media. This equation has been widely used in the fields of soil physics and hydrology to calculate gaseous or vapor diffusion in porous media. The model describes the movement of organic contaminants within and between three different phases: (1) as a solute dissolved in water, (2) as a gas in the vapor phase, and (3) as an adsorbed compound in the soil phase. Initially, the equilibrium distribution of contaminant mass between the liquid, gas and sorbed phases is calculated. Transport processes are then simulated. Liquid advective transport is calculated based on user-defined values for infiltration and soil water content. The contaminant in the vapor phase migrates into or out of adjacent cells based on the calculated concentration gradients between adjacent cells. After mass is exchanged between the cells, the total mass in each cell is recalculated and re-equilibrated between the different phases. At the end of the simulation, (1) an overall area-weighted groundwater impact for the entire modeled area and (2) the concentration profile of BTEX in the vadose zone are calculated. This work shows the results obtained when applying VLEACH to analyze the contamination scenario caused by a BTEX spill from a set of future underground storage tanks at a new fuel station in Aldaia (Valencia region, Spain).
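The linear-equilibrium three-phase partitioning step described above can be sketched directly. This is the standard textbook scheme, not VLEACH's source code; the parameter values in the example are illustrative assumptions:

```python
def equilibrium_partition(total_conc_g_m3: float, theta_w: float, theta_a: float,
                          bulk_density_kg_m3: float, kd_m3_kg: float,
                          henry_dimensionless: float):
    """Distribute a bulk soil concentration (g contaminant per m^3 of soil)
    among three phases at linear equilibrium:
      C_total = theta_w*Cw + theta_a*Ca + rho_b*Cs,
    with Ca = H' * Cw (Henry's law) and Cs = Kd * Cw (linear sorption).
    Returns (Cw in g/m^3 water, Ca in g/m^3 gas, Cs in g/kg solids)."""
    capacity = theta_w + theta_a * henry_dimensionless + bulk_density_kg_m3 * kd_m3_kg
    cw = total_conc_g_m3 / capacity
    return cw, henry_dimensionless * cw, kd_m3_kg * cw

# Example cell: theta_w = 0.25, theta_a = 0.15, rho_b = 1600 kg/m^3,
# Kd = 1e-4 m^3/kg, H' = 0.23 (benzene-like), 10 g/m^3 bulk concentration.
cw, ca, cs = equilibrium_partition(10.0, 0.25, 0.15, 1600.0, 1e-4, 0.23)
```

A quick check of the design: summing the three phase masses back over their volumes recovers the bulk concentration exactly, which is what "re-equilibrated between the different phases" relies on.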

  2. Comparison of the costs of active surveillance and immediate surgery in the management of low-risk papillary microcarcinoma of the thyroid.

    PubMed

    Oda, Hitomi; Miyauchi, Akira; Ito, Yasuhiro; Sasai, Hisanori; Masuoka, Hiroo; Yabuta, Tomonori; Fukushima, Mitsuhiro; Higashiyama, Takuya; Kihara, Minoru; Kobayashi, Kaoru; Miya, Akihiro

    2017-01-30

The incidence of thyroid cancer is increasing rapidly in many countries, resulting in rising societal costs of thyroid cancer care. We previously reported that active surveillance of low-risk papillary microcarcinoma had fewer unfavorable events than immediate surgery, while the oncological outcomes of the two managements were similarly excellent. Here we calculated the medical costs of these two managements. We created a model of the flow of these managements based on our previous study. The flow and costs include the steps of diagnosis, surgery, prescription of medicine, recurrence, salvage surgery for recurrence, and care for 10 years after the diagnosis. The costs were calculated according to the typical clinical practices at Kuma Hospital performed under the Japanese Health Care Insurance System. If conversion surgeries were not considered, the 'simple cost' of active surveillance for 10 years was 167,780 yen/patient. If there were no recurrences, the 'simple cost' of immediate surgery was calculated as 794,770 to 1,086,070 yen/patient, depending on the type of surgery and postoperative medication. The 'simple cost' of surgery was thus 4.7 to 6.5 times the 'simple cost' of surveillance. When conversion surgeries and recurrences were considered, the 'total cost' of active surveillance for 10 years became 225,695 yen/patient. When recurrences were considered, the 'total cost' of immediate surgery was 928,094 yen/patient, 4.1 times the 'total cost' of active surveillance. At Kuma Hospital in Japan, the 10-year total cost of immediate surgery was therefore 4.1 times as expensive as active surveillance.

  3. ON-LINE CALCULATOR: FORWARD CALCULATION JOHNSON ETTINGER MODEL

    EPA Science Inventory

    On-Site was developed to provide modelers and model reviewers with prepackaged tools ("calculators") for performing site assessment calculations. The philosophy behind OnSite is that the convenience of the prepackaged calculators helps provide consistency for simple calculations,...

  4. Theoretical Prediction of Magnetism in C-doped TlBr

    NASA Astrophysics Data System (ADS)

    Zhou, Yuzhi; Haller, E. E.; Chrzan, D. C.

    2014-05-01

    We predict that C, N, and O dopants in TlBr can display large, localized magnetic moments. Density functional theory based electronic structure calculations show that the moments arise from partial filling of the crystal-field-split localized p states of the dopant atoms. A simple model is introduced to explain the magnitude of the moments.

  5. Density correlators in a self-similar cascade

    NASA Astrophysics Data System (ADS)

    Bialas, A.; Czyżewski, J.

    1999-09-01

    Multivariate density moments (correlators) of arbitrary order are obtained for the multiplicative self-similar cascade. This result is based on the calculation by Greiner, Eggers and Lipa where the correlators of the logarithms of the particle densities have been obtained. The density correlators, more suitable for comparison with multiparticle data, appear to have a simple factorizable form.

  6. The Productivity Dilemma in Workplace Health Promotion

    PubMed Central

    Cherniack, Martin

    2015-01-01

    Background. Worksite-based programs to improve workforce health and well-being (Workplace Health Promotion (WHP)) have been advanced as conduits for improved worker productivity and decreased health care costs. There has been a countervailing health economics contention that return on investment (ROI) does not merit preventive health investment. Methods/Procedures. Pertinent studies were reviewed and results reconsidered. A simple economic model is presented based on conventional and alternate assumptions used in cost benefit analysis (CBA), such as discounting and negative value. The issues are presented in the format of 3 conceptual dilemmas. Principal Findings. In some occupations such as nursing, the utility of patient survival and staff health is undervalued. WHP may miss important components of work related health risk. Altering assumptions on discounting and eliminating the drag of negative value radically change the CBA value. Significance. Simple monetization of a work life and calculation of return on workforce health investment as a simple alternate opportunity involve highly selective interpretations of productivity and utility. PMID:26380374

  7. Technical Note: A simple calculation algorithm to separate high-resolution CH4 flux measurements into ebullition and diffusion-derived components

    NASA Astrophysics Data System (ADS)

    Hoffmann, M.; Schulz-Hanke, M.; Garcia Alba, J.; Jurisch, N.; Hagemann, U.; Sachs, T.; Sommer, M.; Augustin, J.

    2015-08-01

    Processes driving the production, transformation and transport of methane (CH4) in wetland ecosystems are highly complex, which poses serious challenges for mechanistic process understanding, the identification of potential environmental drivers, and the calculation of reliable CH4 emission estimates. We present a simple calculation algorithm to separate open-water CH4 fluxes measured with automatic chambers into diffusion- and ebullition-derived components, which facilitates the identification of underlying dynamics and potential environmental drivers. Flux separation is based on ebullition-related sudden concentration changes during single measurements. A variable ebullition filter is applied, using the lower and upper quartiles and the interquartile range (IQR). Automation of data processing is achieved by using an established R script, adjusted for the purpose of CH4 flux calculation. The algorithm was tested using flux measurement data (July to September 2013) from a former fen grassland site converted into a shallow lake by rewetting. Ebullition and diffusion contributed 46 and 55 %, respectively, to total CH4 emissions, which is comparable to values previously reported in the literature. Moreover, the separation algorithm revealed a concealed shift in the diurnal trend of diffusive fluxes throughout the measurement period.
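    The quartile/IQR ebullition filter described above can be sketched as follows; the k = 1.5 fence, the function name, and the example data are illustrative assumptions, not the published R script:

```python
import statistics

def separate_fluxes(d_conc, k=1.5):
    """Split per-interval concentration changes (d_conc) into diffusion and
    ebullition components with a quartile/IQR outlier fence. This mirrors
    the idea of a variable ebullition filter; the exact thresholds of the
    published algorithm may differ."""
    q1, _, q3 = statistics.quantiles(d_conc, n=4)
    iqr = q3 - q1
    lo, hi = q1 - k * iqr, q3 + k * iqr
    diffusion = [x for x in d_conc if lo <= x <= hi]
    ebullition = [x for x in d_conc if x < lo or x > hi]
    return diffusion, ebullition

# A slow, steady rise with two sudden ebullition jumps:
changes = [0.9, 1.0, 1.1, 1.0, 8.0, 1.0, 0.9, 1.1, 9.5, 1.0]
diff, ebul = separate_fluxes(changes)
print(ebul)  # [8.0, 9.5]
```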

  8. Fast and robust estimation of ophthalmic wavefront aberrations

    NASA Astrophysics Data System (ADS)

    Dillon, Keith

    2016-12-01

    Rapidly rising levels of myopia, particularly in the developing world, have led to an increased need for inexpensive and automated approaches to optometry. A simple and robust technique is provided for estimating major ophthalmic aberrations using a gradient-based wavefront sensor. The approach is based on the use of numerical calculations to produce diverse combinations of phase components, followed by Fourier transforms to calculate the coefficients. The approach utilizes neither phase unwrapping nor iterative solution of inverse problems. This makes the method very fast and tolerant of image artifacts, which do not need to be detected and masked or interpolated as in other techniques. These features make it a promising algorithm on which to base low-cost devices for applications that may have limited access to expert maintenance and operation.

  9. Gradient corrections to the exchange-correlation free energy

    DOE PAGES

    Sjostrom, Travis; Daligault, Jerome

    2014-10-07

    We develop the first-order gradient correction to the exchange-correlation free energy of the homogeneous electron gas for use in finite-temperature density functional calculations. Based on this, we propose and implement a simple temperature-dependent extension for functionals beyond the local density approximation. These finite-temperature functionals show improvement over zero-temperature functionals, as compared to path-integral Monte Carlo calculations for deuterium equations of state, and add no computational cost compared to zero-temperature functionals, so they should be used for finite-temperature calculations. Furthermore, while the present functionals are valid at all temperatures including zero, non-negligible differences from zero-temperature functionals begin at temperatures above 10 000 K.

  10. Canonical Representations of the Simple Map

    NASA Astrophysics Data System (ADS)

    Kerwin, Olivia; Punjabi, Alkesh; Ali, Halima; Boozer, Allen

    2007-11-01

    The simple map is the simplest map that has the topology of a divertor tokamak. The simple map has three canonical representations: (i) toroidal flux and poloidal angle (ψ,θ) as canonical coordinates, (ii) the physical variables (R,Z) or (X,Y) as canonical coordinates, and (iii) the action-angle (J,ζ) or magnetic variables (ψ,θ) as canonical coordinates. We give the derivation of the simple map in the (X,Y) representation. The simple map in this representation has been studied extensively (Ref. 1 and references therein). We calculate the magnetic coordinates for the simple map, construct the simple map in magnetic coordinates, and calculate generic topological effects of magnetic perturbations in divertor tokamaks using the map. We also construct the simple map in (ψ,θ) representation. Preliminary results of these studies will be presented. This work is supported by US DOE OFES DE-FG02-01ER54624 and DE-FG02-04ER54793. [1] A. Punjabi, H. Ali, T. Evans, and A. Boozer, Phys Lett A 364 140--145 (2007).

  11. [Development and practice evaluation of blood acid-base imbalance analysis software].

    PubMed

    Chen, Bo; Huang, Haiying; Zhou, Qiang; Peng, Shan; Jia, Hongyu; Ji, Tianxing

    2014-11-01

    To develop computer software for blood gas and acid-base imbalance analysis that can systematically, rapidly, accurately and automatically determine the type of acid-base imbalance, and to evaluate its clinical application. Using the VBA programming language, computer-aided diagnostic software for the judgment of acid-base balance was developed. The clinical data of 220 patients admitted to the Second Affiliated Hospital of Guangzhou Medical University were retrospectively analyzed. Arterial blood gas values [pH value, HCO(3)(-), arterial partial pressure of carbon dioxide (PaCO₂)] and electrolyte data (Na⁺ and Cl⁻) were collected. Data were entered into the software for acid-base imbalance judgment. At the same time, the type of acid-base imbalance was determined manually from the same data using the Henderson-Hasselbalch compensation formulas. The consistency of the judgment results from the software and the manual calculation was evaluated, and the judgment times of the two methods were compared. The clinical diagnoses of the types of acid-base imbalance for the 220 patients were: 65 cases normal, 90 cases with simple type, mixed type in 41 cases, and triple type in 24 cases. The accuracy of the judgment results from the computer software compared with manual calculation was 100% for the normal and triple types, 98.9% for the simple type and 78.0% for the mixed type; the total accuracy was 95.5%. The Kappa value of the agreement between software and manual judgment was 0.935, P=0.000, demonstrating very good consistency. The time for the software to determine acid-base imbalances was significantly shorter than for manual judgment (seconds: 18.14 ± 3.80 vs. 43.79 ± 23.86, t=7.466, P=0.000). Software judgment can replace manual judgment, being rapid, accurate and convenient; it can improve the work efficiency and quality of clinical doctors and has great potential for clinical application.
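    A minimal sketch of the kind of rule-based classification such software automates, covering only the four single disorders (the published tool also detects mixed and triple imbalances; the normal ranges below are textbook approximations, not the authors' VBA code):

```python
def classify_simple(ph, hco3, paco2):
    """Very simplified single-disorder classifier (illustrative only).
    Assumed normal ranges: pH 7.35-7.45, HCO3- 22-26 mmol/L,
    PaCO2 35-45 mmHg."""
    if 7.35 <= ph <= 7.45 and 22 <= hco3 <= 26 and 35 <= paco2 <= 45:
        return "normal"
    if ph < 7.35:                       # acidemia
        if hco3 < 22:
            return "metabolic acidosis"
        if paco2 > 45:
            return "respiratory acidosis"
    elif ph > 7.45:                     # alkalemia
        if hco3 > 26:
            return "metabolic alkalosis"
        if paco2 < 35:
            return "respiratory alkalosis"
    return "indeterminate"

print(classify_simple(7.30, 15, 30))  # metabolic acidosis
print(classify_simple(7.30, 26, 60))  # respiratory acidosis
```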

  12. Rare-gas impurities in alkali metals: Relation to optical absorption

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Meltzer, D.E.; Pinski, F.J.; Stocks, G.M.

    1988-04-15

    An investigation of the nature of rare-gas impurity potentials in alkali metals is performed. Results of calculations based on simple models are presented, which suggest the possibility of resonance phenomena. These could lead to widely varying values for the exponents which describe the shape of the optical-absorption spectrum at threshold in the Mahan--Nozieres--de Dominicis theory. Detailed numerical calculations are then performed with the Korringa-Kohn-Rostoker coherent-potential-approximation method. The results of these highly realistic calculations show no evidence for the resonance phenomena, and lead to predictions for the shape of the spectra which are in contradiction to observations. Absorption and emission spectra are calculated for two of the systems studied, and their relation to experimental data is discussed.

  13. Calculation of surface enthalpy of solids from an ab initio electronegativity based model: case of ice.

    PubMed

    Douillard, J M; Henry, M

    2003-07-15

    A very simple route to calculation of the surface energy of solids is proposed because this value is very difficult to determine experimentally. The first step is the calculation of the attractive part of the electrostatic energy of crystals. The partial charges used in this calculation are obtained by using electronegativity equalization and scales of electronegativity and hardness deduced from physical characteristics of the atom. The lattice energies of the infinite crystal and of semi-infinite layers are then compared. The difference is related to the energy of cohesion and then to the surface energy. Very good results are obtained with ice, if one compares with the surface energy of liquid water, which is generally considered a good approximation of the surface energy of ice.

  14. Elastic and viscoelastic model of the stress history of sedimentary rocks

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Warpinski, N.R.

    A model has been developed to calculate the elastic and viscoelastic stresses which develop in rocks at depth due to burial, uplift and diagenesis. This model includes the effects of the overburden load, tectonic or geometric strains, thermal strains, varying material properties, pore pressure variations, and viscoelastic relaxation. Calculations for some simple examples are given to show the contributions of the individual stress components due to gravity, tectonics, thermal effects and pore pressure. A complete stress history for Mesaverde rocks in the Piceance basin is calculated based on available burial history, thermal history, and expected pore pressure, material property and tectonic strain variations through time. These calculations show the importance of including material property changes and viscoelastic effects. 15 refs., 48 figs.

  15. AtomicChargeCalculator: interactive web-based calculation of atomic charges in large biomolecular complexes and drug-like molecules.

    PubMed

    Ionescu, Crina-Maria; Sehnal, David; Falginella, Francesco L; Pant, Purbaj; Pravda, Lukáš; Bouchal, Tomáš; Svobodová Vařeková, Radka; Geidl, Stanislav; Koča, Jaroslav

    2015-01-01

    Partial atomic charges are a well-established concept, useful in understanding and modeling the chemical behavior of molecules, from simple compounds, to large biomolecular complexes with many reactive sites. This paper introduces AtomicChargeCalculator (ACC), a web-based application for the calculation and analysis of atomic charges which respond to changes in molecular conformation and chemical environment. ACC relies on an empirical method to rapidly compute atomic charges with accuracy comparable to quantum mechanical approaches. Due to its efficient implementation, ACC can handle any type of molecular system, regardless of size and chemical complexity, from drug-like molecules to biomacromolecular complexes with hundreds of thousands of atoms. ACC writes out atomic charges into common molecular structure files, and offers interactive facilities for statistical analysis and comparison of the results, in both tabular and graphical form. Due to high customizability and speed, easy streamlining and the unified platform for calculation and analysis, ACC caters to all fields of life sciences, from drug design to nanocarriers. ACC is freely available via the Internet at http://ncbr.muni.cz/ACC.

  16. An accelerator-based Boron Neutron Capture Therapy (BNCT) facility based on the 7Li(p,n)7Be

    NASA Astrophysics Data System (ADS)

    Musacchio González, Elizabeth; Martín Hernández, Guido

    2017-09-01

    BNCT (Boron Neutron Capture Therapy) is a therapeutic modality in which tumor cells previously loaded with the stable isotope 10B are irradiated with thermal or epithermal neutrons. This technique is capable of delivering a high dose to the tumor cells while the healthy surrounding tissue receives a much lower dose, depending on the 10B biodistribution. In this study, therapeutic gain and tumor dose per target power were calculated as parameters to evaluate the treatment quality. The common neutron-producing reaction 7Li(p,n)7Be for accelerator-based BNCT, with a reaction threshold of 1880.4 keV, was considered as the primary source of neutrons. Energies near the reaction threshold were employed for deep-seated brain tumors. These calculations were performed with the Monte Carlo N-Particle (MCNP) code. A simple but effective beam shaping assembly (BSA) was modeled, producing a high therapeutic gain compared to previously proposed facilities based on the same nuclear reaction.

  17. Simple measurement-based admission control for DiffServ access networks

    NASA Astrophysics Data System (ADS)

    Lakkakorpi, Jani

    2002-07-01

    In order to provide good Quality of Service (QoS) in a Differentiated Services (DiffServ) network, a dynamic admission control scheme is needed as an alternative to overprovisioning. In this paper, we present a simple measurement-based admission control (MBAC) mechanism for DiffServ-based access networks. Instead of using active measurements only or doing purely static bookkeeping with parameter-based admission control (PBAC), the admission control decisions are based on bandwidth reservations and periodically measured, exponentially averaged link loads. If any link load on the path between two endpoints is over the applicable threshold, access is denied. Link loads are periodically sent to the Bandwidth Broker (BB) of the routing domain, which makes the admission control decisions. The information needed in calculating the link loads is retrieved from router statistics. The proposed admission control mechanism is verified through simulations. Our results show that it is possible to achieve very high bottleneck link utilization levels and still maintain good QoS.
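    The admission decision described above (reservation bookkeeping plus exponentially averaged link loads checked against a threshold) can be sketched as follows; the function names, the 0.25 averaging weight, and the 0.9 threshold are illustrative assumptions, not values from the paper:

```python
def ewma(avg, sample, w=0.25):
    """Exponentially averaged link load: weight w on the newest sample."""
    return (1 - w) * avg + w * sample

def admit(path_loads, reservation, capacity, threshold=0.9):
    """Admit a new flow only if, on every link of the path, the averaged
    load plus the requested reservation stays within threshold * capacity."""
    return all(load + reservation <= threshold * capacity
               for load in path_loads)

print(ewma(70.0, 90.0))                       # 75.0
print(admit([60.0, 70.0, 80.0], 5.0, 100.0))  # True  (worst link: 85 <= 90)
print(admit([60.0, 70.0, 88.0], 5.0, 100.0))  # False (worst link: 93 > 90)
```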

  18. A multi-institutional study of independent calculation verification in inhomogeneous media using a simple and effective method of heterogeneity correction integrated with the Clarkson method.

    PubMed

    Jinno, Shunta; Tachibana, Hidenobu; Moriya, Shunsuke; Mizuno, Norifumi; Takahashi, Ryo; Kamima, Tatsuya; Ishibashi, Satoru; Sato, Masanori

    2018-05-21

    In inhomogeneous media, there is often a large systematic difference in the dose between the conventional Clarkson algorithm (C-Clarkson) for independent calculation verification and the superposition-based algorithms of treatment planning systems (TPSs). These treatment site-dependent differences increase the complexity of the radiotherapy planning secondary check. We developed a simple and effective method of heterogeneity correction integrated with the Clarkson algorithm (L-Clarkson) to account for the effects of heterogeneity in the lateral dimension, and performed a multi-institutional study to evaluate the effectiveness of the method. In the method, a 2D image reconstructed from computed tomography (CT) images is divided according to lines extending from the reference point to the edge of the multileaf collimator (MLC) or jaw collimator for each pie sector, and the radiological path length (RPL) of each line is calculated on the 2D image to obtain a tissue maximum ratio and phantom scatter factor, allowing the dose to be calculated. A total of 261 plans (1237 beams) for conventional breast and lung treatments and lung stereotactic body radiotherapy were collected from four institutions. Disagreements in dose between the on-site TPSs and a verification program using the C-Clarkson and L-Clarkson algorithms were compared. Systematic differences with the L-Clarkson method were within 1% for all sites, while the C-Clarkson method resulted in systematic differences of 1-5%. The L-Clarkson method showed smaller variations. This heterogeneity correction integrated with the Clarkson algorithm would provide a simple evaluation within the range of -5% to +5% for a radiotherapy plan secondary check.
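    The radiological path length at the core of the L-Clarkson correction is a water-equivalent depth summed along a ray; a minimal sketch of that idea, with the voxel densities and step size as illustrative assumptions rather than the authors' implementation:

```python
def radiological_path_length(densities, step):
    """Radiological path length (water-equivalent depth, cm) of a ray that
    crosses voxels with the given relative electron densities, each of
    physical thickness `step` (cm)."""
    return sum(rho * step for rho in densities)

# 3 cm of soft tissue (~1.0), 4 cm of lung (~0.25), 3 cm of soft tissue:
ray = [1.0] * 3 + [0.25] * 4 + [1.0] * 3
print(radiological_path_length(ray, step=1.0))  # 7.0 cm water-equivalent
```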

  19. Technical Note: On the calculation of stopping-power ratio for stoichiometric calibration in proton therapy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ödén, Jakob; Zimmerman, Jens; Nowik, Patrik

    2015-09-15

    Purpose: The quantitative effects of assumptions made in the calculation of stopping-power ratios (SPRs) are investigated for stoichiometric CT calibration in proton therapy. The assumptions investigated include the use of the Bethe formula without correction terms, Bragg additivity, the choice of I-value for water, and the data source for elemental I-values. Methods: The predictions of the Bethe formula for SPR (no correction terms) were validated against more sophisticated calculations using the SRIM software package for 72 human tissues. A stoichiometric calibration was then performed at our hospital. SPR was calculated for the human tissues using either the assumption of simple Bragg additivity or the Seltzer-Berger rule (as used in ICRU Reports 37 and 49). In each case, the calculation was performed twice: first, by assuming the I-value of water was an experimentally based value of 78 eV (value proposed in Errata and Addenda for ICRU Report 73) and second, by recalculating the I-value theoretically. The discrepancy between predictions using ICRU elemental I-values and the commonly used tables of Janni was also investigated. Results: Errors due to neglecting the correction terms to the Bethe formula were calculated at less than 0.1% for biological tissues. Discrepancies greater than 1%, however, were estimated due to departures from simple Bragg additivity when a fixed I-value for water was imposed. When the I-value for water was calculated in a consistent manner to that for tissue, this disagreement was substantially reduced. The difference between SPR predictions when using Janni's or ICRU tables for I-values was up to 1.6%. Experimental data used for materials of relevance to proton therapy suggest that the ICRU-derived values provide somewhat more accurate results (root-mean-square error: 0.8% versus 1.6%).
    Conclusions: The conclusions from this study are that (1) the Bethe formula can be safely used for SPR calculations without correction terms; (2) simple Bragg additivity can be reasonably assumed for compound materials; (3) if simple Bragg additivity is assumed, then the I-value for water should be calculated in a consistent manner to that of the tissue of interest (rather than using an experimentally derived value); (4) the ICRU Report 37 I-values may provide a better agreement with experiment than Janni's tables.
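    Conclusion (1), that the Bethe formula without correction terms suffices for SPR, can be illustrated with a minimal sketch; the I-value of the medium, its relative electron density, and the proton energy below are illustrative assumptions, not the paper's calibration data:

```python
import math

MEC2_EV = 0.511e6  # electron rest energy in eV

def bethe_L(beta2, i_ev):
    """Stopping number from the Bethe formula with no correction terms:
    L = ln(2 m_e c^2 beta^2 / (I (1 - beta^2))) - beta^2."""
    return math.log(2 * MEC2_EV * beta2 / (i_ev * (1 - beta2))) - beta2

def spr(rel_electron_density, i_medium, i_water=78.0, beta2=0.32):
    """Stopping-power ratio medium/water. beta2 = (v/c)^2; 0.32 roughly
    corresponds to a 200 MeV proton. The 78 eV water I-value follows the
    abstract."""
    return rel_electron_density * bethe_L(beta2, i_medium) / bethe_L(beta2, i_water)

print(spr(1.0, 78.0))              # water against itself: exactly 1.0
print(round(spr(1.70, 112.0), 3))  # cortical-bone-like medium: ~1.63
```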

  20. Dietary guidelines in the Czech Republic. II.: Nutritional profiles of food groups.

    PubMed

    Brázdová, Z; Fiala, J; Bauerová, J; Mullerová, D

    2000-11-01

    Modern dietary guidelines set in terms of food groups are easy to use and understand for target populations, but rather complicated from the point of view of quantification, i.e. the correctly set number of recommended servings in different population groups according to age, sex, physical activity and physiological status on the basis of required intake of energy and individual nutrients. It is the use of abstract comprehensive food groups that makes it impossible to use a simple database of food tables based on the content of nutrients in individual foods, rather than their groups. Using groups requires that their nutritional profiles be established, i.e. that an average content of nutrients and energy for individual groups be calculated. To calculate nutritional profiles for Czech dietary guidelines, the authors used three different methods: (1) Simple profiles, with all commodities with significant representation in the Czech food basket represented in equal amounts. (2) Profiles based on typical servings, with the same commodities as in (1) but in characteristic intake quantities (typical servings). (3) Food basket-based profiles with commodities constituting the Czech food basket in quantities identical for that basket. The results showed significant differences in profiles calculated by different methods. Calculated nutrient intakes were particularly influenced by the size of typical servings and it is therefore essential that a realistic size of servings be used in calculations. The consistent use of recommended food items throughout all food groups and subgroups is very important. The number of servings of foods from the five food groups is not enough if a suitable food item is not chosen within individual groups. On the basis of their findings, the authors fully recommend the use of nutritional profiles based on typical servings that give a realistic idea of the probable energy and nutrient content in the recommended daily intake. 
    In view of regional cultural differences, national nutritional profiles are of vital importance. Population studies investigating the size of typical servings and the most frequently occurring commodities in the food basket should be carried out every three years. Nutritional profiles designed in this way constitute an important starting point for setting national dietary guidelines, their implementation and revisions.
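    The three profile-calculation methods differ only in the weights assigned to each food; a minimal sketch, with invented nutrient values rather than Czech food-table data:

```python
def group_profile(foods, weights):
    """Average nutrient content of a food group, weighted by the amount of
    each food assumed in the profile: equal amounts -> simple profile;
    typical serving sizes -> serving-based profile; food-basket
    quantities -> basket-based profile. Nutrient values are per 100 g."""
    total = sum(weights)
    return {k: sum(f[k] * w for f, w in zip(foods, weights)) / total
            for k in foods[0]}

fruits = [{"energy_kJ": 218, "vit_c_mg": 5},    # apple (illustrative values)
          {"energy_kJ": 371, "vit_c_mg": 9}]    # banana (illustrative values)
print(group_profile(fruits, [1, 1]))            # simple profile
print(group_profile(fruits, [150, 120]))        # typical-serving profile
```

Note how the same foods yield different group profiles under different weights, which is exactly the sensitivity to serving size the authors report.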

  1. ON-LINE CALCULATOR: JOHNSON ETTINGER VAPOR INTRUSION MODEL

    EPA Science Inventory

    On-Site was developed to provide modelers and model reviewers with prepackaged tools ("calculators") for performing site assessment calculations. The philosophy behind OnSite is that the convenience of the prepackaged calculators helps provide consistency for simple calculations,...

  2. Measuring the depth of the caudal epidural space to prevent dural sac puncture during caudal block in children.

    PubMed

    Lee, Hyun Jeong; Min, Ji Young; Kim, Hyun Il; Byon, Hyo-Jin

    2017-05-01

    Caudal blocks are performed through the sacral hiatus in order to provide pain control in children undergoing lower abdominal surgery. During the block, it is important not to advance the needle too far beyond the sacrococcygeal ligament, to prevent unintended dural puncture. This study used demographic data to establish simple guidelines for predicting a safe needle depth in the caudal epidural space in children. A total of 141 children under 12 years old who had undergone lumbar-sacral magnetic resonance imaging were included. The T2 sagittal image that provided the best view of the sacrococcygeal membrane and the dural sac was chosen. We used a Picture Archiving and Communication System (Centricity® PACS, GE Healthcare Co.) to measure the distance between the sacrococcygeal ligament and the dural sac, the length of the sacrococcygeal ligament, and the maximum depth of the caudal space. There were strong correlations between age, weight, height, and BSA and the distance between the sacrococcygeal ligament and dural sac, as well as the length of the sacrococcygeal ligament. Based on these findings, a simple formula to calculate the distance between the sacrococcygeal ligament and dural sac was developed: 25 × BSA (mm). This simple formula can accurately calculate the safe depth of the caudal epidural space to prevent unintended dural puncture during caudal block in children. However, further clinical studies based on this formula are needed to substantiate its utility. © 2017 John Wiley & Sons Ltd.
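    A minimal sketch of the 25 × BSA (mm) formula; the abstract does not state which BSA formula the authors used, so the Mosteller formula is assumed here, and the example child's measurements are illustrative:

```python
import math

def bsa_mosteller(height_cm, weight_kg):
    """Body surface area (m^2) by the Mosteller formula (an assumption;
    the abstract does not name the BSA formula used by the authors)."""
    return math.sqrt(height_cm * weight_kg / 3600.0)

def safe_depth_mm(height_cm, weight_kg):
    """Sacrococcygeal-ligament-to-dural-sac distance from the abstract's
    formula: 25 x BSA (mm)."""
    return 25.0 * bsa_mosteller(height_cm, weight_kg)

# A child of roughly average 5-year-old size (110 cm, 19 kg):
print(round(safe_depth_mm(110, 19), 1))  # 19.0 mm
```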

  3. Stochastic optimal operation of reservoirs based on copula functions

    NASA Astrophysics Data System (ADS)

    Lei, Xiao-hui; Tan, Qiao-feng; Wang, Xu; Wang, Hao; Wen, Xin; Wang, Chao; Zhang, Jing-wen

    2018-02-01

    Stochastic dynamic programming (SDP) has been widely used to derive operating policies for reservoirs considering streamflow uncertainties. In SDP, there is a need to calculate the transition probability matrix more accurately and efficiently in order to improve the economic benefit of reservoir operation. In this study, we proposed a stochastic optimization model for hydropower generation reservoirs, in which 1) the transition probability matrix was calculated based on copula functions; and 2) the value function of the last period was calculated by stepwise iteration. Firstly, the marginal distribution of stochastic inflow in each period was built and the joint distributions of adjacent periods were obtained using the three members of the Archimedean copulas, based on which the conditional probability formula was derived. Then, the value in the last period was calculated by a simple recursive equation with the proposed stepwise iteration method and the value function was fitted with a linear regression model. These improvements were incorporated into the classic SDP and applied to the case study in Ertan reservoir, China. The results show that the transition probability matrix can be more easily and accurately obtained by the proposed copula function based method than conventional methods based on the observed or synthetic streamflow series, and the reservoir operation benefit can also be increased.
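    The copula-based transition probabilities can be sketched with a Clayton copula (one member of the Archimedean family used in the study); the bin count, the θ = 2 dependence parameter, and the midpoint evaluation are illustrative assumptions:

```python
def clayton_h(u, v, theta=2.0):
    """Conditional CDF P(V <= v | U = u) of the Clayton copula,
    h(u, v) = dC/du with C(u, v) = (u^-theta + v^-theta - 1)^(-1/theta)."""
    if v <= 0.0:
        return 0.0
    if v >= 1.0:
        return 1.0
    return u**(-theta - 1) * (u**(-theta) + v**(-theta) - 1)**(-1 / theta - 1)

def transition_matrix(n_bins=4, theta=2.0):
    """Transition probabilities between inflow quantile bins of adjacent
    periods, obtained from the copula conditional evaluated at each bin
    midpoint (a sketch of the idea; the study fits copulas to observed
    inflow series)."""
    edges = [i / n_bins for i in range(n_bins + 1)]
    mids = [(edges[i] + edges[i + 1]) / 2 for i in range(n_bins)]
    return [[clayton_h(u, edges[j + 1], theta) - clayton_h(u, edges[j], theta)
             for j in range(n_bins)]
            for u in mids]

P = transition_matrix()
print([round(p, 3) for p in P[0]])  # low inflow tends to be followed by low inflow
```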

  4. Implementation and performance of FDPS: a framework for developing parallel particle simulation codes

    NASA Astrophysics Data System (ADS)

    Iwasawa, Masaki; Tanikawa, Ataru; Hosono, Natsuki; Nitadori, Keigo; Muranushi, Takayuki; Makino, Junichiro

    2016-08-01

    We present the basic idea, implementation, measured performance, and performance model of FDPS (Framework for Developing Particle Simulators). FDPS is an application-development framework which helps researchers to develop simulation programs using particle methods for large-scale distributed-memory parallel supercomputers. A particle-based simulation program for distributed-memory parallel computers needs to perform domain decomposition, exchange of particles which are not in the domain of each computing node, and gathering of the particle information in other nodes which is necessary for interaction calculation. Also, even if distributed-memory parallel computers are not used, in order to reduce the amount of computation, algorithms such as the Barnes-Hut tree algorithm or the Fast Multipole Method should be used in the case of long-range interactions. For short-range interactions, some methods to limit the calculation to neighbor particles are required. FDPS provides all of these functions, which are necessary for efficient parallel execution of particle-based simulations, as "templates," which are independent of the actual data structure of particles and the functional form of the particle-particle interaction. By using FDPS, researchers can write their programs with the amount of work necessary to write a simple, sequential and unoptimized program of O(N²) calculation cost, and yet the program, once compiled with FDPS, will run efficiently on large-scale parallel supercomputers. A simple gravitational N-body program can be written in around 120 lines. We report the actual performance of these programs and the performance model. The weak scaling performance is very good, and almost linear speed-up was obtained for up to the full system of the K computer. The minimum calculation time per timestep is in the range of 30 ms (N = 10⁷) to 300 ms (N = 10⁹).
These are currently limited by the time for the calculation of the domain decomposition and communication necessary for the interaction calculation. We discuss how we can overcome these bottlenecks.
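    The "simple, sequential and unoptimized O(N²) program" that FDPS expects from the user is essentially a direct-summation interaction loop; a Python sketch of that kernel (FDPS itself is a C++ framework, so this only illustrates the interaction structure, not the FDPS API):

```python
import math

def accelerations(pos, mass, eps=1e-3):
    """Direct-summation gravitational accelerations in G = 1 units,
    O(N^2): for every particle, sum softened pairwise attractions."""
    n = len(pos)
    acc = [[0.0, 0.0, 0.0] for _ in range(n)]
    for i in range(n):
        for j in range(n):
            if i == j:
                continue
            dx = [pos[j][k] - pos[i][k] for k in range(3)]
            r2 = sum(d * d for d in dx) + eps * eps   # softened distance^2
            inv_r3 = 1.0 / (math.sqrt(r2) * r2)
            for k in range(3):
                acc[i][k] += mass[j] * dx[k] * inv_r3
    return acc

# Two equal unit masses one unit apart attract each other equally:
a = accelerations([[0, 0, 0], [1, 0, 0]], [1.0, 1.0])
print(round(a[0][0], 3), round(a[1][0], 3))  # 1.0 -1.0
```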

  5. 12 CFR Appendix A to Part 230 - Annual Percentage Yield Calculation

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... following simple formula: APY=100 (Interest/Principal) Examples (1) If an institution pays $61.68 in... percentage yield is 5.39%, using the simple formula: APY=100(134.75/2,500) APY=5.39% For $15,000, interest is... Yield Calculation The annual percentage yield measures the total amount of interest paid on an account...
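    The regulation's simple formula can be checked directly against the figures given in the excerpt:

```python
def apy_simple(interest, principal):
    """Annual percentage yield for an account term of exactly one year,
    by the regulation's simple formula: APY = 100 * (Interest / Principal)."""
    return 100.0 * interest / principal

print(round(apy_simple(134.75, 2500), 2))  # 5.39, matching the excerpt
```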

  6. A Simple Spreadsheet Program for the Calculation of Lattice-Site Distributions

    ERIC Educational Resources Information Center

    McCaffrey, John G.

    2009-01-01

    A simple spreadsheet program is presented that can be used by undergraduate students to calculate the lattice-site distributions in solids. A major strength of the method is the natural way in which the correct number of ions or atoms are present, or absent, at specific lattice distances. The expanding-cube method utilized is straightforward to…
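    The expanding-cube idea (enumerate integer lattice coordinates in a growing cube and tally squared distances from the origin) can also be sketched outside a spreadsheet; a Python version for a simple cubic lattice, where the shell counts fall out naturally:

```python
from collections import Counter

def lattice_shells(max_index):
    """Count sites of a simple cubic lattice at each squared distance from
    the origin, by enumerating an expanding cube of integer coordinates."""
    counts = Counter()
    r = range(-max_index, max_index + 1)
    for i in r:
        for j in r:
            for k in r:
                if (i, j, k) != (0, 0, 0):
                    counts[i * i + j * j + k * k] += 1
    return counts

shells = lattice_shells(2)
print(shells[1], shells[2], shells[3])  # 6 12 8: the first three shells
```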

  7. New non-naturally reductive Einstein metrics on exceptional simple Lie groups

    NASA Astrophysics Data System (ADS)

    Chen, Huibin; Chen, Zhiqi; Deng, Shaoqiang

    2018-01-01

    In this article, we construct several non-naturally reductive Einstein metrics on exceptional simple Lie groups, which are found through the decomposition arising from generalized Wallach spaces. Using the decomposition corresponding to the two involutions, we calculate the non-zero coefficients in the formulas of the components of Ricci tensor with respect to the given metrics. The Einstein metrics are obtained as solutions of a system of polynomial equations, which we manipulate by symbolic computations using Gröbner bases. In particular, we discuss the concrete numbers of non-naturally reductive Einstein metrics for each case up to isometry and homothety.

  8. Application of adjusted data in calculating fission-product decay energies and spectra

    NASA Astrophysics Data System (ADS)

    George, D. C.; Labauve, R. J.; England, T. R.

    1982-06-01

    The code ADENA, which approximately calculates fission-product beta and gamma decay energies and spectra in 19 or fewer energy groups from a mixture of U235 and Pu239 fuels, is described. The calculation uses aggregate, adjusted data derived from a combination of several experiments and summation results based on the ENDF/B-V fission product file. The method used to obtain these adjusted data and the method used by ADENA to calculate fission-product decay energy with an absorption correction are described, and an estimate of the uncertainty of the ADENA results is given. Comparisons of this approximate method are made to experimental measurements, to the ANSI/ANS 5.1-1979 standard, and to other calculational methods. A listing of the complete computer code (ADENA) is contained in an appendix. Included in the listing are data statements containing the adjusted data in the form of parameters to be used in simple analytic functions.

  9. Caught Ya! A School-Based Practical Activity to Evaluate the Capture-Mark-Release-Recapture Method

    ERIC Educational Resources Information Center

    Kingsnorth, Crawford; Cruickshank, Chae; Paterson, David; Diston, Stephen

    2017-01-01

    The capture-mark-release-recapture method provides a simple way to estimate population size. However, when used as part of ecological sampling, this method does not easily allow an opportunity to evaluate the accuracy of the calculation because the actual population size is unknown. Here, we describe a method that can be used to measure the…
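The population estimate behind the activity is the classic Lincoln-Petersen calculation; a minimal sketch with made-up classroom numbers (not taken from the article):

```python
def lincoln_petersen(marked, captured, recaptured):
    """Estimate population size N from a mark-recapture experiment:
    N ~ (marked in first sample * size of second sample) / recaptures."""
    if recaptured == 0:
        raise ValueError("no recaptures: estimate undefined")
    return marked * captured / recaptured

# Hypothetical run: mark 50 individuals, later capture 40, of which 10 are marked.
print(lincoln_petersen(50, 40, 10))  # 200.0
```

Comparing this estimate against a known "population" (e.g. a counted bag of beads) is exactly the evaluation opportunity the authors describe.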

  10. Assessing canopy cover over streets and sidewalks in street tree populations

    Treesearch

    S.E. Maco; E.G. McPherson

    2002-01-01

    Total canopy cover and canopy cover over street and sidewalk surfaces were estimated for street trees in Davis, California, U.S. Calculations were made using simple trigonometric equations based on the results of a sample inventory. Canopy cover from public trees over streets and sidewalks varied between 4% and 46% by city zone, averaging 14% citywide. Consideration of...

  11. Dense matter theory: A simple classical approach

    NASA Astrophysics Data System (ADS)

    Savić, P.; Čelebonović, V.

    1994-07-01

    In the sixties, the first author, together with R. Kašanin, started developing a mean-field theory of dense matter. It is based on the Coulomb interaction, supplemented by a microscopic selection rule and a set of experimentally founded postulates. Applications of the theory range from the calculation of models of planetary internal structure to DAC experiments.

  12. Calculation and visualization of free energy barriers for several VOCs and TNT in HKUST-1.

    PubMed

    Sarkisov, Lev

    2012-11-28

    A simple protocol based on a lattice representation of the porous space is proposed to locate and characterize the free energy bottle-necks in rigid metal organic frameworks. As an illustration we apply this method to HKUST-1 to demonstrate that there are impassable free energy barriers for molecules of trinitrotoluene in this structure.

  13. Bayesian model checking: A comparison of tests

    NASA Astrophysics Data System (ADS)

    Lucy, L. B.

    2018-06-01

    Two procedures for checking Bayesian models are compared using a simple test problem based on the local Hubble expansion. Over four orders of magnitude, p-values derived from a global goodness-of-fit criterion for posterior probability density functions agree closely with posterior predictive p-values. The former can therefore serve as an effective proxy for the difficult-to-calculate posterior predictive p-values.

  14. Estimated Benefits of Variable-Geometry Wing Camber Control for Transport Aircraft

    NASA Technical Reports Server (NTRS)

    Bolonkin, Alexander; Gilyard, Glenn B.

    1999-01-01

    Analytical benefits of variable-camber capability on subsonic transport aircraft are explored. Using aerodynamic performance models, including drag as a function of deflection angle for the control surfaces of interest, optimal performance benefits of variable camber are calculated. Results demonstrate that if all wing trailing-edge surfaces are available for optimization, drag can be significantly reduced at most points within the flight envelope. The optimization approach developed and illustrated for flight uses variable camber for optimization of aerodynamic efficiency (maximizing the lift-to-drag ratio). Most transport aircraft have significant latent capability in this area. Wing camber control that can affect performance optimization for transport aircraft includes symmetric use of ailerons and flaps. In this paper, drag characteristics for aileron and flap deflections are computed from analytical and wind-tunnel data. All calculations are based on predictions for the subject aircraft, and the optimal surface deflection is obtained by simple interpolation for given conditions. An algorithm is also presented for computation of the optimal surface deflection for given conditions. Benefits of variable camber for a transport configuration using a simple trailing-edge control surface system can exceed 10 percent, especially for nonstandard flight conditions. In the cruise regime, the benefit is 1-3 percent.

  15. Efficient O(N) integration for all-electron electronic structure calculation using numeric basis functions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Havu, V.; Fritz Haber Institute of the Max Planck Society, Berlin; Blum, V.

    2009-12-01

    We consider the problem of developing O(N) scaling grid-based operations needed in many central operations when performing electronic structure calculations with numeric atom-centered orbitals as basis functions. We outline the overall formulation of localized algorithms, and specifically the creation of localized grid batches. The choice of the grid partitioning scheme plays an important role in the performance and memory consumption of the grid-based operations. Three different top-down partitioning methods are investigated, and compared with formally more rigorous yet much more expensive bottom-up algorithms. We show that a conceptually simple top-down grid partitioning scheme achieves essentially the same efficiency as the more rigorous bottom-up approaches.

  16. Accurate Bit Error Rate Calculation for Asynchronous Chaos-Based DS-CDMA over Multipath Channel

    NASA Astrophysics Data System (ADS)

    Kaddoum, Georges; Roviras, Daniel; Chargé, Pascal; Fournier-Prunaret, Daniele

    2009-12-01

    An accurate approach to compute the bit error rate expression for a multiuser chaos-based DS-CDMA system is presented in this paper. For a more realistic communication system, a slow fading multipath channel is considered, together with a simple RAKE receiver structure. Based on the bit energy distribution, this approach gives accurate results with low computational cost compared to other computation methods in the literature. Perfect estimation of the channel coefficients with the associated delays and chaos synchronization is assumed. The bit error rate is derived in terms of the bit energy distribution, the number of paths, the noise variance, and the number of users. Results are illustrated by theoretical calculations and numerical simulations, which point out the accuracy of our approach.

  17. Heat Transfer to Surfaces of Finite Catalytic Activity in Frozen Dissociated Hypersonic Flow

    NASA Technical Reports Server (NTRS)

    Chung, Paul M.; Anderson, Aemer D.

    1961-01-01

    The heat transfer due to catalytic recombination of a partially dissociated diatomic gas along the surfaces of two-dimensional and axisymmetric bodies with finite catalytic efficiencies is studied analytically. An integral method is employed resulting in simple yet relatively complete solutions for the particular configurations considered. A closed form solution is derived which enables one to calculate atom mass-fraction distribution, therefore catalytic heat transfer distribution, along the surface of a flat plate in frozen compressible flow with and without transpiration. Numerical calculations are made to determine the atom mass-fraction distribution along an axisymmetric conical body with spherical nose in frozen hypersonic compressible flow. A simple solution based on a local similarity concept is found to be in good agreement with these numerical calculations. The conditions are given for which the local similarity solution is expected to be satisfactory. The limitations on the practical application of the analysis to the flight of the blunt bodies in the atmosphere are discussed. The use of boundary-layer theory and the assumption of frozen flow restrict application of the analysis to altitudes between about 150,000 and 250,000 feet.

  18. Review of Thawing Time Prediction Models Depending on Process Conditions and Product Characteristics

    PubMed Central

    Kluza, Franciszek; Spiess, Walter E. L.; Kozłowicz, Katarzyna

    2016-01-01

    Summary Determining thawing times of frozen foods is a challenging problem, as the thermophysical properties of the product change during thawing. A number of calculation models and solutions have been developed. The proposed solutions range from relatively simple analytical equations based on a number of assumptions to a group of empirical approaches that sometimes require complex calculations. In this paper analytical, empirical and graphical models are presented and critically reviewed. The conditions of solution, limitations and possible applications of the models are discussed. The graphical and semi-graphical models are derived from numerical methods. Using numerical methods is not always feasible, as running the calculations takes time, and the specialized software and equipment are not always inexpensive. For these reasons, the application of analytical-empirical models is more useful for engineering. It is demonstrated that there is no simple, accurate and feasible analytical method for thawing time prediction. Consequently, simplified methods are needed for thawing time estimation of agricultural and food products. The review reveals the need for further improvement of the existing solutions or development of new ones that will enable accurate determination of thawing time within a wide range of practical conditions of heat transfer during processing. PMID:27904387

  19. Offner stretcher aberrations revisited to compensate material dispersion

    NASA Astrophysics Data System (ADS)

    Vyhlídka, Štěpán; Kramer, Daniel; Meadows, Alexander; Rus, Bedřich

    2018-05-01

    We present simple analytical formulae for the calculation of the spectral phase and residual angular dispersion of an ultrashort pulse propagating through the Offner stretcher. Based on these formulae, we show that the radii of curvature of both convex and concave mirrors in the Offner triplet can be adapted to tune the fourth order dispersion term of the spectral phase of the pulse. As an example, a single-grating Offner stretcher design suitable for the suppression of material dispersion in the Ti:Sa PALS laser system is proposed. The results obtained by numerical raytracing well match those calculated from the analytical formulae.

  20. Collisional-radiative switching - A powerful technique for converging non-LTE calculations

    NASA Technical Reports Server (NTRS)

    Hummer, D. G.; Voels, S. A.

    1988-01-01

    A very simple technique has been developed to converge statistical equilibrium and model atmospheric calculations in extreme non-LTE conditions when the usual iterative methods fail to converge from an LTE starting model. The proposed technique is based on a smooth transition from a collision-dominated LTE situation to the desired non-LTE conditions in which radiation dominates, at least in the most important transitions. The proposed approach was used to successfully compute stellar models with He abundances of 0.20, 0.30, and 0.50; Teff = 30,000 K, and log g = 2.9.

  1. ReSTART: A Novel Framework for Resource-Based Triage in Mass-Casualty Events.

    PubMed

    Mills, Alex F; Argon, Nilay T; Ziya, Serhan; Hiestand, Brian; Winslow, James

    2014-01-01

    Current guidelines for mass-casualty triage do not explicitly use information about resource availability. Even though this limitation has been widely recognized, how it should be addressed remains largely unexplored. The authors present a novel framework developed using operations research methods to account for resource limitations when determining priorities for transportation of critically injured patients. To illustrate how this framework can be used, they also develop two specific example methods, named ReSTART and Simple-ReSTART, both of which extend the widely adopted triage protocol Simple Triage and Rapid Treatment (START) by using a simple calculation to determine priorities based on the relative scarcity of transportation resources. The framework is supported by three techniques from operations research: mathematical analysis, optimization, and discrete-event simulation. The authors' algorithms were developed using mathematical analysis and optimization and then extensively tested using 9,000 discrete-event simulations on three distributions of patient severity (representing low, random, and high acuity). For each incident, the expected number of survivors was calculated under START, ReSTART, and Simple-ReSTART. A web-based decision support tool was constructed to help providers make prioritization decisions in the aftermath of mass-casualty incidents based on ReSTART. In simulations, ReSTART resulted in significantly lower mortality than START regardless of which severity distribution was used (paired t test, p<.01). Mean decrease in critical mortality, the percentage of immediate and delayed patients who die, was 8.5% for the low-acuity distribution (range -2.2% to 21.1%), 9.3% for the random distribution (range -0.2% to 21.2%), and 9.1% for the high-acuity distribution (range -0.7% to 21.1%). Although the critical mortality improvement due to ReSTART was different for each of the three severity distributions, the variation was less than 1 percentage point, indicating that the ReSTART policy is relatively robust to different severity distributions. Taking resource limitations into account in mass-casualty triage has the potential to increase the expected number of survivors. Further validation is required before field implementation; however, the framework proposed here can serve as the foundation for future work in this area.

  2. Regression-based model of skin diffuse reflectance for skin color analysis

    NASA Astrophysics Data System (ADS)

    Tsumura, Norimichi; Kawazoe, Daisuke; Nakaguchi, Toshiya; Ojima, Nobutoshi; Miyake, Yoichi

    2008-11-01

    A simple regression-based model of skin diffuse reflectance is developed based on reflectance samples calculated by Monte Carlo simulation of light transport in a two-layered skin model. This reflectance model includes the values of spectral reflectance in the visible spectra for Japanese women. The modified Lambert Beer law holds in the proposed model with a modified mean free path length in non-linear density space. The averaged RMS and maximum errors of the proposed model were 1.1 and 3.1%, respectively, in the above range.

  3. A simple recipe for setting up the flux equations of cyclic and linear reaction schemes of ion transport with a high number of states: The arrow scheme.

    PubMed

    Hansen, Ulf-Peter; Rauh, Oliver; Schroeder, Indra

    2016-01-01

    The calculation of flux equations or current-voltage relationships in reaction kinetic models with a high number of states can be very cumbersome. Here, a recipe based on an arrow scheme is presented, which yields straightforward access to the minimum form of the flux equations and the occupation probability of the involved states in cyclic and linear reaction schemes. This is extremely simple for cyclic schemes without branches. If branches are involved, the effort of setting up the equations is somewhat higher. However, here too a straightforward recipe making use of so-called reserve factors is provided for implementing the branches into the cyclic scheme, thus enabling a simple treatment of such cases as well.

  4. Simple model to estimate the contribution of atmospheric CO2 to the Earth's greenhouse effect

    NASA Astrophysics Data System (ADS)

    Wilson, Derrek J.; Gea-Banacloche, Julio

    2012-04-01

    We show how the CO2 contribution to the Earth's greenhouse effect can be estimated from relatively simple physical considerations and readily available spectroscopic data. In particular, we present a calculation of the "climate sensitivity" (that is, the increase in temperature caused by a doubling of the concentration of CO2) in the absence of feedbacks. Our treatment highlights the important role played by the frequency dependence of the CO2 absorption spectrum. For pedagogical purposes, we provide two simple models to visualize different ways in which the atmosphere might return infrared radiation back to the Earth. The more physically realistic model, based on the Schwarzschild radiative transfer equations, uses as input an approximate form of the atmosphere's temperature profile, and thus includes implicitly the effect of heat transfer mechanisms other than radiation.
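As a flavor of the kind of "simple model" the paper refers to, a single-layer gray atmosphere (transparent to sunlight, fully absorbing in the infrared) already shows how returned infrared radiation warms the surface. This sketch is standard textbook physics, not the authors' Schwarzschild-based calculation, and the constants are nominal values:

```python
SIGMA = 5.670e-8   # Stefan-Boltzmann constant, W m^-2 K^-4
S0 = 1361.0        # solar constant, W m^-2 (nominal)
ALBEDO = 0.30      # planetary albedo (nominal)

# Effective emission temperature: absorbed sunlight balances emitted infrared.
absorbed = S0 * (1.0 - ALBEDO) / 4.0
T_eff = (absorbed / SIGMA) ** 0.25

# A single fully absorbing layer re-emits half its radiation downward,
# so the surface must emit twice the absorbed flux: T_s = 2**(1/4) * T_eff.
T_surf = 2.0 ** 0.25 * T_eff

print(round(T_eff), round(T_surf))  # 255 303 (kelvin)
```

The ~48 K jump from the effective to the surface temperature is the greenhouse effect in its crudest form; the paper's frequency-dependent treatment refines how much of it CO2 is responsible for.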

  5. A simple recipe for setting up the flux equations of cyclic and linear reaction schemes of ion transport with a high number of states: The arrow scheme

    PubMed Central

    Hansen, Ulf-Peter; Rauh, Oliver; Schroeder, Indra

    2016-01-01

    The calculation of flux equations or current-voltage relationships in reaction kinetic models with a high number of states can be very cumbersome. Here, a recipe based on an arrow scheme is presented, which yields straightforward access to the minimum form of the flux equations and the occupation probability of the involved states in cyclic and linear reaction schemes. This is extremely simple for cyclic schemes without branches. If branches are involved, the effort of setting up the equations is somewhat higher. However, here too a straightforward recipe making use of so-called reserve factors is provided for implementing the branches into the cyclic scheme, thus enabling a simple treatment of such cases as well. PMID:26646356

  6. Simple graphene chemiresistors as pH sensors: fabrication and characterization

    NASA Astrophysics Data System (ADS)

    Lei, Nan; Li, Pengfei; Xue, Wei; Xu, Jie

    2011-10-01

    We report the fabrication and characterization of a simple gate-free graphene device as a pH sensor. The graphene sheets are made by mechanical exfoliation. Platinum contact electrodes are fabricated with a mask-free process using a focused ion beam and then expanded by silver paint. Annealing is used to improve the electrical contact. The experiment on the fabricated graphene device shows that the resistance of the device decreases linearly with increasing pH values (in the range of 4-10) in the surrounding liquid environment. The resolution achieved in our experiments is approximately 0.3 pH units in an alkaline environment. The sensitivity of the device is calculated as approximately 2 kΩ per pH unit. The simple configuration, miniaturized size and integration ability make graphene-based sensors promising candidates for future micro/nano applications.

  7. PROPOSAL FOR A SIMPLE AND EFFICIENT MONTHLY QUALITY MANAGEMENT PROGRAM ASSESSING THE CONSISTENCY OF ROBOTIC IMAGE-GUIDED SMALL ANIMAL RADIATION SYSTEMS

    PubMed Central

    Brodin, N. Patrik; Guha, Chandan; Tomé, Wolfgang A.

    2015-01-01

    Modern pre-clinical radiation therapy (RT) research requires high precision and accurate dosimetry to facilitate the translation of research findings into clinical practice. Several systems are available that provide precise delivery and on-board imaging capabilities, highlighting the need for a quality management program (QMP) to ensure consistent and accurate radiation dose delivery. An ongoing, simple, and efficient QMP for image-guided robotic small animal irradiators used in pre-clinical RT research is described. Protocols were developed and implemented to assess the dose output constancy (based on the AAPM TG-61 protocol), cone-beam computed tomography (CBCT) image quality and object representation accuracy (using a custom-designed imaging phantom), CBCT-guided target localization accuracy and consistency of the CBCT-based dose calculation. To facilitate an efficient read-out and limit the user dependence of the QMP data analysis, a semi-automatic image analysis and data representation program was developed using the technical computing software MATLAB. The results of the first six months' experience using the suggested QMP for a Small Animal Radiation Research Platform (SARRP) are presented, with data collected on a bi-monthly basis. The dosimetric output constancy was established to be within ±1 %, the consistency of the image resolution was within ±0.2 mm, the accuracy of CBCT-guided target localization was within ±0.5 mm, and dose calculation consistency was within ±2 s (±3 %) per treatment beam. Based on these results, this simple quality assurance program allows for the detection of inconsistencies in dosimetric or imaging parameters that are beyond the acceptable variability for a reliable and accurate pre-clinical RT system, on a monthly or bi-monthly basis. PMID:26425981

  8. Proposal for a Simple and Efficient Monthly Quality Management Program Assessing the Consistency of Robotic Image-Guided Small Animal Radiation Systems.

    PubMed

    Brodin, N Patrik; Guha, Chandan; Tomé, Wolfgang A

    2015-11-01

    Modern pre-clinical radiation therapy (RT) research requires high precision and accurate dosimetry to facilitate the translation of research findings into clinical practice. Several systems are available that provide precise delivery and on-board imaging capabilities, highlighting the need for a quality management program (QMP) to ensure consistent and accurate radiation dose delivery. An ongoing, simple, and efficient QMP for image-guided robotic small animal irradiators used in pre-clinical RT research is described. Protocols were developed and implemented to assess the dose output constancy (based on the AAPM TG-61 protocol), cone-beam computed tomography (CBCT) image quality and object representation accuracy (using a custom-designed imaging phantom), CBCT-guided target localization accuracy and consistency of the CBCT-based dose calculation. To facilitate an efficient read-out and limit the user dependence of the QMP data analysis, a semi-automatic image analysis and data representation program was developed using the technical computing software MATLAB. The results of the first 6-mo experience using the suggested QMP for a Small Animal Radiation Research Platform (SARRP) are presented, with data collected on a bi-monthly basis. The dosimetric output constancy was established to be within ±1 %, the consistency of the image resolution was within ±0.2 mm, the accuracy of CBCT-guided target localization was within ±0.5 mm, and dose calculation consistency was within ±2 s (±3%) per treatment beam. Based on these results, this simple quality assurance program allows for the detection of inconsistencies in dosimetric or imaging parameters that are beyond the acceptable variability for a reliable and accurate pre-clinical RT system, on a monthly or bi-monthly basis.

  9. Application of experiential learning model using simple physical kit to increase attitude toward physics student senior high school in fluid

    NASA Astrophysics Data System (ADS)

    Johari, A. H.; Muslim

    2018-05-01

    An experiential learning model using a simple physics kit has been implemented to obtain a picture of the improvement in attitude toward physics among senior high school students studying fluids. This study aims to obtain a description of that increase in attitudes toward physics. The research method used was a quasi-experiment with a non-equivalent pretest-posttest control group design. Two tenth-grade classes were involved in this research: 28 students in the experimental class and 26 in the control class. The increase in attitude toward physics was measured using an attitude scale consisting of 18 questions. Based on the tests, the average in the experimental class was 86.5% (criterion: almost all students showed an increase), compared with 53.75% in the control class (criterion: half of the students). This result shows that the experiential learning model using a simple physics kit can improve attitude toward physics compared to experiential learning without the kit.

  10. Learning molecular energies using localized graph kernels

    DOE PAGES

    Ferré, Grégoire; Haut, Terry Scot; Barros, Kipton Marcos

    2017-03-21

    We report that recent machine learning methods make it possible to model the potential energy of atomic configurations with chemical-level accuracy (as calculated from ab initio calculations) and at speeds suitable for molecular dynamics simulation. Best performance is achieved when the known physical constraints are encoded in the machine learning models. For example, the atomic energy is invariant under global translations and rotations; it is also invariant to permutations of same-species atoms. Although simple to state, these symmetries are complicated to encode into machine learning algorithms. In this paper, we present a machine learning approach based on graph theory that naturally incorporates translation, rotation, and permutation symmetries. Specifically, we use a random walk graph kernel to measure the similarity of two adjacency matrices, each of which represents a local atomic environment. This Graph Approximated Energy (GRAPE) approach is flexible and admits many possible extensions. Finally, we benchmark a simple version of GRAPE by predicting atomization energies on a standard dataset of organic molecules.
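The random walk graph kernel mentioned above can be sketched in its generic geometric form: build the Kronecker-product adjacency of the two graphs and sum walks of all lengths via a matrix inverse. This is a textbook version, not the authors' GRAPE implementation, and the decay factor and toy graphs are illustrative assumptions.

```python
import numpy as np

def random_walk_kernel(A1, A2, lam=0.05):
    """Geometric random-walk kernel between graphs with adjacency
    matrices A1, A2: sum of entries of (I - lam * A1 (x) A2)^(-1).
    Converges when lam < 1 / spectral radius of the product graph."""
    W = np.kron(A1, A2)                    # adjacency of the direct product graph
    n = W.shape[0]
    K = np.linalg.inv(np.eye(n) - lam * W) # geometric series: sums lam^k * W^k
    return K.sum()

# Tiny example graphs: a triangle and a 3-node path.
tri = np.array([[0, 1, 1], [1, 0, 1], [1, 1, 0]], dtype=float)
path = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]], dtype=float)
print(random_walk_kernel(tri, tri) > random_walk_kernel(tri, path))  # True
```

The triangle is more similar to itself than to the path because the product graph of two triangles supports more common walks, which is exactly what the kernel counts.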

  11. Bond Order Conservation Strategies in Catalysis Applied to the NH3 Decomposition Reaction

    DOE PAGES

    Yu, Liang; Abild-Pedersen, Frank

    2016-12-14

    On the basis of an extensive set of density functional theory calculations, it is shown that a simple scheme provides a fundamental understanding of variations in the transition state energies and structures of reaction intermediates on transition metal surfaces across the periodic table. The scheme is built on the bond order conservation principle and requires a limited set of input data, still achieving transition state energies as a function of simple descriptors with an error smaller than those of approaches based on linear fits to a set of calculated transition state energies. Here, we have applied this approach together with linear scaling of adsorption energies to obtain the energetics of the NH3 decomposition reaction on a series of stepped fcc(211) transition metal surfaces. Moreover, this information is used to establish a microkinetic model for the formation of N2 and H2, thus providing insight into the components of the reaction that determine the activity.

  12. Assessing the stock market volatility for different sectors in Malaysia by using standard deviation and EWMA methods

    NASA Astrophysics Data System (ADS)

    Saad, Shakila; Ahmad, Noryati; Jaffar, Maheran Mohd

    2017-11-01

    Nowadays, the study of the volatility concept, especially in the stock market, has gained much attention from people engaged in the financial and economic sectors. Applications of the volatility concept in financial economics include the valuation of option pricing, estimation of financial derivatives, hedging of investment risk, etc. There are various ways to measure volatility; for this study, two methods are used: the simple standard deviation and the Exponentially Weighted Moving Average (EWMA). The focus of this study is to measure the volatility of three different sectors of business in Malaysia, called primary, secondary and tertiary, using both methods. The daily and annual volatilities of the different business sectors based on stock prices for the period of 1 January 2014 to December 2014 have been calculated in this study. Results show that different patterns of closing stock prices and returns give different volatility values when calculated using the simple standard deviation method and the EWMA method.
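The two estimators can be sketched on synthetic returns. The record does not state the EWMA decay factor used, so the common RiskMetrics value λ = 0.94 is assumed, and the return series and 252-day annualization are illustrative:

```python
import math

def simple_volatility(returns):
    """Unweighted sample standard deviation of returns."""
    m = sum(returns) / len(returns)
    return math.sqrt(sum((r - m) ** 2 for r in returns) / (len(returns) - 1))

def ewma_volatility(returns, lam=0.94):
    """EWMA recursion: var_t = lam * var_{t-1} + (1 - lam) * r_{t-1}^2."""
    var = returns[0] ** 2                 # seed with the first squared return
    for r in returns[1:]:
        var = lam * var + (1 - lam) * r ** 2
    return math.sqrt(var)

daily_returns = [0.01, -0.02, 0.015, -0.005, 0.02]  # hypothetical daily returns
daily_sd = simple_volatility(daily_returns)
daily_ewma = ewma_volatility(daily_returns)
annual_ewma = daily_ewma * math.sqrt(252)           # annualize, 252 trading days
print(round(daily_sd, 4), round(daily_ewma, 4), round(annual_ewma, 4))
```

The EWMA estimate reacts faster to the most recent returns, which is why the two methods give different volatility patterns on the same price series, as the study observes.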

  13. ecode - Electron Transport Algorithm Testing v. 1.0

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Franke, Brian C.; Olson, Aaron J.; Bruss, Donald Eugene

    2016-10-05

    ecode is a Monte Carlo code used for testing algorithms related to electron transport. The code can read basic physics parameters, such as energy-dependent stopping powers and screening parameters. The code permits simple planar geometries of slabs or cubes. Parallelization consists of domain replication, with work distributed at the start of the calculation and statistical results gathered at the end of the calculation. Some basic routines (such as input parsing, random number generation, and statistics processing) are shared with the Integrated Tiger Series codes. A variety of algorithms for uncertainty propagation are incorporated based on the stochastic collocation and stochastic Galerkin methods. These permit uncertainty only in the total and angular scattering cross sections. The code contains algorithms for simulating stochastic mixtures of two materials. The physics is approximate, ranging from mono-energetic and isotropic scattering to screened Rutherford angular scattering and Rutherford energy-loss scattering (simple electron transport models). No production of secondary particles is implemented, and no photon physics is implemented.

  14. The optical and structural properties of graphene nanosheets and tin oxide nanocrystals composite

    NASA Astrophysics Data System (ADS)

    Farheen, Parveen, Azra; Azam, Ameer

    2018-05-01

    A nanocomposite material consisting of a metal oxide and reduced graphene oxide was prepared via a simple, economical, and effective chemical reduction method. The synthesis strategy was based on the reduction of GO with Sn2+ ions, combining tin oxidation and GO reduction in one step, which provides a simple, low-cost and effective way to prepare graphene nanosheet/SnO2 nanocrystal composites because no additional chemicals were needed. SEM and TEM images show a uniform distribution of the SnO2 nanocrystals on the graphene nanosheet (GNs) surface, with transmission electron microscopy indicating an average particle size of 2-4 nm. The mean crystallite size was calculated by the Debye-Scherrer formula and was found to be about 4.0 nm. Optical analysis was performed using UV-Visible spectroscopy, and the band gap energy of the GNs/SnO2 nanocomposite, calculated by the Tauc relation, came out to be 3.43 eV.
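The Debye-Scherrer crystallite-size estimate mentioned above is a one-line calculation, D = Kλ / (β cos θ). The peak width and position below are hypothetical illustrations, not the paper's measured data; Cu K-alpha radiation and the usual shape factor K = 0.9 are assumed.

```python
import math

def scherrer_size(wavelength_nm, fwhm_deg, two_theta_deg, K=0.9):
    """Debye-Scherrer estimate D = K * lambda / (beta * cos(theta)),
    with beta the peak FWHM in radians and theta the Bragg angle."""
    beta = math.radians(fwhm_deg)
    theta = math.radians(two_theta_deg / 2.0)
    return K * wavelength_nm / (beta * math.cos(theta))

# Cu K-alpha (0.15406 nm) and a hypothetical ~2 degree wide peak near 2theta = 26.6.
print(round(scherrer_size(0.15406, 2.1, 26.6), 1), "nm")
```

A peak roughly two degrees wide gives a crystallite size of a few nanometers, consistent in order of magnitude with the ~4 nm reported in the record.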

  15. Decision support system of e-book provider selection for library using Simple Additive Weighting

    NASA Astrophysics Data System (ADS)

    Ciptayani, P. I.; Dewi, K. C.

    2018-01-01

    Each library has its own criteria and its own view of the importance of each criterion in choosing an e-book provider. The large number of providers and the different importance levels of each criterion make the problem of determining the e-book provider complex and time-consuming. The aim of this study was to implement a decision support system (DSS) to assist the library in selecting the best e-book provider based on its preferences. The DSS works by comparing the importance of each criterion and the condition of each alternative. SAW is a DSS method that is quite simple, fast and widely used. This study used 9 criteria and 18 providers to demonstrate how SAW works. With the DSS, decision-making time can be shortened and the calculation results can be more accurate than manual calculations.
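The SAW scoring step can be sketched on toy data; the study's actual 9 criteria and 18 providers are not reproduced here, so the matrix, weights, and benefit/cost labels below are hypothetical.

```python
def saw_rank(matrix, weights, benefit):
    """Simple Additive Weighting: normalize each criterion column
    (value/max for benefit criteria, min/value for cost criteria),
    then score each alternative by the weighted sum of normalized values."""
    cols = list(zip(*matrix))
    scores = []
    for row in matrix:
        s = 0.0
        for j, w in enumerate(weights):
            norm = row[j] / max(cols[j]) if benefit[j] else min(cols[j]) / row[j]
            s += w * norm
        scores.append(s)
    return scores

# Three hypothetical providers scored on price (cost criterion),
# collection size and platform quality (benefit criteria); weights sum to 1.
matrix = [[1200, 50000, 7], [900, 30000, 8], [1500, 80000, 9]]
weights = [0.5, 0.3, 0.2]
benefit = [False, True, True]
scores = saw_rank(matrix, weights, benefit)
print(scores.index(max(scores)))  # 2: the third provider wins
```

The highest weighted sum identifies the recommended provider; with 9 criteria and 18 providers the calculation is identical, just over a larger matrix.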

  16. Learning molecular energies using localized graph kernels

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ferré, Grégoire; Haut, Terry Scot; Barros, Kipton Marcos

    We report that recent machine learning methods make it possible to model the potential energy of atomic configurations with chemical-level accuracy (as calculated from ab initio calculations) and at speeds suitable for molecular dynamics simulation. Best performance is achieved when the known physical constraints are encoded in the machine learning models. For example, the atomic energy is invariant under global translations and rotations; it is also invariant to permutations of same-species atoms. Although simple to state, these symmetries are complicated to encode into machine learning algorithms. In this paper, we present a machine learning approach based on graph theory that naturally incorporates translation, rotation, and permutation symmetries. Specifically, we use a random walk graph kernel to measure the similarity of two adjacency matrices, each of which represents a local atomic environment. This Graph Approximated Energy (GRAPE) approach is flexible and admits many possible extensions. Finally, we benchmark a simple version of GRAPE by predicting atomization energies on a standard dataset of organic molecules.
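    As a toy illustration of the random-walk kernel idea (not the GRAPE implementation itself), the standard direct-product form k = 1^T (I - lam * A1 (x) A2)^(-1) 1 counts matching walks in two graphs; the decay parameter and the two example graphs below are arbitrary choices:

```python
import numpy as np

def random_walk_kernel(a1, a2, lam=0.1):
    """Random-walk graph kernel on two adjacency matrices:
    k = 1^T (I - lam * kron(A1, A2))^(-1) 1, a weighted count of
    simultaneous walks in both graphs (lam must keep the series convergent)."""
    ax = np.kron(a1, a2)                     # adjacency of the product graph
    n = ax.shape[0]
    ones = np.ones(n)
    return float(ones @ np.linalg.solve(np.eye(n) - lam * ax, ones))

# Two tiny "local environments": a triangle and a 3-node path.
tri = np.array([[0, 1, 1], [1, 0, 1], [1, 1, 0]], dtype=float)
path = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]], dtype=float)
print(random_walk_kernel(tri, tri) > random_walk_kernel(tri, path))  # True
```

A graph is more similar to itself than to a sparser graph, since every walk in the triangle has a matching partner.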

  17. Simplified analysis about horizontal displacement of deep soil under tunnel excavation

    NASA Astrophysics Data System (ADS)

    Tian, Xiaoyan; Gu, Shuancheng; Huang, Rongbin

    2017-11-01

    Most domestic scholars have focused on the settlement of soil caused by subway tunnel excavation, whereas studies of the horizontal displacement are lacking, and horizontal displacement data at arbitrary depth are difficult to obtain in practice. Among the many formulas for calculating soil settlement, including integral solutions of the Mindlin classical elastic theory, stochastic medium theory, and source-sink theory, the Peck empirical formula is relatively simple and widely applicable domestically. Considering the incompressibility of rock and soil mass, and based on the principle of plane strain, a formula for the horizontal displacement of the soil along the cross section of the tunnel was derived from the Peck settlement formula. The applicability of the formula is verified by comparison with existing engineering cases, and a simple, rapid analytical method for predicting the horizontal displacement is presented.
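    The starting point is Peck's Gaussian settlement trough. The paper's own horizontal-displacement derivation is not reproduced in the abstract, so the sketch below pairs Peck's formula with the widely used companion relation (displacement vectors pointing toward the tunnel axis, u(x) = (x/z0) S(x)); the numerical values are illustrative:

```python
import math

def peck_settlement(x, s_max, i):
    """Peck surface settlement trough: S(x) = S_max * exp(-x^2 / (2 i^2))."""
    return s_max * math.exp(-x**2 / (2 * i**2))

def horizontal_displacement(x, s_max, i, z0):
    """Horizontal surface movement assuming displacement vectors point toward
    the tunnel axis at depth z0: u(x) = (x / z0) * S(x). This is a standard
    companion relation, not necessarily the formula derived in the paper."""
    return (x / z0) * peck_settlement(x, s_max, i)

# Illustrative case: 20 mm maximum settlement, trough width i = 10 m, axis depth 20 m.
for x in (0.0, 5.0, 10.0, 20.0):
    print(f"x={x:5.1f} m  S={peck_settlement(x, 20, 10):6.2f} mm  "
          f"u={horizontal_displacement(x, 20, 10, 20):6.2f} mm")
```

The horizontal movement vanishes over the tunnel axis and peaks near the inflection point of the trough, as expected.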

  18. A simple node and conductor data generator for SINDA

    NASA Technical Reports Server (NTRS)

    Gottula, Ronald R.

    1992-01-01

    This paper presents a simple, automated method to generate NODE and CONDUCTOR DATA for thermal math models. The method uses personal computer spreadsheets to create SINDA inputs. It was developed in order to make SINDA modeling less time-consuming and serves as an alternative to graphical methods. Anyone having some experience using a personal computer can easily implement this process. The user develops spreadsheets to automatically calculate capacitances and conductances based on material properties and dimensional data. The necessary node and conductor information is then taken from the spreadsheets and automatically arranged into the proper format, ready for insertion directly into the SINDA model. This technique provides a number of benefits to the SINDA user such as a reduction in the number of hand calculations, and an ability to very quickly generate a parametric set of NODE and CONDUCTOR DATA blocks. It also provides advantages over graphical thermal modeling systems by retaining the analyst's complete visibility into the thermal network, and by permitting user comments anywhere within the DATA blocks.
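    The spreadsheet arithmetic being automated is just lumped capacitance C = rho*cp*V and conduction conductance G = k*A/L. A minimal sketch for a 1-D bar follows; the material values are aluminum-like placeholders and the emitted line format is schematic, not the exact SINDA syntax:

```python
# Node/conductor generation for a 1-D bar, mirroring the hand formulas the
# paper automates: capacitance C = rho*cp*V, conductance G = k*A/L.
rho, cp, k = 2700.0, 900.0, 167.0    # kg/m^3, J/(kg K), W/(m K) (illustrative)
area, dx, n_nodes = 1e-4, 0.05, 5    # m^2, node spacing (m), node count

nodes = [(i + 1, rho * cp * area * dx) for i in range(n_nodes)]
conductors = [(i + 1, i + 1, i + 2, k * area / dx) for i in range(n_nodes - 1)]

# Emit SINDA-style NODE and CONDUCTOR DATA lines (format is schematic).
print("NODE DATA")
for num, cap in nodes:
    print(f"  {num}, 20.0, {cap:.3f}")       # node, initial temp, capacitance
print("CONDUCTOR DATA")
for num, na, nb, g in conductors:
    print(f"  {num}, {na}, {nb}, {g:.4f}")   # conductor, nodeA, nodeB, G
```

Changing a single dimension or property regenerates the whole parametric block, which is the benefit the paper describes.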

  19. Extending the excluded volume for percolation threshold estimates in polydisperse systems: The binary disk system

    DOE PAGES

    Meeks, Kelsey; Pantoya, Michelle L.; Green, Micah; ...

    2017-06-01

    For dispersions containing a single type of particle, it has been observed that the onset of percolation coincides with a critical value of volume fraction. When the volume fraction is calculated based on excluded volume, this critical percolation threshold is nearly invariant to particle shape. The critical threshold has been calculated to high precision for simple geometries using Monte Carlo simulations, but this method is slow at best, and infeasible for complex geometries. This article explores an analytical approach to the prediction of percolation threshold in polydisperse mixtures. Specifically, this paper suggests an extension of the concept of excluded volume, and applies that extension to the 2D binary disk system. The simple analytical expression obtained is compared to Monte Carlo results from the literature. In conclusion, the result may be computed extremely rapidly and matches key parameters closely enough to be useful for composite material design.

  20. Calculation of stochastic broadening due to low mn magnetic perturbation in the simple map in action-angle coordinates

    NASA Astrophysics Data System (ADS)

    Hinton, Courtney; Punjabi, Alkesh; Ali, Halima

    2009-11-01

    The simple map is the simplest map that has the topology of divertor tokamaks [A. Punjabi, H. Ali, T. Evans, and A. Boozer, Phys. Lett. A 364, 140-145 (2007)]. Recently, the action-angle coordinates for the simple map were calculated analytically, and the simple map was constructed in action-angle coordinates [O. Kerwin, A. Punjabi, and H. Ali, Phys. Plasmas 15, 072504 (2008)]. Action-angle coordinates for the simple map cannot be inverted to real-space coordinates (R,Z); because there is a logarithmic singularity on the ideal separatrix, trajectories cannot cross the separatrix [op. cit.]. The simple map in action-angle coordinates is applied to calculate the stochastic broadening due to the low mn magnetic perturbation with mode numbers m=1 and n=±1. The width of the stochastic layer near the X-point scales as the 0.63 power of the amplitude δ of the low mn perturbation, the toroidal flux loss scales as the 1.16 power of δ, and the poloidal flux loss scales as the 1.26 power of δ. The scaling of the width deviates from the Boozer-Rechester scaling by 26% [A. Boozer and A. Rechester, Phys. Fluids 21, 682 (1978)]. This work is supported by US Department of Energy grants DE-FG02-07ER54937, DE-FG02-01ER54624 and DE-FG02-04ER54793.

  1. Hardness of H13 Tool Steel After Non-isothermal Tempering

    NASA Astrophysics Data System (ADS)

    Nelson, E.; Kohli, A.; Poirier, D. R.

    2018-04-01

    A direct method to calculate the tempering response of a tool steel (H13) that exhibits secondary hardening is presented. Based on the traditional method of presenting tempering response in terms of isothermal tempering, we show that the tempering response for a steel undergoing a non-isothermal tempering schedule can be predicted. Experiments comprised (1) isothermal tempering, (2) non-isothermal tempering with relatively slow heating to the process temperature, and (3) fast-heating cycles that are relevant to tempering by induction heating. After establishing the tempering response of the steel under simple isothermal conditions, the tempering response can be applied to non-isothermal tempering by using a numerical method to calculate the tempering parameter. Calculated results are verified by the experiments.
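    The abstract does not spell out the tempering parameter used; the sketch below assumes the common Hollomon-Jaffe parameter P = T(C + log10 t) and accumulates it over a stepped schedule by the standard equivalent-time method. The constant C = 20 and the schedule are illustrative assumptions, and the paper's own numerical scheme may differ:

```python
import math

C = 20.0  # Hollomon-Jaffe constant; a typical assumed value for steels

def tempering_parameter(schedule):
    """Accumulate P = T * (C + log10 t) over a non-isothermal schedule given
    as (temperature_K, duration_h) steps. At each new temperature, the time
    that would reproduce the parameter accumulated so far is computed
    (equivalent-time method), then the step duration is added."""
    t_eq = 0.0
    P = 0.0
    for T, dt in schedule:
        if P > 0.0:
            t_eq = 10.0 ** (P / T - C)   # equivalent prior time at this T
        t_eq += dt
        P = T * (C + math.log10(t_eq))
    return P

# Illustrative ramp-and-hold: 30 min at 700 K, then 2 h at 840 K.
print(round(tempering_parameter([(700.0, 0.5), (840.0, 2.0)]), 1))
```

For a single isothermal hold the function reduces to the textbook formula, e.g. 1 h at 700 K gives P = 700 * 20 = 14000.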

  2. A simple performance calculation method for LH2/LOX engines with different power cycles

    NASA Technical Reports Server (NTRS)

    Schmucker, R. H.

    1973-01-01

    A simple method for the calculation of the specific impulse of an engine with a gas generator cycle is presented. The solution is obtained by a power balance between turbine and pump. Approximate equations for the performance of the combustion products of LH2/LOX are derived. Performance results are compared with solutions of different engine types.

  3. Adding glycaemic index and glycaemic load functionality to DietPLUS, a Malaysian food composition database and diet intake calculator.

    PubMed

    Shyam, Sangeetha; Wai, Tony Ng Kock; Arshad, Fatimah

    2012-01-01

    This paper outlines the methodology used to add glycaemic index (GI) and glycaemic load (GL) functionality to DietPLUS, a Microsoft Excel-based Malaysian food composition database and diet intake calculator. Locally determined GI values and published international GI databases were used as the source of GI values. Previously published methodology for GI value assignment was modified to add GI and GL calculators to the database. Two popular local low-GI foods were added to the DietPLUS database, bringing the total number of foods in the database to 838. Overall, of the 539 major carbohydrate foods in the Malaysian Food Composition Database, 243 (45%) food items had local Malaysian values or were directly matched to the International GI database, and another 180 (33%) were linked to closely related foods in the GI databases used. The mean ± SD dietary GI and GL of the dietary intake of 63 women with previous gestational diabetes mellitus, calculated using DietPLUS version 3, were 62 ± 6 and 142 ± 45, respectively. These values were comparable to those reported in other local studies. DietPLUS version 3, a simple Microsoft Excel-based programme, aids calculation of diet GI and GL for Malaysian diets based on food records.
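    The underlying spreadsheet arithmetic is standard: a food's GL is GI times its available carbohydrate divided by 100, and a diet's GI is the carbohydrate-weighted mean of the food GIs. The foods and numbers below are illustrative, not entries from the DietPLUS database:

```python
# Glycaemic load of a food: GL = GI * available_carbohydrate_g / 100.
# Diet GI is the carbohydrate-weighted mean of the food GIs.
foods = [
    # (name, GI, available carbohydrate eaten, g) -- illustrative values
    ("white rice", 73, 45.0),
    ("lentil dhal", 32, 20.0),
    ("apple", 36, 15.0),
]

total_carb = sum(c for _, _, c in foods)
diet_gl = sum(gi * c / 100.0 for _, gi, c in foods)
diet_gi = sum(gi * c for _, gi, c in foods) / total_carb

print(f"diet GI = {diet_gi:.0f}, diet GL = {diet_gl:.0f}")
```

Summing GL food-by-food and weighting GI by carbohydrate is exactly what the added DietPLUS calculators automate over a full day's food record.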

  4. Nomogram for sample size calculation on a straightforward basis for the kappa statistic.

    PubMed

    Hong, Hyunsook; Choi, Yunhee; Hahn, Seokyung; Park, Sue Kyung; Park, Byung-Joo

    2014-09-01

    Kappa is a widely used measure of agreement. However, it may not be straightforward in some situations, such as sample size calculation, due to the kappa paradox: high agreement but low kappa. Hence, it seems reasonable in sample size calculation to express the level of agreement under a given marginal prevalence as a simple proportion of agreement rather than as a kappa value. Therefore, sample size formulae and nomograms using a simple proportion of agreement rather than a kappa under given marginal prevalences are proposed. A sample size formula was derived using the kappa statistic under the common correlation model and a goodness-of-fit statistic. The nomogram for the sample size formula was developed using SAS 9.3. Sample size formulae using a simple proportion of agreement instead of a kappa statistic, and nomograms that eliminate the inconvenience of using a mathematical formula, were produced. A nomogram for sample size calculation based on a simple proportion of agreement should be useful at the planning stage when the focus of interest is testing the hypothesis of interobserver agreement involving two raters and nominal outcome measures. Copyright © 2014 Elsevier Inc. All rights reserved.
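    The kappa paradox the abstract refers to is easy to demonstrate. For two raters with equal binary margins, chance agreement is pe = p^2 + (1-p)^2, so the same raw agreement maps to very different kappas at different prevalences (this is the relation between agreement and kappa, not the paper's sample size formula):

```python
def kappa_from_agreement(po, prevalence):
    """Cohen's kappa for two raters with equal binary margins:
    chance agreement pe = p^2 + (1-p)^2, kappa = (po - pe) / (1 - pe)."""
    pe = prevalence**2 + (1 - prevalence)**2
    return (po - pe) / (1 - pe)

# Same 90% observed agreement, two different marginal prevalences:
for p in (0.5, 0.9):
    print(f"prevalence={p}: kappa={kappa_from_agreement(0.90, p):.2f}")
```

At prevalence 0.5 the kappa is 0.80, but at prevalence 0.9 the identical 90% agreement yields only about 0.44, which is why the authors prefer planning in terms of the proportion of agreement.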

  5. Low cost estimation of the contribution of post-CCSD excitations to the total atomization energy using density functional theory calculations

    NASA Astrophysics Data System (ADS)

    Sánchez, H. R.; Pis Diez, R.

    2016-04-01

    Based on the Aλ diagnostic for multireference effects recently proposed [U.R. Fogueri, S. Kozuch, A. Karton, J.M. Martin, Theor. Chem. Acc. 132 (2013) 1], a simple method for improving total atomization energies and reaction energies calculated at the CCSD level of theory is proposed. The method requires a CCSD calculation and two additional density functional theory calculations for the molecule. Two sets containing 139 and 51 molecules are used as training and validation sets, respectively, for total atomization energies. An appreciable decrease in the mean absolute error from 7-10 kcal mol-1 for CCSD to about 2 kcal mol-1 for the present method is observed. The present method provides atomization energies and reaction energies that compare favorably with relatively recent scaled CCSD methods.

  6. System and method for automated object detection in an image

    DOEpatents

    Kenyon, Garrett T.; Brumby, Steven P.; George, John S.; Paiton, Dylan M.; Schultz, Peter F.

    2015-10-06

    A contour/shape detection model may use relatively simple and efficient kernels to detect target edges in an object within an image or video. A co-occurrence probability may be calculated for two or more edge features in an image or video using an object definition. Edge features may be differentiated between in response to measured contextual support, and prominent edge features may be extracted based on the measured contextual support. The object may then be identified based on the extracted prominent edge features.

  7. Orientational glasses. II. Calculation of critical thresholds in ACNxMn{1-x} mixtures

    NASA Astrophysics Data System (ADS)

    Galam, Serge; Depondt, Philippe

    1992-10-01

    Using a simple steric-hindrance-based idea, the critical thresholds which occur in the phase diagram of ACNxMn{1-x} mixtures, where A stands for K, Na or Rb while Mn represents Br, Cl or I, are calculated. The cyanide density x is divided into a free-to-reorient part x_r and a frozen-in part x_f. The latter term x_f is calculated from microscopic characteristics of the molecules involved. Two critical thresholds x_c and x_d are obtained for the disappearance of, respectively, ferroelastic transitions and ferroelastic domains. The calculated values are in excellent agreement with available experimental results. Predictions are made for additional mixtures.

  8. Exciton Absorption Spectra by Linear Response Methods:Application to Conjugated Polymers

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mosquera, Martin A.; Jackson, Nicholas E.; Fauvell, Thomas J.

    The theoretical description of the time evolution of excitons requires, as an initial step, the calculation of their spectra, which has been inaccessible to most users due to the high computational scaling of conventional algorithms and accuracy issues caused by common density functionals. Previously (J. Chem. Phys. 2016, 144, 204105), we developed a simple method that resolves these issues. Our scheme is based on a two-step calculation in which a linear-response TDDFT calculation is used to generate orbitals perturbed by the excitonic state, and then a second linear-response TDDFT calculation is used to determine the spectrum of excitations relative to the excitonic state. Herein, we apply this theory to study near-infrared absorption spectra of excitons in oligomers of the ubiquitous conjugated polymers poly(3-hexylthiophene) (P3HT), poly(2-methoxy-5-(2-ethylhexyloxy)-1,4-phenylenevinylene) (MEH-PPV), and poly(benzodithiophene-thieno[3,4-b]thiophene) (PTB7). For P3HT and MEH-PPV oligomers, the calculated intense absorption bands converge at the longest wavelengths for 10 monomer units and show strong consistency with experimental measurements. The calculations confirm that the exciton spectral features in MEH-PPV overlap with those of bipolaron formation. In addition, our calculations identify the exciton absorption bands in transient absorption spectra measured by our group for oligomers (1, 2, and 3 units) of PTB7. For all of the cases studied, we report the dominant orbital excitations contributing to the optically active excited-state to excited-state transitions, and suggest a simple rule to identify absorption peaks at the longest wavelengths. We suggest our methodology could be considered for further developments in theoretical transient spectroscopy to include nonadiabatic effects and coherences, and to describe the formation of species such as charge-transfer states and polaron pairs.

  9. SU-E-T-02: 90Y Microspheres Dosimetry Calculation with Voxel-S-Value Method: A Simple Use in the Clinic

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Maneru, F; Gracia, M; Gallardo, N

    2015-06-15

    Purpose: To present a simple and feasible method of voxel-S-value (VSV) dosimetry calculation for daily clinical use in radioembolization (RE) with {sup 90}Y microspheres. Dose distributions are obtained and visualized over CT images. Methods: Spatial dose distributions and doses in liver and tumor are calculated for RE patients treated with Sirtex Medical microspheres at our center. Data obtained from the prior simulation of treatment were the basis for the calculations: a Tc-99m macroaggregated albumin SPECT-CT study in a gamma camera (Infinia, General Electric Healthcare). Attenuation correction and the ordered-subsets expectation maximization (OSEM) algorithm were applied. For the VSV calculations, both SPECT and CT were exported from the gamma camera workstation and registered with the radiotherapy treatment planning system (Eclipse, Varian Medical Systems). Convolution of the activity matrix with a local dose deposition kernel (S values) was implemented with in-house software based on Python code. The kernel was downloaded from www.medphys.it. The final dose distribution was evaluated with the free software Dicompyler. Results: Liver mean dose is consistent with Partition method calculations (accepted as a good standard). Tumor dose has not been evaluated due to its high dependence on contouring: small lesion size, hot spots in healthy tissue, and blurred limits can strongly affect the dose distribution in tumors. Extra work includes: export and import of images and other DICOM files, creating and calculating a dummy plan of external radiotherapy, the convolution calculation, and evaluation of the dose distribution with Dicompyler. Total time spent is less than 2 hours. Conclusion: VSV calculations do not require any extra appointment or any uncomfortable process for the patient. The total process is short enough to carry out the same day as the simulation and to contribute to prescription decisions prior to treatment. Three-dimensional dose knowledge provides much more information than other methods of dose calculation usually applied in the clinic.
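    The core VSV step is a discrete 3-D convolution of the cumulated-activity map with the S-value kernel, dose[v] = sum_k A[v-k] S[k]. The toy kernel and activity map below are made-up numbers (real 90Y kernels, e.g. from www.medphys.it, give dose per decay at each voxel offset):

```python
import numpy as np

# Toy voxel-S-value dosimetry: dose = activity convolved with S-kernel.
activity = np.zeros((8, 8, 8))     # cumulated activity map (arbitrary units)
activity[4, 4, 4] = 1000.0         # a single hot voxel
kernel = np.zeros((3, 3, 3))       # illustrative 3x3x3 S-value kernel
kernel[1, 1, 1] = 5e-4             # self-dose term
kernel[kernel == 0] = 1e-5         # small cross-dose to neighbours

# Direct same-size convolution: dose[v] = sum_k activity[v-k] * kernel[k].
pad = np.pad(activity, 1)
dose = np.zeros_like(activity)
for i in range(3):
    for j in range(3):
        for k in range(3):
            dose += kernel[i, j, k] * pad[i:i+8, j:j+8, k:k+8]

print(dose[4, 4, 4])   # dose at the hot voxel
```

For clinical matrix sizes an FFT-based convolution would be used instead of the explicit loop, but the arithmetic is the same.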

  10. Extremely simple holographic projection of color images

    NASA Astrophysics Data System (ADS)

    Makowski, Michal; Ducin, Izabela; Kakarenko, Karol; Suszek, Jaroslaw; Kolodziejczyk, Andrzej; Sypek, Maciej

    2012-03-01

    A very simple scheme of holographic projection is presented, with experimental results showing good-quality image projection without any imaging lens. This technique can be regarded as an alternative to classic projection methods. It is based on the reconstruction of real images from three phase holograms computed by iterated Fourier-transform methods. The illumination is performed with three laser beams of primary colors. A divergent wavefront geometry is used to achieve an increased throw angle of the projection, compared to plane-wave illumination. Light fibers are used as light guides in order to keep the setup as simple as possible and to provide point-like sources of high-quality divergent wavefronts at optimized positions against the light modulator. Absorbing spectral filters are implemented to multiplex three holograms on a single phase-only spatial light modulator. Hence color mixing occurs without any time-division methods, which cause rainbow effects and color flicker. The zero diffractive order with divergent illumination is practically invisible, and the speckle field is effectively suppressed with phase optimization and time-averaging techniques. The main advantages of the proposed concept are: a very simple and highly miniaturizable configuration; lack of a lens; a single LCoS (Liquid Crystal on Silicon) modulator; strong resistance to imperfections and obstructions of the spatial light modulator such as dead pixels, dust, mud, fingerprints, etc.; and simple calculations based on the Fast Fourier Transform (FFT), easily processed in real time on a GPU (graphics processing unit).

  11. Development and evaluation of a novel smart device-based application for burn assessment and management.

    PubMed

    Godwin, Zachary; Tan, James; Bockhold, Jennifer; Ma, Jason; Tran, Nam K

    2015-06-01

    We have developed a novel software application that provides a simple and interactive Lund-Browder diagram for automatic calculation of total body surface area (TBSA) burned, fluid formula recommendations, and serial wound photography on a smart device platform. The software was developed for the iPad (Apple, Cupertino, CA) smart device platform. Ten burns ranging from 5 to 95% TBSA were computer generated on a patient care simulator using Adobe Photoshop CS6 (Adobe, San Jose, CA). Burn clinicians first calculated the TBSA using a paper-based Lund-Browder diagram. Following a one-week "washout period", the same clinicians calculated TBSA using the smart device application. Simulated burns were presented in random order and clinicians were timed. Percent TBSA burned calculated by Peregrine vs. the paper-based Lund-Browder diagram was similar (29.53 [25.57] vs. 28.99 [25.01], p=0.22, n=7). On average, Peregrine allowed users to calculate burn size significantly faster than the paper form (58.18 [31.46] vs. 90.22 [60.60] s, p<0.001, n=7). The smart device application also provided 5-megapixel photography capabilities and an acute burn resuscitation fluid calculator. We developed an innovative smart device application that enables accurate and rapid burn size assessment while being cost-effective and widely accessible. Copyright © 2014 Elsevier Ltd and ISBI. All rights reserved.
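    The abstract mentions a resuscitation fluid calculator without naming the formula; the sketch below assumes the common Parkland formula (4 mL x kg x %TBSA over 24 h, half in the first 8 h), which may not be the one the app implements:

```python
def parkland_fluids(weight_kg, tbsa_percent):
    """Parkland resuscitation estimate: 4 mL x body weight (kg) x %TBSA over
    24 h, with half given in the first 8 h. Shown as an assumed example; the
    paper does not specify which fluid formula its app uses."""
    total_ml = 4.0 * weight_kg * tbsa_percent
    return total_ml, total_ml / 2.0   # (24 h total, first-8-h volume)

total, first8 = parkland_fluids(70.0, 30.0)
print(f"24 h: {total:.0f} mL, first 8 h: {first8:.0f} mL")
```

Driving such a formula directly from the TBSA computed by the interactive diagram is what removes a manual calculation step from the workflow.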

  12. Agent Based Modeling: Fine-Scale Spatio-Temporal Analysis of Pertussis

    NASA Astrophysics Data System (ADS)

    Mills, D. A.

    2017-10-01

    In epidemiology, spatial and temporal variables are used to compute vaccination efficacy and effectiveness. The chosen resolution and scale of a spatial or spatio-temporal analysis affect the results. When calculating vaccination efficacy, for example, a simple environment that offers various ideal outcomes is often modeled using coarse-scale data aggregated on an annual basis. In contrast to this inadequate aggregated method, this research uses agent-based modeling of fine-scale neighborhood data, centered on the interactions of infants in daycare and their families, to reflect vaccination capabilities accurately. Despite its ability to prevent major symptoms, recent studies suggest that the acellular Pertussis vaccine does not prevent the colonization and transmission of Bordetella pertussis bacteria. After vaccination, a treated individual becomes a potential asymptomatic carrier of the Pertussis bacteria rather than an immune individual. Agent-based modeling enables the measurable depiction of asymptomatic carriers that are otherwise unaccounted for when calculating vaccination efficacy and effectiveness. Using empirical data from a Florida Pertussis outbreak case study, the results of this model demonstrate that asymptomatic carriers bias the calculated vaccination efficacy and reveal a need to reconsider current methods that are widely used for calculating vaccination efficacy and effectiveness.

  13. Diffuse sorption modeling.

    PubMed

    Pivovarov, Sergey

    2009-04-01

    This work presents a simple solution for the diffuse double layer model, applicable to the calculation of surface speciation as well as to the simulation of ionic adsorption within the diffuse layer of solution in arbitrary salt media. Based on the Poisson-Boltzmann equation, the Gaines-Thomas selectivity coefficient for uni-bivalent exchange on clay, K_GT(Me2+/M+) = (Q_Me^0.5 / Q_M) {M+}/{Me2+}^0.5 (Q is the equivalent fraction of the cation in the exchange capacity, and {M+} and {Me2+} are the ionic activities in solution), may be calculated as [surface charge, µeq/m2]/0.61. The obtained solution of the Poisson-Boltzmann equation was applied to the calculation of ionic exchange on clays and to the simulation of the surface charge of ferrihydrite in 0.01-6 M NaCl solutions. In addition, a new model of acid-base properties was developed. This model is based on the assumption that the net proton charge is not located on the mathematical surface plane but is diffusely distributed within the subsurface layer of the lattice. It is shown that the obtained solution of the Poisson-Boltzmann equation makes such calculations possible, and that this approach is more efficient than the original diffuse double layer model.

  14. Simulating Freshwater Availability under Future Climate Conditions

    NASA Astrophysics Data System (ADS)

    Zhao, F.; Zeng, N.; Motesharrei, S.; Gustafson, K. C.; Rivas, J.; Miralles-Wilhelm, F.; Kalnay, E.

    2013-12-01

    Freshwater availability is a key factor for regional development. Precipitation, evaporation, and river inflow and outflow are the major terms in the estimate of regional water supply. In this study, we aim to obtain a realistic estimate of these variables from 1901 to 2100. First we calculated the ensemble mean precipitation using the 2011-2100 RCP4.5 output (re-sampled to half-degree spatial resolution) from 16 General Circulation Models (GCMs) participating in the Coupled Model Intercomparison Project Phase 5 (CMIP5). The projections are then combined with the half-degree 1901-2010 Climate Research Unit (CRU) TS3.2 dataset after bias correction. We then used the combined data to drive our UMD Earth System Model (ESM) in order to generate evaporation and runoff. We also developed a river-routing scheme, based on the idea of Taikan Oki, as part of the ESM. It is capable of calculating river inflow and outflow for any region, driven by the gridded runoff output. River direction and slope information from the Global Dominant River Tracing (DRT) dataset are included in our scheme. The effects of reservoirs/dams are parameterized based on a few simple factors such as soil moisture, population density, and geographic region. Simulated river flow is validated against river gauge measurements for the world's major rivers. We have applied our river flow calculation to two data-rich watersheds in the United States: the Phoenix AMA watershed and the Potomac River Basin. The results are used in our SImple WAter model (SIWA) to explore water management options.

  15. Simple approximation of total emissivity of CO2-H2O mixture used in the zonal method of calculation of heat transfer by radiation

    NASA Astrophysics Data System (ADS)

    Lisienko, V. G.; Malikov, G. K.; Titaev, A. A.

    2014-12-01

    The paper presents a new, simple-to-use expression for calculating the total emissivity of a mixture of the gases CO2 and H2O, used for modeling heat transfer by radiation in industrial furnaces. The accuracy of this expression is evaluated against the exponential wide band model. The time taken to calculate the total emissivity with this expression is found to be 1.5 times less than with other approximation methods.

  16. New approach to analyzing soil-building systems

    USGS Publications Warehouse

    Safak, E.

    1998-01-01

    A new method of analyzing seismic response of soil-building systems is introduced. The method is based on the discrete-time formulation of wave propagation in layered media for vertically propagating plane shear waves. Buildings are modeled as an extension of the layered soil media by assuming that each story in the building is another layer. The seismic response is expressed in terms of wave travel times between the layers, and the wave reflection and transmission coefficients at layer interfaces. The calculation of the response is reduced to a pair of simple finite-difference equations for each layer, which are solved recursively starting from the bedrock. Compared with commonly used vibration formulation, the wave propagation formulation provides several advantages, including the ability to incorporate soil layers, simplicity of the calculations, improved accuracy in modeling the mass and damping, and better tools for system identification and damage detection.

  17. Pediatric siMS score: A new, simple and accurate continuous metabolic syndrome score for everyday use in pediatrics.

    PubMed

    Vukovic, Rade; Milenkovic, Tatjana; Stojan, George; Vukovic, Ana; Mitrovic, Katarina; Todorovic, Sladjana; Soldatovic, Ivan

    2017-01-01

    The dichotomous nature of the current definition of metabolic syndrome (MS) in youth results in loss of information. On the other hand, the calculation of continuous MS scores using standardized residuals in linear regression (Z scores) or factor scores of principal component analysis (PCA) is highly impractical for clinical use. Recently, a novel, easily calculated continuous MS score called the siMS score was developed based on the IDF MS criteria for the adult population. The aim was to develop a Pediatric siMS score (PsiMS), a modified continuous MS score for use in obese youth, based on the original siMS score, while keeping the score as simple as possible and retaining high correlation with more complex scores. The database consisted of clinical data on 153 obese (BMI ≥ 95th percentile) children and adolescents. Continuous MS scores were calculated using Z scores and PCA, as well as the original siMS score. Four variants of the PsiMS score were developed in accordance with the IDF criteria for MS in youth, and the correlation of these scores with PCA- and Z-score-derived continuous MS scores was assessed. The PsiMS score calculated using the formula (2 × Waist/Height) + (Glucose [mmol/l]/5.6) + (Triglycerides [mmol/l]/1.7) + (Systolic BP/130) - (HDL [mmol/l]/1.02) showed the highest correlation with most of the complex continuous scores (0.792-0.901). The original siMS score also showed high correlation with the continuous MS scores. The PsiMS score represents a practical and accurate score for the evaluation of MS in obese youth. The original siMS score should be used when evaluating large cohorts consisting of both adults and children.
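    The winning PsiMS formula is stated explicitly in the abstract, so it translates directly to code. The patient values below are illustrative, not from the study cohort:

```python
def psims_score(waist_cm, height_cm, glucose_mmol_l, tg_mmol_l,
                systolic_bp, hdl_mmol_l):
    """Pediatric siMS score as given in the abstract:
    2*Waist/Height + Glucose/5.6 + Triglycerides/1.7 + SBP/130 - HDL/1.02."""
    return (2.0 * waist_cm / height_cm
            + glucose_mmol_l / 5.6
            + tg_mmol_l / 1.7
            + systolic_bp / 130.0
            - hdl_mmol_l / 1.02)

# Illustrative adolescent values (not from the study cohort):
print(round(psims_score(90, 160, 5.0, 1.5, 120, 1.1), 2))
```

Each term is a measured value divided by its IDF cut-off, so a healthy profile contributes roughly 1 per risk term and higher scores indicate a worse metabolic profile.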

  18. CudaChain: an alternative algorithm for finding 2D convex hulls on the GPU.

    PubMed

    Mei, Gang

    2016-01-01

    This paper presents an alternative GPU-accelerated convex hull algorithm and a novel Sorting-based Preprocessing Approach (SPA) for planar point sets. The proposed convex hull algorithm, termed CudaChain, consists of two stages: (1) two rounds of preprocessing performed on the GPU and (2) finalization of the expected convex hull on the CPU. Interior points lying inside a quadrilateral formed by four extreme points are first discarded, and the remaining points are distributed into several (typically four) subregions. Each subset of points is first sorted in parallel; then a second round of discarding is performed using SPA; and finally a simple chain is formed from the points that remain. A simple polygon can easily be generated by directly connecting all the chains in the subregions. The expected convex hull of the input points is finally obtained by calculating the convex hull of this simple polygon. The library Thrust is utilized for the parallel sorting, reduction, and partitioning, for better efficiency and simplicity. Experimental results show that: (1) SPA can very effectively detect and discard interior points; and (2) CudaChain achieves 5×-6× speedups over the well-known Qhull implementation for 20M points.
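    The sort-then-chain idea that CudaChain parallelizes is classically realized on the CPU by Andrew's monotone chain algorithm, sketched below (this is the textbook sequential algorithm, not the paper's GPU implementation):

```python
# Andrew's monotone chain: 2-D convex hull of a point set in O(n log n).
def convex_hull(points):
    pts = sorted(set(points))
    if len(pts) <= 2:
        return pts

    def cross(o, a, b):
        # z-component of (a - o) x (b - o); > 0 means a left turn.
        return (a[0]-o[0])*(b[1]-o[1]) - (a[1]-o[1])*(b[0]-o[0])

    lower, upper = [], []
    for p in pts:                    # build the lower chain left to right
        while len(lower) >= 2 and cross(lower[-2], lower[-1], p) <= 0:
            lower.pop()
        lower.append(p)
    for p in reversed(pts):          # build the upper chain right to left
        while len(upper) >= 2 and cross(upper[-2], upper[-1], p) <= 0:
            upper.pop()
        upper.append(p)
    return lower[:-1] + upper[:-1]   # hull in counter-clockwise order

pts = [(0, 0), (2, 0), (2, 2), (0, 2), (1, 1), (2, 1)]
print(convex_hull(pts))  # -> [(0, 0), (2, 0), (2, 2), (0, 2)]
```

The interior point (1, 1) and the collinear edge point (2, 1) are dropped, which mirrors the discarding rounds CudaChain performs on the GPU before the final CPU pass.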

  19. Taboo Search: An Approach to the Multiple Minima Problem

    NASA Astrophysics Data System (ADS)

    Cvijovic, Djurdje; Klinowski, Jacek

    1995-02-01

    Described here is a method, based on Glover's taboo search for discrete functions, of solving the multiple minima problem for continuous functions. As demonstrated by model calculations, the algorithm avoids entrapment in local minima and continues the search to give a near-optimal final solution. Unlike other methods of global optimization, this procedure is generally applicable, easy to implement, derivative-free, and conceptually simple.
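
    The idea translates directly into code. A minimal sketch of a continuous tabu search (the parameter choices and the cell-discretisation scheme are illustrative, not the authors' exact prescription): the search always moves to the best non-tabu candidate, even uphill, which is what lets it climb out of local minima.

```python
import random

def tabu_search(f, bounds, n_iter=500, n_candidates=20, step=0.5,
                tabu_size=50, resolution=0.1, seed=1):
    """Minimise f over box bounds with a simple continuous tabu search."""
    rng = random.Random(seed)
    x = [rng.uniform(lo, hi) for lo, hi in bounds]
    best_x, best_f = list(x), f(x)
    tabu = []  # recently visited grid cells are forbidden moves

    def cell(p):  # discretise a point so "visited" has finite memory
        return tuple(round(v / resolution) for v in p)

    for _ in range(n_iter):
        cands = []
        for _ in range(n_candidates):
            c = [min(max(v + rng.gauss(0, step), lo), hi)
                 for v, (lo, hi) in zip(x, bounds)]
            if cell(c) not in tabu:
                cands.append((f(c), c))
        if not cands:
            continue
        fx, x = min(cands)           # best non-tabu move, even if uphill
        tabu.append(cell(x))
        if len(tabu) > tabu_size:
            tabu.pop(0)
        if fx < best_f:
            best_f, best_x = fx, list(x)
    return best_x, best_f

# Double-well test function with global minima at (+2, 0) and (-2, 0).
best_x, best_f = tabu_search(lambda p: (p[0]**2 - 4)**2 + p[1]**2,
                             [(-5, 5), (-5, 5)])
print(best_x, best_f)
```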

  20. Infrared zone-scanning system.

    PubMed

    Belousov, Aleksandr; Popov, Gennady

    2006-03-20

    Challenges encountered in designing an infrared viewing optical system that uses a small linear detector array based on a zone-scanning approach are discussed. Scanning is performed by a rotating refractive polygon prism with tilted facets, which, along with high-speed line scanning, keeps the scanning gear as simple as possible. A method for calculating a practical optical system that compensates for aberrations during prism rotation is described.

  1. Simple microfluidic stagnation point flow geometries

    PubMed Central

    Dockx, Greet; Verwijlen, Tom; Sempels, Wouter; Nagel, Mathias; Moldenaers, Paula; Hofkens, Johan; Vermant, Jan

    2016-01-01

    A geometrically simple flow cell is proposed to generate different types of stagnation flows, using a separation flow and small variations of the geometric parameters. Flows with high local deformation rates can be changed from purely rotational, through simple shear flow, to extensional flow in a region surrounding a stagnation point. Computational fluid dynamics calculations are used to analyse how variations of the geometrical parameters affect the flow field. These numerical calculations are compared to the experimentally obtained streamlines of different designs, which have been determined by high-speed confocal microscopy. As the flow type is dictated predominantly by the geometrical parameters, such simple separating flow devices may alleviate the requirements for flow control, while offering good stability for a wide variety of flow types. PMID:27462382

  2. New Tools to Prepare ACE Cross-section Files for MCNP Analytic Test Problems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Brown, Forrest B.

    Monte Carlo calculations using one-group cross sections, multigroup cross sections, or simple continuous energy cross sections are often used to: (1) verify production codes against known analytical solutions, (2) verify new methods and algorithms that do not involve detailed collision physics, (3) compare Monte Carlo calculation methods with deterministic methods, and (4) teach fundamentals to students. In this work we describe 2 new tools for preparing the ACE cross-section files to be used by MCNP® for these analytic test problems, simple_ace.pl and simple_ace_mg.pl.

  3. Sulfanilic acid-modified chitosan mini-spheres and their application for lysozyme purification from egg white.

    PubMed

    Hirsch, Daniela B; Baieli, María F; Urtasun, Nicolás; Lázaro-Martínez, Juan M; Glisoni, Romina J; Miranda, María V; Cascone, Osvaldo; Wolman, Federico J

    2018-03-01

    A cation exchange matrix with zwitterionic and multimodal properties was synthesized by a simple reaction sequence coupling sulfanilic acid to a chitosan-based support. The novel chromatographic matrix was physico-chemically characterized by ss-NMR and ζ potential, and its chromatographic performance was evaluated for lysozyme purification from diluted egg white. The maximum adsorption capacity, calculated according to the Langmuir adsorption isotherm, was 50.07 ± 1.47 mg g⁻¹, while the dissociation constant was 0.074 ± 0.012 mg mL⁻¹. The process for lysozyme purification from egg white was optimized, with 81.9% yield and a purity degree of 86.5%, according to RP-HPLC analysis. This work shows novel possible applications of chitosan-based materials. The simple synthesis reactions, combined with the simple mode of use of the chitosan matrix, represent a novel method to purify proteins from raw starting materials. © 2017 American Institute of Chemical Engineers Biotechnol. Prog., 34:387-396, 2018.
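
    The reported Langmuir parameters let one predict adsorption at any equilibrium concentration. A brief sketch using the fitted values quoted above (q_max = 50.07 mg/g, K_d = 0.074 mg/mL; the function name is ours):

```python
def langmuir_q(c_mg_ml, q_max=50.07, k_d=0.074):
    """Langmuir isotherm q = q_max * c / (K_d + c): adsorbed lysozyme
    (mg per g of matrix) at free concentration c (mg/mL)."""
    return q_max * c_mg_ml / (k_d + c_mg_ml)

# At c = K_d the matrix is half saturated; at high c it approaches q_max.
print(langmuir_q(0.074))   # 25.035 (= q_max / 2)
print(langmuir_q(10.0))    # close to q_max
```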

  4. Numerical calculation of protein-ligand binding rates through solution of the Smoluchowski equation using smooth particle hydrodynamics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pan, Wenxiao; Daily, Michael D.; Baker, Nathan A.

    2015-12-01

    We demonstrate the accuracy and effectiveness of a Lagrangian particle-based method, smoothed particle hydrodynamics (SPH), to study diffusion in biomolecular systems by numerically solving the time-dependent Smoluchowski equation for continuum diffusion. The numerical method is first verified in simple systems and then applied to the calculation of ligand binding to an acetylcholinesterase monomer. Unlike previous studies, a reactive Robin boundary condition (BC), rather than the absolute absorbing (Dirichlet) boundary condition, is considered on the reactive boundaries. This new boundary condition treatment allows for the analysis of enzymes with "imperfect" reaction rates. Rates for inhibitor binding to mAChE are calculated at various ionic strengths and compared with experiment and other numerical methods. We find that imposition of the Robin BC improves agreement between calculated and experimental reaction rates. Although this initial application focuses on a single monomer system, our new method provides a framework to explore broader applications of SPH in larger-scale biomolecular complexes by taking advantage of its Lagrangian particle-based nature.

  5. Convective Dynamics and Disequilibrium Chemistry in the Atmospheres of Giant Planets and Brown Dwarfs

    NASA Astrophysics Data System (ADS)

    Bordwell, Baylee; Brown, Benjamin P.; Oishi, Jeffrey S.

    2018-02-01

    Disequilibrium chemical processes significantly affect the spectra of substellar objects. To study these effects, dynamical disequilibrium has been parameterized using the quench and eddy diffusion approximations, but little work has been done to explore how these approximations perform under realistic planetary conditions in different dynamical regimes. As a first step toward addressing this problem, we study the localized, small-scale convective dynamics of planetary atmospheres by direct numerical simulation of fully compressible hydrodynamics with reactive tracers using the Dedalus code. Using polytropically stratified, plane-parallel atmospheres in 2D and 3D, we explore the quenching behavior of different abstract chemical species as a function of the dynamical conditions of the atmosphere as parameterized by the Rayleigh number. We find that in both 2D and 3D, chemical species quench deeper than would be predicted based on simple mixing-length arguments. Instead, it is necessary to employ length scales based on the chemical equilibrium profile of the reacting species in order to predict quench points and perform chemical kinetics modeling in 1D. Based on the results of our simulations, we provide a new length scale, derived from the chemical scale height, that can be used to perform these calculations. This length scale is simple to calculate from known chemical data and makes reasonable predictions for our dynamical simulations.

  6. Visualization of atomic-scale phenomena in superconductors: application to FeSe

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Choubey, Peayush; Berlijn, Tom; Kreisel, Andreas

    Here we propose a simple method of calculating inhomogeneous, atomic-scale phenomena in superconductors which makes use of the wave function information traditionally discarded in the construction of tight-binding models used in the Bogoliubov-de Gennes equations. The method uses symmetry-based first principles Wannier functions to visualize the effects of superconducting pairing on the distribution of electronic states over atoms within a crystal unit cell. Local symmetries lower than the global lattice symmetry can thus be exhibited as well, rendering theoretical comparisons with scanning tunneling spectroscopy data much more useful. As a simple example, we discuss the geometric dimer states observed near defects in superconducting FeSe.

  7. Visualization of atomic-scale phenomena in superconductors: application to FeSe

    DOE PAGES

    Choubey, Peayush; Berlijn, Tom; Kreisel, Andreas; ...

    2014-10-31

    Here we propose a simple method of calculating inhomogeneous, atomic-scale phenomena in superconductors which makes use of the wave function information traditionally discarded in the construction of tight-binding models used in the Bogoliubov-de Gennes equations. The method uses symmetry-based first principles Wannier functions to visualize the effects of superconducting pairing on the distribution of electronic states over atoms within a crystal unit cell. Local symmetries lower than the global lattice symmetry can thus be exhibited as well, rendering theoretical comparisons with scanning tunneling spectroscopy data much more useful. As a simple example, we discuss the geometric dimer states observed near defects in superconducting FeSe.

  8. Simple linear and multivariate regression models.

    PubMed

    Rodríguez del Águila, M M; Benítez-Parejo, N

    2011-01-01

    In biomedical research it is common to find problems in which we wish to relate a response variable to one or more variables capable of describing the behaviour of the former variable by means of mathematical models. Regression techniques are used to this effect, in which an equation is determined relating the two variables. While such equations can have different forms, linear equations are the most widely used form and are easy to interpret. The present article describes simple and multiple linear regression models, how they are calculated, and how their applicability assumptions are checked. Illustrative examples are provided, based on the use of the freely accessible R program. Copyright © 2011 SEICAP. Published by Elsevier Espana. All rights reserved.
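
    The article's examples use the R program; the same closed-form computation is a few lines in any language. A minimal sketch (Python here, names ours) returning the intercept, the slope and the coefficient of determination:

```python
def simple_linear_regression(x, y):
    """Least-squares fit y ≈ a + b*x; returns (a, b, R^2)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    b = sxy / sxx                    # slope
    a = my - b * mx                  # intercept
    # R^2: fraction of the variance in y explained by the fit
    ss_res = sum((yi - (a + b * xi)) ** 2 for xi, yi in zip(x, y))
    ss_tot = sum((yi - my) ** 2 for yi in y)
    return a, b, 1 - ss_res / ss_tot

# A perfectly linear toy data set: y = 1 + 2x, so R^2 = 1.
print(simple_linear_regression([0, 1, 2, 3], [1, 3, 5, 7]))
```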

  9. On one-dimensional stretching functions for finite-difference calculations. [computational fluid dynamics

    NASA Technical Reports Server (NTRS)

    Vinokur, M.

    1979-01-01

    The class of one-dimensional stretching functions used in finite-difference calculations is studied. For solutions containing a highly localized region of rapid variation, simple criteria for a stretching function are derived using a truncation error analysis. These criteria are used to investigate two types of stretching functions. One is an interior stretching function, for which the location and slope of an interior clustering region are specified. The simplest such function satisfying the criteria is found to be one based on the inverse hyperbolic sine. The other type of function is a two-sided stretching function, for which the arbitrary slopes at the two ends of the one-dimensional interval are specified. The simplest such general function is found to be one based on the inverse tangent.
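
    The clustering idea can be sketched numerically. The mapping below is a simple sinh-based interior stretching of this family (not Vinokur's exact derived function, and the two-sided inverse-tangent case is not shown): grid points on [0, 1] are concentrated around a chosen interior location x_c, with β controlling the clustering strength.

```python
import math

def interior_cluster(n, x_c=0.5, beta=4.0):
    """n grid points on [0, 1] clustered around the interior point x_c."""
    a = math.sinh(beta * x_c)        # scales the left portion
    b = math.sinh(beta * (1 - x_c))  # scales the right portion
    pts = []
    for j in range(n):
        u = j / (n - 1) - x_c        # uniform coordinate shifted to x_c
        if u < 0:
            pts.append(x_c + x_c * math.sinh(beta * u) / a)
        else:
            pts.append(x_c + (1 - x_c) * math.sinh(beta * u) / b)
    return pts

grid = interior_cluster(11)
print(grid)  # monotone, finest spacing near x_c = 0.5
```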

  10. Efficient Control Law Simulation for Multiple Mobile Robots

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Driessen, B.J.; Feddema, J.T.; Kotulski, J.D.

    1998-10-06

    In this paper we consider the problem of simulating simple control laws involving large numbers of mobile robots. Such simulation can be computationally prohibitive if the number of robots is large enough, say 1 million, due to the O(N²) cost of each time step. This work therefore uses hierarchical tree-based methods for calculating the control law. These tree-based approaches have O(N log N) cost per time step, thus allowing for efficient simulation involving a large number of robots. For concreteness, a decentralized control law which involves only the distance and bearing to the closest neighbor robot will be considered. The time to calculate the control law for each robot at each time step is demonstrated to be O(log N).
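
    The O(log N) per-robot query is the behaviour of any balanced spatial tree. A minimal sketch with a 2-D k-d tree (pure Python and sequential; this does not reproduce the paper's hierarchical method or its parallel details): each robot would query the tree for its closest neighbour instead of scanning all N robots.

```python
import math

def build_kdtree(points, depth=0):
    """Recursively build a 2-D k-d tree as nested (point, left, right) tuples."""
    if not points:
        return None
    axis = depth % 2
    pts = sorted(points, key=lambda p: p[axis])
    mid = len(pts) // 2
    return (pts[mid],
            build_kdtree(pts[:mid], depth + 1),
            build_kdtree(pts[mid + 1:], depth + 1))

def nearest(tree, q, depth=0, best=None):
    """Return (distance, point) of the tree point closest to q."""
    if tree is None:
        return best
    point, left, right = tree
    d = math.dist(point, q)
    if best is None or d < best[0]:
        best = (d, point)
    axis = depth % 2
    diff = q[axis] - point[axis]
    near, far = (left, right) if diff < 0 else (right, left)
    best = nearest(near, q, depth + 1, best)
    if abs(diff) < best[0]:          # search sphere crosses the split plane
        best = nearest(far, q, depth + 1, best)
    return best

robots = [(0, 0), (1, 1), (3, 0), (0, 3), (5, 5)]
tree = build_kdtree(robots)
print(nearest(tree, (1.2, 1.2)))  # distance and position of closest robot
```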

  11. Understanding and comparisons of different sampling approaches for the Fourier Amplitudes Sensitivity Test (FAST)

    PubMed Central

    Xu, Chonggang; Gertner, George

    2013-01-01

    Fourier Amplitude Sensitivity Test (FAST) is one of the most popular uncertainty and sensitivity analysis techniques. It uses a periodic sampling approach and a Fourier transformation to decompose the variance of a model output into partial variances contributed by different model parameters. Until now, FAST analysis has mainly been confined to estimating the partial variances contributed by the main effects of model parameters, and has not allowed for those contributed by specific interactions among parameters. In this paper, we theoretically show that FAST analysis can be used to estimate partial variances contributed by both main effects and interaction effects of model parameters using different sampling approaches (i.e., traditional search-curve based sampling, simple random sampling and random balance design sampling). We also analytically calculate the potential errors and biases in the estimation of partial variances. Hypothesis tests are constructed to reduce the effect of sampling errors on the estimation of partial variances. Our results show that compared to simple random sampling and random balance design sampling, sensitivity indices (ratios of partial variances to the variance of a specific model output) estimated by search-curve based sampling generally have higher precision but larger underestimations. Compared to simple random sampling, random balance design sampling generally provides higher estimation precision for partial variances contributed by the main effects of parameters. The theoretical derivation of partial variances contributed by higher-order interactions and the calculation of their corresponding estimation errors in different sampling schemes can help us better understand the FAST method and provide a fundamental basis for FAST applications and further improvements. PMID:24143037
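
    The search-curve sampling at the heart of classical FAST fits in a short script. A minimal main-effect estimator sketch (pure Python; the frequency set {11, 21}, sample count and harmonic count are illustrative choices, and parameters are taken uniform on [0, 1] rather than the general case treated in the paper):

```python
import math

def fast_main_effects(model, omegas, n_samples=257, n_harmonics=4):
    """Estimate first-order sensitivity indices with search-curve FAST:
    drive parameter i along x_i(s) = 0.5 + arcsin(sin(omega_i*s))/pi,
    then read its partial variance off the harmonics of omega_i."""
    s = [2.0 * math.pi * k / n_samples for k in range(n_samples)]
    xs = [[0.5 + math.asin(math.sin(w * sk)) / math.pi for w in omegas]
          for sk in s]
    y = [model(x) for x in xs]
    mean = sum(y) / n_samples
    var = sum((v - mean) ** 2 for v in y) / n_samples

    def spectral_power(freq):
        a = 2.0 / n_samples * sum(v * math.cos(freq * sk)
                                  for v, sk in zip(y, s))
        b = 2.0 / n_samples * sum(v * math.sin(freq * sk)
                                  for v, sk in zip(y, s))
        return (a * a + b * b) / 2.0

    return [sum(spectral_power(p * w) for p in range(1, n_harmonics + 1)) / var
            for w in omegas]

# y = x1 + 2*x2 with x_i ~ U(0, 1): exact main-effect indices are 0.2 and 0.8.
print(fast_main_effects(lambda x: x[0] + 2.0 * x[1], [11, 21]))
```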

  12. Understanding and comparisons of different sampling approaches for the Fourier Amplitudes Sensitivity Test (FAST).

    PubMed

    Xu, Chonggang; Gertner, George

    2011-01-01

    Fourier Amplitude Sensitivity Test (FAST) is one of the most popular uncertainty and sensitivity analysis techniques. It uses a periodic sampling approach and a Fourier transformation to decompose the variance of a model output into partial variances contributed by different model parameters. Until now, FAST analysis has mainly been confined to estimating the partial variances contributed by the main effects of model parameters, and has not allowed for those contributed by specific interactions among parameters. In this paper, we theoretically show that FAST analysis can be used to estimate partial variances contributed by both main effects and interaction effects of model parameters using different sampling approaches (i.e., traditional search-curve based sampling, simple random sampling and random balance design sampling). We also analytically calculate the potential errors and biases in the estimation of partial variances. Hypothesis tests are constructed to reduce the effect of sampling errors on the estimation of partial variances. Our results show that compared to simple random sampling and random balance design sampling, sensitivity indices (ratios of partial variances to the variance of a specific model output) estimated by search-curve based sampling generally have higher precision but larger underestimations. Compared to simple random sampling, random balance design sampling generally provides higher estimation precision for partial variances contributed by the main effects of parameters. The theoretical derivation of partial variances contributed by higher-order interactions and the calculation of their corresponding estimation errors in different sampling schemes can help us better understand the FAST method and provide a fundamental basis for FAST applications and further improvements.

  13. Feasibility study on the verification of actual beam delivery in a treatment room using EPID transit dosimetry.

    PubMed

    Baek, Tae Seong; Chung, Eun Ji; Son, Jaeman; Yoon, Myonggeun

    2014-12-04

    The aim of this study is to evaluate the ability of transit dosimetry, using a commercial treatment planning system (TPS) and an electronic portal imaging device (EPID) with a simple calibration method, to verify beam delivery by detecting large errors in the treatment room. Twenty-four fields of intensity modulated radiotherapy (IMRT) plans were selected from four lung cancer patients and used in the irradiation of an anthropomorphic phantom. The proposed method was evaluated by comparing the calculated dose map from the TPS and the EPID measurement on the same plane using a gamma index method with a 3% dose and 3 mm distance-to-agreement tolerance limit. In a simulation using a homogeneous plastic water phantom, performed to verify the effectiveness of the proposed method, the average passing rate of the transit dose based on the gamma index was high, averaging 94.2% when there was no error during beam delivery. The passing rate of the transit dose for the 24 IMRT fields was lower with the anthropomorphic phantom, averaging 86.8% ± 3.8%, a reduction partially due to the inaccuracy of TPS calculations for inhomogeneity. Compared with the TPS, the absolute value of the transit dose at the beam center differed by -0.38% ± 2.1%. The simulation study indicated that the passing rate of the gamma index was significantly reduced, to less than 40%, when a wrong field was erroneously irradiated to the patient in the treatment room. This feasibility study suggests that transit dosimetry based on calculation with a commercial TPS and EPID measurement with simple calibration can provide information about large errors in treatment beam delivery.
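
    The comparison metric itself is easy to sketch. A minimal 1-D global gamma index with the 3%/3 mm criterion (real EPID analysis is 2-D with sub-pixel interpolation; this only illustrates the definition, and the function names are ours):

```python
import math

def gamma_index_1d(ref, meas, spacing_mm, dose_tol=0.03, dist_mm=3.0):
    """Global 1-D gamma: for each reference point, the minimum combined
    dose-difference / distance-to-agreement metric over measured points.
    dose_tol is a fraction of the maximum reference dose."""
    d_max = max(ref)
    gammas = []
    for i, dr in enumerate(ref):
        best = float("inf")
        for j, dm in enumerate(meas):
            dd = (dm - dr) / (dose_tol * d_max)     # dose axis, in tolerances
            dx = (j - i) * spacing_mm / dist_mm     # space axis, in tolerances
            best = min(best, math.hypot(dd, dx))
        gammas.append(best)
    return gammas

def passing_rate(gammas):
    """Percentage of points with gamma <= 1 (the usual pass criterion)."""
    return 100.0 * sum(g <= 1.0 for g in gammas) / len(gammas)

ref = [1.0, 2.0, 3.0, 2.0, 1.0]
print(passing_rate(gamma_index_1d(ref, ref, spacing_mm=1.0)))  # 100.0
```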

  14. Numerical study of centrifugal compressor stage vaneless diffusers

    NASA Astrophysics Data System (ADS)

    Galerkin, Y.; Soldatova, K.; Solovieva, O.

    2015-08-01

    The authors analyzed CFD calculations of flow in vaneless diffusers with relative widths in the range from 0.014 to 0.100, at inlet flow angles in the range from 10° to 45°, with different inlet velocity coefficients, Reynolds numbers and surface roughness. The aim is to simulate the calculated performances by simple algebraic equations. A friction coefficient that represents head losses as friction losses is proposed for the simulation. The friction coefficient and the loss coefficient are directly connected by a simple equation. The advantage is that the friction coefficient changes comparatively little over the range of studied parameters. Simple equations for this coefficient are proposed by the authors. The simulation accuracy is sufficient for practical calculations. To create a complete algebraic model of the vaneless diffuser, the authors plan to extend this modeling method to diffusers with different relative lengths and to a wider range of Reynolds numbers.

  15. Summary of methods for calculating dynamic lateral stability and response and for estimating aerodynamic stability derivatives

    NASA Technical Reports Server (NTRS)

    Campbell, John P; Mckinney, Marion O

    1952-01-01

    A summary of methods for making dynamic lateral stability and response calculations and for estimating the aerodynamic stability derivatives required for use in these calculations is presented. The processes of performing calculations of the time histories of lateral motions, of the period and damping of these motions, and of the lateral stability boundaries are presented as a series of simple straightforward steps. Existing methods for estimating the stability derivatives are summarized and, in some cases, simple new empirical formulas are presented. Detailed estimation methods are presented for low-subsonic-speed conditions but only a brief discussion and a list of references are given for transonic and supersonic speed conditions.

  16. Simulation of upwind maneuvering of a sailing yacht

    NASA Astrophysics Data System (ADS)

    Harris, Daniel Hartrick

    A time domain maneuvering simulation of an IACC class yacht suitable for the analysis of unsteady upwind sailing including tacking is presented. The simulation considers motions in six degrees of freedom. The hydrodynamic and aerodynamic loads are calculated primarily with unsteady potential theory supplemented by empirical viscous models. The hydrodynamic model includes the effects of incident waves. Control of the rudder is provided by a simple rate feedback autopilot which is augmented with open loop additions to mimic human steering. The hydrodynamic models are based on the superposition of force components. These components fall into two groups, those which the yacht will experience in calm water, and those due to incident waves. The calm water loads are further divided into zero Froude number, or "double body" maneuvering loads, hydrostatic loads, gravitational loads, free surface radiation loads, and viscous/residual loads. The maneuvering loads are calculated with an unsteady panel code which treats the instantaneous geometry of the yacht below the undisturbed free surface. The free surface radiation loads are calculated via convolution of impulse response functions derived from seakeeping strip theory. The viscous/residual loads are based upon empirical estimates. The aerodynamic model consists primarily of a database of steady state sail coefficients. These coefficients treat the individual contributions to the total sail force of a number of chordwise strips on both the main and jib. Dynamic effects are modeled by using the instantaneous incident wind velocity and direction as the independent variables for the sail load contribution of each strip. The sail coefficient database was calculated numerically with potential methods and simple empirical viscous corrections. Additional aerodynamic load calculations are made to determine the parasitic contributions of the rig and hull. 
Validation studies compare the steady sailing hydro and aerodynamic loads, seaway induced motions, added resistance in waves, and tacking performance with trials data and other sources. Reasonable agreement is found in all cases.

  17. Mutual Information Rate and Bounds for It

    PubMed Central

    Baptista, Murilo S.; Rubinger, Rero M.; Viana, Emilson R.; Sartorelli, José C.; Parlitz, Ulrich; Grebogi, Celso

    2012-01-01

    The amount of information exchanged per unit of time between two nodes in a dynamical network or between two data sets is a powerful concept for analysing complex systems. This quantity, known as the mutual information rate (MIR), is calculated from the mutual information, which is rigorously defined only for random systems. Moreover, the definition of mutual information is based on probabilities of significant events. This work offers a simple alternative way to calculate the MIR in dynamical (deterministic) networks or between two time series (not fully deterministic), and to calculate its upper and lower bounds without having to calculate probabilities, but rather in terms of well-known and well-defined quantities in dynamical systems. As possible applications of our bounds, we study the relationship between synchronisation and the exchange of information in a system of two coupled maps and in experimental networks of coupled oscillators. PMID:23112809

  18. Electronic, elastic and optical properties of divalent (R+2X) and trivalent (R+3X) rare earth monochalcogenides

    NASA Astrophysics Data System (ADS)

    Kumar, V.; Chandra, S.; Singh, J. K.

    2017-08-01

    Based on the plasma oscillation theory of solids, simple relations have been proposed for the calculation of the bond length, specific gravity, homopolar energy gap, heteropolar energy gap, average energy gap, crystal ionicity, bulk modulus, electronic polarizability and dielectric constant of divalent R+2X and trivalent R+3X rare earth monochalcogenides. The specific gravity of nine R+2X and twenty R+3X monochalcogenides, and the bulk modulus of twenty R+3X monochalcogenides, have been calculated for the first time. The calculated values of all parameters are compared with the available experimental and reported values, and a fairly good agreement is obtained between them. The average percentage deviations of the two parameters for which experimental data are known, the bulk modulus and the electronic polarizability, have also been calculated and found to be better than those of earlier correlations.

  19. Crystal structure optimisation using an auxiliary equation of state

    NASA Astrophysics Data System (ADS)

    Jackson, Adam J.; Skelton, Jonathan M.; Hendon, Christopher H.; Butler, Keith T.; Walsh, Aron

    2015-11-01

    Standard procedures for local crystal-structure optimisation involve numerous energy and force calculations. It is common to calculate an energy-volume curve, fitting an equation of state around the equilibrium cell volume. This is a computationally intensive process, in particular, for low-symmetry crystal structures where each isochoric optimisation involves energy minimisation over many degrees of freedom. Such procedures can be prohibitive for non-local exchange-correlation functionals or other "beyond" density functional theory electronic structure techniques, particularly where analytical gradients are not available. We present a simple approach for efficient optimisation of crystal structures based on a known equation of state. The equilibrium volume can be predicted from one single-point calculation and refined with successive calculations if required. The approach is validated for PbS, PbTe, ZnS, and ZnTe using nine density functionals and applied to the quaternary semiconductor Cu2ZnSnS4 and the magnetic metal-organic framework HKUST-1.
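
    The simplest instance of the idea can be sketched with a harmonic (quadratic) auxiliary equation of state: three single-point energies fix a parabola whose vertex predicts the equilibrium volume. (The paper uses proper equations of state and as few as one calculation; this quadratic three-point version is only illustrative.)

```python
def equilibrium_volume(v, e):
    """Vertex of the parabola through three (volume, energy) points.
    Fits E(V) = a*V^2 + b*V + c via Lagrange coefficients; the predicted
    equilibrium volume is the vertex V0 = -b / (2a)."""
    (v1, v2, v3), (e1, e2, e3) = v, e
    denom = (v1 - v2) * (v1 - v3) * (v2 - v3)
    a = (v3 * (e2 - e1) + v2 * (e1 - e3) + v1 * (e3 - e2)) / denom
    b = (v3 ** 2 * (e1 - e2) + v2 ** 2 * (e3 - e1) + v1 ** 2 * (e2 - e3)) / denom
    return -b / (2 * a)

# Synthetic energy surface E = (V - 10)^2 + 5: the minimum sits at V0 = 10.
print(equilibrium_volume((8.0, 10.5, 12.0), (9.0, 5.25, 9.0)))  # 10.0
```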

  20. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hoffman, D. Mark

    Here, three polymers are routinely used as binders for plastic bonded explosives by Lawrence Livermore National Laboratory: FK-800, Viton A 100, and Oxy 461. Attenuated total reflectance Fourier transform infrared measurements were performed on 10 different lots of FK-800, 5 different lots of Oxy 461, and 3 different lots of Viton A-100, one sample of Viton VTR 5883 and 2 Fluorel polymers of hexafluoropropene and vinylidene fluoride. The characteristic IR bands were measured. If possible, their vibrational modes were assigned based on literature data. Simple Mopac calculations were used to validate these vibrational mode assignments. Somewhat more sophisticated calculations were run using Gaussian on the same structures.

  1. Predictions from a flavour GUT model combined with a SUSY breaking sector

    NASA Astrophysics Data System (ADS)

    Antusch, Stefan; Hohl, Christian

    2017-10-01

    We discuss how flavour GUT models in the context of supergravity can be completed with a simple SUSY breaking sector, such that the flavour-dependent (non-universal) soft breaking terms can be calculated. As an example, we discuss a model based on an SU(5) GUT symmetry and an A4 family symmetry, plus additional discrete "shaping symmetries" and a ℤ4R symmetry. We calculate the soft terms and identify the relevant high scale input parameters, and investigate the resulting predictions for the low scale observables, such as flavour violating processes, the sparticle spectrum and the dark matter relic density.

  2. Complementarity and Young's interference fringes from two atoms

    NASA Astrophysics Data System (ADS)

    Itano, W. M.; Bergquist, J. C.; Bollinger, J. J.; Wineland, D. J.; Eichmann, U.; Raizen, M. G.

    1998-06-01

    The interference pattern of the resonance fluorescence from a J=1/2 to J=1/2 transition of two identical atoms confined in a three-dimensional harmonic potential is calculated. The thermal motion of the atoms is included. Agreement is obtained with experiments [U. Eichmann et al., Phys. Rev. Lett. 70, 2359 (1993)]. Contrary to some theoretical predictions, but in agreement with the present calculations, a fringe visibility greater than 50% can be observed with polarization-selective detection. The dependence of the fringe visibility on polarization has a simple interpretation, based on whether or not it is possible in principle to determine which atom emitted the photon.

  3. Instilling exploitable INHIBIT logic gate response for F-/H+ in 'end-off' anthracene-diamine hybrid by simple functional group manipulation: Experimental study aided by DFT calculations

    NASA Astrophysics Data System (ADS)

    Bhattacharyya, Arghyadeep; Makhal, Subhash Chandra; Ganguly, Aniruddha; Guchhait, Nikhil

    2018-03-01

    Two anthracene-based receptors, ADAMN and ANOPD, were synthesized and characterized. The response of both towards the F- ion has been monitored by UV-Vis and 1H NMR spectroscopy as well as by naked-eye color change. Interestingly, the change in acceptor unit endows ADAMN with the ability to behave as an INHIBIT logic gate with F- and H+ as inputs, whereas ANOPD remains totally silent towards F-. The reason for this differential behavior has been explored by DFT calculations. The practical utility of the logic gate response of ADAMN was demonstrated by a successful paper strip experiment.
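
    The sensing behaviour maps onto two-input Boolean logic. A sketch of the INHIBIT truth table (the assignment of F- and H+ to the inputs follows the abstract; modelling the optical response as a single Boolean output is our simplification):

```python
def inhibit_gate(input_a, input_b):
    """INHIBIT logic: output is ON only when A is present AND B is absent.
    Here A = F- (fluoride) and B = H+ (acid): for the ADAMN receptor,
    added acid 'inhibits' the fluoride-induced response."""
    return input_a and not input_b

for f in (False, True):
    for h in (False, True):
        print(f"F- present: {f!s:5}  H+ present: {h!s:5}  "
              f"-> output: {inhibit_gate(f, h)}")
# Only the (F- present, H+ absent) row gives output True.
```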

  4. A new statistical method for transfer coefficient calculations in the framework of the general multiple-compartment model of transport for radionuclides in biological systems.

    PubMed

    Garcia, F; Arruda-Neto, J D; Manso, M V; Helene, O M; Vanin, V R; Rodriguez, O; Mesa, J; Likhachev, V P; Filho, J W; Deppman, A; Perez, G; Guzman, F; de Camargo, S P

    1999-10-01

    A new and simple statistical procedure (STATFLUX) for the calculation of transfer coefficients of radionuclide transport to animals and plants is proposed. The method is based on the general multiple-compartment model, which uses a system of linear equations involving geometrical volume considerations. Using experimentally available curves of radionuclide concentration versus time for each animal compartment (organ), flow parameters were estimated by a least-squares procedure whose consistency was tested. Some numerical results are presented in order to compare the STATFLUX transfer coefficients with those from other works and with experimental data.
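
    For the simplest special case, a single compartment emptying at rate k, the least-squares estimation reduces to a log-linear fit of the concentration curve. A sketch (STATFLUX itself treats coupled multi-compartment systems; function and variable names are ours):

```python
import math

def transfer_coefficient(times, concentrations):
    """Fit C(t) = C0 * exp(-k*t) by least squares on log C; returns (k, C0)."""
    logs = [math.log(c) for c in concentrations]
    n = len(times)
    mt, ml = sum(times) / n, sum(logs) / n
    k = -(sum((t - mt) * (l - ml) for t, l in zip(times, logs))
          / sum((t - mt) ** 2 for t in times))
    c0 = math.exp(ml + k * mt)
    return k, c0

# Synthetic concentration curve with k = 0.5 per day and C0 = 10 (units arbitrary).
ts = [0.0, 1.0, 2.0, 3.0, 4.0]
cs = [10.0 * math.exp(-0.5 * t) for t in ts]
k_est, c0_est = transfer_coefficient(ts, cs)
print(k_est, c0_est)  # recovers 0.5 and 10.0 up to rounding
```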

  5. Calculation of two dimensional vortex/surface interference using panel methods

    NASA Technical Reports Server (NTRS)

    Maskew, B.

    1980-01-01

    The application of panel methods to the calculation of vortex/surface interference characteristics in two dimensional flow was studied over a range of situations starting with the simple case of a vortex above a plane and proceeding to the case of vortex separation from a prescribed point on a thick section. Low order and high order panel methods were examined, but the main factor influencing the accuracy of the solution was the distance between control stations in relation to the height of the vortex above the surface. Improvements over the basic solutions were demonstrated using a technique based on subpanels and an applied doublet distribution.

  6. Collisional Shift and Broadening of Iodine Spectral Lines in Air Near 543 nm

    NASA Technical Reports Server (NTRS)

    Fletcher, D. G.; McDaniel, J. C.

    1995-01-01

    The collisional processes that influence the absorption of monochromatic light by iodine in air have been investigated. Measurements were made in both a static cell and an underexpanded jet flow over the range of properties encountered in typical compressible-flow aerodynamic applications. Experimentally measured values of the collisional shift and broadening coefficients were 0.058 +/- 0.004 and 0.53 +/- 0.010 GHz K(exp 0.7)/torr, respectively. The measured shift value showed reasonable agreement with theoretical calculations based on Lindholm-Foley collisional theory for a simple dispersive potential. The measured collisional broadening showed less favorable agreement with the calculated value.

  7. Simple versus composite indicators of socioeconomic status in resource allocation formulae: the case of the district resource allocation formula in Malawi

    PubMed Central

    2010-01-01

    Background The district resource allocation formula in Malawi was recently reviewed to include stunting as a proxy measure of socioeconomic status. In many countries where the concept of need has been incorporated in resource allocation, composite indicators of socioeconomic status have been used. In the Malawi case, it is important to ascertain whether there are differences between using single-variable or composite indicators of socioeconomic status in allocations made to districts, holding all other factors in the resource allocation formula constant. Methods Principal components analysis was used to calculate asset indices for all districts from variables that capture living standards, using data from the Malawi Multiple Indicator Cluster Survey 2006. These were normalized and used to weight district populations. District proportions of national population weighted by both the simple and composite indicators were then calculated for all districts and compared. District allocations were also calculated using the two approaches and compared. Results The two types of indicators are highly correlated, with a Spearman rank correlation coefficient of 0.97 at the 1% level of significance. For 21 out of the 26 districts included in the study, proportions of national population weighted by the simple indicator are higher by an average of 0.6 percentage points. For the remaining 5 districts, district proportions of national population weighted by the composite indicator are higher by an average of 2 percentage points. Though the average percentage-point differences are low and the actual allocations using both approaches highly correlated (ρ of 0.96), differences in actual allocations exceed 10% for 8 districts and average 4.2% for the remaining 17. For 21 districts, allocations based on the single-variable indicator are higher. Conclusions Variations in district allocations made using either the simple or composite indicators of socioeconomic status are not statistically different enough to recommend one over the other. However, the single-variable indicator is favourable for its ease of computation. PMID:20053274
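    The weighting step can be sketched with made-up data (not the MICS 2006 survey; the exact weighting form is assumed): a first-principal-component asset index per district is rescaled to [0, 1] and used to weight district populations before computing each district's share of the national allocation.

```python
import numpy as np

# Hypothetical data: 26 districts, 5 asset variables, random populations.
rng = np.random.default_rng(0)
assets = rng.normal(size=(26, 5))
pop = rng.integers(100_000, 1_000_000, size=26)

X = (assets - assets.mean(axis=0)) / assets.std(axis=0)  # standardize variables
_, _, Vt = np.linalg.svd(X, full_matrices=False)
index = X @ Vt[0]                                        # first principal component
w = (index - index.min()) / (index.max() - index.min())  # normalize to [0, 1]

weighted_pop = pop * (1.0 + w)            # assumed form of the up-weighting
share = weighted_pop / weighted_pop.sum() # district share of national allocation
print(round(float(share.sum()), 6))       # shares sum to 1
```

A single-variable indicator such as stunting prevalence would replace `w` directly, which is the comparison the study makes.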

  8. Modelling the complete operation of a free-piston shock tunnel for a low enthalpy condition

    NASA Astrophysics Data System (ADS)

    McGilvray, M.; Dann, A. G.; Jacobs, P. A.

    2013-07-01

    Only a limited number of free-stream flow properties can be measured at the nozzle exit of hypersonic impulse facilities. This poses challenges for experimenters when subsequently analysing experimental data obtained from these facilities. Typically, in a reflected shock tunnel, a simple analysis requiring only modest computational resources is used to calculate quasi-steady gas properties. This simple analysis feeds the initial fill conditions and experimental measurements into analytical calculations of each major flow process, using forward coupling with minor corrections for processes that are not directly modelled. However, this simplistic approach leads to an unknown level of discrepancy from the true flow properties. To assess the accuracy of the simple modelling technique, this paper details the use of transient one- and two-dimensional numerical simulations of a complete facility to obtain more refined free-stream flow properties for a free-piston reflected shock tunnel operating at low-enthalpy conditions. These calculations were verified by comparison with experimental data obtained from the facility. For the condition and facility investigated, the test conditions at the nozzle exit produced with the simple modelling technique agree with the time- and space-averaged results of the complete facility calculations to within the accuracy of the experimental measurements.
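    One hedged example of the kind of analytical building block such a quasi-steady analysis chains together (our illustration, not the authors' code) is the perfect-gas normal-shock static-pressure ratio, p2/p1 = 1 + 2γ/(γ+1)·(M1² − 1).

```python
# Normal-shock static-pressure ratio for a calorically perfect gas.
def shock_pressure_ratio(M1, gamma=1.4):
    """p2/p1 across a normal shock at upstream Mach number M1."""
    return 1.0 + 2.0 * gamma / (gamma + 1.0) * (M1 * M1 - 1.0)

print(round(shock_pressure_ratio(5.0), 6))   # 29.0 for air at Mach 5
```

The simple facility analysis applies relations of this type process by process (shock tube, reflection, nozzle expansion), which is exactly where forward-coupling errors can accumulate.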

  9. Ab-initio calculation of EuO doped with 5% of (Ti, V, Cr and Fe): GGA and SIC approximation

    NASA Astrophysics Data System (ADS)

    Rouchdi, M.; Salmani, E.; Bekkioui, N.; Ez-Zahraouy, H.; Hassanain, N.; Benyoussef, A.; Mzerd, A.

    2017-12-01

    In this research, a simple theoretical method is proposed to investigate the electronic, magnetic and optical properties of europium oxide (EuO) doped with 5% of (Ti, V, Cr and Fe). For a basic understanding of these properties, we employed calculations based on Density Functional Theory (DFT) with the Korringa-Kohn-Rostoker (KKR) code combined with the Coherent Potential Approximation (CPA). We also investigated the half-metallic ferromagnetic behavior of EuO doped with 5% of (Ti, V, Cr and Fe) within the self-interaction-corrected Generalized Gradient Approximation (GGA-SIC). Our calculated results reveal that Eu0.95TM0.05O is ferromagnetic with a high transition temperature, and the optical absorption spectra also support the predicted half-metallicity.

  10. A fast, low resistance switch for small slapper detonators

    NASA Astrophysics Data System (ADS)

    Richardson, D. D.; Jones, D. A.

    1986-10-01

    A novel design for a shock compression conduction switch for use with slapper detonators is described. The switch is based on the concept of an explosively driven flyer plate impacting a plastic insulator and producing sufficient pressure within the insulator to produce a conduction transition. An analysis of the functioning of the switch is made using a simple Gurney model for the explosive, and basic shock wave theory to calculate impact pressure and switch closure times. The effect of explosive tamping is considered, and calculations are carried out for two donor explosive thicknesses and a range of flyer plate thicknesses. The new switch has been successfully tested in a series of experimental slapper detonator firings. The results of these tests show trends in overall agreement with those predicted by the calculations.
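    The Gurney estimate mentioned above can be sketched as follows (our illustration of the type of calculation described, not the authors' code; the Gurney velocity value is assumed). For an open-faced-sandwich configuration, v = √(2E)·[((1 + 2M/C)³ + 1)/(6(1 + M/C)) + M/C]^(−1/2), with M/C the flyer-to-explosive mass ratio.

```python
import math

# Open-faced-sandwich Gurney formula for flyer-plate velocity.
def gurney_open_faced(sqrt_2E, m_over_c):
    """Flyer velocity for metal mass M driven by explosive mass C."""
    r = m_over_c
    term = ((1.0 + 2.0 * r) ** 3 + 1.0) / (6.0 * (1.0 + r)) + r
    return sqrt_2E / math.sqrt(term)

v = gurney_open_faced(2700.0, 1.0)   # sqrt(2E) ~ 2.7 km/s, a typical assumed value
print(round(v, 1))                   # flyer velocity in m/s
```

Varying `m_over_c` reproduces the kind of flyer-thickness sweep the paper reports; tamping changes the applicable Gurney configuration.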

  11. Rocket exhaust ground cloud/atmospheric interactions

    NASA Technical Reports Server (NTRS)

    Hwang, B.; Gould, R. K.

    1978-01-01

    An attempt to identify and minimize the uncertainties and potential inaccuracies of the NASA Multilayer Diffusion Model (MDM) is performed using data from selected Titan 3 launches. The study is based on detailed parametric calculations using the MDM code and a comparative study of several other diffusion models, the NASA measurements, and the MDM. The results are discussed and evaluated. In addition, the physical and chemical processes taking place during the rocket cloud rise are analyzed, and the exhaust properties and deluge water effects are evaluated. A time-dependent model for two-aerosol coagulation is developed and documented, and calculations of dry deposition during cloud rise are made with it. A simple model for calculating physical properties such as temperature and air mass entrainment during cloud rise is also developed and incorporated with the aerosol model.

  12. Implementing a GPU-based numerical algorithm for modelling dynamics of a high-speed train

    NASA Astrophysics Data System (ADS)

    Sytov, E. S.; Bratus, A. S.; Yurchenko, D.

    2018-04-01

    This paper discusses the initiative of implementing a GPU-based numerical algorithm for studying various phenomena associated with the dynamics of high-speed railway transport. The proposed numerical algorithm for calculating the critical speed of the bogie is based on the first Lyapunov number. The numerical algorithm is validated against analytical results derived for a simple model. A dynamic model of a carriage connected to a new dual-wheelset flexible bogie is studied for linear and dry-friction damping. Numerical results obtained by the CPU, MPU and GPU approaches are compared, and the appropriateness of these methods is discussed.

  13. Coupled electromagnetic-thermodynamic simulations of microwave heating problems using the FDTD algorithm.

    PubMed

    Kopyt, Paweł; Celuch, Małgorzata

    2007-01-01

    A practical implementation of a hybrid simulation system capable of modeling coupled electromagnetic-thermodynamic problems typical in microwave heating is described. The paper presents two approaches to modeling such problems. Both are based on an FDTD-based commercial electromagnetic solver coupled to an external thermodynamic analysis tool required for calculations of heat diffusion. The first approach utilizes a simple FDTD-based thermal solver while in the second it is replaced by a universal commercial CFD solver. The accuracy of the two modeling systems is verified against the original experimental data as well as the measurement results available in literature.
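    The coupling idea can be sketched in one dimension (our sketch, not either commercial tool; all numbers assumed): an explicit finite-difference update of the heat equation whose source term Q(x) is the microwave power deposition handed over by the electromagnetic solver.

```python
import numpy as np

# 1D explicit heat diffusion with an EM power-deposition source (toy values).
nx, dx, dt = 101, 1e-3, 0.01            # grid spacing (m) and time step (s)
alpha = 1.4e-7                          # thermal diffusivity of water, m^2/s
T = np.full(nx, 20.0)                   # temperature field, deg C
Q = np.zeros(nx)
Q[40:60] = 0.5                          # heating rate from the EM solver, K/s

r = alpha * dt / dx**2                  # explicit scheme stable for r < 0.5
for _ in range(1000):                   # simulate 10 s of heating
    T[1:-1] += r * (T[2:] - 2.0 * T[1:-1] + T[:-2]) + dt * Q[1:-1]
print(round(float(T.max()), 2))
```

In the real coupled systems the EM solver also re-evaluates material properties as T changes, closing the loop between the two solvers.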

  14. Prediction of surface tension of HFD-like fluids using the Fowler’s approximation

    NASA Astrophysics Data System (ADS)

    Goharshadi, Elaheh K.; Abbaspour, Mohsen

    2006-09-01

    Fowler's expression for the reduced surface tension has been applied to simple fluids using Hartree-Fock Dispersion (HFD)-like potentials (HFD-like fluids) obtained from the inversion of viscosity collision integrals at zero pressure. To obtain the RDF values needed for the surface tension calculation, we performed MD simulations at different temperatures and densities, fitted the results with an analytical expression, and compared the resulting RDFs with experiment. Our results are in excellent agreement with experimental values when the vapor density is taken into account, especially at high temperatures. We have also calculated the surface tension using an RDF expression based on the Lennard-Jones (LJ) potential, which was in good agreement with the molecular dynamics simulations. In this work, we have shown that our results based on the HFD-like potential describe the temperature dependence of the surface tension better than the LJ potential does.
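    Fowler's expression, γ = (πρ²/8)·∫ g(r) u′(r) r⁴ dr, can be evaluated numerically; the sketch below uses a Lennard-Jones potential and a crude unit-step RDF (g = 0 below σ, 1 above) purely for illustration, whereas the paper uses HFD-like potentials and MD-fitted RDFs.

```python
import math
import numpy as np

# Reduced LJ units (assumed for illustration).
eps, sigma, rho = 1.0, 1.0, 0.8

r = np.linspace(sigma, 100.0, 200001)    # toy RDF: g = 1 on this range, 0 below sigma
du = 4.0 * eps * (-12.0 * sigma**12 / r**13 + 6.0 * sigma**6 / r**7)  # u'(r)
f = du * r**4
# Trapezoidal quadrature of Fowler's integral:
gamma = np.pi * rho**2 / 8.0 * np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(r))
print(round(float(gamma), 3))            # analytic value for this toy case: 0.48*pi
```

For this step-function RDF the integral is analytic (γ = 0.48π in reduced units), which makes a convenient check on the quadrature before substituting a realistic g(r).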

  15. Estimation of Critical Gap Based on Raff's Definition

    PubMed Central

    Guo, Rui-jun; Wang, Xiao-jing; Wang, Wan-xiang

    2014-01-01

    Critical gap is an important parameter used to calculate the capacity and delay of the minor road in gap acceptance theory of unsignalized intersections. At an unsignalized intersection with two one-way traffic flows, it is assumed that vehicle arrivals in the major stream and vehicle arrivals in the minor stream are independent events, and that the headways of the major stream follow an M3 distribution. Based on Raff's definition of the critical gap, two calculation models are derived, named the M3 definition model and the revised Raff model; both use the total rejected coefficient. The calculation models are compared by simulation and the new models are found to be valid. The conclusion reveals that the M3 definition model is simple and valid, while the revised Raff model strictly obeys Raff's definition of the critical gap, has a wider field of application than the original Raff model, and yields a more accurate result. The M3 definition model and the revised Raff model produce consistent results. PMID:25574160
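    Raff's original graphical definition can be sketched with made-up gap data (ours, not the paper's): the critical gap t_c is the time at which the share of accepted gaps shorter than t equals the share of rejected gaps longer than t.

```python
import numpy as np

# Hypothetical accepted/rejected gap samples, in seconds.
accepted = np.sort(np.array([4.1, 4.8, 5.2, 5.9, 6.3, 7.0, 7.8, 8.5]))
rejected = np.sort(np.array([1.2, 1.9, 2.4, 3.0, 3.6, 4.2, 4.9, 5.5]))

t = np.linspace(0.0, 10.0, 1001)
F_acc = np.searchsorted(accepted, t) / accepted.size        # P(accepted gap < t)
G_rej = 1.0 - np.searchsorted(rejected, t) / rejected.size  # P(rejected gap > t)
t_c = t[np.argmin(np.abs(F_acc - G_rej))]                   # crossing point
print(round(float(t_c), 2))
```

The revised Raff model described above refines this crossing-point construction; the empirical curves here are simple step functions of the two samples.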

  17. Evaluation of SimpleTreat 4.0: Simulations of pharmaceutical removal in wastewater treatment plant facilities.

    PubMed

    Lautz, L S; Struijs, J; Nolte, T M; Breure, A M; van der Grinten, E; van de Meent, D; van Zelm, R

    2017-02-01

    In this study, the removal of pharmaceuticals from wastewater as predicted by SimpleTreat 4.0 was evaluated, using field data from the literature on 43 pharmaceuticals measured in 51 different activated sludge WWTPs. Based on reported influent concentrations, effluent concentrations were calculated with SimpleTreat 4.0 and compared to measured effluent concentrations. The model predicts effluent concentrations mostly within a factor of 10, using either WWTP-specific parameters or SimpleTreat default parameters, while it systematically underestimates concentrations in secondary sludge. This may be caused by unexpected sorption resulting from variability in WWTP operating conditions, QSAR applicability-domain mismatch, and/or background concentrations present prior to measurement. Moreover, variability in detection techniques and sampling methods can cause uncertainty in measured concentration levels. To find possible structural improvements, we also evaluated SimpleTreat 4.0 using several specific datasets with different degrees of uncertainty and variability. This evaluation verified that the most influential parameters for water effluent predictions were biodegradation and the hydraulic retention time. Results showed that model performance is highly dependent on the nature and quality, i.e. degree of uncertainty, of the data. The default values for reactor settings in SimpleTreat result in realistic predictions.

  18. WebScope: A New Tool for Fusion Data Analysis and Visualization

    NASA Astrophysics Data System (ADS)

    Yang, Fei; Dang, Ningning; Xiao, Bingjia

    2010-04-01

    A visualization tool was developed as a web-browser application, based on Java applets embedded in HTML pages, in order to provide worldwide access to the EAST experimental data. It can display data from various trees on different servers in a single panel. With WebScope, it is easier to compare different data sources and to perform simple calculations over them.

  19. Genetic Algorithms and Their Application to the Protein Folding Problem

    DTIC Science & Technology

    1993-12-01

    and symbolic methods, random methods such as Monte Carlo simulation and simulated annealing, distance geometry, and molecular dynamics. Many of these...calculated energies with those obtained using the molecular simulation software package called CHARMm. ... Test both the simple and parallel simple genetic...homology-based, and simplification techniques. 3.21 Molecular Dynamics. Perhaps the most natural approach is to actually simulate the folding process. This

  20. Application of numerical simulation on optimum design of two-dimensional sedimentation tanks in the wastewater treatment plant.

    PubMed

    Zeng, Guang-Ming; Zhang, Shuo-Fu; Qin, Xiao-Sheng; Huang, Guo-He; Li, Jian-Bing

    2003-05-01

    The paper establishes the relationship between settling efficiency and the dimensions of the sedimentation tank through numerical simulation, and this relationship is taken as one of the constraints in a simple optimal design model for the tank. The feasibility and advantages of this model based on numerical calculation are verified through application to a practical case.

  1. CENTERA: A Centralized Trust-Based Efficient Routing Protocol with Authentication for Wireless Sensor Networks †

    PubMed Central

    Tajeddine, Ayman; Kayssi, Ayman; Chehab, Ali; Elhajj, Imad; Itani, Wassim

    2015-01-01

    In this paper, we present CENTERA, a CENtralized Trust-based Efficient Routing protocol with an appropriate authentication scheme for wireless sensor networks (WSN). CENTERA utilizes the more powerful base station (BS) to gather minimal neighbor trust information from nodes and calculate the best routes after isolating different types of “bad” nodes. By periodically accumulating these simple local observations and approximating the nodes' battery lives, the BS draws a global view of the network, calculates three quality metrics—maliciousness, cooperation, and compatibility—and evaluates the Data Trust and Forwarding Trust values of each node. Based on these metrics, the BS isolates “bad”, “misbehaving” or malicious nodes for a certain period, and put some nodes on probation. CENTERA increases the node's bad/probation level with repeated “bad” behavior, and decreases it otherwise. Then it uses a very efficient method to distribute the routing information to “good” nodes. Based on its target environment, and if required, CENTERA uses an authentication scheme suitable for severely constrained nodes, ranging from the symmetric RC5 for safe environments under close administration, to pairing-based cryptography (PBC) for hostile environments with a strong attacker model. We simulate CENTERA using TOSSIM and verify its correctness and show some energy calculations. PMID:25648712
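    A minimal reading of the bad/probation bookkeeping described above can be sketched as follows (our sketch with assumed thresholds, not the CENTERA implementation): repeated bad behaviour raises a node's level, good behaviour lowers it, and nodes at or above a threshold are isolated from routing.

```python
# Toy bad-level update; max level and isolation threshold are assumed values.
def update_level(level, misbehaved, max_level=5):
    return min(level + 1, max_level) if misbehaved else max(level - 1, 0)

ISOLATE_AT = 3
level, history = 0, []
for bad in [True, True, False, True, True, True]:
    level = update_level(level, bad)
    history.append((level, level >= ISOLATE_AT))  # (level, isolated?)
print(history)
```

The actual protocol folds these levels into the maliciousness/cooperation/compatibility metrics computed at the base station.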

  3. Upgrades to the REA method for producing probabilistic climate change projections

    NASA Astrophysics Data System (ADS)

    Xu, Ying; Gao, Xuejie; Giorgi, Filippo

    2010-05-01

    We present an augmented version of the Reliability Ensemble Averaging (REA) method designed to generate probabilistic climate change information from ensembles of climate model simulations. Compared to the original version, the augmented one includes multiple variables and statistics in the calculation of the performance-based weights, and the model convergence criterion previously employed is removed. The method is applied to the calculation of changes in mean and variability of temperature and precipitation over different sub-regions of East Asia, based on the recently completed CMIP3 multi-model ensemble. Comparison of the new and old REA methods, along with the simple averaging procedure, and the use of different combinations of performance metrics shows that at fine sub-regional scales the choice of weighting is relevant. This is mostly because the models show a substantial spread in performance in simulating precipitation statistics, a result that supports model weighting as a useful option for accounting for the wide range of model quality. The REA method, and in particular the upgraded one, provides a simple and flexible framework for assessing the uncertainty related to aggregating results from ensembles of models to produce climate change information at the regional scale. KEY WORDS: REA method, Climate change, CMIP3
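    The weighting idea can be sketched with hypothetical numbers (a hedged illustration in the REA spirit, not the full method, which combines several variables and statistics): each model's reliability factor shrinks with its bias against observations, and the ensemble change is the weighted mean.

```python
import numpy as np

# Hypothetical per-model biases and projected regional warming (K).
bias = np.array([0.2, 0.5, 1.0, 0.1, 0.8])   # |model - obs| for some metric
dT = np.array([2.1, 2.6, 3.4, 2.0, 2.9])

eps = 0.1                                    # natural-variability scale (assumed)
R = np.minimum(1.0, eps / bias)              # reliability weight per model
dT_rea = float(np.sum(R * dT) / np.sum(R))   # performance-weighted ensemble mean
print(round(dT_rea, 3), round(float(dT.mean()), 2))
```

Here the low-bias models dominate the weighted mean, pulling it below the simple average, which is the qualitative effect model weighting is meant to have.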

  4. Safe and simple detection of sparse hydrogen by Pd-Au alloy/air based 1D photonic crystal sensor

    NASA Astrophysics Data System (ADS)

    Mitra, S.; Biswas, T.; Chattopadhyay, R.; Ghosh, J.; Bysakh, S.; Bhadra, S. K.

    2016-11-01

    A simple integrated hydrogen sensor using a Pd-Au alloy/air based one-dimensional photonic crystal with an air defect layer is theoretically modeled. Structural parameters of the photonic crystal are delicately scaled to generate photonic band gap frequencies in the visible spectral regime. An optimized defect thickness permits a localized defect mode operating at a frequency within the photonic band gap region. Hydrogen absorption modifies the band gap characteristics through variation of the refractive index and lattice parameters of the alloy. As a result, the transmission peak arising from the resonant defect state is shifted, and this peak shift is utilized to detect sparse amounts of hydrogen present in the surrounding environment. A theoretical framework is built to calculate the refractive index profile of the hydrogen-loaded alloy using density functional theory and Bruggeman's effective medium approximation. The calculated refractive index variation of the Pd3Au alloy film due to hydrogen loading is verified experimentally by measuring the reflectance characteristics. Lattice expansion properties of the alloy are studied through X-ray diffraction analyses. The proposed structure shows about a 3 nm red shift of the transmission peak for a rise of 1% atomic hydrogen concentration in the alloy.

  5. Validation of Simple Quantification Methods for (18)F-FP-CIT PET Using Automatic Delineation of Volumes of Interest Based on Statistical Probabilistic Anatomical Mapping and Isocontour Margin Setting.

    PubMed

    Kim, Yong-Il; Im, Hyung-Jun; Paeng, Jin Chul; Lee, Jae Sung; Eo, Jae Seon; Kim, Dong Hyun; Kim, Euishin E; Kang, Keon Wook; Chung, June-Key; Lee, Dong Soo

    2012-12-01

    (18)F-FP-CIT positron emission tomography (PET) is an effective imaging method for dopamine transporters. In usual clinical practice, (18)F-FP-CIT PET is analyzed visually or quantified using manual delineation of a volume of interest (VOI) for the striatum. In this study, we suggested and validated two simple quantitative methods based on automatic VOI delineation using statistical probabilistic anatomical mapping (SPAM) and isocontour margin setting. Seventy-five (18)F-FP-CIT PET images acquired in routine clinical practice were used for this study. A study-specific image template was made and the subject images were normalized to the template. Afterwards, uptakes in the striatal regions and cerebellum were quantified using probabilistic VOIs based on SPAM, and a quantitative parameter, QSPAM, was calculated to simulate binding potential. Additionally, the functional volume of each striatal region and its uptake were measured in VOIs automatically delineated by isocontour margin setting, and an uptake-volume product (QUVP) was calculated for each striatal region. QSPAM and QUVP were compared with visual grading, and the influence of cerebral atrophy on the measurements was tested. Image analyses were successful in all cases. Both QSPAM and QUVP differed significantly according to visual grading (P < 0.001). The agreement of QUVP or QSPAM with visual grading was slight to fair for the caudate nucleus (κ = 0.421 and 0.291, respectively) and good to perfect for the putamen (κ = 0.663 and 0.607, respectively). QSPAM and QUVP also correlated significantly with each other (P < 0.001). Cerebral atrophy made a significant difference in the QSPAM and QUVP of caudate nucleus regions with decreased (18)F-FP-CIT uptake. Simple quantitative measurements of QSPAM and QUVP showed acceptable agreement with visual grading. Although QSPAM in some groups may be influenced by cerebral atrophy, these simple methods are expected to be effective in the quantitative analysis of (18)F-FP-CIT PET in usual clinical practice.

  6. Universal binding energy relations in metallic adhesion

    NASA Technical Reports Server (NTRS)

    Ferrante, J.; Smith, J. R.; Rose, J. J.

    1984-01-01

    Rose, Smith, and Ferrante have discovered scaling relations which map the adhesive binding energies calculated by Ferrante and Smith onto a single universal binding energy curve. These binding energies were calculated for all combinations of Al(111), Zn(0001), Mg(0001), and Na(110) in contact. The scaling involves normalizing the energy by the maximum binding energy and normalizing distances by a suitable combination of Thomas-Fermi screening lengths. Rose et al. have also found that the calculated cohesive energies of K, Ba, Cu, Mo, and Sm scale by similar simple relations, suggesting that the universal relation may be more general than the simple free-electron metals for which it was derived. In addition, the scaling length was defined more generally in order to relate it to measurable physical properties, and this universality can be extended to chemisorption. A simple and yet quite accurate prediction of a zero-temperature equation of state (volume as a function of pressure for metals and alloys) is presented. Thermal expansion coefficients and melting temperatures are predicted by simple analytic expressions, and the results compare favorably with experiment for a broad range of metals.
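    The scaled relation reported by Rose, Smith, and Ferrante has the simple universal form E*(a*) = −(1 + a*)·exp(−a*), where E* is the energy over the maximum binding energy and a* is the separation scaled by a screening length. Evaluating it shows the single curve the normalized data collapse onto:

```python
import numpy as np

# Universal binding-energy function: minimum E* = -1 at equilibrium a* = 0.
def e_star(a_star):
    return -(1.0 + a_star) * np.exp(-a_star)

a = np.linspace(-0.5, 5.0, 1101)
E = e_star(a)
print(round(float(E.min()), 3))       # minimum of the universal curve
```

Different metals differ only in the two scale factors (maximum binding energy and screening length) used to map their data onto this curve.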

  7. A simple, remote, video based breathing monitor.

    PubMed

    Regev, Nir; Wulich, Dov

    2017-07-01

    Breathing monitors have become the all-important cornerstone of a wide variety of commercial and personal safety applications, ranging from elderly care to baby monitoring. Many such monitors exist on the market, some with vital-sign monitoring capabilities, but none remote. This paper presents a simple yet efficient real-time method of extracting the subject's breathing sinus rhythm. Points of interest are detected on the subject's body, and the corresponding optical flow is estimated and tracked using the well-known Lucas-Kanade algorithm on a frame-by-frame basis. A generalized likelihood ratio test is then applied to each of the many interest points to detect which are moving in a harmonic fashion. Finally, a spectral estimation algorithm based on Pisarenko harmonic decomposition tracks the harmonic frequency in real time, and a fusion maximum-likelihood algorithm optimally estimates the breathing rate using all points considered. The results show a maximal error of 1 BPM between the true breathing rate and the rate calculated by the algorithm, based on experiments with two babies and three adults.

  8. Phase properties of elastic waves in systems constituted of adsorbed diatomic molecules on the (001) surface of a simple cubic crystal

    NASA Astrophysics Data System (ADS)

    Deymier, P. A.; Runge, K.

    2018-03-01

    A Green's function-based numerical method is developed to calculate the phase of scattered elastic waves in a harmonic model of diatomic molecules adsorbed on the (001) surface of a simple cubic crystal. The phase properties of scattered waves depend on the configuration of the molecules. The configurations of adsorbed molecules on the crystal surface such as parallel chain-like arrays coupled via kinks are used to demonstrate not only linear but also non-linear dependency of the phase on the number of kinks along the chains. Non-linear behavior arises for scattered waves with frequencies in the vicinity of a diatomic molecule resonance. In the non-linear regime, the variation in phase with the number of kinks is formulated mathematically as unitary matrix operations leading to an analogy between phase-based elastic unitary operations and quantum gates. The advantage of elastic based unitary operations is that they are easily realizable physically and measurable.

  9. Measuring the Accuracy of Simple Evolving Connectionist System with Varying Distance Formulas

    NASA Astrophysics Data System (ADS)

    Al-Khowarizmi; Sitompul, O. S.; Suherman; Nababan, E. B.

    2017-12-01

    Simple Evolving Connectionist System (SECoS) is a minimal implementation of Evolving Connectionist Systems (ECoS) in artificial neural networks. The three-layer network architecture of SECoS can be built from the given input. In this study, the activation value for the SECoS learning process, commonly calculated using the normalized Hamming distance, is also calculated using the normalized Manhattan distance and the normalized Euclidean distance in order to compare the smallest error value and the best learning rate obtained. The accuracy of the measurements produced by the three distance formulas is calculated using the mean absolute percentage error. In the training phase, with parameters such as sensitivity threshold, error threshold, and first and second learning rates, normalized Euclidean distance was found to be more accurate than both normalized Hamming distance and normalized Manhattan distance. In the case of beta fibrinogen gene -455 G/A polymorphism patients used as training data, the highest mean absolute percentage error is obtained with normalized Manhattan distance, compared to normalized Euclidean distance and normalized Hamming distance. However, the differences are so small that it can be concluded that the three distance formulas used in SECoS do not have a significant effect on the accuracy of the training results.
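    Two of the distances and the accuracy measure can be sketched as follows (one common reading of the normalizations, with made-up vectors; the paper's exact definitions, including its real-valued normalized Hamming variant, may differ).

```python
import math
import numpy as np

def norm_manhattan(a, b):
    """Mean absolute difference per component."""
    return float(np.sum(np.abs(a - b)) / a.size)

def norm_euclidean(a, b):
    """Root mean squared difference per component."""
    return float(np.sqrt(np.sum((a - b) ** 2) / a.size))

def mape(actual, predicted):
    """Mean absolute percentage error used to score accuracy."""
    return float(100.0 * np.mean(np.abs((actual - predicted) / actual)))

a = np.array([0.2, 0.4, 0.9])
b = np.array([0.1, 0.5, 0.7])
print(round(norm_manhattan(a, b), 4), round(norm_euclidean(a, b), 4))
```

In SECoS these distances drive the activation of evolving hidden nodes, so swapping the formula changes which node wins and how the network grows.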

  10. An original approach was used to better evaluate the capacity of a prognostic marker using published survival curves.

    PubMed

    Dantan, Etienne; Combescure, Christophe; Lorent, Marine; Ashton-Chess, Joanna; Daguin, Pascal; Classe, Jean-Marc; Giral, Magali; Foucher, Yohann

    2014-04-01

    Predicting chronic disease evolution from a prognostic marker is a key field of research in clinical epidemiology. However, the prognostic capacity of a marker is not systematically evaluated using the appropriate methodology. We propose the use of simple equations to calculate time-dependent sensitivity and specificity from published survival curves, together with other time-dependent indicators such as predictive values, likelihood ratios, and posttest probability ratios, to reappraise prognostic marker accuracy. The methodology is illustrated by back-calculating time-dependent indicators from published articles that present a marker as highly correlated with the time to event, conclude on the high prognostic capacity of the marker, and present Kaplan-Meier survival curves. The tools necessary to run these direct and simple computations are available online at http://www.divat.fr/en/online-calculators/evalbiom. Our examples illustrate that published conclusions about prognostic marker accuracy may be overoptimistic, thus giving potential for major mistakes in therapeutic decisions. Our approach should help readers better evaluate clinical articles reporting on prognostic markers. Time-dependent sensitivity and specificity inform on the inherent prognostic capacity of a marker for a defined prognostic time; time-dependent predictive values, likelihood ratios, and posttest probability ratios may additionally help interpret the marker's prognostic capacity.
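    The back-calculation can be sketched with assumed numbers (a hedged illustration via Bayes' rule, not the authors' exact equations): given the marker-positive prevalence p and the survival values S_pos(t) and S_neg(t) read off published Kaplan-Meier curves, Se(t) = P(marker+ | event by t) and Sp(t) = P(marker− | event-free at t).

```python
# All numbers assumed for illustration.
p = 0.3                        # prevalence of the marker-positive group
S_pos, S_neg = 0.55, 0.85      # survival at t = 5 years from published curves

events = p * (1.0 - S_pos) + (1.0 - p) * (1.0 - S_neg)  # P(event by t)
sens = p * (1.0 - S_pos) / events                       # Se(t)
event_free = p * S_pos + (1.0 - p) * S_neg              # P(event-free at t)
spec = (1.0 - p) * S_neg / event_free                   # Sp(t)
print(round(sens, 4), round(spec, 4))
```

Predictive values and likelihood ratios at time t follow from the same four quantities, which is what makes the calculation feasible from published curves alone.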

  11. Vibration based algorithm for crack detection in cantilever beam containing two different types of cracks

    NASA Astrophysics Data System (ADS)

    Behzad, Mehdi; Ghadami, Amin; Maghsoodi, Ameneh; Michael Hale, Jack

    2013-11-01

    In this paper, a simple method based on energy equations is presented for detecting multiple edge cracks in Euler-Bernoulli beams containing two different types of cracks. Each crack is modeled as a massless rotational spring using Linear Elastic Fracture Mechanics (LEFM) theory, and a relationship among natural frequencies, crack locations, and the stiffness of the equivalent springs is demonstrated. For detection of m cracks in a beam, the procedure requires 3m equations and the natural frequencies of the healthy and cracked beams in two different directions as input to the algorithm. The main accomplishment of the presented algorithm is its capability to detect the location, severity, and type of each crack in a multi-cracked beam. Conciseness and simplicity of the calculations, together with accuracy, are further advantages of the method. A number of numerical examples for cantilever beams containing one and two cracks are presented to validate the method.

  12. Uncertainty analysis on simple mass balance model to calculate critical loads for soil acidity.

    PubMed

    Li, Harbin; McNulty, Steven G

    2007-10-01

    Simple mass balance equations (SMBE) of critical acid loads (CAL) in forest soil were developed to assess potential risks of air pollutants to ecosystems. However, to apply SMBE reliably at large scales, SMBE must be tested for adequacy and uncertainty. Our goal was to provide a detailed analysis of uncertainty in SMBE so that sound strategies for scaling up CAL estimates to the national scale could be developed. Specifically, we wanted to quantify CAL uncertainty under natural variability in 17 model parameters, and determine their relative contributions in predicting CAL. Results indicated that uncertainty in CAL came primarily from components of base cation weathering (BC(w); 49%) and acid neutralizing capacity (46%), whereas the most critical parameters were BC(w) base rate (62%), soil depth (20%), and soil temperature (11%). Thus, improvements in estimates of these factors are crucial to reducing uncertainty and successfully scaling up SMBE for national assessments of CAL.
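    The style of analysis can be sketched with a toy Monte Carlo propagation through a schematic two-term mass balance (the real SMBE has 17 parameters; the parameter distributions below are invented for illustration, not the paper's values):

```python
import numpy as np

rng = np.random.default_rng(0)
N = 100_000

# Schematic mass balance: CAL = BCw - ANC_limit. The full SMBE is richer;
# this toy form only illustrates the uncertainty-propagation procedure.
bc_w = rng.normal(loc=1.0, scale=0.3, size=N)       # base cation weathering term
anc_limit = rng.normal(loc=0.4, scale=0.1, size=N)  # acid neutralizing capacity term
cal = bc_w - anc_limit

# First-order variance decomposition: share of CAL variance contributed by
# each input (for independent inputs entering additively).
share_bcw = bc_w.var() / (bc_w.var() + anc_limit.var())
```

    With these illustrative spreads, almost all of the output variance comes from the weathering term, mirroring the paper's finding that base cation weathering dominates CAL uncertainty.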

  13. A study of stiffness, residual strength and fatigue life relationships for composite laminates

    NASA Technical Reports Server (NTRS)

    Ryder, J. T.; Crossman, F. W.

    1983-01-01

    This study presents a qualitative and quantitative exploration of the relationships between stiffness, strength, fatigue life, residual strength, and damage of unnotched graphite/epoxy laminates subjected to tension loading. Clarification of the mechanics of tension loading is intended to explain previous contradictory observations and hypotheses, to develop a simple procedure for anticipating strength, fatigue life, and stiffness changes, and to provide grounding for the study of more complex cases involving compression, notches, and spectrum fatigue loading. Mathematical models are developed from analysis of the damage states, based on laminate analysis, free-body modeling, or strain energy release rates. Sufficient understanding of the tension-loaded case is developed to allow a proposed, simple procedure for calculating strain to failure, stiffness, strength, data scatter, and the shape of the stress-life curve for unnotched laminates subjected to tension load.

  14. 76 FR 39242 - Federal Acquisition Regulation; TINA Interest Calculations

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-07-05

    ... pricing data. This rule replaces the term ``simple interest'' as the requirement for calculating interest...-AL73 Federal Acquisition Regulation; TINA Interest Calculations AGENCIES: Department of Defense (DoD... interest calculations be applied to Government overpayments as a result of defective cost or pricing data...

  15. Actin-based motility of Listeria: Right-handed helical trajectories

    NASA Astrophysics Data System (ADS)

    Rangarajan, Murali

    2012-06-01

    Bacteria such as Listeria monocytogenes recruit cellular machinery to move in and between cells. Understanding the mechanism of motility, including force and torque generation and the resultant displacements, holds keys to numerous applications in medicine and biosensing. In this work, a simple back-of-the-envelope calculation is presented to illustrate that a biomechanical model of actin-based motility of a rigid surface through persistently attached filaments propelled by affinity-modulated molecular motors can produce a right-handed helical trajectory consistent with experimental observations. The implications of the mechanism to bacterial motility are discussed.

  16. Better Than Counting: Density Profiles from Force Sampling

    NASA Astrophysics Data System (ADS)

    de las Heras, Daniel; Schmidt, Matthias

    2018-05-01

    Calculating one-body density profiles in equilibrium via particle-based simulation methods involves counting particle occurrences at (histogram-resolved) space points. Here, we investigate an alternative method based on a histogram of the local force density. Via an exact sum rule, the density profile is obtained with a simple spatial integration. The method circumvents the inherent ideal-gas fluctuations. We have tested the method in Monte Carlo, Brownian dynamics, and molecular dynamics simulations. The results carry a statistical uncertainty smaller than that of the standard counting method, therefore reducing the computation time.
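    A minimal sketch of the force-sampling idea, assuming a one-dimensional schematic form of the equilibrium sum rule kT dρ/dx = F(x) (the sum rule in the paper is more general; this is for illustration only):

```python
import numpy as np

def density_from_force(force_density, dx, kT, rho0=0.0):
    """Integrate a sampled 1-D force-density histogram to obtain the density
    profile, using the schematic sum rule kT * d(rho)/dx = F(x).

    force_density: histogram of the local force density on a uniform grid
    dx: grid spacing; kT: thermal energy; rho0: density at the left edge
    """
    f = np.asarray(force_density, float)
    # Cumulative trapezoidal integration of F(x) along the grid.
    integral = np.concatenate(
        ([0.0], np.cumsum(0.5 * (f[1:] + f[:-1]) * dx)))
    return rho0 + integral / kT
```

    The spatial integration averages over many histogram bins, which is the source of the reduced statistical noise compared with direct counting.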

  17. SOLAR OBLIQUITY INDUCED BY PLANET NINE: SIMPLE CALCULATION

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lai, Dong

    2016-12-01

    Bailey et al. and Gomes et al. recently suggested that the 6° misalignment between the Sun’s rotational equator and the orbital plane of the major planets may be produced by forcing from the hypothetical Planet Nine on an inclined orbit. Here, we present a simple yet accurate calculation of the effect, which provides a clear description of how the Sun’s spin orientation depends on the property of Planet Nine in this scenario.

  18. Parameterized cross sections for Coulomb dissociation in heavy-ion collisions

    NASA Technical Reports Server (NTRS)

    Norbury, John W.; Cucinotta, F. A.; Townsend, L. W.; Badavi, F. F.

    1988-01-01

    Simple parameterizations of Coulomb dissociation cross sections for use in heavy-ion transport calculations are presented and compared to available experimental dissociation data. The agreement between calculation and experiment is satisfactory considering the simplicity of the calculations.

  19. Simple, fast, and low-cost camera-based water content measurement with colorimetric fluorescent indicator

    NASA Astrophysics Data System (ADS)

    Song, Seok-Jeong; Kim, Tae-Il; Kim, Youngmi; Nam, Hyoungsik

    2018-05-01

    Recently, a simple, sensitive, and low-cost fluorescent indicator has been proposed to determine water contents in organic solvents, drugs, and foodstuffs. The change of water content leads to the change of the indicator's fluorescence color under the ultra-violet (UV) light. Whereas the water content values could be estimated from the spectrum obtained by a bulky and expensive spectrometer in the previous research, this paper demonstrates a simple and low-cost camera-based water content measurement scheme with the same fluorescent water indicator. Water content is calculated over the range of 0-30% by quadratic polynomial regression models with color information extracted from the captured images of samples. Especially, several color spaces such as RGB, xyY, L∗a∗b∗, u‧v‧, HSV, and YCBCR have been investigated to establish the optimal color information features over both linear and nonlinear RGB data given by a camera before and after gamma correction. In the end, a 2nd order polynomial regression model along with HSV in a linear domain achieves the minimum mean square error of 1.06% for a 3-fold cross validation method. Additionally, the resultant water content estimation model is implemented and evaluated in an off-the-shelf Android-based smartphone.
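    The regression step can be sketched as follows. The calibration pairs below are hypothetical, and the sketch uses a single hue value as the colour feature, whereas the paper's best model uses HSV features in the linear domain:

```python
import numpy as np

# Hypothetical calibration data: a hue-like colour feature extracted from
# captured sample images, and the corresponding known water contents (%).
hue = np.array([0.10, 0.15, 0.22, 0.30, 0.41, 0.55])
water = np.array([0.0, 5.0, 10.0, 15.0, 20.0, 30.0])

# 2nd-order polynomial regression, the model form reported as best-performing.
coeffs = np.polyfit(hue, water, deg=2)

def estimate_water_content(h):
    # Evaluate the fitted quadratic at a new colour feature value.
    return np.polyval(coeffs, h)
```

    In the paper the model is selected by 3-fold cross validation over several colour spaces; the fit above shows only the final regression step.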

  20. A simplified calculation procedure for mass isotopomer distribution analysis (MIDA) based on multiple linear regression.

    PubMed

    Fernández-Fernández, Mario; Rodríguez-González, Pablo; García Alonso, J Ignacio

    2016-10-01

    We have developed a novel, rapid and easy calculation procedure for Mass Isotopomer Distribution Analysis based on multiple linear regression, which allows the simultaneous calculation of the precursor pool enrichment and the fraction of newly synthesized labelled proteins (fractional synthesis) using linear algebra. To test this approach, we used the peptide RGGGLK as a model tryptic peptide containing three subunits of glycine. We selected glycine labelled in two 13C atoms (13C2-glycine) as the labelled amino acid to demonstrate that spectral overlap is not a problem in the proposed methodology. The developed methodology was tested first in vitro by changing the precursor pool enrichment from 10 to 40% 13C2-glycine. Secondly, a simulated in vivo synthesis of proteins was designed by combining the natural-abundance RGGGLK peptide and 10 or 20% 13C2-glycine at 1:1, 1:3 and 3:1 ratios. Precursor pool enrichments and fractional synthesis values were calculated with satisfactory precision and accuracy using a simple spreadsheet. This novel approach can provide a relatively rapid and easy means to measure protein turnover based on stable isotope tracers. Copyright © 2016 John Wiley & Sons, Ltd.
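    The core linear-algebra step can be sketched as an ordinary least-squares problem: the measured spectrum is modeled as a linear combination of theoretical isotopomer spectra. The basis spectra below are invented for illustration and are not the peptide's real isotope patterns:

```python
import numpy as np

# Hypothetical design matrix: each column is a theoretical isotope pattern
# (unlabelled species, then species carrying one, two, or three 13C2-glycine
# units); rows are m/z channels. Values are illustrative only.
A = np.array([
    [0.70, 0.00, 0.00, 0.00],
    [0.20, 0.00, 0.00, 0.00],
    [0.10, 0.70, 0.00, 0.00],
    [0.00, 0.20, 0.70, 0.00],
    [0.00, 0.10, 0.20, 0.70],
    [0.00, 0.00, 0.10, 0.30],
])

# Synthetic "measured" spectrum: a 60/40 mixture of the first two species.
y = 0.6 * A[:, 0] + 0.4 * A[:, 1]

# Multiple linear regression via least squares recovers the molar fractions,
# from which precursor enrichment and fractional synthesis are derived.
fractions, *_ = np.linalg.lstsq(A, y, rcond=None)
```

    Because the basis spectra overlap across m/z channels, solving all fractions simultaneously by regression handles spectral overlap naturally, which is the point the authors demonstrate.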

  1. The use of computed tomography for the estimation of DIEP flap weights in breast reconstruction: a simple mathematical formula.

    PubMed

    Nanidis, Theodore G; Ridha, Hyder; Jallali, Navid

    2014-10-01

    Estimation of the volume of abdominal tissue is desirable when planning autologous abdominal based breast reconstruction. However, this can be difficult clinically. The aim of this study was to develop a simple, yet reliable method of calculating the deep inferior epigastric artery perforator flap weight using the routine preoperative computed tomography angiogram (CTA) scan. Our mathematical formula is based on the shape of a DIEP flap resembling that of an isosceles triangular prism. Thus its volume can be calculated with a standard mathematical formula. Using bony landmarks three measurements were acquired from the CTA scan to calculate the flap weight. This was then compared to the actual flap weight harvested in both a retrospective feasibility and prospective study. In the retrospective group 17 DIEP flaps in 17 patients were analyzed. Average predicted flap weight was 667 g (range 293-1254). The average actual flap weight was 657 g (range 300-1290) giving an average percentage error of 6.8% (p-value for weight difference 0.53). In the prospective group 15 DIEP flaps in 15 patients were analyzed. Average predicted flap weight was 618 g (range 320-925). The average actual flap weight was 624 g (range 356-970) giving an average percentage error of 6.38% (p-value for weight difference 0.57). This formula is a quick, reliable and accurate way of estimating the volume of abdominal tissue using the preoperative CTA scan. Copyright © 2014 British Association of Plastic, Reconstructive and Aesthetic Surgeons. Published by Elsevier Ltd. All rights reserved.
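    Under the isosceles-triangular-prism assumption, the volume and weight estimate reduces to a single product. The example measurements and the tissue density of about 1 g/cm³ below are illustrative assumptions, not values or landmarks taken from the paper:

```python
def diep_flap_weight(base_cm, height_cm, length_cm, density_g_per_cm3=1.0):
    """Estimate DIEP flap weight by modelling the flap as an isosceles
    triangular prism: V = 1/2 * base * height * length.

    The three linear measurements come from the preoperative CTA scan;
    the ~1 g/cm^3 soft-tissue density is an assumption for illustration.
    """
    volume_cm3 = 0.5 * base_cm * height_cm * length_cm
    return volume_cm3 * density_g_per_cm3
```

    For example, a flap measuring 30 cm across, 4 cm thick, and 11 cm long would be estimated at 660 g, in the middle of the weight range reported in the study.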

  2. Validation of energy requirement equations for estimation of breast milk consumption in infants.

    PubMed

    Schoen, Stefanie; Sichert-Hellert, Wolfgang; Kersting, Mathilde

    2009-12-01

    To test equations for calculating infants' energy requirements as a simple and reliable instrument for estimating the amount of breast milk consumed in epidemiological studies where test-weighing is not possible. Infants' energy requirements were calculated using three different equations based on reference data and compared with actual energy intakes assessed using the 3 d weighed dietary records of breast-fed infants from the DOrtmund Nutritional and Anthropometric Longitudinally Designed (DONALD) Study. A sub-sample of 323 infants from the German DONALD Study who were predominantly breast-fed for at least the first four months of life, and who had 3 d weighed dietary records and repeated body weight measurements within the first year of life. Healthy, term infants breast-fed for at least 4 months, 0-12 months of age. The overall differences between measured energy intake and calculated energy requirements were quite small, never more than 10 % of total energy intake, and smaller than the mean variance of energy intake between the three days of recording. The equation of best fit incorporated body weight and recent growth, while the worst fit was found for the equation not considering body weight. Breast milk consumption in fully and partially breast-fed infants can be reasonably quantified by calculating the infants' individual energy requirements via simple equations. This provides a feasible approach for estimating infant energy intake in epidemiological studies where test-weighing of breast milk is not possible.

  3. A review of the calculation procedure for critical acid loads for terrestrial ecosystems.

    PubMed

    van der Salm, C; de Vries, W

    2001-04-23

    Target loads for acid deposition in the Netherlands, as formulated in the Dutch environmental policy plan, are based on critical load calculations from the end of the 1980s. Since then, knowledge of the effect of acid deposition on terrestrial ecosystems has substantially increased. In the early 1990s a simple mass balance model was developed to calculate critical loads. This model was evaluated and the methods were adapted to represent current knowledge. The main changes in the model are the use of actual empirical relationships between Al and H concentrations in the soil solution, the addition of a constant base saturation as a second criterion for soil quality, and the use of tree-species-dependent critical Al/base cation (BC) ratios for Dutch circumstances. The changes in the model parameterisation and in the Al/BC criteria led to considerably (50%) higher critical loads for root damage. The addition of a second criterion for soil quality in the critical load calculations caused a decrease in the critical loads for soils with a medium to high base saturation, such as loess and clay soils. The adaptation hardly affected the median critical load for soil quality in the Netherlands, since only 15% of Dutch forests occur on these soils. On a regional scale, however, critical loads were (much) lower in areas where those soils are located.

  4. A Physics-Based Engineering Methodology for Calculating Soft Error Rates of Bulk CMOS and SiGe Heterojunction Bipolar Transistor Integrated Circuits

    NASA Astrophysics Data System (ADS)

    Fulkerson, David E.

    2010-02-01

    This paper describes a new methodology for characterizing the electrical behavior and soft error rate (SER) of CMOS and SiGe HBT integrated circuits that are struck by ions. A typical engineering design problem is to calculate the SER of a critical path that commonly includes several circuits such as an input buffer, several logic gates, logic storage, clock tree circuitry, and an output buffer. Using multiple 3D TCAD simulations to solve this problem is too costly and time-consuming for general engineering use. The new methodology handles the problem with simple SPICE simulations. It accurately predicts the measured threshold linear energy transfer (LET) of a bulk CMOS SRAM. It solves for circuit currents and voltage spikes that are close to those predicted by expensive 3D TCAD simulations. It accurately predicts the measured event cross-section vs. LET curve of an experimental SiGe HBT flip-flop. The experimental cross-section vs. frequency behavior and other subtle effects are also accurately predicted.

  5. Room temperature current-voltage (I-V) characteristics of Ag/InGaN/n-Si Schottky barrier diode

    NASA Astrophysics Data System (ADS)

    Erdoğan, Erman; Kundakçı, Mutlu

    2017-02-01

    Metal-semiconductor (MS) Schottky barrier diodes (SBDs) have significant potential in integrated device technology. In the present paper, electrical characterization of an Ag/InGaN/n-Si Schottky diode has been systematically carried out by the simple thermionic emission (TE) method and the Norde function based on the I-V characteristics. Ag ohmic and Schottky contacts were deposited on the InGaN/n-Si film by thermal evaporation under a vacuum pressure of 1×10-5 mbar. The ideality factor, barrier height, and series resistance of the diode were determined from the I-V curve. These parameters were calculated by the TE and Norde methods and the findings are presented in a comparative manner. The results show consistency between the two methods and good agreement with other results in the literature. The ideality factor and barrier height were determined to be 2.84 and 0.78 eV at room temperature using the simple TE method; the barrier height obtained with the Norde method is 0.79 eV.
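    The textbook thermionic-emission extraction of the ideality factor and barrier height from forward I-V data can be sketched as follows (the device area and Richardson constant are user-supplied inputs; this is the standard TE analysis, not necessarily the authors' exact fitting procedure):

```python
import numpy as np

K_B = 8.617e-5   # Boltzmann constant, eV/K
T = 300.0        # room temperature, K

def schottky_parameters(V, I, area_cm2, richardson):
    """Extract ideality factor n and barrier height (eV) from the linear
    region of a forward ln(I)-V characteristic via thermionic emission:
        I = I_s * exp(V / (n * kT)),  I_s = A * A** * T^2 * exp(-phi_B / kT)
    """
    slope, intercept = np.polyfit(np.asarray(V), np.log(np.asarray(I)), 1)
    n = 1.0 / (K_B * T * slope)        # ideality factor from the slope
    i_sat = np.exp(intercept)          # saturation current from the intercept
    phi_b = K_B * T * np.log(area_cm2 * richardson * T**2 / i_sat)
    return n, phi_b
```

    The Norde method instead minimizes an auxiliary function F(V) and is preferred when series resistance distorts the linear region; comparing the two, as the authors do, is a common consistency check.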

  6. Nonthermal model for ultrafast laser-induced plasma generation around a plasmonic nanorod

    NASA Astrophysics Data System (ADS)

    Labouret, Timothée; Palpant, Bruno

    2016-12-01

    The excitation of plasmonic gold nanoparticles by ultrashort laser pulses can trigger interesting electron-based effects in biological media such as production of reactive oxygen species or cell membrane optoporation. In order to better understand the optical and thermal processes at play, we modeled the interaction of a subpicosecond, near-infrared laser pulse with a gold nanorod in water. A nonthermal model is used and compared to a simple two-temperature thermal approach. For both models, the computation of the transient optical response reveals strong plasmon damping. Electron emission from the metal into the water is also calculated in a specific way for each model. The dynamics of the resulting local plasma in water is assessed by a rate equation model. While both approaches provide similar results for the transient optical properties, the simple thermal one is unable to properly describe electron emission and plasma generation. The latter is shown to mostly originate from electron-electron thermionic emission and photoemission from the metal. Taking into account the transient optical response is mandatory to properly calculate both electron emission and local plasma dynamics in water.

  7. Coma dust scattering concepts applied to the Rosetta mission

    NASA Astrophysics Data System (ADS)

    Fink, Uwe; Rinaldi, Giovanna

    2015-09-01

    This paper describes basic concepts, as well as providing a framework, for the interpretation of the light scattered by the dust in a cometary coma as observed by instruments on a spacecraft such as Rosetta. It is shown that the expected optical depths are small enough that single scattering can be applied. Each of the quantities that contribute to the scattered intensity is discussed in detail. Using optical constants of the likely coma dust constituents, olivine, pyroxene and carbon, the scattering properties of the dust are calculated. For the resulting observable scattering intensities several particle size distributions are considered: a simple power law, power laws with a small-particle cutoff, and log-normal distributions with various parameters. Within the context of a simple outflow model, the standard definition of Afρ for a circular observing aperture is expanded to an equivalent Afρ for an annulus and a specific line-of-sight observation. The resulting equivalence between the observed intensity and Afρ is used to predict observable intensities for 67P/Churyumov-Gerasimenko at the spacecraft encounter near 3.3 AU and near perihelion at 1.3 AU. This is done by normalizing particle production rates of various size distributions to agree with observed ground-based Afρ values. Various geometries for the column densities in a cometary coma are considered. The calculations for a simple outflow model are compared with more elaborate Direct Simulation Monte Carlo (DSMC) models to define the limits of applicability of the simpler analytical approach. Thus our analytical approach can be applied to the majority of the Rosetta coma observations, particularly beyond several nuclear radii where the dust is no longer in a collisional environment, without recourse to computer-intensive DSMC calculations for specific cases. In addition to a spherically symmetric one-dimensional approach, we investigate column densities for the two-dimensional DSMC model on the day and night sides of the comet. Our calculations are also applied to estimates of the dust particle densities and flux, which are useful for the in-situ experiments on Rosetta.

  8. The application of muscle wrapping to voxel-based finite element models of skeletal structures.

    PubMed

    Liu, Jia; Shi, Junfen; Fitton, Laura C; Phillips, Roger; O'Higgins, Paul; Fagan, Michael J

    2012-01-01

    Finite element analysis (FEA) is now used routinely to interpret skeletal form in terms of function in both medical and biological applications. To produce accurate predictions from FEA models, it is essential that the loading due to muscle action is applied in a physiologically reasonable manner. However, it is common for muscle forces to be represented as simple force vectors applied at a few nodes on the model's surface. It is certainly rare for any wrapping of the muscles to be considered, and yet wrapping not only alters the directions of muscle forces but also applies an additional compressive load from the muscle belly directly to the underlying bone surface. This paper presents a method of applying muscle wrapping to high-resolution voxel-based finite element (FE) models. Such voxel-based models have a number of advantages over standard (geometry-based) FE models, but the increased resolution with which the load can be distributed over a model's surface is particularly advantageous, reflecting more closely how muscle fibre attachments are distributed. In this paper, the development, application and validation of a muscle wrapping method is illustrated using a simple cylinder. The algorithm: (1) calculates the shortest path over the surface of a bone given the points of origin and ultimate attachment of the muscle fibres; (2) fits a Non-Uniform Rational B-Spline (NURBS) curve to the shortest path and calculates its tangent and normal vectors and curvatures so that normal and tangential components of the muscle force can be calculated and applied along the fibre; and (3) automatically distributes the loads between adjacent fibres to cover the bone surface with a fully distributed muscle force, as is observed in vivo. Finally, we present a practical application of this approach to the wrapping of the temporalis muscle around the cranium of a macaque skull.
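    Step (1), the shortest surface path, can be illustrated analytically for the cylinder validation case by unrolling the surface into a plane (a minimal analogue only; the paper's algorithm operates on general voxel-based bone surfaces):

```python
import numpy as np

def cylinder_geodesic_length(radius, theta1, z1, theta2, z2):
    """Length of the shortest surface path between two points on a cylinder,
    found by unrolling the surface into a plane. Points are given in
    cylindrical coordinates (angle in radians, axial position z)."""
    dtheta = (theta2 - theta1) % (2 * np.pi)
    dtheta = min(dtheta, 2 * np.pi - dtheta)   # take the shorter way around
    # In the unrolled plane the geodesic is a straight line; a helix on the
    # cylinder, consistent with how a wrapped fibre lies on a curved surface.
    return np.hypot(radius * dtheta, z2 - z1)
```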

  9. On One-Dimensional Stretching Functions for Finite-Difference Calculations

    NASA Technical Reports Server (NTRS)

    Vinokur, M.

    1980-01-01

    The class of one-dimensional stretching functions used in finite-difference calculations is studied. For solutions containing a highly localized region of rapid variation, simple criteria for a stretching function are derived using a truncation error analysis. These criteria are used to investigate two types of stretching functions. One is an interior stretching function, for which the location and slope of an interior clustering region are specified. The simplest such function satisfying the criteria is found to be one based on the inverse hyperbolic sine. The other type is a two-sided stretching function, for which the arbitrary slopes at the two ends of the one-dimensional interval are specified. The simplest such general function is found to be one based on the inverse tangent. The general two-sided function has many applications in the construction of finite-difference grids.
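    The inverse-hyperbolic-sine building block can be illustrated with a simple one-sided sinh stretching (the interior function studied in the paper is the two-parameter generalization with a specified clustering location and slope; this sketch only shows the clustering behavior):

```python
import numpy as np

def sinh_cluster(n, beta):
    """Map a uniform computational coordinate xi in [0, 1] onto n+1 grid
    points in [0, 1] clustered near x = 0. Larger beta gives stronger
    clustering; spacing grows monotonically away from the clustered end."""
    xi = np.linspace(0.0, 1.0, n + 1)
    return np.sinh(beta * xi) / np.sinh(beta)
```

    Because sinh has its smallest derivative at the origin, grid spacing is finest there, which is exactly where a localized region of rapid variation would be resolved.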

  10. On one-dimensional stretching functions for finite-difference calculations. [computational fluid dynamics

    NASA Technical Reports Server (NTRS)

    Vinokur, M.

    1983-01-01

    The class of one-dimensional stretching functions used in finite-difference calculations is studied. For solutions containing a highly localized region of rapid variation, simple criteria for a stretching function are derived using a truncation error analysis. These criteria are used to investigate two types of stretching functions. One is an interior stretching function, for which the location and slope of an interior clustering region are specified. The simplest such function satisfying the criteria is found to be one based on the inverse hyperbolic sine. The other type is a two-sided stretching function, for which the arbitrary slopes at the two ends of the one-dimensional interval are specified. The simplest such general function is found to be one based on the inverse tangent. Previously announced in STAR as N80-25055

  11. Integrated control strategy for autonomous decentralized conveyance systems based on distributed MEMS arrays

    NASA Astrophysics Data System (ADS)

    Zhou, Lingfei; Chapuis, Yves-Andre; Blonde, Jean-Philippe; Bervillier, Herve; Fukuta, Yamato; Fujita, Hiroyuki

    2004-07-01

    In this paper, the authors study a model and a control strategy for a two-dimensional conveyance system based on the principles of Autonomous Decentralized Microsystems (ADM). The microconveyance system is based on distributed cooperative MEMS actuators that produce a force field on the surface of the device to grip and move a micro-object. The modeling approach is based on a simple model of a microconveyance system represented by a 5 x 5 matrix of cells. Each cell consists of a microactuator, a microsensor, and a microprocessor, providing actuation, autonomy, and decentralized intelligence to the cell. Each cell is thus able to identify a micro-object crossing it and to decide by itself the appropriate control strategy to convey the micro-object to its destination target. The control strategy can be established through five simple decision rules that each cell must respect at each calculation cycle. Simulation and FPGA implementation results are given at the end of the paper to validate the model and control approach of the microconveyance system.

  12. Fast readout algorithm for cylindrical beam position monitors providing good accuracy for particle bunches with large offsets

    NASA Astrophysics Data System (ADS)

    Thieberger, P.; Gassner, D.; Hulsart, R.; Michnoff, R.; Miller, T.; Minty, M.; Sorrell, Z.; Bartnik, A.

    2018-04-01

    A simple, analytically correct algorithm is developed for calculating "pencil" relativistic beam coordinates using the signals from an ideal cylindrical particle beam position monitor (BPM) with four pickup electrodes (PUEs) of infinitesimal widths. The algorithm is then applied to simulations of realistic BPMs with finite width PUEs. Surprisingly small deviations are found. Simple empirically determined correction terms reduce the deviations even further. The algorithm is then tested with simulations for non-relativistic beams. As an example of the data acquisition speed advantage, a Field Programmable Gate Array-based BPM readout implementation of the new algorithm has been developed and characterized. Finally, the algorithm is tested with BPM data from the Cornell Preinjector.
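    For context, the classic first-order (difference-over-sum) position estimate for an ideal four-electrode cylindrical BPM is shown below; it is precisely this small-offset approximation that the paper's analytically correct algorithm improves upon. The formula here is the standard textbook one, not the new algorithm:

```python
def bpm_linear_estimate(right, left, top, bottom, radius):
    """First-order position estimate for an ideal cylindrical BPM with four
    narrow pickup electrodes. The scale factor radius/2 follows from keeping
    only the first azimuthal harmonic of the wall-current distribution, so
    the estimate is accurate only for beams near the axis."""
    x = (radius / 2.0) * (right - left) / (right + left)
    y = (radius / 2.0) * (top - bottom) / (top + bottom)
    return x, y
```

    For large offsets the higher harmonics make this estimate increasingly nonlinear, which is the deviation the exact algorithm and its empirical correction terms remove.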

  13. X-ray peak profile analysis of zinc oxide nanoparticles formed by simple precipitation method

    NASA Astrophysics Data System (ADS)

    Pelicano, Christian Mark; Rapadas, Nick Joaquin; Magdaluyo, Eduardo

    2017-12-01

    Zinc oxide (ZnO) nanoparticles were successfully synthesized by a simple precipitation method using zinc acetate and tetramethylammonium hydroxide. The synthesized ZnO nanoparticles were characterized by X-ray diffraction (XRD) analysis and transmission electron microscopy (TEM). The XRD result revealed a hexagonal wurtzite structure for the ZnO nanoparticles. The TEM image showed spherical nanoparticles with an average crystallite size of 6.70 nm. For X-ray peak profile analysis, the Williamson-Hall (W-H) and size-strain plot (SSP) methods were applied to examine the effects of crystallite size and lattice strain on the peak broadening of the ZnO nanoparticles. Based on the calculations, the crystallite sizes and lattice strains estimated by the two methods are in good agreement with each other.
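    The uniform-deformation Williamson-Hall analysis reduces to a linear fit of β·cosθ against 4·sinθ. The Cu K-alpha wavelength and Scherrer shape factor below are common assumptions, not necessarily the values used in the paper:

```python
import numpy as np

WAVELENGTH = 0.15406   # Cu K-alpha wavelength in nm (typical assumption)
K = 0.9                # Scherrer shape factor (typical assumption)

def williamson_hall(two_theta_deg, fwhm_rad):
    """Uniform-deformation Williamson-Hall analysis:
        beta * cos(theta) = K * lambda / D + 4 * strain * sin(theta)
    A linear fit gives the lattice strain (slope) and the crystallite
    size D in nm (from the intercept)."""
    theta = np.radians(np.asarray(two_theta_deg)) / 2.0
    y = np.asarray(fwhm_rad) * np.cos(theta)
    x = 4.0 * np.sin(theta)
    strain, intercept = np.polyfit(x, y, 1)
    return K * WAVELENGTH / intercept, strain
```

    The SSP method weights the data differently (it fits (β·cosθ)² against d²·β·cosθ), which is why agreement between the two fits is used as a consistency check.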

  14. Simple Additive Weighting to Diagnose Rabbit Disease

    NASA Astrophysics Data System (ADS)

    Ramadiani; Marissa, Dyna; Jundillah, Muhammad Labib; Azainil; Hatta, Heliza Rahmania

    2018-02-01

    The rabbit is one of the many pets kept by the general public in Indonesia. Like other pets, rabbits are susceptible to various diseases. The general public often does not correctly recognize rabbit diseases or their treatment. To help care for sick rabbits, a decision support system recommending a diagnosis of rabbit disease is needed. The purpose of this research is to build a rabbit disease diagnosis application that can help users care for their rabbits. The application diagnoses disease by tracing symptoms and calculating disease recommendations using the Simple Additive Weighting method. This research produces a web-based decision support system to help rabbit breeders and the general public.
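    The Simple Additive Weighting step can be sketched as follows; the symptom scores, weights, and the three candidate diseases are hypothetical, not taken from the system's knowledge base:

```python
import numpy as np

# Illustrative decision matrix: rows are candidate diseases, columns are
# matched-symptom criteria (treated as benefit criteria); values invented.
scores = np.array([
    [0.8, 0.4, 0.9],
    [0.6, 0.9, 0.3],
    [0.2, 0.5, 0.7],
])
weights = np.array([0.5, 0.3, 0.2])   # hypothetical criterion weights, sum to 1

# Simple Additive Weighting: normalize each benefit criterion by its column
# maximum, then rank alternatives by the weighted sum of normalized scores.
normalized = scores / scores.max(axis=0)
saw_score = normalized @ weights
ranking = np.argsort(saw_score)[::-1]   # best-matching disease first
```

    The top-ranked row is returned as the recommended diagnosis; cost-type criteria, if any, would instead be normalized as min/value.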

  15. Ground-state energies of simple metals

    NASA Technical Reports Server (NTRS)

    Hammerberg, J.; Ashcroft, N. W.

    1974-01-01

    A structural expansion for the static ground-state energy of a simple metal is derived. Two methods are presented, one an approach based on single-particle band structure which treats the electron gas as a nonlinear dielectric, the other a more general many-particle analysis using finite-temperature perturbation theory. The two methods are compared, and it is shown in detail how band-structure effects, Fermi-surface distortions, and chemical-potential shifts affect the total energy. These are of special interest in corrections to the total energy beyond third order in the electron-ion interaction and hence to systems where differences in energies for various crystal structures are exceptionally small. Preliminary calculations using these methods for the zero-temperature thermodynamic functions of atomic hydrogen are reported.

  16. Structural expansions for the ground state energy of a simple metal

    NASA Technical Reports Server (NTRS)

    Hammerberg, J.; Ashcroft, N. W.

    1973-01-01

    A structural expansion for the static ground state energy of a simple metal is derived. An approach based on single particle band structure which treats the electron gas as a non-linear dielectric is presented, along with a more general many particle analysis using finite temperature perturbation theory. The two methods are compared, and it is shown in detail how band-structure effects, Fermi surface distortions, and chemical potential shifts affect the total energy. These are of special interest in corrections to the total energy beyond third order in the electron ion interaction, and hence to systems where differences in energies for various crystal structures are exceptionally small. Preliminary calculations using these methods for the zero temperature thermodynamic functions of atomic hydrogen are reported.

  17. Fast readout algorithm for cylindrical beam position monitors providing good accuracy for particle bunches with large offsets

    DOE PAGES

    Thieberger, Peter; Gassner, D.; Hulsart, R.; ...

    2018-04-25

    Here, a simple, analytically correct algorithm is developed for calculating “pencil” relativistic beam coordinates using the signals from an ideal cylindrical particle beam position monitor (BPM) with four pickup electrodes (PUEs) of infinitesimal widths. The algorithm is then applied to simulations of realistic BPMs with finite width PUEs. Surprisingly small deviations are found. Simple empirically determined correction terms reduce the deviations even further. The algorithm is then tested with simulations for non-relativistic beams. As an example of the data acquisition speed advantage, a FPGA-based BPM readout implementation of the new algorithm has been developed and characterized. Lastly, the algorithm is tested with BPM data from the Cornell Preinjector.

  18. Fast readout algorithm for cylindrical beam position monitors providing good accuracy for particle bunches with large offsets

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Thieberger, Peter; Gassner, D.; Hulsart, R.

    Here, a simple, analytically correct algorithm is developed for calculating “pencil” relativistic beam coordinates using the signals from an ideal cylindrical particle beam position monitor (BPM) with four pickup electrodes (PUEs) of infinitesimal widths. The algorithm is then applied to simulations of realistic BPMs with finite width PUEs. Surprisingly small deviations are found. Simple empirically determined correction terms reduce the deviations even further. The algorithm is then tested with simulations for non-relativistic beams. As an example of the data acquisition speed advantage, a FPGA-based BPM readout implementation of the new algorithm has been developed and characterized. Lastly, the algorithm is tested with BPM data from the Cornell Preinjector.

  19. Fast readout algorithm for cylindrical beam position monitors providing good accuracy for particle bunches with large offsets.

    PubMed

    Thieberger, P; Gassner, D; Hulsart, R; Michnoff, R; Miller, T; Minty, M; Sorrell, Z; Bartnik, A

    2018-04-01

    A simple, analytically correct algorithm is developed for calculating "pencil" relativistic beam coordinates using the signals from an ideal cylindrical particle beam position monitor (BPM) with four pickup electrodes (PUEs) of infinitesimal widths. The algorithm is then applied to simulations of realistic BPMs with finite width PUEs. Surprisingly small deviations are found. Simple empirically determined correction terms reduce the deviations even further. The algorithm is then tested with simulations for non-relativistic beams. As an example of the data acquisition speed advantage, a Field Programmable Gate Array-based BPM readout implementation of the new algorithm has been developed and characterized. Finally, the algorithm is tested with BPM data from the Cornell Preinjector.
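The record above describes an exact inversion for an idealized cylindrical BPM. As a hedged sketch (hypothetical function names; not necessarily the published algorithm), the standard image-charge signal model for an infinitesimal pickup electrode admits a closed-form inversion that stays exact at large offsets, where the classic difference-over-sum estimate degrades:

```python
import math

R = 1.0  # BPM radius (arbitrary units)

def pue_signal(x, y, theta_e, R=R):
    """Induced signal on an infinitesimal pickup electrode at angle theta_e
    for a pencil beam at (x, y): image-charge result for a grounded cylinder."""
    r = math.hypot(x, y)
    phi = math.atan2(y, x)
    return (R*R - r*r) / (R*R + r*r - 2.0*R*r*math.cos(theta_e - phi))

def signals(x, y):
    # right, top, left, bottom electrodes
    return [pue_signal(x, y, th) for th in (0.0, math.pi/2, math.pi, 1.5*math.pi)]

def linear_estimate(sig, R=R):
    """Classic difference-over-sum estimate; accurate only near the axis."""
    right, top, left, bottom = sig
    return ((R/2)*(right - left)/(right + left),
            (R/2)*(top - bottom)/(top + bottom))

def exact_estimate(sig, R=R):
    """Closed-form inversion: in the ideal model u = 2Rx/(R^2 + r^2) and
    v = 2Ry/(R^2 + r^2), so r^2 follows from u^2 + v^2 and (x, y) is exact."""
    right, top, left, bottom = sig
    u = (right - left)/(right + left)
    v = (top - bottom)/(top + bottom)
    s = u*u + v*v
    if s < 1e-15:
        return 0.0, 0.0
    A = 2.0*R*R*(1.0 - math.sqrt(max(0.0, 1.0 - s)))/s   # A = R^2 + r^2
    return u*A/(2.0*R), v*A/(2.0*R)
```

For a beam at (0.5, -0.3)·R the linear estimate is off by more than 0.1·R, while the closed-form inversion recovers the coordinates to machine precision, mirroring the large-offset accuracy the abstract emphasizes.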

  20. Water Conservation Measures

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ian Metzger, Jesse Dean

    2010-12-31

    This software requires inputs of simple water fixture inventory information and calculates the water/energy and cost benefits of various retrofit opportunities. This tool includes water conservation measures for: Low-flow Toilets, Low-flow Urinals, Low-flow Faucets, and Low-flow Showerheads. This tool calculates water savings, energy savings, demand reduction, cost savings, and building life cycle costs including: simple payback, discounted payback, net present value, and savings to investment ratio. In addition, the tool also displays the environmental benefits of a project.
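The financial metrics listed above follow standard life-cycle-cost definitions. A minimal sketch (hypothetical function names; constant annual savings assumed):

```python
def simple_payback(cost, annual_savings):
    """Years to recover the investment, ignoring the time value of money."""
    return cost / annual_savings

def present_value(annual_savings, years, rate):
    """Present value of a constant annual savings stream at discount rate."""
    return sum(annual_savings / (1.0 + rate)**t for t in range(1, years + 1))

def net_present_value(cost, annual_savings, years, rate):
    """Discounted savings minus first cost; positive means the retrofit pays."""
    return present_value(annual_savings, years, rate) - cost

def savings_to_investment_ratio(cost, annual_savings, years, rate):
    """SIR > 1 indicates a cost-effective measure."""
    return present_value(annual_savings, years, rate) / cost

def discounted_payback(cost, annual_savings, rate, max_years=100):
    """First year in which cumulative discounted savings cover the cost."""
    cumulative = 0.0
    for t in range(1, max_years + 1):
        cumulative += annual_savings / (1.0 + rate)**t
        if cumulative >= cost:
            return t
    return None  # never pays back within max_years
```

For example, a $1000 retrofit saving $250/yr pays back in 4 years undiscounted, but in 5 years at a 3 % discount rate.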

  1. Approximate method for calculating convective heat flux on the surface of bodies of simple geometric shapes

    NASA Astrophysics Data System (ADS)

    Kuzenov, V. V.; Ryzhkov, S. V.

    2017-02-01

    The paper formulates an engineering physico-mathematical model for the aerothermodynamics of a hypersonic flight vehicle (HFV) in laminar and turbulent boundary layers; the model is designed for approximate estimation of the convective heat flow in the speed range M = 6-28 and the altitude range H = 20-80 km. 2D calculations of convective heat flows for bodies of simple geometric shapes (individual elements of the HFV design) are presented.

  2. Three-dimensional hair model by means of particles using Blender

    NASA Astrophysics Data System (ADS)

    Alvarez-Cedillo, Jesús Antonio; Almanza-Nieto, Roberto; Herrera-Lozada, Juan Carlos

    2010-09-01

    The simulation and modeling of human hair is a process of very large computational complexity, owing to the large number of factors that must be calculated to give a realistic appearance. Generally, the method used in the film industry to simulate hair is based on particle-handling graphics. In this paper we present a simple approximation of how to model human hair using particles in Blender.

  3. An electronic rationale for observed initiation rates in ruthenium-mediated olefin metathesis: charge donation in phosphine and N-heterocyclic carbene ligands.

    PubMed

    Getty, Kendra; Delgado-Jaime, Mario Ulises; Kennepohl, Pierre

    2007-12-26

    Ru K-edge XAS data indicate that second generation ruthenium-based olefin metathesis precatalysts (L = N-heterocyclic carbene) possess a more electron-deficient metal center than in the corresponding first generation species (L = tricyclohexylphosphine). This surprising effect is also observed from DFT calculations and provides a simple rationale for the slow phosphine dissociation kinetics previously noted for second-generation metathesis precatalysts.

  4. Accumulated distribution of material gain at dislocation crystal growth

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rakin, V. I., E-mail: rakin@geo.komisc.ru

    2016-05-15

    A model for the slowing down of the tangential growth rate of an elementary step at dislocation crystal growth is proposed, based on the exponential law of impurity particle distribution over adsorption energy. It is established that the statistical distribution of material gain on structurally equivalent faces obeys the Erlang law. The Erlang distribution is proposed to be used to calculate the occurrence rates of morphological combinatorial types of polyhedra, representing real simple crystallographic forms.
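The Erlang law invoked above is the distribution of a sum of k independent exponential stages. A small sketch of its density and distribution function (standard formulas; hypothetical function names):

```python
import math

def erlang_pdf(x, k, lam):
    """Density of Erlang(k, lam): sum of k i.i.d. Exponential(lam) variables."""
    return lam**k * x**(k - 1) * math.exp(-lam * x) / math.factorial(k - 1)

def erlang_cdf(x, k, lam):
    """P(X <= x) for Erlang(k, lam), via the Poisson-tail identity."""
    return 1.0 - math.exp(-lam * x) * sum((lam * x)**n / math.factorial(n)
                                          for n in range(k))
```

For k = 1 the density reduces to the plain exponential, and the CDF is monotone in x, as expected.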

  5. Rapid, in situ detection of cocaine residues based on paper spray ionization coupled with ion mobility spectrometry.

    PubMed

    Li, Ming; Zhang, Jingjing; Jiang, Jie; Zhang, Jing; Gao, Jing; Qiao, Xiaolin

    2014-04-07

    In this paper, a novel approach based on paper spray ionization coupled with ion mobility spectrometry (PSI-IMS) was developed for rapid, in situ detection of cocaine residues in liquid samples and on various surfaces (e.g. glass, marble, skin, wood, fingernails), without tedious sample pretreatment. The obvious advantages of PSI are its low cost, easy operation and simple configuration without using nebulizing gas or discharge gas. Compared with mass spectrometry, ion mobility spectrometry (IMS) has the advantages of low cost, easy operation, and a simple configuration without requiring a vacuum system. Therefore, IMS is a more congruous detection method for PSI in the case of rapid, in situ analysis. For the analysis of cocaine residues in liquid samples, a dynamic response from 5 μg mL⁻¹ to 200 μg mL⁻¹ with a linear coefficient (R²) of 0.992 was obtained. In this case, the limit of detection (LOD) was calculated to be 2 μg mL⁻¹ at a signal-to-noise ratio (S/N) of 3, with a relative standard deviation (RSD) of 6.5% for 11 measurements (n = 11). Cocaine residues on various surfaces such as metal, glass, marble, wood, skin, and fingernails were also directly analyzed after wiping the surfaces with a piece of paper. The LOD was calculated to be as low as 5 ng (S/N = 3, RSD = 6.3%, n = 11). This demonstrates the capability of the PSI-IMS method for direct detection of cocaine residues at scenes of cocaine administration. Our results show that PSI-IMS is a simple, sensitive, rapid and economical method for in situ detection of this illicit drug, which could help governments to combat drug abuse.
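The S/N = 3 detection limit quoted above is conventionally estimated from the calibration slope and the baseline noise. A sketch under that convention (hypothetical names; the authors' exact procedure may differ):

```python
def calibration_slope(concs, responses):
    """Ordinary least-squares slope of instrument response vs. concentration."""
    n = len(concs)
    mx = sum(concs) / n
    my = sum(responses) / n
    return (sum((x - mx) * (y - my) for x, y in zip(concs, responses))
            / sum((x - mx) ** 2 for x in concs))

def limit_of_detection(concs, responses, noise_sd):
    """LOD at S/N = 3: concentration whose signal is 3x the baseline noise."""
    return 3.0 * noise_sd / calibration_slope(concs, responses)
```

With a perfectly linear calibration of slope 5 (signal units per μg mL⁻¹) and a baseline noise of 10 signal units, the LOD comes out at 6 μg mL⁻¹.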

  6. Considerations on methodological challenges for water footprint calculations.

    PubMed

    Thaler, S; Zessner, M; De Lis, F Bertran; Kreuzinger, N; Fehringer, R

    2012-01-01

    We have investigated how different approaches for water footprint (WF) calculations lead to different results, taking sugar beet production and sugar refining as examples. To a large extent, results obtained from any WF calculation are reflective of the method used and the assumptions made. Real irrigation data for 59 European sugar beet growing areas showed inadequate estimation of irrigation water when a widely used simple approach was used. The method resulted in an overestimation of blue water and an underestimation of green water usage. Dependent on the chosen (available) water quality standard, the final grey WF can differ up to a factor of 10 and more. We conclude that further development and standardisation of the WF is needed to reach comparable and reliable results. A special focus should be on standardisation of the grey WF methodology based on receiving water quality standards.
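The factor-of-10 sensitivity to the chosen quality standard follows directly from the usual grey-water-footprint definition, grey WF = L/(c_max − c_nat). A sketch (standard formula; the numbers are illustrative, not from the study):

```python
def grey_water_footprint(load_kg, c_max_mg_per_l, c_nat_mg_per_l):
    """Water volume (m^3) needed to dilute a pollutant load to the standard.

    load_kg:  pollutant load entering the water body
    c_max:    ambient water quality standard (mg/L)
    c_nat:    natural background concentration (mg/L)
    """
    load_mg = load_kg * 1e6
    litres = load_mg / (c_max_mg_per_l - c_nat_mg_per_l)
    return litres / 1000.0
```

A 1 kg load against a 10 mg/L standard needs 100 m³ of dilution water; tightening the standard to 1 mg/L inflates the grey WF tenfold, illustrating the dependence on the chosen standard that the abstract reports.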

  7. Optical model calculations of heavy-ion target fragmentation

    NASA Technical Reports Server (NTRS)

    Townsend, L. W.; Wilson, J. W.; Cucinotta, F. A.; Norbury, J. W.

    1986-01-01

    The fragmentation of target nuclei by relativistic protons and heavy ions is described within the context of a simple abrasion-ablation-final-state interaction model. Abrasion is described by a quantum mechanical formalism utilizing an optical model potential approximation. Nuclear charge distributions of the excited prefragments are calculated by both a hypergeometric distribution and a method based upon the zero-point oscillations of the giant dipole resonance. Excitation energies are estimated from the excess surface energy resulting from the abrasion process and the additional energy deposited by frictional spectator interactions of the abraded nucleons. The ablation probabilities are obtained from the EVA-3 computer program. Isotope production cross sections for the spallation of copper targets by relativistic protons and for the fragmenting of carbon targets by relativistic carbon, neon, and iron projectiles are calculated and compared with available experimental data.

  8. Understanding thio-effects in simple phosphoryl systems: role of solvent effects and nucleophile charge† †Electronic supplementary information (ESI) available: A breakdown of calculated activation free energies shown in Table 1, as well as absolute energies and Cartesian coordinates of all key species in this work are presented as ESI. See DOI: 10.1039/c5ob00309a Click here for additional data file.

    PubMed Central

    Carvalho, Alexandra T. P.; O'Donoghue, AnnMarie C.; Hodgson, David R. W.

    2015-01-01

    Recent experimental work (J. Org. Chem., 2012, 77, 5829) demonstrated pronounced differences in measured thio-effects for the hydrolysis of (thio)phosphodichloridates by water and hydroxide nucleophiles. In the present work, we have performed detailed quantum chemical calculations of these reactions, with the aim of rationalizing the molecular bases for this discrimination. The calculations highlight the interplay between nucleophile charge and transition state solvation in SN2(P) mechanisms as the basis of these differences, rather than a change in mechanism. PMID:25797408

  9. Measurement uncertainty of liquid chromatographic analyses visualized by Ishikawa diagrams.

    PubMed

    Meyer, Veronika R

    2003-09-01

    Ishikawa, or cause-and-effect, diagrams help to visualize the parameters that influence a chromatographic analysis. They therefore facilitate the setup of the uncertainty budget of the analysis, which can then be expressed in mathematical form. If the uncertainty is calculated as the Gaussian sum of all uncertainty parameters, it is necessary to quantify them all, a task that is usually not practical. The other possible approach is to use the intermediate precision as the basis for the uncertainty calculation. In this case, it is at least necessary to consider the uncertainty of the purity of the reference material in addition to the precision data. The Ishikawa diagram is then very simple, and so is the uncertainty calculation. This advantage comes at the price of losing information about the parameters that influence the measurement uncertainty.
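The "Gaussian sum" combination mentioned above is the usual root-sum-of-squares of independent uncertainty components. A minimal sketch of the simplified budget (intermediate precision plus reference-material purity; the values are hypothetical):

```python
import math

def combined_uncertainty(components):
    """Root-sum-of-squares of independent (relative) uncertainty components."""
    return math.sqrt(sum(u * u for u in components))

def expanded_uncertainty(components, k=2.0):
    """Expanded uncertainty with coverage factor k (k = 2 ~ 95 % coverage)."""
    return k * combined_uncertainty(components)
```

With an intermediate precision of 3 % and a reference purity uncertainty of 4 %, the combined relative uncertainty is 5 % and the expanded (k = 2) uncertainty is 10 %.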

  10. Numerical calculation of protein-ligand binding rates through solution of the Smoluchowski equation using smoothed particle hydrodynamics

    DOE PAGES

    Pan, Wenxiao; Daily, Michael; Baker, Nathan A.

    2015-05-07

    Background: The calculation of diffusion-controlled ligand binding rates is important for understanding enzyme mechanisms as well as designing enzyme inhibitors. Methods: We demonstrate the accuracy and effectiveness of a Lagrangian particle-based method, smoothed particle hydrodynamics (SPH), to study diffusion in biomolecular systems by numerically solving the time-dependent Smoluchowski equation for continuum diffusion. Unlike previous studies, a reactive Robin boundary condition (BC), rather than the absolute absorbing (Dirichlet) BC, is considered on the reactive boundaries. This new BC treatment allows for the analysis of enzymes with “imperfect” reaction rates. Results: The numerical method is first verified in simple systems and then applied to the calculation of ligand binding to a mouse acetylcholinesterase (mAChE) monomer. Rates for inhibitor binding to mAChE are calculated at various ionic strengths and compared with experiment and other numerical methods. We find that imposition of the Robin BC improves agreement between calculated and experimental reaction rates. Conclusions: Although this initial application focuses on a single monomer system, our new method provides a framework to explore broader applications of SPH in larger-scale biomolecular complexes by taking advantage of its Lagrangian particle-based nature.

  11. Accelerating potential of mean force calculations for lipid membrane permeation: System size, reaction coordinate, solute-solute distance, and cutoffs

    NASA Astrophysics Data System (ADS)

    Nitschke, Naomi; Atkovska, Kalina; Hub, Jochen S.

    2016-09-01

    Molecular dynamics simulations are capable of predicting the permeability of lipid membranes for drug-like solutes, but the calculations have remained prohibitively expensive for high-throughput studies. Here, we analyze simple measures for accelerating potential of mean force (PMF) calculations of membrane permeation, namely, (i) using smaller simulation systems, (ii) simulating multiple solutes per system, and (iii) using shorter cutoffs for the Lennard-Jones interactions. We find that PMFs for membrane permeation are remarkably robust against alterations of such parameters, suggesting that accurate PMF calculations are possible at strongly reduced computational cost. In addition, we evaluated the influence of the definition of the membrane center of mass (COM), used to define the transmembrane reaction coordinate. Membrane-COM definitions based on all lipid atoms lead to artifacts due to undulations and, consequently, to PMFs dependent on membrane size. In contrast, COM definitions based on a cylinder around the solute lead to size-independent PMFs, down to systems of only 16 lipids per monolayer. In summary, compared to popular setups that simulate a single solute in a membrane of 128 lipids with a Lennard-Jones cutoff of 1.2 nm, the measures applied here yield a speedup in sampling by factor of ˜40, without reducing the accuracy of the calculated PMF.

  12. An analytical method to calculate equivalent fields to irregular symmetric and asymmetric photon fields.

    PubMed

    Tahmasebi Birgani, Mohamad J; Chegeni, Nahid; Zabihzadeh, Mansoor; Hamzian, Nima

    2014-01-01

    Equivalent field is frequently used for central-axis depth-dose calculations of rectangular- and irregular-shaped photon beams. As most of the proposed models to calculate the equivalent square field are dosimetry based, a simple physics-based method to calculate the equivalent square field size was used as the basis of this study. The table of the sides of the equivalent square or rectangular fields was constructed and then compared with the well-known tables of BJR and Venselaar et al., with average relative error percentages of 2.5 ± 2.5% and 1.5 ± 1.5%, respectively. To evaluate the accuracy of this method, the percentage depth doses (PDDs) were measured for some special irregular symmetric and asymmetric treatment fields and their equivalent squares on a Siemens Primus Plus linear accelerator for both energies, 6 and 18 MV. The mean relative difference of the PDD measurements for these fields and their equivalent squares was approximately 1% or less. As a result, this method can be employed to calculate the equivalent field not only for rectangular fields but also for any irregular symmetric or asymmetric field. © 2013 American Association of Medical Dosimetrists. All rights reserved.
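A common physical basis for equivalent squares is the area-to-perimeter rule, s = 4A/P = 2ab/(a + b) for an a × b rectangle; the paper's exact method may differ, but the rule illustrates the idea:

```python
def equivalent_square_side(a, b):
    """Side of the equivalent square of an a x b rectangular field (4A/P rule).

    A square field of this side is assumed to give approximately the same
    central-axis depth dose as the rectangle.
    """
    return 2.0 * a * b / (a + b)
```

A 10 × 10 field is its own equivalent square, while a 5 × 20 field maps to an 8 × 8 square rather than to the 10 × 10 field of equal area.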

  13. An ongoing six-year innovative osteoporosis disease management program: challenges and success in an IPA physician group environment.

    PubMed

    Woo, Ann; Hittell, Jodi; Beardsley, Carrie; Noh, Charles; Stoukides, Cheryl A; Kaul, Alan F

    2004-01-01

    The goal of this ongoing comprehensive osteoporosis disease management initiative is to provide the adult primary care physicians' (PCPs) offices with a program enabling them to systematically identify and manage their population for osteoporosis. For over six years, Hill Physicians Medical Group (Hill Physicians) has implemented multiple strategies to develop a best practice for identifying and treating members who were candidates for osteoporosis therapy. Numerous tools were used to support this disease management effort, including: evidence-based clinical practice guidelines, patient education sessions, the Simple Calculated Osteoporosis Risk Estimation (SCORE) questionnaire tool, member specific reports for PCPs, targeted member mailings, office-based Peripheral Instantaneous X-ray Imaging (PIXI) test and counseling, dual x-ray absorptiometry (DEXA) scan guidelines, and web-based Electronic Simple Calculated Osteoporosis Risk Estimation (eSCORE) questionnaire tools. Hill Physicians tabulated results for patients who completed 2649 SCORE tests, screened 978 patients with PIXI tests, and identified 338 osteopenic and 124 osteoporotic patients. The preliminary results of this unique six-year ongoing educational initiative are slow but promising. New physician offices express interest in participating and those offices that have participated in the program continue to screen for osteoporosis. Hill Physicians' message is consistent and is communicated to the physicians repeatedly in different ways in accordance with the principles of educational outreach. Physicians who have conducted the program have positive feedback from their patients and office staff and have begun to communicate their experience to their peers.

  14. National Stormwater Calculator User's Guide – VERSION 1.1

    EPA Science Inventory

    This document is the user's guide for running EPA's National Stormwater Calculator (http://www.epa.gov/nrmrl/wswrd/wq/models/swc/). The National Stormwater Calculator is a simple to use tool for computing small site hydrology for any location within the US.

  15. Infrared properties of three plastic bonded explosive binders

    DOE PAGES

    Hoffman, D. Mark

    2017-08-02

    Here, three polymers are routinely used as binders for plastic bonded explosives by Lawrence Livermore National Laboratory, FK-800, Viton A 100, and Oxy 461. Attenuated total reflectance Fourier transform infrared measurements were performed on 10 different lots of FK-800, 5 different lots of Oxy 461, and 3 different lots of Viton A-100, one sample of Viton VTR 5883 and 2 Fluorel polymers of hexafluoropropene and vinylidene fluoride. The characteristic IR bands were measured. If possible, their vibrational modes were assigned based on literature data. Simple Mopac calculations were used to validate these vibrational mode assignments. Somewhat more sophisticated calculations weremore » run using Gaussian on the same structures.« less

  16. Electrostatics of electron-hole interactions in van der Waals heterostructures

    NASA Astrophysics Data System (ADS)

    Cavalcante, L. S. R.; Chaves, A.; Van Duppen, B.; Peeters, F. M.; Reichman, D. R.

    2018-03-01

    The role of dielectric screening of electron-hole interaction in van der Waals heterostructures is theoretically investigated. A comparison between models available in the literature for describing these interactions is made and the limitations of these approaches are discussed. A simple numerical solution of Poisson's equation for a stack of dielectric slabs based on a transfer matrix method is developed, enabling the calculation of the electron-hole interaction potential at very low computational cost and with reasonable accuracy. Using different potential models, direct and indirect exciton binding energies in these systems are calculated within Wannier-Mott theory, and a comparison of theoretical results with recent experiments on excitons in two-dimensional materials is discussed.

  17. Efficacy of dialysis in peritoneal dialysis: utility of bioimpedance to calculate Kt/V and the search for a target Kt.

    PubMed

    Martínez Fernández, G; Ortega Cerrato, A; Masiá Mondéjar, J; Pérez Rodríguez, A; Llamas Fuentes, F; Gómez Roldán, C; Pérez-Martínez, Juan

    2013-04-01

    To calculate Kt/V, volume (V) is usually obtained by the Watson formula, but bioimpedance spectroscopy (BIS) is a simple and applicable technique to determine V, along with other hydration and nutrition parameters, in peritoneal dialysis (PD) patients. Dialysis efficacy can also be measured with Kt, but no experience exists in PD, so there is no reference/target value of Kt that must be achieved in these patients for them to be considered adequately dialyzed. We evaluated the efficacy of PD with Kt/V using the Watson formula and BIS for the calculation of V, assessed hydration status in a PD unit using data obtained by BIS, and attempted to find a reference Kt from the Kt/V previously obtained by BIS. In this observational prospective study of 78 PD patients, we measured V using BIS (Vbis) and the Watson formula (Vw) and calculated weekly Kt/V using both volumes (Kt/Vbis and Kt/Vw). With the BIS technique, we obtained and subsequently analyzed other hydration status parameters. We obtained a reference Kt by extrapolating the desired value (weekly Kt/V of 1.7) to the target Kt using simple linear regression, based on the previously calculated Pearson's linear correlation coefficient. Volume was 1.8 l higher by the Watson formula than with BIS (p < 0.001). Weekly Kt/Vbis was 2.33 ± 0.68, and mean weekly Kt/Vw was 2.20 ± 0.63 (p < 0.0001); 60.25% of patients presented overhydration according to the BIS study (OH > 1.1 l). The target value of Kt for the reference weekly Kt/Vbis (1.7) was 64.87 l. BIS is a simple, applicable technique for calculating V in dialysis that can be especially useful in PD patients compared with the anthropometric formulas, given the abnormally distributed body water in these patients. Other parameters obtained by BIS will serve to assess both the distribution of body volume and nutritional status in the clinical setting. The target Kt value obtained from Kt/Vbis allowed us to measure the efficacy of PD in a practical way, omitting the V measurement.
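Extrapolating a target Kt from a desired weekly Kt/V of 1.7 amounts to a simple linear regression of delivered Kt (= Kt/V × V) on Kt/V across patients. A sketch with made-up data (all cohort numbers hypothetical):

```python
def linear_regression(xs, ys):
    """Ordinary least-squares fit y = a + b*x; returns (a, b)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    return my - b * mx, b

# hypothetical cohort: weekly Kt/V (by BIS) and urea distribution volume V (L)
ktv = [1.6, 2.0, 2.3, 2.8]
volume = [30.0, 30.0, 30.0, 30.0]          # constant V for a clean illustration
kt = [k * v for k, v in zip(ktv, volume)]  # delivered weekly Kt (L)

a, b = linear_regression(ktv, kt)
target_kt = a + b * 1.7                    # Kt corresponding to Kt/V = 1.7
```

With the constant-volume toy cohort the regression is exact and the target Kt is simply 1.7 × 30 = 51 L; in a real cohort the scatter in V is what makes the regression step necessary.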

  18. An accurate model for the computation of the dose of protons in water.

    PubMed

    Embriaco, A; Bellinzona, V E; Fontana, A; Rotondi, A

    2017-06-01

    The accurate and fast calculation of the dose in proton radiation therapy is an essential ingredient for successful treatments. We propose a novel approach with a minimal number of parameters. The approach is based on the exact calculation of the electromagnetic part of the interaction, namely the Molière theory of multiple Coulomb scattering for the transversal 1D projection and the Bethe-Bloch formula for the longitudinal stopping power profile, including Gaussian energy straggling. To this e.m. contribution the nuclear proton-nucleus interaction is added with a simple two-parameter model. Then, the non-Gaussian lateral profile is used to calculate the radial dose distribution with a method that assumes the cylindrical symmetry of the distribution. The results, obtained with a fast C++ based computational code called MONET (MOdel of ioN dosE for Therapy), are in very good agreement with the FLUKA MC code, within a few percent in the worst case. This study provides a new tool for fast dose calculation or verification, possibly for clinical use. Copyright © 2017 Associazione Italiana di Fisica Medica. Published by Elsevier Ltd. All rights reserved.

  19. Determination of the distribution constants of aromatic compounds and steroids in biphasic micellar phosphonium ionic liquid/aqueous buffer systems by capillary electrokinetic chromatography.

    PubMed

    Lokajová, Jana; Railila, Annika; King, Alistair W T; Wiedmer, Susanne K

    2013-09-20

    The distribution constants of some analytes, closely connected to the petrochemical industry, between an aqueous phase and a phosphonium ionic liquid phase, were determined by ionic liquid micellar electrokinetic chromatography (MEKC). The phosphonium ionic liquids studied were the water-soluble tributyl(tetradecyl)phosphonium with chloride or acetate as the counter ion. The retention factors were calculated and used for determination of the distribution constants. For calculating the retention factors the electrophoretic mobilities of the ionic liquids were required, thus, we adopted the iterative process, based on a homologous series of alkyl benzoates. Calculation of the distribution constants required information on the phase-ratio of the systems. For this the critical micelle concentrations (CMC) of the ionic liquids were needed. The CMCs were calculated using a method based on PeakMaster simulations, using the electrophoretic mobilities of system peaks. The resulting distribution constants for the neutral analytes between the ionic liquid and the aqueous (buffer) phase were compared with octanol-water partitioning coefficients. The results indicate that there are other factors affecting the distribution of analytes between phases, than just simple hydrophobic interactions. Copyright © 2013 Elsevier B.V. All rights reserved.

  20. OrthoANI: An improved algorithm and software for calculating average nucleotide identity.

    PubMed

    Lee, Imchang; Ouk Kim, Yeong; Park, Sang-Cheol; Chun, Jongsik

    2016-02-01

    Species demarcation in Bacteria and Archaea is mainly based on overall genome relatedness, which serves as a framework for modern microbiology. Current practice for obtaining these measures between two strains is shifting from experimentally determined similarity obtained by DNA-DNA hybridization (DDH) to genome-sequence-based similarity. Average nucleotide identity (ANI) is a simple algorithm that mimics DDH. Like DDH, ANI values between two genome sequences may differ when the reciprocal calculations are compared. We compared 63 690 pairs of genome sequences and found that the differences in reciprocal ANI values can be significant, exceeding 1 % in some cases. To resolve this lack of symmetry, a new algorithm, named OrthoANI, was developed to accommodate the concept of orthology: both genome sequences are fragmented, and only orthologous fragment pairs are taken into consideration for calculating nucleotide identities. OrthoANI is highly correlated with ANI (using BLASTn), the former showing approximately 0.1 % higher values than the latter. In conclusion, OrthoANI provides a more robust and faster means of calculating average nucleotide identity for taxonomic purposes. The standalone software tools are freely available at http://www.ezbiocloud.net/sw/oat.
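A toy version of the symmetrization idea — fragment both sequences, keep reciprocal best-matching fragment pairs, and average the identities in both directions — can be sketched as follows (gap-free equal-length fragments and naive per-position identity stand in for BLASTn; purely illustrative, not the OrthoANI implementation):

```python
def fragments(seq, size):
    """Non-overlapping fragments of fixed size (remainder discarded)."""
    return [seq[i:i + size] for i in range(0, len(seq) - size + 1, size)]

def identity(a, b):
    """Fraction of matching positions between two equal-length fragments."""
    return sum(x == y for x, y in zip(a, b)) / len(a)

def best_hits(frags_a, frags_b):
    """For each fragment of A, the index and identity of its best match in B."""
    return {i: max(((j, identity(fa, fb)) for j, fb in enumerate(frags_b)),
                   key=lambda t: t[1])
            for i, fa in enumerate(frags_a)}

def ortho_ani(seq_a, seq_b, size=4):
    """Average identity over reciprocal best-hit fragment pairs (symmetric)."""
    fa, fb = fragments(seq_a, size), fragments(seq_b, size)
    ab, ba = best_hits(fa, fb), best_hits(fb, fa)
    pairs = [(i, j) for i, (j, _) in ab.items() if ba[j][0] == i]
    if not pairs:
        return 0.0
    return sum((ab[i][1] + ba[j][1]) / 2 for i, j in pairs) / len(pairs)
```

Because each retained pair contributes the mean of both directional identities, the result is the same whichever genome is queried first, which is exactly the asymmetry problem the record describes.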

  1. Benchmarking the Bethe–Salpeter Formalism on a Standard Organic Molecular Set

    PubMed Central

    2015-01-01

    We perform benchmark calculations of the Bethe–Salpeter vertical excitation energies for the set of 28 molecules constituting the well-known Thiel’s set, complemented by a series of small molecules representative of the dye chemistry field. We show that Bethe–Salpeter calculations based on a molecular orbital energy spectrum obtained with non-self-consistent G0W0 calculations starting from semilocal DFT functionals dramatically underestimate the transition energies. Starting from the popular PBE0 hybrid functional significantly improves the results, even though this leads to an average −0.59 eV redshift compared to reference calculations for Thiel’s set. It is shown, however, that a simple self-consistent scheme at the GW level, with an update of the quasiparticle energies, not only leads to a much better agreement with reference values, but also significantly reduces the impact of the starting DFT functional. On average, the Bethe–Salpeter scheme based on self-consistent GW calculations comes close to the best time-dependent DFT calculations with the PBE0 functional, with a 0.98 correlation coefficient and a 0.18 (0.25) eV mean absolute deviation compared to TD-PBE0 (theoretical best estimates), with a tendency to be red-shifted. We also observe that TD-DFT and the standard adiabatic Bethe–Salpeter implementation may differ significantly for states with a large multiple-excitation character. PMID:26207104

  2. Sunspots and Their Simple Harmonic Motion

    ERIC Educational Resources Information Center

    Ribeiro, C. I.

    2013-01-01

    In this paper an example of a simple harmonic motion, the apparent motion of sunspots due to the Sun's rotation, is described, which can be used to teach this subject to high-school students. Using real images of the Sun, students can calculate the star's rotation period with the simple harmonic motion mathematical expression.
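For a sunspot carried across the disc by rotation, the apparent displacement from the central meridian is x(t) = R sin(2πt/T), so a single (t, x/R) measurement, with t = 0 taken at the meridian crossing, yields the rotation period. A sketch of the classroom calculation (hypothetical helper name):

```python
import math

def rotation_period(t_days, x_over_r):
    """Solve x/R = sin(2*pi*t/T) for the period T.

    t_days:   days since the sunspot crossed the central meridian
    x_over_r: apparent displacement as a fraction of the solar radius
    """
    return 2.0 * math.pi * t_days / math.asin(x_over_r)
```

For instance, a spot displaced by about 0.64 solar radii three days after crossing the central meridian implies a period of about 27 days, close to the Sun's synodic rotation period at low latitudes.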

  3. Making Temporal Logic Calculational: A Tool for Unification and Discovery

    NASA Astrophysics Data System (ADS)

    Boute, Raymond

    In temporal logic, calculational proofs beyond simple cases are often seen as challenging. The situation is reversed by making temporal logic calculational, yielding shorter and clearer proofs than traditional ones, and serving as a (mental) tool for unification and discovery. A side-effect of unifying theories is easier access by practitioners. The starting point is a simple generic (software-tool-independent) Functional Temporal Calculus (FTC). Specific temporal logics are then captured via endosemantic functions. This concept reflects tacit conventions throughout mathematics and, once identified, is general and useful. FTC also yields a reasoning style that helps discovering theorems by calculation rather than just proving given facts. This is illustrated by deriving various theorems, most related to liveness issues in TLA+, and finding strengthenings of known results. Educational issues are addressed in passing.

  4. Simple systematization of vibrational excitation cross-section calculations for resonant electron-molecule scattering in the boomerang and impulse models.

    PubMed

    Sarma, Manabendra; Adhikari, S; Mishra, Manoj K

    2007-01-28

    Vibrational excitation (ν_f ← ν_i) cross-sections σ_{ν_f←ν_i}(E) in resonant e-N₂ and e-H₂ scattering are calculated from transition matrix elements T_{ν_f,ν_i}(E) obtained using a Fourier transform of the cross-correlation function ⟨φ_{ν_f}|ψ_{ν_i}(t)⟩, where ψ_{ν_i}(R,t) ≈ e^{-iH_{A₂⁻}t/ħ} φ_{ν_i}(R), with time evolution under the influence of the resonance anionic Hamiltonian H_{A₂⁻} (A₂⁻ = N₂⁻/H₂⁻) implemented using Lanczos and fast Fourier transforms. The target (A₂) vibrational eigenfunctions φ_{ν_i}(R) and φ_{ν_f}(R) are calculated using the Fourier grid Hamiltonian method applied to potential energy (PE) curves of the neutral target. Application of this simple systematization to calculate vibrational structure in e-N₂ and e-H₂ scattering cross-sections provides mechanistic insight into the features underlying the presence/absence of structure in e-N₂ and e-H₂ scattering cross-sections. The results obtained with approximate PE curves are in reasonable agreement with experimental/calculated cross-section profiles, and the cross-correlation functions provide a simple demarcation between the boomerang and impulse models.

  5. Automating Embedded Analysis Capabilities and Managing Software Complexity in Multiphysics Simulation, Part I: Template-Based Generic Programming

    DOE PAGES

    Pawlowski, Roger P.; Phipps, Eric T.; Salinger, Andrew G.

    2012-01-01

    An approach for incorporating embedded simulation and analysis capabilities in complex simulation codes through template-based generic programming is presented. This approach relies on templating and operator overloading within the C++ language to transform a given calculation into one that can compute a variety of additional quantities that are necessary for many state-of-the-art simulation and analysis algorithms. An approach for incorporating these ideas into complex simulation codes through general graph-based assembly is also presented. These ideas have been implemented within a set of packages in the Trilinos framework and are demonstrated on a simple problem from chemical engineering.
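
    The core mechanism — overloading arithmetic so that one templated calculation also propagates derivative information — can be shown in miniature. The paper works in C++ with Trilinos; the `Dual` class below is a hypothetical Python stand-in for the same operator-overloading idea, not Trilinos code:

```python
class Dual:
    """Minimal forward-mode AD value: carries f and df/dx together."""
    def __init__(self, val, der=0.0):
        self.val, self.der = val, der

    def __add__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        return Dual(self.val + other.val, self.der + other.der)
    __radd__ = __add__

    def __mul__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        # Product rule, applied automatically wherever '*' appears.
        return Dual(self.val * other.val,
                    self.der * other.val + self.val * other.der)
    __rmul__ = __mul__

def residual(x):
    # A model residual written once; the same code yields derivatives
    # when fed Dual values instead of plain floats.
    return x * x + 3.0 * x + 2.0

x = Dual(2.0, 1.0)   # seed dx/dx = 1
r = residual(x)
print(r.val, r.der)  # value 12.0, derivative 2*x + 3 = 7.0
```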

  6. Divergence thrust loss calculations for convergent-divergent nozzles: Extensions to the classical case

    NASA Technical Reports Server (NTRS)

    Berton, Jeffrey J.

    1991-01-01

    The analytical derivations of the non-axial thrust divergence losses for convergent-divergent nozzles are described as well as how these calculations are embodied in the Navy/NASA engine computer program. The convergent-divergent geometries considered are simple classic axisymmetric nozzles, two dimensional rectangular nozzles, and axisymmetric and two dimensional plug nozzles. A simple, traditional, inviscid mathematical approach is used to deduce the influence of the ineffectual non-axial thrust as a function of the nozzle exit divergence angle.
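
    For the simple classic axisymmetric (conical) case, the standard inviscid result is a divergence factor λ = (1 + cos α)/2, where α is the exit divergence half-angle. A quick sketch of that textbook formula (not the Navy/NASA engine program itself):

```python
import math

def divergence_factor_conical(alpha_deg):
    """Classical axial-thrust divergence factor for a conical C-D nozzle:
    lambda = (1 + cos(alpha)) / 2, alpha = exit divergence half-angle."""
    a = math.radians(alpha_deg)
    return 0.5 * (1.0 + math.cos(a))

for alpha in (0.0, 15.0, 30.0):
    print(f"{alpha:5.1f} deg -> {divergence_factor_conical(alpha):.4f}")
```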

  7. Effect of steam addition on cycle performance of simple and recuperated gas turbines

    NASA Technical Reports Server (NTRS)

    Boyle, R. J.

    1979-01-01

    Results are presented for the cycle efficiency and specific power of simple and recuperated gas turbine cycles in which steam is generated and used to increase turbine flow. Calculations showed significant improvements in cycle efficiency and specific power by adding steam. The calculations were made using component efficiencies and loss assumptions typical of stationary powerplants. These results are presented for a range of operating temperatures and pressures. Relative heat exchanger size and the water use rate are also examined.

  8. Learning to Calculate and Learning Mathematics.

    ERIC Educational Resources Information Center

    Fearnley-Sander, Desmond

    1980-01-01

    A calculator solution of a simple computational problem is discussed with emphasis on its ramifications for the understanding of some fundamental theorems of pure mathematics and techniques of computing. (Author/MK)

  9. A novel hazard assessment method for biomass gasification stations based on extended set pair analysis

    PubMed Central

    Yan, Fang; Xu, Kaili; Li, Deshun; Cui, Zhikai

    2017-01-01

    Biomass gasification stations face many hazard factors, so it is necessary to perform hazard assessments for them. In this study, a novel hazard assessment method called extended set pair analysis (ESPA) is proposed based on set pair analysis (SPA). In SPA, calculation of the connection degree (CD) requires classification of hazard grades and their corresponding thresholds. For hazard assessment with ESPA, a novel algorithm for calculating the CD is worked out for the case where hazard grades and their corresponding thresholds are unknown. The CD is then converted into a Euclidean distance (ED) by a simple and concise calculation, and the hazard of each sample is ranked by the value of its ED. In this paper, six biomass gasification stations are assessed using ESPA and general set pair analysis (GSPA), respectively. Comparison of the hazard assessment results obtained from ESPA and GSPA demonstrates the availability and validity of ESPA for hazard assessment of biomass gasification stations. Meanwhile, the reasonableness of ESPA is also justified by a sensitivity analysis of the hazard assessment results obtained by ESPA and GSPA. PMID:28938011

  10. Resonant Drag Instability of Grains Streaming in Fluids

    NASA Astrophysics Data System (ADS)

    Squire, J.; Hopkins, P. F.

    2018-03-01

    We show that grains streaming through a fluid are generically unstable if their velocity, projected along some direction, matches the phase velocity of a fluid wave (linear oscillation). This can occur whenever grains stream faster than any fluid wave. The wave itself can be quite general—sound waves, magnetosonic waves, epicyclic oscillations, and Brunt–Väisälä oscillations each generate instabilities, for example. We derive a simple expression for the growth rates of these “resonant drag instabilities” (RDI). This expression (i) illustrates why such instabilities are so virulent and generic and (ii) allows for simple analytic computation of RDI growth rates and properties for different fluids. As examples, we introduce several new instabilities, which could see application across a variety of physical systems from atmospheres to protoplanetary disks, the interstellar medium, and galactic outflows. The matrix-based resonance formalism we introduce can also be applied more generally in other (nonfluid) contexts, providing a simple means for calculating and understanding the stability properties of interacting systems.

  11. A simple algorithm for distance estimation without radar and stereo vision based on the bionic principle of bee eyes

    NASA Astrophysics Data System (ADS)

    Khamukhin, A. A.

    2017-02-01

    Simple navigation algorithms are needed for small autonomous unmanned aerial vehicles (UAVs). Such algorithms can be implemented in a small microprocessor with low power consumption, helping to reduce the weight of the UAV's computing equipment and to increase its flight range. The proposed algorithm uses only the number of opaque channels (ommatidia in bees) through which a target can be seen as an observer moves from location 1 to location 2 toward the target. The distance estimate is given relative to the distance between locations 1 and 2. A simple scheme of an apposition compound eye is proposed to develop the calculation formula. Analysis of the distance estimation error shows that it decreases with an increase in the total number of opaque channels, up to a certain limit. An acceptable error of about 2% is achieved with an angle of view of 3 to 10° when the total number of opaque channels is 21,600.
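
    The abstract does not reproduce the paper's formula, but the stated idea — channel counts proportional to the target's angular size, with distance reported relative to the baseline between the two locations — can be sketched with similar triangles. The function below is our illustrative reconstruction, not the author's algorithm:

```python
def distance_over_baseline(n1, n2):
    """Estimate D1 / b from opaque-channel counts n1 (at location 1) and
    n2 (at location 2, a baseline b closer to the target).  With angular
    size theta ~ n * dphi and a fixed target width, similar triangles give
    D1 = b * n2 / (n2 - n1)."""
    if n2 <= n1:
        raise ValueError("moving toward the target must increase the count")
    return n2 / (n2 - n1)

# Synthetic check: target width 1 m, channel pitch 1e-4 rad,
# true distance 100 m, baseline 20 m -> counts 100 and 125.
print(distance_over_baseline(100, 125) * 20.0)  # -> 100.0
```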

  12. Programmable calculator uses equation to figure steady-state gas-pipeline flow

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Holmberg, E.

    Because it is accurate and consistent over a wide range of variables, the Colebrook-White (C-W) formula serves as the basis for many methods of calculating turbulent flow in gas pipelines. Oilconsult reveals a simple way to adapt the C-W formula to calculate steady-state pipeline flow using the TI-59 programmable calculator.
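
    Because the C-W formula is implicit in the Darcy friction factor f, any implementation must iterate. A minimal fixed-point solve of 1/√f = -2 log10(ε/D / 3.7 + 2.51/(Re √f)) — the standard formula, not the TI-59 program from the article:

```python
import math

def colebrook_friction(re, rel_rough, iters=50):
    """Solve the Colebrook-White relation
    1/sqrt(f) = -2 log10(rel_rough/3.7 + 2.51/(Re sqrt(f)))
    by fixed-point iteration on x = 1/sqrt(f)."""
    x = 7.0  # reasonable starting guess for turbulent flow
    for _ in range(iters):
        x = -2.0 * math.log10(rel_rough / 3.7 + 2.51 * x / re)
    return 1.0 / (x * x)

f = colebrook_friction(1.0e5, 1.0e-4)
print(f"f = {f:.5f}")
```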

  13. Theory of Auger core-valence-valence processes in simple metals. II. Dynamical and surface effects on Auger line shapes

    NASA Astrophysics Data System (ADS)

    Almbladh, C.-O.; Morales, A. L.

    1989-02-01

    Auger CVV spectra of simple metals are generally believed to be well described by one-electron-like theories in the bulk which account for matrix elements and, in some cases, also static core-hole screening effects. We present here detailed calculations on Li, Be, Na, Mg, and Al using self-consistent bulk wave functions and proper matrix elements. The resulting spectra differ markedly from experiment and peak at too low energies. To explain this discrepancy we investigate effects of the surface and dynamical effects of the sudden disappearance of the core hole in the final state. To study core-hole effects we solve the Mahan-Nozières-De Dominicis (MND) model numerically over the entire band. The core-hole potential and other parameters in the MND model are determined by self-consistent calculations of the core-hole impurity. The results are compared with simpler approximations based on the final-state rule due to von Barth and Grossmann. To study surface and mean-free-path effects we perform slab calculations for Al but use a simpler infinite-barrier model in the remaining cases. The model reproduces the slab spectra for Al with very good accuracy. In all cases investigated, either the effects of the surface or the effects of the core hole give important modifications and a much improved agreement with experiment.

  14. Simple Map in Action-Angle Coordinates.

    NASA Astrophysics Data System (ADS)

    Kerwin, Olivia; Punjabi, Alkesh; Ali, Halima

    2008-04-01

    The simple map is the simplest map that has the topology of a divertor tokamak. The simple map has three canonical representations: (i) the natural coordinates - toroidal magnetic flux and poloidal angle (ψ,θ), (ii) the physical coordinates - the physical variables (R,Z) or (X,Y), and (iii) the action-angle coordinates - (J,θ) or magnetic coordinates (ψ,θ). All three are canonical coordinates for field lines. The simple map in the (X,Y) representation has been studied extensively [1, 2]. Here we analytically calculate the action-angle coordinates and safety factor q for the simple map. We construct the equilibrium generating function for the simple map in action-angle coordinates. We derive the simple map in action-angle representation, and calculate the stochastic broadening of the ideal separatrix due to topological noise in action-angle representation. We also show how geometric effects such as elongation and the height and width of the ideal separatrix surface can be investigated using a slight modification of the simple map in action-angle representation. This work is supported by the following grants: US Department of Energy - OFES DE-FG02-01ER54624 and DE-FG02-04ER54793, and National Science Foundation - HRD-0630372 and 0411394. [1] A. Punjabi, H. Ali, T. Evans, and A. Boozer, Phys. Lett. A 364, 140-145 (2007). [2] A. Punjabi, A. Verma, and A. Boozer, Phys. Rev. Lett. 69, 3322 (1992).

  15. Calculation of Temperature Rise in Calorimetry.

    ERIC Educational Resources Information Center

    Canagaratna, Sebastian G.; Witt, Jerry

    1988-01-01

    Gives a simple but fuller account of the basis for accurately calculating temperature rise in calorimetry. Points out some misconceptions regarding these calculations. Describes two basic methods, the extrapolation to zero time and the equal area method. Discusses the theoretical basis of each and their underlying assumptions. (CW)
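
    The extrapolation-to-zero-time method the abstract names can be sketched numerically: fit linear drift lines to the fore and after periods and extrapolate both to the time of mixing; their gap is the corrected temperature rise (illustrative code with made-up data):

```python
import numpy as np

def corrected_rise(t_pre, T_pre, t_post, T_post, t_mix):
    """Fit linear drift lines to the fore and after periods and
    extrapolate both to the time of mixing; their gap is delta-T."""
    b1, a1 = np.polyfit(t_pre, T_pre, 1)   # slope, intercept (pre)
    b2, a2 = np.polyfit(t_post, T_post, 1) # slope, intercept (post)
    return (a2 + b2 * t_mix) - (a1 + b1 * t_mix)

# Synthetic run: baseline drifting +0.001 K/min, true jump 2.5 K at
# t = 5 min, after-period drifting -0.002 K/min.
t1 = np.arange(0.0, 5.0, 0.5)
t2 = np.arange(6.0, 11.0, 0.5)
T1 = 25.0 + 0.001 * t1
T2 = 25.0 + 0.001 * 5.0 + 2.5 - 0.002 * (t2 - 5.0)
print(round(corrected_rise(t1, T1, t2, T2, 5.0), 6))  # -> 2.5
```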

  16. Solvent-Ion Interactions in Salt Water: A Simple Experiment.

    ERIC Educational Resources Information Center

    Willey, Joan D.

    1984-01-01

    Describes a procedurally quick, simple, and inexpensive experiment which illustrates the magnitude and some effects of solvent-ion interactions in aqueous solutions. Theoretical information, procedures, and examples of temperature, volume and hydration number calculations are provided. (JN)

  17. Experiment D009: Simple navigation

    NASA Technical Reports Server (NTRS)

    Silva, R. M.; Jorris, T. R.; Vallerie, E. M., III

    1971-01-01

    Space position-fixing techniques were investigated by collecting data on observable phenomena of space flight that could be used to solve the problem of autonomous navigation, with optical data and manual computations used to calculate the position of a spacecraft. After completion of the developmental and test phases, the product of the experiment would be a manual-optical technique of orbital space navigation that could serve as a backup to onboard and ground-based spacecraft-navigation systems.

  18. Central arterial pressure assessment with intensity POF sensor

    NASA Astrophysics Data System (ADS)

    Leitão, Cátia; Gonçalves, Steve; Antunes, Paulo; Bastos, José M.; Pinto, João. L.; André, Paulo

    2015-09-01

    Central pressure monitoring is considered a new key factor in hypertension assessment and cardiovascular prevention. In this work, central arterial systolic pressure assessment with an intensity-based POF sensor is presented. The device was tested on four subjects, and stable pulse waves were obtained, allowing calculation of the central pressure for all subjects. The results show that the sensor performs reliably, offering a simple and low-cost solution for the intended application.

  19. Crystal structure refinement of reedmergnerite, the boron analog of albite

    USGS Publications Warehouse

    Clark, J.R.; Appleman, D.E.

    1960-01-01

    Ordering of boron in a feldspar crystallographic site T1(0) has been found in reedmergnerite, which has silicon-oxygen and sodium-oxygen distances comparable to those in isostructural low albite. If a simple ionic model is assumed, calculated bond strengths yield a considerable charge imbalance in reedmergnerite, an indication of the inadequacy of the model with respect to these complex structures and of the speculative nature of conclusions based on such a model.

  20. A pyrene-based fluorescent sensor for Zn2+ ions: a molecular 'butterfly'.

    PubMed

    Manandhar, Erendra; Broome, J Hugh; Myrick, Jalin; Lagrone, Whitney; Cragg, Peter J; Wallace, Karl J

    2011-08-21

    A simple pyrene-based triazole receptor has been synthesised and shown to self-assemble in the presence of ZnCl(2) in an exclusively 2:1 ratio, whereas a mixture of 2:1 and 1:1 ratios is observed for other Zn(2+) salts. The pyrene units are syn in orientation; this is supported by a strong excimer signal observed at 410 nm in the presence of ZnCl(2) in acetonitrile. DFT calculations and 2D NMR support the proposed structure.

  1. CFD Analysis for Assessing the Effect of Wind on the Thermal Control of the Mars Science Laboratory Curiosity Rover

    NASA Technical Reports Server (NTRS)

    Bhandari, Pradeep; Anderson, Kevin

    2013-01-01

    The challenging range of landing sites for which the Mars Science Laboratory Rover was designed requires a rover thermal management system that is capable of keeping temperatures controlled across a wide variety of environmental conditions. On the Martian surface, where temperatures can be as cold as -123 C and as warm as 38 C, the rover relies upon a Mechanically Pumped Fluid Loop (MPFL) Rover Heat Rejection System (RHRS) and external radiators to maintain the temperature of sensitive electronics and science instruments within a -40 C to 50 C range. The RHRS harnesses some of the waste heat generated by the rover power source, known as the Multi Mission Radioisotope Thermoelectric Generator (MMRTG), as survival heat for the rover during cold conditions. The MMRTG produces 110 W of electrical power while generating waste heat equivalent to approximately 2000 W. Heat exchanger plates (hot plates) positioned close to the MMRTG pick up this survival heat from it by radiative heat transfer. Winds on Mars can be as fast as 15 m/s for extended periods and can lead to significant heat loss from the MMRTG and the hot plates due to convective heat pick-up from these surfaces. This convective heat loss cannot be estimated accurately and adequately by simple textbook-based calculations because of the very complicated flow fields around these surfaces, which are a function of wind direction and speed. Accurate calculations necessitated the employment of sophisticated Computational Fluid Dynamics (CFD) computer codes. This paper describes the methodology and results of these CFD calculations. Additionally, these results are compared to simple textbook-based calculations that served as benchmarks and sanity checks for them. Finally, the overall RHRS system performance predictions are shared to show how these results affected the overall rover thermal performance.
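
    A "simple textbook based calculation" of the kind used as a sanity check here would be a flat-plate forced-convection correlation. The sketch below uses the standard turbulent flat-plate form Nu = 0.037 Re^0.8 Pr^(1/3); the thin-CO2 Mars-atmosphere properties and plate dimensions are rough assumed values for illustration, not MSL design numbers:

```python
def convective_loss(u, L=0.6, area=0.5, dT=60.0):
    """Order-of-magnitude convective heat loss Q = h*A*dT from a plate,
    using the turbulent flat-plate correlation Nu = 0.037 Re^0.8 Pr^(1/3).
    All gas properties below are assumed illustrative values for the thin
    CO2 atmosphere near the Martian surface."""
    kin_visc = 5.0e-4   # kinematic viscosity, m^2/s (large at low pressure)
    k = 0.011           # thermal conductivity, W/(m K)
    pr = 0.77           # Prandtl number
    re = u * L / kin_visc
    nusselt = 0.037 * re**0.8 * pr**(1.0 / 3.0)
    h = nusselt * k / L               # convective coefficient, W/(m^2 K)
    return h * area * dT

for wind in (5.0, 10.0, 15.0):
    print(f"{wind:4.1f} m/s -> {convective_loss(wind):6.1f} W")
```

Such a hand estimate only brackets the answer; as the abstract notes, the real flow fields around the hot plates require CFD.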

  2. A simple quinolone Schiff-base containing CHEF based fluorescence 'turn-on' chemosensor for distinguishing Zn2+ and Hg2+ with high sensitivity, selectivity and reversibility.

    PubMed

    Dong, Yuwei; Fan, Ruiqing; Chen, Wei; Wang, Ping; Yang, Yulin

    2017-05-23

    A new simple 'dual' chemosensor MQA ((E)-2-methoxy-N-((quinolin-2-yl)methylene)aniline) for distinguishing Zn2+ and Hg2+ has been designed, synthesized and characterized. The sensor showed excellent selectivity and sensitivity, with a fluorescence enhancement toward Zn2+/Hg2+ over other commonly coexisting cations (such as Na+, Mg2+, Al3+, K+, Mn2+, Fe2+, Fe3+, Co2+, Ni2+, Cu2+, Ga3+, Cd2+, In3+ and Pb2+) in DMSO-H2O solution (1/99 v/v), and the response was reversible upon addition of ethylenediaminetetraacetic acid (EDTA). The detection limits for Zn2+ and Hg2+ by MQA both reached the 10^-8 M level. The 1:1 ligand-to-metal coordination patterns of MQA-Zn2+ and MQA-Hg2+ were deduced from a Job's plot and ESI-MS spectra, and further confirmed by X-ray crystal structures of the MQA-Zn2+ and MQA-Hg2+ complexes. This chemosensor can recognize similar metal ions by coherently utilizing intramolecular charge transfer (ICT) and the different electronic affinities of the metal ions. DFT calculations revealed that the energy gap between the HOMO and LUMO of MQA decreases upon coordination with Zn(II)/Hg(II).

  3. Determination of the origin and magnitude of Al/Si ordering enthalpy in framework aluminosilicates from ab initio calculations

    NASA Astrophysics Data System (ADS)

    McConnell, J. D. C.; De Vita, A.; Kenny, S. D.; Heine, V.

    Ab initio total energy calculations based on a new optimised oxygen pseudopotential have been used to determine the enthalpy of disorder for the exchange of Al and Si in tetrahedral coordination in simple derivative aluminosilicate structures based on the high-temperature tridymite structure. The problem has been studied as a function of defect interaction and defect concentration, and the results indicate that the energy for Al/Al neighbouring tetrahedra can be assigned primarily to two effects: the first, a coulombic effect associated with the disturbed charge distribution, and the second associated with the strain related to misfit due to the very different dimensions of the Si- and Al-containing tetrahedra. In practice each of these effects contributes approximately 0.2 eV per Al-Al neighbour to the overall disorder enthalpy. These simple results were obtained after a careful study of possible chemical interaction between adjacent Al/Si-containing tetrahedra, which showed that chemical interaction was effectively absent. Since individual Al/Si tetrahedra proved to be discrete entities that are individually heavily screened by the shared oxygens, it follows that coulombic and strain effects in disorder effectively account for the whole of the disorder enthalpy. The complete set of results has been used to establish new criteria for the structure and disorder enthalpies of the feldspar group of minerals and their long-period derivatives.

  4. A simple approach to estimate daily loads of total, refractory, and labile organic carbon from their seasonal loads in a watershed.

    PubMed

    Ouyang, Ying; Grace, Johnny M; Zipperer, Wayne C; Hatten, Jeff; Dewey, Janet

    2018-05-22

    Loads of naturally occurring total organic carbon (TOC), refractory organic carbon (ROC), and labile organic carbon (LOC) in streams control the availability of nutrients and the solubility and toxicity of contaminants, and affect biological activities through absorption of light and complexation of metals, with production of carcinogenic compounds. Although computer models have become increasingly popular for understanding and managing TOC, ROC, and LOC loads in streams, the usefulness of these models hinges on the availability of daily data for model calibration and validation. Unfortunately, such daily data are usually insufficient or unavailable for most watersheds for a variety of reasons, such as budget and time constraints. A simple approach was developed here to calculate daily loads of TOC, ROC, and LOC in streams from their seasonal loads. Statistical comparisons between model calculations and field measurements show that predictions from our approach adequately match the measurements. Our approach demonstrates that an increase in stream discharge results in increased stream TOC, ROC, and LOC concentrations and loads, although a high peak discharge did not necessarily produce high peaks in TOC, ROC, and LOC concentrations and loads. The approach developed herein is a useful tool for converting seasonal loads of TOC, ROC, and LOC into daily loads in the absence of measured daily load data.

  5. Detecting sea-level hazards: Simple regression-based methods for calculating the acceleration of sea level

    USGS Publications Warehouse

    Doran, Kara S.; Howd, Peter A.; Sallenger, Asbury H.

    2016-01-04

    Recent studies, and most of their predecessors, use tide gage data to quantify sea-level (SL) acceleration, ASL(t). In the current study, three techniques were used to calculate acceleration from tide gage data, and of those examined, the two techniques based on sliding a regression window through the time series proved more robust than the technique that fits a single quadratic form to the entire time series, particularly if there is temporal variation in the magnitude of the acceleration. The single-fit quadratic regression method has been the most commonly used technique for determining acceleration in tide gage data. The inability of the single-fit method to account for time-varying acceleration may explain some of the inconsistent findings between investigators. Properly quantifying ASL(t) from field measurements is of particular importance in evaluating numerical models of past, present, and future sea-level rise resulting from anticipated climate change.
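
    The sliding-regression-window technique can be sketched directly: fit a quadratic to each window of the series and read the acceleration as twice the leading coefficient (illustrative reconstruction, not the authors' code):

```python
import numpy as np

def sliding_acceleration(t, h, window):
    """Quadratic fit h ~ a2*t^2 + a1*t + a0 in each sliding window;
    the acceleration A(t) = 2*a2, reported at the window centre."""
    centres, accel = [], []
    for i in range(len(t) - window + 1):
        a2 = np.polyfit(t[i:i + window], h[i:i + window], 2)[0]
        centres.append(t[i + window // 2])
        accel.append(2.0 * a2)
    return np.array(centres), np.array(accel)

# Synthetic gauge record with constant acceleration 0.01 mm/yr^2.
t = np.arange(0.0, 100.0)
h = 2.0 * t + 0.5 * 0.01 * t**2
_, a = sliding_acceleration(t, h, window=30)
print(a.mean())  # ~ 0.01 for this noise-free series
```

A single quadratic fit over the whole record returns only one number; the sliding window returns A(t), which is what exposes temporal variation in the acceleration.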

  6. Improving deep convolutional neural networks with mixed maxout units.

    PubMed

    Zhao, Hui-Zhen; Liu, Fu-Xian; Li, Long-Yue

    2017-01-01

    Motivated by insights from the maxout-units-based deep Convolutional Neural Network (CNN) that "non-maximal features are unable to deliver" and "feature mapping subspace pooling is insufficient," we present a novel mixed variant of the recently introduced maxout unit called a mixout unit. Specifically, we do so by calculating the exponential probabilities of feature mappings gained by applying different convolutional transformations over the same input and then calculating the expected values according to their exponential probabilities. Moreover, we introduce the Bernoulli distribution to balance the maximum values with the expected values of the feature mappings subspace. Finally, we design a simple model to verify the pooling ability of mixout units and a Mixout-units-based Network-in-Network (NiN) model to analyze the feature learning ability of the mixout models. We argue that our proposed units improve the pooling ability and that mixout models can achieve better feature learning and classification performance.
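
    Reading the abstract literally, a mixout unit blends the max over k candidate feature mappings with their softmax-weighted ("exponential probability") expectation, switched by a Bernoulli draw. A numpy sketch of that reading (our reconstruction, not the authors' implementation):

```python
import numpy as np

def mixout(features, use_max):
    """features: (k, n) array -- k candidate feature maps per unit.
    A softmax over the k maps gives the 'exponential probabilities';
    the output is the element-wise max when use_max (the Bernoulli draw)
    is true, else the probability-weighted expected value."""
    e = np.exp(features - features.max(axis=0))  # numerically stable softmax
    p = e / e.sum(axis=0)
    expected = (p * features).sum(axis=0)
    return np.where(use_max, features.max(axis=0), expected)

feats = np.array([[1.0, -2.0],
                  [3.0,  0.5],
                  [0.0,  0.2]])
print(mixout(feats, use_max=True))   # the element-wise max
print(mixout(feats, use_max=False))  # softmax expectation (never above max)
```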

  7. Modeling Electronic-Nuclear Interactions for Excitation Energy Transfer Processes in Light-Harvesting Complexes.

    PubMed

    Lee, Mi Kyung; Coker, David F

    2016-08-18

    An accurate approach for computing intermolecular and intrachromophore contributions to spectral densities to describe the electronic-nuclear interactions relevant for modeling excitation energy transfer processes in light harvesting systems is presented. The approach is based on molecular dynamics (MD) calculations of classical correlation functions of long-range contributions to excitation energy fluctuations and a separate harmonic analysis and single-point gradient quantum calculations for electron-intrachromophore vibrational couplings. A simple model is also presented that enables detailed analysis of the shortcomings of standard MD-based excitation energy fluctuation correlation function approaches. The method introduced here avoids these problems, and its reliability is demonstrated in accurate predictions for bacteriochlorophyll molecules in the Fenna-Matthews-Olson pigment-protein complex, where excellent agreement with experimental spectral densities is found. This efficient approach can provide instantaneous spectral densities for treating the influence of fluctuations in environmental dissipation on fast electronic relaxation.

  8. Study of high-performance canonical molecular orbitals calculation for proteins

    NASA Astrophysics Data System (ADS)

    Hirano, Toshiyuki; Sato, Fumitoshi

    2017-11-01

    The canonical molecular orbital (CMO) calculation can help in understanding chemical properties and reactions in proteins. However, it is difficult to perform CMO calculations on proteins because of the self-consistent field (SCF) convergence problem and the expensive computational cost. To reliably obtain the CMOs of proteins, we work on the research and development of high-performance CMO applications and perform experimental studies. We have proposed a third-generation density-functional calculation method for the SCF procedure, more advanced than the FILE and direct methods. Our method is based on Cholesky decomposition for the two-electron integrals and the modified grid-free method for evaluation of the pure-XC term. With the third-generation density-functional method, the Coulomb, Fock-exchange, and pure-XC terms are all evaluated by simple linear-algebraic procedures in the SCF loop. Therefore, good parallel performance in solving the SCF problem can be expected by using a well-optimized linear algebra library such as BLAS on distributed-memory parallel computers. The third-generation density-functional method is implemented in our program, ProteinDF. Computing the electronic structure of large molecules requires not only overcoming the expensive computational cost but also a good initial guess for safe SCF convergence. To prepare a precise initial guess for macromolecular systems, we have developed the quasi-canonical localized orbital (QCLO) method. A QCLO has the characteristics of both a localized and a canonical orbital in a certain region of the molecule. We have succeeded in CMO calculations of proteins using the QCLO method. For simplified and semi-automated application of the QCLO method, we have also developed a Python-based program, QCLObot.

  9. Joint three-dimensional inversion of coupled groundwater flow and heat transfer based on automatic differentiation: sensitivity calculation, verification, and synthetic examples

    NASA Astrophysics Data System (ADS)

    Rath, V.; Wolf, A.; Bücker, H. M.

    2006-10-01

    Inverse methods are useful tools not only for deriving estimates of unknown subsurface parameters, but also for appraising the models thus obtained. While neither the most general nor the most efficient method, Bayesian inversion based on calculation of the Jacobian of a given forward model can be used to evaluate many quantities useful in this process. The calculation of the Jacobian, however, is computationally expensive and, if done by divided differences, prone to truncation error. Here, automatic differentiation can be used to produce derivative code by source transformation of an existing forward model. We describe this process for a coupled fluid-flow and heat-transport finite-difference code, which is used in a Bayesian inverse scheme to estimate thermal and hydraulic properties and boundary conditions from measured hydraulic potentials and temperatures. The resulting derivative code was validated by comparison with simple analytical solutions and with divided differences. Synthetic examples from different flow regimes demonstrate the use of the inverse scheme and its behaviour in different configurations.
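
    The truncation-error point is easy to demonstrate numerically: a one-sided divided difference carries an O(h) error term that derivative code produced by source transformation avoids entirely (illustrative):

```python
import math

def forward_diff(f, x, h):
    """One-sided divided difference; truncation error ~ (h/2) f''(x)."""
    return (f(x + h) - f(x)) / h

exact = math.exp(1.0)  # d/dx exp(x) at x = 1 is exp(1)
err_coarse = abs(forward_diff(math.exp, 1.0, 1e-2) - exact)
err_fine = abs(forward_diff(math.exp, 1.0, 1e-6) - exact)
print(err_coarse, err_fine)  # error shrinks with h -- until round-off
                             # takes over at very small step sizes
```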

  10. A statistical method to estimate low-energy hadronic cross sections

    NASA Astrophysics Data System (ADS)

    Balassa, Gábor; Kovács, Péter; Wolf, György

    2018-02-01

    In this article we propose a model based on the Statistical Bootstrap approach to estimate the cross sections of different hadronic reactions up to a few GeV in c.m.s. energy. The method is based on the idea that, when two particles collide, a so-called fireball is formed, which after a short time decays statistically into a specific final state. To calculate the probabilities we use a phase-space description extended with quark combinatorial factors and the possibility of more than one fireball forming. In a few simple cases the probability of a specific final state can be calculated analytically, and we show that the model is able to reproduce the ratios of the considered cross sections. We also show that the model is able to describe proton-antiproton annihilation at rest; in the latter case we used a numerical method to calculate the more complicated final-state probabilities. Additionally, we examined the formation of strange and charmed mesons, using existing data to fit the relevant model parameters.

  11. Why convective heat transport in the solar nebula was inefficient

    NASA Technical Reports Server (NTRS)

    Cassen, P.

    1993-01-01

    The radial distributions of the effective temperatures of circumstellar disks associated with pre-main-sequence (T Tauri) stars are relatively well constrained by ground-based and spacecraft infrared photometry and radio continuum observations. If the mechanisms by which energy is transported vertically in the disks are understood, these data can be used to constrain models of the thermal structure and evolution of the solar nebula. Several studies of the evolution of the solar nebula have included calculation of the vertical transport of heat by convection. Such calculations rely on a mixing-length theory of transport and some assumption regarding the vertical distribution of internal dissipation. In all cases, the results indicate that transport by radiation dominates that by convection, even when the nebula is convectively unstable. A simple argument is presented that demonstrates the generality (and limits) of this result, regardless of the details of mixing-length theory or the precise distribution of internal heating. It is based on the idea that the radiative gradient in an optically thick nebula generally does not greatly exceed the adiabatic gradient.

  12. Long-path measurements of pollutants and micrometeorology over Highway 401 in Toronto

    NASA Astrophysics Data System (ADS)

    You, Yuan; Staebler, Ralf M.; Moussa, Samar G.; Su, Yushan; Munoz, Tony; Stroud, Craig; Zhang, Junhua; Moran, Michael D.

    2017-11-01

    Traffic emissions contribute significantly to urban air pollution. Measurements were conducted over Highway 401 in Toronto, Canada, with a long-path Fourier transform infrared (FTIR) spectrometer combined with a suite of micrometeorological instruments to identify and quantify a range of air pollutants. Results were compared with simultaneous in situ observations at a roadside monitoring station and with output from a special version of the operational Canadian air quality forecast model (GEM-MACH). Elevated mixing ratios of ammonia (0-23 ppb) were observed, of which 76% were associated with traffic emissions. Hydrogen cyanide was identified at mixing ratios between 0 and 4 ppb. Using a simple dispersion model, an average integrated emission factor of 2.6 g/km of carbon monoxide was calculated for this section of Highway 401, in good agreement with estimates based on vehicular emission factors and observed traffic volumes. Based on the same dispersion calculations, average vehicular emission factors of 0.04, 0.36, and 0.15 g/km were calculated for ammonia, nitrogen oxide, and methanol, respectively.

  13. Ab initio calculation of diffusion barriers for Cu adatom hopping on Cu(1 0 0) surface and evolution of atomic configurations

    NASA Astrophysics Data System (ADS)

    Zhang, Wei; Gan, Jie; Li, Qian; Gao, Kun; Sun, Jian; Xu, Ning; Ying, Zhifeng; Wu, Jiada

    2011-06-01

    The self-diffusion dynamics of Cu adatoms on the Cu(1 0 0) surface has been studied based on calculations of the energy barriers for various hopping events, using a lattice-gas-based approach and a modified model. To simplify the description of the interactions and the calculation of the energy barriers, a three-tier hierarchy of atomic configurations was conceived, in which the active adatom and its nearest atoms constitute a basic configuration and are taken as a whole to study many-body interactions of the atoms in various atomic configurations, whereas the influence of the next-nearest atoms on the diffusion of the active adatom is treated as multi-site interactions. Besides the simple hopping of single adatoms, the movements of dimers and trimers resulting from multiple hopping events have also been examined. Taking into account the hopping events of all adatoms, the stability of atomic configurations has been examined and the evolution of atomic configurations analyzed.
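
    Calculated barriers of this kind enter diffusion simulations through transition-state hopping rates. A generic Arrhenius sketch; the attempt frequency and barrier below are illustrative placeholders, not the paper's computed values:

```python
import math

# Sketch: Arrhenius hopping rate Gamma = nu0 * exp(-Eb / (kB * T)) for a
# single adatom hop. nu0 and Eb are illustrative, not values from the paper.
KB_EV = 8.617333262e-5  # Boltzmann constant in eV/K

def hop_rate(barrier_ev: float, temperature_k: float, nu0_hz: float = 1e13) -> float:
    """Expected hops per second for one diffusion event with barrier Eb."""
    return nu0_hz * math.exp(-barrier_ev / (KB_EV * temperature_k))

# Lower barriers and higher temperatures both increase the hop rate
print(f"{hop_rate(0.50, 300.0):.2e} s^-1 at 300 K")
```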

  14. Large calculation of the flow over a hypersonic vehicle using a GPU

    NASA Astrophysics Data System (ADS)

    Elsen, Erich; LeGresley, Patrick; Darve, Eric

    2008-12-01

    Graphics processing units are capable of impressive computing performance, up to 518 Gflops peak. Various groups have been using these processors for general-purpose computing; most efforts have focused on demonstrating relatively basic calculations, e.g. numerical linear algebra, or physical simulations for visualization purposes with limited accuracy. This paper describes the simulation of a hypersonic vehicle configuration with detailed geometry and accurate boundary conditions using the compressible Euler equations. To the authors' knowledge, this is the most sophisticated calculation of this kind in terms of the complexity of the geometry, the physical model, the numerical methods employed, and the accuracy of the solution. The Navier-Stokes Stanford University Solver (NSSUS) was used for this purpose. NSSUS is a multi-block structured code with a provably stable and accurate numerical discretization which uses a vertex-based finite-difference method. A multigrid scheme is used to accelerate the solution of the system. Based on a comparison of the Intel Core 2 Duo and NVIDIA 8800GTX, speed-ups of over 40× were demonstrated for simple test geometries and 20× for complex geometries.

  15. Calculation of Compressible Flows past Aerodynamic Shapes by Use of the Streamline Curvature

    NASA Technical Reports Server (NTRS)

    Perl, W

    1947-01-01

    A simple approximate method is given for the calculation of isentropic irrotational flows past symmetrical airfoils, including mixed subsonic-supersonic flows. The method is based on the choice of suitable values for the streamline curvature in the flow field and the subsequent integration of the equations of motion. The method yields limiting solutions for potential flow. The effect of circulation is considered. A comparison of derived velocity distributions with existing results that are based on calculation to the third order in the thickness ratio indicated satisfactory agreement. The results are also presented in the form of a set of compressibility correction rules that lie between the Prandtl-Glauert rule and the von Karman-Tsien rule (approximately). The different rules correspond to different values of the local shape parameter √(YC_a), in which Y is the ordinate and C_a is the curvature at a point on an airfoil. Bodies of revolution, completely supersonic flows, and the significance of the limiting solutions for potential flow are also briefly discussed.
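
    The two bracketing rules named in the abstract have simple closed forms. This sketch implements the standard Prandtl-Glauert and von Karman-Tsien corrections to an incompressible pressure coefficient; the report's intermediate, shape-parameter-dependent rules are not reproduced here:

```python
import math

# Sketch: classical subsonic compressibility corrections applied to an
# incompressible pressure coefficient cp0 at free-stream Mach number M < 1.

def prandtl_glauert(cp0: float, mach: float) -> float:
    """Prandtl-Glauert rule: Cp = Cp0 / sqrt(1 - M^2)."""
    return cp0 / math.sqrt(1.0 - mach**2)

def karman_tsien(cp0: float, mach: float) -> float:
    """von Karman-Tsien rule, which adds a Cp0-dependent term to the denominator."""
    beta = math.sqrt(1.0 - mach**2)
    return cp0 / (beta + (mach**2 / (1.0 + beta)) * cp0 / 2.0)

# For a suction peak (negative Cp0), Karman-Tsien predicts the stronger correction
cp0, mach = -0.5, 0.7
print(prandtl_glauert(cp0, mach), karman_tsien(cp0, mach))
```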

  16. Petit and grand ensemble Monte Carlo calculations of the thermodynamics of the lattice gas

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Murch, G.E.; Thorn, R.J.

    1978-11-01

    A direct Monte Carlo method for estimating the chemical potential in the petit canonical ensemble was applied to the simple cubic Ising-like lattice gas. The method is based on a simple relationship between the chemical potential and the potential energy distribution in a lattice gas at equilibrium as derived independently by Widom, and Jackson and Klein. Results are presented here for the chemical potential at various compositions and temperatures above and below the zero field ferromagnetic and antiferromagnetic critical points. The same lattice gas model was reconstructed in the form of a restricted grand canonical ensemble and results at several temperatures were compared with those from the petit canonical ensemble. The agreement was excellent in these cases.
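
    The Widom-type relation the method rests on estimates the excess chemical potential from the distribution of trial-insertion energies, μ_ex = -kT ln⟨exp(-ΔU/kT)⟩. A minimal 2D toy version of such an estimate (the paper treats the 3D simple cubic case; the lattice size, coupling, temperature, and the zero-chemical-potential flip dynamics here are all illustrative simplifications):

```python
import math
import random

# Sketch: Widom test-particle estimate of the excess chemical potential in a
# small 2D lattice gas with nearest-neighbour attraction eps. Toy parameters;
# not the simple cubic system or the conditions studied in the paper.
random.seed(1)
L, eps, kT = 8, -1.0, 4.0
occ = [[random.random() < 0.3 for _ in range(L)] for _ in range(L)]

def neighbours(i, j):
    return [((i + 1) % L, j), ((i - 1) % L, j), (i, (j + 1) % L), (i, (j - 1) % L)]

def insertion_energy(i, j):
    """Energy change of placing a particle at (empty) site (i, j)."""
    return eps * sum(occ[a][b] for a, b in neighbours(i, j))

def sweep():
    """One Metropolis sweep of occupation flips (samples a mu = 0 ensemble)."""
    for _ in range(L * L):
        i, j = random.randrange(L), random.randrange(L)
        dE = (-1 if occ[i][j] else 1) * insertion_energy(i, j)
        if dE <= 0 or random.random() < math.exp(-dE / kT):
            occ[i][j] = not occ[i][j]

for _ in range(200):            # equilibrate
    sweep()
samples = []
for _ in range(400):            # accumulate the Widom average over vacant sites
    sweep()
    i, j = random.randrange(L), random.randrange(L)
    if not occ[i][j]:
        samples.append(math.exp(-insertion_energy(i, j) / kT))
mu_ex = -kT * math.log(sum(samples) / len(samples))
print(f"estimated excess chemical potential ~ {mu_ex:.2f}")
```

    With a purely attractive interaction every trial insertion lowers the energy, so the estimated excess chemical potential comes out non-positive, as expected.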

  17. Mechanical properties of fullerite of various composition

    NASA Astrophysics Data System (ADS)

    Rysaeva, L. Kh.

    2017-12-01

    Molecular dynamics simulation is used to study the structures of fullerites of various composition as well as their mechanical properties. Fullerites based on fullerene C60 with simple cubic and face-centered cubic packing, the fullerene-like molecule C48, and fullerene C240 with simple cubic packing are studied. Compliance and stiffness coefficients are calculated for fullerites C60 and C48. For fullerites C240, C60, and C48, deformation behavior under hydrostatic compression is also investigated. It is shown that the fullerenes in the fullerite remain almost spherical up to high values of compressive strain, as a result of which the fullerite is an elastic medium up to densities of 2.5 g/cm3. Increasing stiffness and strength under applied compression are found for all the considered fullerites.

  18. Large deviation analysis of a simple information engine

    NASA Astrophysics Data System (ADS)

    Maitland, Michael; Grosskinsky, Stefan; Harris, Rosemary J.

    2015-11-01

    Information thermodynamics provides a framework for studying the effect of feedback loops on entropy production. It has enabled the understanding of novel thermodynamic systems such as the information engine, which can be seen as a modern version of "Maxwell's Dæmon," whereby a feedback controller processes information gained by measurements in order to extract work. Here, we analyze a simple model of such an engine that uses feedback control based on measurements to obtain negative entropy production. We focus on the distribution and fluctuations of the information obtained by the feedback controller. Significantly, our model allows an analytic treatment for a two-state system with exact calculation of the large deviation rate function. These results suggest an approximate technique for larger systems, which is corroborated by simulation data.

  19. MELTS_Excel: A Microsoft Excel-based MELTS interface for research and teaching of magma properties and evolution

    NASA Astrophysics Data System (ADS)

    Gualda, Guilherme A. R.; Ghiorso, Mark S.

    2015-01-01

    The thermodynamic modeling software MELTS is a powerful tool for investigating crystallization and melting in natural magmatic systems. Rhyolite-MELTS is a recalibration of MELTS that better captures the evolution of silicic magmas in the upper crust. The current interface of rhyolite-MELTS, while flexible, can be somewhat cumbersome for the novice. We present a new interface that uses web services consumed by a VBA backend in Microsoft Excel©. The interface is contained within a macro-enabled workbook, where the user can insert the model input information and initiate computations that are executed on a central server at OFM Research. Results of simple calculations are shown immediately within the interface itself. It is also possible to combine a sequence of calculations into an evolutionary path; the user can input starting and ending temperatures and pressures, temperature and pressure steps, and the prevailing oxidation conditions. The program shows partial updates at every step of the computations; at the conclusion of the calculations, a series of data sheets and diagrams are created in a separate workbook, which can be saved independently of the interface. Additionally, the user can specify a grid of temperatures and pressures and calculate a phase diagram showing the conditions at which different phases are present. The interface can be used to apply the rhyolite-MELTS geobarometer. We demonstrate applications of the interface using an example early-erupted Bishop Tuff composition. The interface is simple to use and flexible, but it requires an internet connection. The interface is distributed for free from http://melts.ofm-research.org.

  20. The corneal transplant score: a simple corneal graft candidate calculator.

    PubMed

    Rosenfeld, Eldar; Varssano, David

    2013-07-01

    Shortage of corneas for transplantation has created long waiting lists in most countries. Transplant calculators are available for many organs. The purpose of this study is to describe a simple automatic scoring system for keratoplasty recipient candidates, based on several parameters that we consider most relevant for tissue allocation, and to compare the system's accuracy in predicting decisions made by a cornea specialist. Twenty pairs of candidate data were randomly created on an electronic spreadsheet. A single priority score was computed from the data of each candidate. A cornea surgeon and the automated system then decided independently which candidate in each pair should have surgery if only a single cornea was available. The scoring system can calculate values between 0 (lowest priority) and 18 (highest priority) for each candidate. Average score value in our randomly created cohort was 6.35 ± 2.38 (mean ± SD), range 1.28 to 10.76. Average score difference between the candidates in each pair was 3.12 ± 2.10, range 0.08 to 8.45. The manual scoring process, although theoretical, was mentally and emotionally demanding for the surgeon. Agreement was achieved between the human decision and the calculated value in 19 of 20 pairs. Disagreement was reached in the pair with the lowest score difference (0.08). With worldwide donor cornea shortage, waiting for transplantation can be long. Manual sorting of priority for transplantation in a long waiting list is difficult, time-consuming and prone to error. The suggested system may help achieve a justified distribution of available tissue.

  1. A periodic mixed gaussians-plane waves DFT study on simple thiols on Au(111): adsorbate species, surface reconstruction, and thiols functionalization.

    PubMed

    Rajaraman, Gopalan; Caneschi, Andrea; Gatteschi, Dante; Totti, Federico

    2011-03-07

    Here we present DFT calculations based on a periodic mixed Gaussian/plane-wave approach to study the energetics, structure, and bonding of SAMs of simple thiols on Au(111). Several open issues, such as the structure, the bonding, and the nature of the adsorbate, are taken into account. We started with methyl thiol (MeSH) on Au(111) to establish the nature of the adsorbate. We have considered several structural models embracing the reconstructed-surface scenario along with the MeS˙-Au(ad)-MeS˙ type motif put forward in recent years. Our calculations suggest a clear preference for the homolytic cleavage of the S-H bond, leading to a stable MeS˙ on the gold surface. In agreement with recent literature studies, the reconstructed models of the MeS˙ species are found to be energetically preferred over unreconstructed models. Moreover, our calculations reveal that the model with a 1:2 Au(ad)/thiol ratio, i.e. MeS˙-Au(ad)-MeS˙, is energetically preferred over the clean and 1:1 ratio models, in agreement with the experimental and theoretical evidence. We have also performed Molecular Orbital/Natural Bond Orbital (MO/NBO) analysis to understand the electronic structure and bonding in the different structural motifs, and many useful insights have been gained. Finally, the studies have been extended to alkyl thiols of the RSR' (R, R' = Me, Et and Ph) type, and here our calculations again reveal a preference for adsorption of RS˙-type species on clean as well as reconstructed 1:2 Au(ad)/thiol ratio models.

  2. Equilibrium properties of simple metal thin films in the self-compressed stabilized jellium model.

    PubMed

    Mahmoodi, T; Payami, M

    2009-07-01

    In this work, we have applied the self-compressed stabilized jellium model to predict the equilibrium properties of isolated thin Al, Na and Cs slabs. To make a direct correspondence to atomic slabs, we have considered only those L values that correspond to n-layered atomic slabs with 2≤n≤20, for the surface indices (100), (110), and (111). The calculations are based on density functional theory and the self-consistent solution of the Kohn-Sham equations in the local density approximation. Our results show that, firstly, the quantum size effects are significant for slabs with sizes smaller than or near the Fermi wavelength of the valence electrons, λ(F), and secondly, some slabs expand while others contract with respect to the bulk spacings. Based on these results, we propose a criterion for the realization of significant quantum size effects that lead to expansion of some thin slabs. For further justification of the criterion, we have tested it on Li slabs for 2≤n≤6. We have compared our Al results with those obtained using all-electron or pseudopotential first-principles calculations. This comparison shows excellent agreement for the Al(100) work functions, and qualitatively good agreement for the other work functions and surface energies. These agreements justify the way we have used the self-compressed stabilized jellium model for the correct description of the properties of simple metal slab systems. On the other hand, our results for the work functions and surface energies of large-n slabs are in good agreement with those obtained from applying the stabilized jellium model to semi-infinite systems. In addition, we have performed the slab calculations in the presence of surface corrugation for selected Al slabs and have shown that the results worsen.

  3. The SAMPL4 host-guest blind prediction challenge: an overview.

    PubMed

    Muddana, Hari S; Fenley, Andrew T; Mobley, David L; Gilson, Michael K

    2014-04-01

    Prospective validation of methods for computing binding affinities can help assess their predictive power and thus set reasonable expectations for their performance in drug design applications. Supramolecular host-guest systems are excellent model systems for testing such affinity prediction methods, because their small size and limited conformational flexibility, relative to proteins, allow higher throughput and better numerical convergence. The SAMPL4 prediction challenge therefore included a series of host-guest systems based on two hosts, cucurbit[7]uril and octa-acid. Binding affinities in aqueous solution were measured experimentally for a total of 23 guest molecules. Participants submitted 35 sets of computational predictions for these host-guest systems, based on methods ranging from simple docking, to extensive free energy simulations, to quantum mechanical calculations. Over half of the predictions provided better correlations with experiment than two simple null models, but most methods underperformed the null models in terms of root mean squared error and linear regression slope. Interestingly, the overall performance across all SAMPL4 submissions was similar to that for the prior SAMPL3 host-guest challenge, although the experimentalists took steps to simplify the current challenge. While some methods performed fairly consistently across both hosts, no single approach emerged as a consistent top performer, and the nonsystematic nature of the various submissions made it impossible to draw definitive conclusions regarding the best choices of energy models or sampling algorithms. Salt effects emerged as an issue in the calculation of absolute binding affinities of cucurbit[7]uril-guest systems, but were not expected to affect the relative affinities significantly. Useful directions for future rounds of the challenge might involve encouraging participants to carry out some calculations that replicate each other's studies, and to systematically explore parameter options.

  4. [Is there a place for the Glasgow-Blatchford score in the management of upper gastrointestinal bleeding?].

    PubMed

    Jerraya, Hichem; Bousslema, Amine; Frikha, Foued; Dziri, Chadli

    2011-12-01

    Upper gastrointestinal bleeding is a frequent cause of emergency hospital admission. Most severity scores include endoscopic findings in their computation. The Glasgow-Blatchford score is a validated score, easy to calculate from simple clinical and biological variables, that can identify patients at low or high risk of needing a therapeutic intervention (interventional endoscopy, surgery and/or transfusion). The aim was to retrospectively validate the Glasgow-Blatchford score (GBS). The study examined all patients admitted to the general surgery and anesthesiology departments of the Regional Hospital of Sidi Bouzid. There were 50 patients, with a mean age of 58 years, comprising 35 men and 15 women. The GBS was calculated for all of these patients. The series was divided into two groups: 26 cases received only medical treatment and 24 cases required transfusion and/or surgery. Univariate analysis was performed to compare these two groups, then the ROC curve was used to identify the cut-off point of the GBS. Sensitivity (Se), specificity (Sp), positive predictive value (PPV) and negative predictive value (NPV) with 95% confidence intervals were calculated. The GBS was significantly different between the two groups (p < 0.0001). Using the ROC curve, it was determined that for the threshold GBS ≥ 7, Se = 96% (88-100%), Sp = 69% (51-87%), PPV = 74% (59-90%) and NPV = 95% (85-100%). This threshold is interesting for its NPV: if GBS < 7, one can opt for medical treatment with a risk of being wrong in only 5% of cases. The Glasgow-Blatchford score is based on simple clinical and laboratory variables. It can identify, in the emergency department, the cases that require only medical treatment and those whose management could require blood transfusion and/or surgical treatment.
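
    For illustration, the score's arithmetic really does use only bedside clinical and laboratory variables. The sketch below follows the commonly published thresholds and weights of the Glasgow-Blatchford score; it is meant to illustrate that simplicity, not to be a clinically validated implementation, and should be checked against the original publication before any real use:

```python
# Sketch of Glasgow-Blatchford score (GBS) arithmetic. Thresholds and weights
# as commonly published; illustrative only, NOT for clinical use.

def glasgow_blatchford(urea_mmol_l, hb_g_dl, sbp_mmhg, male,
                       pulse_ge_100=False, melena=False, syncope=False,
                       hepatic_disease=False, cardiac_failure=False):
    score = 0
    # Blood urea (mmol/L)
    if urea_mmol_l >= 25: score += 6
    elif urea_mmol_l >= 10: score += 4
    elif urea_mmol_l >= 8: score += 3
    elif urea_mmol_l >= 6.5: score += 2
    # Haemoglobin (g/dL), sex-specific bands
    if male:
        if hb_g_dl < 10: score += 6
        elif hb_g_dl < 12: score += 3
        elif hb_g_dl < 13: score += 1
    else:
        if hb_g_dl < 10: score += 6
        elif hb_g_dl < 12: score += 1
    # Systolic blood pressure (mmHg)
    if sbp_mmhg < 90: score += 3
    elif sbp_mmhg < 100: score += 2
    elif sbp_mmhg < 110: score += 1
    # Other markers
    score += 1 if pulse_ge_100 else 0
    score += 1 if melena else 0
    score += 2 if syncope else 0
    score += 2 if hepatic_disease else 0
    score += 2 if cardiac_failure else 0
    return score

# Normal urea, haemoglobin and blood pressure, no other markers -> low risk
print(glasgow_blatchford(5.0, 14.0, 120, male=True))  # → 0
```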

  5. Carbon Footprint Calculator | Climate Change | US EPA

    EPA Pesticide Factsheets

    2016-12-12

    An interactive calculator to estimate your household's carbon footprint. This tool will estimate carbon pollution emissions from your daily activities and show how to reduce your emissions and save money through simple steps.

  6. Carbon Footprint Calculator | Climate Change | US EPA

    EPA Pesticide Factsheets

    2016-07-14

    An interactive calculator to estimate your household's carbon footprint. This tool will estimate carbon pollution emissions from your daily activities and show how to reduce your emissions and save money through simple steps.

  7. Carbon Footprint Calculator | Climate Change | US EPA

    EPA Pesticide Factsheets

    2016-02-23

    An interactive calculator to estimate your household's carbon footprint. This tool will estimate carbon pollution emissions from your daily activities and show how to reduce your emissions and save money through simple steps.

  8. Single-molecular diodes based on opioid derivatives.

    PubMed

    Siqueira, M R S; Corrêa, S M; Gester, R M; Del Nero, J; Neto, A M J C

    2015-12-01

    We propose an efficient single-molecule rectifier based on a derivative of an opioid. Electron transport properties are investigated within the non-equilibrium Green's function formalism combined with density functional theory. Analysis of the current-voltage characteristics indicates clear diode-like behavior. While heroin presents a rectification coefficient R > 1, indicating preferential electron current from the electron-donating to the electron-withdrawing group, 3- and 6-acetylmorphine and morphine exhibit the contrary behavior, R < 1. Our calculations indicate that the simple inclusion of acetyl groups modulates a range of devices, varying from simple rectifying to resonant-tunneling diodes. In particular, the heroin diode carries a microampere electron current with rectification maxima of R = 9.1 at a very low bias voltage of ∼0.6 V and R = 14.3 at ∼1.8 V, with resistance varying between 0.4 and 1.5 MΩ. By contrast, most current single-molecule diodes rectify only nanoampere currents, are not stable above 1.0 V, and present electrical resistances around 10 MΩ. Molecular devices based on opioid derivatives are thus promising for molecular electronics.

  9. 49 CFR 1141.1 - Procedures to calculate interest rates.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... the portion of the year covered by the interest rate. A simple multiplication of the nominal rate by... 49 Transportation 8 2010-10-01 false Procedures to calculate interest rates. 1141.1... TRANSPORTATION BOARD, DEPARTMENT OF TRANSPORTATION, RULES OF PRACTICE, PROCEDURES TO CALCULATE INTEREST RATES...

  10. Matrix operator theory of radiative transfer. I - Rayleigh scattering.

    NASA Technical Reports Server (NTRS)

    Plass, G. N.; Kattawar, G. W.; Catchings, F. E.

    1973-01-01

    An entirely rigorous method for the solution of the equations for radiative transfer based on the matrix operator theory is reviewed. The advantages of the present method are: (1) all orders of the reflection and transmission matrices are calculated at once; (2) layers of any thickness may be combined, so that a realistic model of the atmosphere can be developed from any arbitrary number of layers, each with different properties and thicknesses; (3) calculations can readily be made for large optical depths and with highly anisotropic phase functions; (4) results are obtained for any desired value of the surface albedo including the value unity and for a large number of polar and azimuthal angles; (5) all fundamental equations can be interpreted immediately in terms of the physical interactions appropriate to the problem; and (6) both upward and downward radiance can be calculated at interior points from relatively simple expressions.

  11. A simple and accurate method for calculation of the structure factor of interacting charged spheres.

    PubMed

    Wu, Chu; Chan, Derek Y C; Tabor, Rico F

    2014-07-15

    Calculation of the structure factor of a system of interacting charged spheres based on the Ginoza solution of the Ornstein-Zernike equation has been developed and implemented on a stand-alone spreadsheet. This facilitates direct interactive numerical and graphical comparisons between experimental structure factors with the pioneering theoretical model of Hayter-Penfold that uses the Hansen-Hayter renormalisation correction. The method is used to fit example experimental structure factors obtained from the small-angle neutron scattering of a well-characterised charged micelle system, demonstrating that this implementation, available in the supplementary information, gives identical results to the Hayter-Penfold-Hansen approach for the structure factor, S(q) and provides direct access to the pair correlation function, g(r). Additionally, the intermediate calculations and outputs can be readily accessed and modified within the familiar spreadsheet environment, along with information on the normalisation procedure.

  12. Numerical noise prediction in fluid machinery

    NASA Astrophysics Data System (ADS)

    Pantle, Iris; Magagnato, Franco; Gabi, Martin

    2005-09-01

    Numerical methods have become increasingly important in the design and optimization of fluid machinery. However, where noise emission is concerned, one can hardly find standardized prediction methods combining flow and acoustical optimization. Several numerical field methods for sound calculation have been developed; due to the complexity of the considered flows, approaches must be chosen that avoid exhaustive computation. In this contribution the noise of a simple propeller is investigated. The configurations of the calculations comply with an existing experimental setup chosen for evaluation. The in-house CFD solver SPARC used here contains an acoustic module based on the Ffowcs Williams-Hawkings acoustic analogy. From the flow results of the time-dependent large eddy simulation, the time-dependent acoustic sources are extracted and passed to the acoustic module, where the relevant sound pressure levels are calculated. The difficulties which arise when proceeding from open to closed rotors and from gas to liquid are discussed.

  13. Simulation of xenon, uranium vacancy and interstitial diffusion and grain boundary segregation in UO 2

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Andersson, Anders D.; Tonks, Michael R.; Casillas, Luis

    2014-10-31

    In light water reactor fuel, gaseous fission products segregate to grain boundaries, resulting in the nucleation and growth of large intergranular fission gas bubbles. Based on the mechanisms established from density functional theory (DFT) and empirical potential calculations, continuum models for the diffusion of xenon (Xe), uranium (U) vacancies and U interstitials in UO2 have been derived for both intrinsic conditions and under irradiation. Segregation of Xe to grain boundaries is described by combining the bulk diffusion model with a model for the interaction between Xe atoms and three different grain boundaries in UO2 (Σ5 tilt, Σ5 twist and a high-angle random boundary), as derived from atomistic calculations. All models are implemented in the MARMOT phase field code, which is used to calculate effective Xe and U diffusivities as well as redistribution for a few simple microstructures.

  14. Application of simple all-sky imagers for the estimation of aerosol optical depth

    NASA Astrophysics Data System (ADS)

    Kazantzidis, Andreas; Tzoumanikas, Panagiotis; Nikitidou, Efterpi; Salamalikis, Vasileios; Wilbert, Stefan; Prahl, Christoph

    2017-06-01

    Aerosol optical depth is a key atmospheric input for direct normal irradiance calculations at concentrating solar power plants. However, aerosol optical depth is typically not measured at the solar plants for financial reasons. With the recent introduction of all-sky imagers for the nowcasting of direct normal irradiance at the plants, a new instrument is available that can be used for the determination of aerosol optical depth at different wavelengths. In this study, we use the Red, Green and Blue intensities/radiances and calculations of the saturated area around the Sun, both derived from all-sky images taken with a low-cost surveillance camera at the Plataforma Solar de Almeria, Spain. The aerosol optical depth at 440, 500 and 675 nm is calculated. The results are compared with collocated aerosol optical depth measurements; the mean/median difference and standard deviation are less than 0.01 and 0.03, respectively, at all wavelengths.
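
    For context, the reference measurements such retrievals are compared against are usually sun-photometer data, which rest on the Beer-Lambert law. A minimal sketch of that relation (the calibration constant, signal and airmass below are illustrative; non-aerosol contributions such as Rayleigh scattering are omitted, so this is the total optical depth, not the authors' image-based method):

```python
import math

# Sketch: Beer-Lambert retrieval of optical depth from a direct-sun signal,
# tau = ln(V0 / V) / m. V0 is a top-of-atmosphere calibration constant, V the
# measured signal, m the relative airmass; all values here are illustrative.
# Rayleigh and gas contributions would be subtracted to isolate the aerosol part.

def optical_depth(v_measured: float, v0_calibration: float, airmass: float) -> float:
    """Total vertical optical depth inferred from the attenuated direct signal."""
    return math.log(v0_calibration / v_measured) / airmass

print(f"tau = {optical_depth(0.8, 1.0, 1.5):.3f}")
```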

  15. Effective lattice Hamiltonian for monolayer tin disulfide: Tailoring electronic structure with electric and magnetic fields

    NASA Astrophysics Data System (ADS)

    Yu, Jin; van Veen, Edo; Katsnelson, Mikhail I.; Yuan, Shengjun

    2018-06-01

    The electronic properties of monolayer tin disulfide (ML-SnS2), a recently synthesized metal dichalcogenide, are studied by a combination of first-principles calculations and the tight-binding (TB) approximation. An effective lattice Hamiltonian based on six hybrid sp-like orbitals with trigonal rotation symmetry is proposed to calculate the band structure and density of states of ML-SnS2, and it demonstrates good quantitative agreement with relativistic density-functional-theory calculations over a wide energy range. We show that the proposed TB model can easily be applied to the case of an external electric field, yielding results consistent with those obtained from the full Hamiltonian. In the presence of a perpendicular magnetic field, highly degenerate equidistant Landau levels are obtained, showing typical two-dimensional electron gas behavior. Thus, the proposed TB model provides a simple way to describe the properties of ML-SnS2.

  16. Spectral irradiance curve calculations for any type of solar eclipse

    NASA Technical Reports Server (NTRS)

    Deepak, A.; Merrill, J. E.

    1974-01-01

    A simple procedure is described for calculating the eclipse function (EF), alpha, and hence the spectral irradiance curve (SIC), (1-alpha), for any type of solar eclipse: namely, the occultation (partial/total) eclipse and the transit (partial/annular) eclipse. The SIC (or the EF) gives the variation of the amount (or the loss) of solar radiation of a given wavelength reaching a distant observer for various positions of the moon across the sun. The scheme is based on the theory of light curves of eclipsing binaries, the results of which are tabulated in Merrill's Tables, and is valid for all wavelengths for which the solar limb darkening obeys the cosine law J = J_c(1 - X + X cos γ). As an example of computing the SIC for an occultation eclipse which may be total, the calculations for the March 7, 1970, eclipse are described in detail.

  17. The solvation radius of silicate melts based on the solubility of noble gases and scaled particle theory.

    PubMed

    Ottonello, Giulio; Richet, Pascal

    2014-01-28

    The existing solubility data on noble gases in high-temperature silicate melts have been analyzed in terms of scaled particle theory coupled with an ab initio assessment of the electronic, dispersive, and repulsive energy terms based on the Polarized Continuum Model (PCM). After a preliminary analysis of the role of the contracted Gaussian basis sets and theory level in reproducing appropriate static dipole polarizabilities in a vacuum, we have shown that the procedure returns Henry's law constants consistent with the values experimentally observed in water and benzene at T = 25 °C and P = 1 bar for the first four elements of the series. The static dielectric constant (ɛ) of the investigated silicate melts and its optical counterpart (ɛ(∞)) were then resolved through the application of a modified form of the Clausius-Mossotti relation. Argon has been adopted as a probe to depict its high-T solubility in melts through an appropriate choice of the solvent diameter σs, along the guidelines already used in the past for simple media such as water or benzene. The σs obtained was consistent with a simple functional form based on the molecular volume of the solvent. The solubility calculations were then extended to He, Ne, and Kr, whose dispersive and repulsive coefficients are available from theory, and we have shown that their ab initio Henry's constants at high T reproduce the observed increase with the static polarizability of the series element with reasonable accuracy. At room temperature (T = 25 °C) the calculated Henry's constants of He, Ne, Ar, and Kr in the various silicate media predict higher solubilities than simple extrapolations (i.e., Arrhenius plots) based on high-T experiments and give rise to smooth trends not appreciably affected by the static polarizabilities of the solutes. 
The present investigation opens new perspectives on a wider application of PCM theory which can be extended to materials of great industrial interest at the core of metallurgical processes, ceramurgy, and the glass industry.

  18. Dense simple plasmas as high-temperature liquid simple metals

    NASA Technical Reports Server (NTRS)

    Perrot, F.

    1990-01-01

    The thermodynamic properties of dense plasmas considered as high-temperature liquid metals are studied. An attempt is made to show that the neutral pseudoatom picture of liquid simple metals may be extended for describing plasmas in ranges of densities and temperatures where their electronic structure remains 'simple'. The primary features of the model when applied to plasmas include the temperature-dependent self-consistent calculation of the electron charge density and the determination of a density and temperature-dependent ionization state.

  19. Image quality evaluation of full reference algorithm

    NASA Astrophysics Data System (ADS)

    He, Nannan; Xie, Kai; Li, Tong; Ye, Yushan

    2018-03-01

    Image quality evaluation is a classic research topic; the goal is to design algorithms whose evaluation values are consistent with subjective human judgments. This paper introduces several typical full-reference objective evaluation methods: Mean Squared Error (MSE), Peak Signal-to-Noise Ratio (PSNR), Structural Similarity (SSIM) and Feature Similarity (FSIM). The different evaluation methods were tested in Matlab, and their advantages and disadvantages were obtained by analysis and comparison. MSE and PSNR are simple, but they do not incorporate human visual system (HVS) characteristics into the evaluation, so their results are not ideal. SSIM correlates well with subjective scores and is simple to calculate, because it introduces human visual effects into the evaluation; however, SSIM rests on a hypothesis, so its results are limited. The FSIM method can be applied to both grayscale and color images, and its results are better. Experimental results show that the image quality evaluation algorithm based on FSIM is the most accurate.
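MSE and PSNR have simple closed forms; a minimal pure-Python sketch for 8-bit grayscale images (the paper's tests were run in Matlab, and SSIM/FSIM are omitted here as they involve windowed statistics and feature maps):

```python
import math

def mse(a, b):
    """Mean squared error between two equally sized grayscale images
    given as nested lists of pixel values."""
    flat_a = [p for row in a for p in row]
    flat_b = [p for row in b for p in row]
    return sum((x - y) ** 2 for x, y in zip(flat_a, flat_b)) / len(flat_a)

def psnr(a, b, max_val=255.0):
    """Peak signal-to-noise ratio in dB: 10*log10(MAX^2 / MSE);
    infinite for identical images."""
    err = mse(a, b)
    if err == 0:
        return float("inf")
    return 10.0 * math.log10(max_val ** 2 / err)
```

For instance, two 2x2 images differing by 10 gray levels in one pixel have MSE = 25 and PSNR ≈ 34.2 dB.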

  20. A Simple and Sensitive Method for Auramine O Detection Based on the Binding Interaction with Bovin Serum Albumin.

    PubMed

    Yan, Jingjing; Huang, Xin; Liu, Shaopu; Yang, Jidong; Yuan, Yusheng; Duan, Ruilin; Zhang, Hui; Hu, Xiaoli

    2016-01-01

    A simple, rapid and effective method for auramine O (AO) detection was proposed by fluorescence and UV-Vis absorption spectroscopy. In the BR buffer system (pH 7.0), AO had a strong quenching ability on the fluorescence of bovine serum albumin (BSA) through dynamic quenching. Since the calculated thermodynamic parameters satisfied ΔH > 0 and ΔS > 0, the binding of BSA and AO was attributed mainly to hydrophobic interaction forces. The linearity of this method was in the concentration range from 0.16 to 50 μmol L⁻¹ with a detection limit of 0.05 μmol L⁻¹. Based on fluorescence resonance energy transfer (FRET), the distance r (1.36 nm) between donor (BSA) and acceptor (AO) was obtained. Furthermore, the effects of foreign substances and ionic strength were evaluated under the optimum reaction conditions. BSA as a selective probe could be applied to the analysis of AO in medicines with satisfactory results.
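The donor-acceptor distance step follows the standard Förster relation E = R0⁶/(R0⁶ + r⁶). A minimal sketch; the Förster radius R0 for the BSA-AO pair is not given in this record, so the value used below is purely hypothetical:

```python
def fret_distance(efficiency, r0_nm):
    """Donor-acceptor distance from FRET efficiency E and Forster radius R0:
    E = R0^6 / (R0^6 + r^6)  =>  r = R0 * ((1 - E) / E)^(1/6)."""
    return r0_nm * ((1.0 - efficiency) / efficiency) ** (1.0 / 6.0)
```

At E = 0.5 the distance equals R0 exactly; higher transfer efficiency corresponds to a shorter donor-acceptor distance.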

  1. The effect of the configuration of a single electrode corona discharge on its acoustic characteristics

    NASA Astrophysics Data System (ADS)

    Zhu, Xinlei; Zhang, Liancheng; Huang, Yifan; Wang, Jin; Liu, Zhen; Yan, Keping

    2017-07-01

    A new sparker system based on pulsed spark discharge with a single electrode has already been utilized for oceanic seismic exploration. However, the electro-acoustic energy efficiency of this system is lower than that of arc discharge based systems. A simple electrode structure was investigated in order to improve the electro-acoustic energy efficiency of the spark discharge. Experiments were carried out on an experimental setup with discharge in water driven by a pulsed power source. The voltage-current waveform, acoustic signal and bubble oscillation were recorded as the relative position of the electrode was varied. The electro-acoustic energy efficiency was also calculated. The load voltage showed an abrupt jump, namely an obvious voltage remnant, when the electrode tip was recessed. The more the electrode tip was recessed, the larger the pressure peaks and the first bubble period became. The results show that recessing the electrode into the insulating layer is a simple and effective way to improve the electro-acoustic energy efficiency from 2% to about 4%.
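The efficiency figure can be computed from the recorded waveforms. A minimal sketch, assuming the electrical input energy is the time integral of v(t)·i(t) over the discharge and that the radiated acoustic energy is estimated separately from the pressure signal (the authors' exact processing is not given here):

```python
def electrical_energy(t, v, i):
    """Discharge input energy by trapezoidal integration of the
    instantaneous power p(t) = v(t) * i(t) over sample times t."""
    p = [vi * ii for vi, ii in zip(v, i)]
    return sum(0.5 * (p[k] + p[k + 1]) * (t[k + 1] - t[k])
               for k in range(len(t) - 1))

def electroacoustic_efficiency(e_acoustic, t, v, i):
    """Ratio of radiated acoustic energy to electrical input energy."""
    return e_acoustic / electrical_energy(t, v, i)
```

For a flat 1 kV, 2 A pulse lasting 1 ms, the input energy is 2 J, so 0.08 J of acoustic energy corresponds to the 4% efficiency quoted above.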

  2. A simple respirogram-based approach for the management of effluent from an activated sludge system.

    PubMed

    Li, Zhi-Hua; Zhu, Yuan-Mo; Yang, Cheng-Jian; Zhang, Tian-Yu; Yu, Han-Qing

    2018-08-01

    Managing a wastewater treatment plant (WWTP) based on respirometric analysis is a new and promising field. In this study, a multi-dimensional respirogram space was constructed, and an important index, R_es/t (the ratio of the in-situ respiration rate to the maximum respiration rate), was derived as an alarm signal for effluent quality control. A smaller R_es/t value indicates better effluent. The critical R'_es/t value used for determining whether the effluent meets the regulation depends on operational conditions, which were characterized by temperature and the biomass ratio of heterotrophs to autotrophs. For given operational conditions, the critical R'_es/t value can be calculated from the respirogram space and the effluent conditions required by the discharge regulation, with no need for parameter calibration or any additional measurements. Since it is simple, easy to use, and can be readily implemented online, this approach holds great promise for applications. Copyright © 2018 Elsevier Ltd. All rights reserved.
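The alarm logic reduces to comparing a measured ratio against a condition-specific threshold. A minimal sketch; the critical value below is a placeholder, since in the paper it is derived from the respirogram space and the discharge regulation:

```python
def r_es_t(in_situ_rate, max_rate):
    """R_es/t: ratio of in-situ respiration rate to maximum respiration rate."""
    return in_situ_rate / max_rate

def effluent_alarm(in_situ_rate, max_rate, critical_ratio):
    """True when R_es/t exceeds the condition-specific critical value R'_es/t,
    i.e. the effluent may violate the discharge regulation."""
    return r_es_t(in_situ_rate, max_rate) > critical_ratio
```

With an in-situ rate of 20 and a maximum rate of 50 (same units), R_es/t = 0.4, which would trigger the alarm against a hypothetical critical value of 0.3.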

  3. The General Formulation and Practical Calculation of the Diffusion Coefficient in a Lattice Containing Cavities; FORMULATION GENERALE ET CALCUL PRATIQUE DU COEFFICIENT DE DIFFUSION DANS UN RESEAU COMPORTANT DES CAVITES (in French)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Benoist, P.

    The calculation of diffusion coefficients in a lattice necessitates the knowledge of a correct method of weighting the free paths of the different constituents. An unambiguous definition of this weighting method is given here, based on the calculation of leakages from a zone of a reactor. The formulation obtained, which is both simple and general, reduces the calculation of diffusion coefficients to that of collision probabilities in the different media; it reveals in the expression for the radial coefficient the series of the terms of angular correlation (cross terms) recently shown by several authors. This formulation is then used to calculate the practical case of a classical type of lattice composed of a moderator and a fuel element surrounded by an empty space. Analytical and numerical comparison of the expressions obtained with those inferred from the theory of BEHRENS shows up the importance of several new terms, some of which are linked with the transparency of the fuel element. Cross terms up to the second order are evaluated. A practical formulary is given at the end of the paper. (author)

  4. Factors affecting the estimate of primary production from space

    NASA Technical Reports Server (NTRS)

    Balch, W. M.; Byrne, C. F.

    1994-01-01

    Remote sensing of primary production in the euphotic zone has been based mostly on visible-band water-leaving radiance measured with the coastal zone color scanner. There are some robust, simple relationships for calculating integral production based on surface measurements, but they also require knowledge of photoadaptive parameters such as maximum photosynthesis, which currently cannot be obtained from space. A 17,000-station data set is used to show that space-based estimates of maximum photosynthesis could improve predictions of psi, the water column light utilization index, which is an important term in many primary productivity models. Temperature is also examined as a factor for predicting hydrographic structure and primary production. A simple model is used to relate temperature and maximum photosynthesis; the model incorporates (1) the positive relationship between maximum photosynthesis and temperature and (2) the strongly negative relationship between temperature and nitrate in the ocean (which directly affects maximum growth rates via nitrogen limitation). Since these two factors relate to carbon and nitrogen, 'balanced carbon/nitrogen assimilation' was calculated using the Redfield ratio. It is expected that the relationship between maximum balanced carbon assimilation and temperature is concave-down, with the peak dependent on nitrate uptake kinetics, temperature-nitrate relationships, and the carbon:chlorophyll ratio. These predictions were compared with the sea truth data. The minimum turnover time for nitrate was also calculated using this approach. Lastly, sea surface temperature gradients were used to predict the slope of isotherms (a proxy for the slope of isopycnals in many waters). Sea truth data show that at size scales of several hundred kilometers, surface temperature gradients can provide information on the slope of isotherms in the top 200 m of the water column.
This is directly relevant to the supply of nutrients into the surface mixed layer, which is useful for predicting integral biomass and primary production.

  5. Grading apical vertebral rotation without a computed tomography scan: a clinically relevant system based on the radiographic appearance of bilateral pedicle screws.

    PubMed

    Upasani, Vidyadhar V; Chambers, Reid C; Dalal, Ali H; Shah, Suken A; Lehman, Ronald A; Newton, Peter O

    2009-08-01

    Bench-top and retrospective analysis to assess vertebral rotation based on the appearance of bilateral pedicle screws in patients with adolescent idiopathic scoliosis (AIS). To develop a clinically relevant radiographic grading system for evaluating postoperative thoracic apical vertebral rotation that would correlate with computed tomography (CT) measures of rotation. The 3-column vertebral body control provided by bilateral pedicle screws has enabled scoliosis surgeons to develop advanced techniques of direct vertebral derotation. Our ability to accurately quantify spinal deformity in the axial plane, however, continues to be limited. Trigonometry was used to define the relationship between the position of bilateral pedicle screws and vertebral rotation. This relationship was validated using digital photographs of a bench-top model. The mathematical relationships were then used to calculate vertebral rotation from standing postoperative, posteroanterior radiographs in AIS patients and correlated with postoperative CT measures of rotation. Fourteen digital photographs of the bench-top model were independently analyzed twice by 3 coauthors. The mathematically calculated degree of rotation was found to correlate significantly with the actual degree of rotation (r = 0.99; P < 0.001) and the intra- and interobserver reliability for these measurements were both excellent (kappa = 0.98 and kappa = 0.97, respectively). In the retrospective analysis of 17 AIS patients, the average absolute difference between the radiographic measurement of rotation and the CT measure was only 1.9 degrees +/- 2.0 degrees (r = 0.92; P < 0.001). Based on these correlations a simple radiographic grading system for postoperative apical vertebral rotation was developed. An accurate assessment of vertebral rotation can be performed radiographically, using screw lengths and screw tip-to-rod distances of bilateral segmental pedicle screws and a trigonometric calculation. 
These data support the use of a simple radiographic grading system to approximate apical vertebral rotation in AIS patients treated with bilateral apical pedicle screws.
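The authors derive their own trigonometric relation from screw lengths and screw tip-to-rod distances; that exact formula is not reproduced in this record. As a generic illustration only (not the paper's formula), out-of-plane rotation can be estimated from the radiographic foreshortening of a screw of known length:

```python
import math

def axial_rotation_deg(true_screw_len_mm, projected_len_mm):
    """Illustrative estimate of out-of-plane rotation from foreshortening:
    projected = true * cos(theta)  =>  theta = acos(projected / true).
    Clamps the ratio at 1.0 to guard against measurement noise."""
    ratio = min(1.0, projected_len_mm / true_screw_len_mm)
    return math.degrees(math.acos(ratio))
```

A 40 mm screw projecting at its full length implies no rotation about the vertical axis; a projection shortened by cos(15°) recovers 15° of rotation.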

  6. Simple transfer calibration method for a Cimel Sun-Moon photometer: calculating lunar calibration coefficients from Sun calibration constants.

    PubMed

    Li, Zhengqiang; Li, Kaitao; Li, Donghui; Yang, Jiuchun; Xu, Hua; Goloub, Philippe; Victori, Stephane

    2016-09-20

    The Cimel new technologies allow both daytime and nighttime aerosol optical depth (AOD) measurements. Although the daytime AOD calibration protocols are well established, accurate and simple nighttime calibration is still a challenging task. The standard lunar-Langley and intercomparison calibration methods both require specific conditions in terms of atmospheric stability and site conditions. Additionally, the lunar irradiance model has some known limits on its uncertainty. This paper presents a simple calibration method that transfers the direct-Sun calibration constant, V0,Sun, to the lunar irradiance calibration coefficient, CMoon. Our approach is a pure calculation method, independent of site limits, e.g., Moon phase. The method is also not affected by the lunar irradiance model limitations, which are the largest error source of traditional calibration methods. Besides, this new transfer calibration approach is easy to use in the field, since CMoon can be obtained directly once V0,Sun is known. Error analysis suggests that the average uncertainty of CMoon over the 440-1640 nm bands obtained with the transfer method is 2.4%-2.8%, depending on the V0,Sun approach (Langley or intercomparison), which is theoretically comparable with that of the lunar-Langley approach. In this paper, the Sun-Moon transfer and the Langley methods are compared based on site measurements in Beijing, and the day-night measurement continuity and performance are analyzed.

  7. Active Brownian particles near straight or curved walls: Pressure and boundary layers

    NASA Astrophysics Data System (ADS)

    Duzgun, Ayhan; Selinger, Jonathan V.

    2018-03-01

    Unlike equilibrium systems, active matter is not governed by the conventional laws of thermodynamics. Through a series of analytic calculations and Langevin dynamics simulations, we explore how systems cross over from equilibrium to active behavior as the activity is increased. In particular, we calculate the profiles of density and orientational order near straight or circular walls and show the characteristic width of the boundary layers. We find a simple relationship between the enhancements of density and pressure near a wall. Based on these results, we determine how the pressure depends on wall curvature and hence make approximate analytic predictions for the motion of curved tracers, as well as the rectification of active particles around small openings in confined geometries.
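The Langevin dynamics referenced above can be sketched with the standard overdamped active Brownian particle (ABP) update equations. A minimal free-particle sketch (no wall, so it does not reproduce the paper's boundary layers); the self-propulsion speed and diffusion constants are arbitrary:

```python
import math
import random

def abp_trajectory(steps=1000, dt=1e-3, v0=1.0, d_t=0.1, d_r=1.0, seed=1):
    """Overdamped 2D active Brownian particle:
    dr     = v0 * e(theta) * dt + sqrt(2*Dt*dt) * xi
    dtheta = sqrt(2*Dr*dt) * eta
    with xi, eta independent unit Gaussians. Returns the (x, y) path."""
    rng = random.Random(seed)
    x = y = theta = 0.0
    path = [(x, y)]
    for _ in range(steps):
        x += v0 * math.cos(theta) * dt + math.sqrt(2.0 * d_t * dt) * rng.gauss(0, 1)
        y += v0 * math.sin(theta) * dt + math.sqrt(2.0 * d_t * dt) * rng.gauss(0, 1)
        theta += math.sqrt(2.0 * d_r * dt) * rng.gauss(0, 1)
        path.append((x, y))
    return path
```

Adding a repulsive wall force to the positional update is the usual starting point for measuring the density and pressure enhancements the paper analyzes.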

  8. Kinetics versus thermodynamics in materials modeling: The case of the di-vacancy in iron

    NASA Astrophysics Data System (ADS)

    Djurabekova, F.; Malerba, L.; Pasianot, R. C.; Olsson, P.; Nordlund, K.

    2010-07-01

    Monte Carlo models are widely used for the study of microstructural and microchemical evolution of materials under irradiation. However, they often link explicitly the relevant activation energies to the energy difference between local equilibrium states. We provide a simple example (di-vacancy migration in iron) in which a rigorous activation energy calculation, by means of both empirical interatomic potentials and density functional theory methods, clearly shows that such a link is not granted, revealing a migration mechanism that a thermodynamics-linked activation energy model cannot predict. Such a mechanism is, however, fully consistent with thermodynamics. This example emphasizes the importance of basing Monte Carlo methods on models where the activation energies are rigorously calculated, rather than deduced from widespread heuristic equations.
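The practical consequence of using a heuristic activation energy instead of a rigorously calculated barrier can be made concrete with the harmonic transition-state (Arrhenius) rate. The energies below are hypothetical, chosen only to show how strongly a few tenths of an eV change the rate:

```python
import math

KB_EV = 8.617333262e-5  # Boltzmann constant in eV/K

def arrhenius_rate(prefactor_hz, e_act_ev, temp_k):
    """Harmonic transition-state jump rate: nu * exp(-Ea / (kB*T))."""
    return prefactor_hz * math.exp(-e_act_ev / (KB_EV * temp_k))

# Hypothetical comparison at 600 K with attempt frequency 1e13 Hz:
# a heuristic barrier of 0.6 eV versus a rigorously computed 0.9 eV.
rate_heuristic = arrhenius_rate(1e13, 0.6, 600.0)
rate_rigorous = arrhenius_rate(1e13, 0.9, 600.0)
```

At 600 K the 0.3 eV difference changes the predicted jump rate by more than two orders of magnitude, which is why kinetic Monte Carlo results are so sensitive to how the activation energies are obtained.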

  9. New statistical scission-point model to predict fission fragment observables

    NASA Astrophysics Data System (ADS)

    Lemaître, Jean-François; Panebianco, Stefano; Sida, Jean-Luc; Hilaire, Stéphane; Heinrich, Sophie

    2015-09-01

    The development of high performance computing facilities makes possible a massive production of nuclear data in a full microscopic framework. Taking advantage of the individual potential calculations of more than 7000 nuclei, a new statistical scission-point model, called SPY, has been developed. It gives access to the absolute available energy at the scission point, which allows the use of a parameter-free microcanonical statistical description to calculate the distributions and the mean values of all fission observables. SPY exploits the richness of microscopic inputs within a rather simple theoretical framework, without any parameter except the scission-point definition, to draw clear answers based on full knowledge of the ingredients involved in the model, at very limited computing cost.

  10. IOL calculation using paraxial matrix optics.

    PubMed

    Haigis, Wolfgang

    2009-07-01

    Matrix methods have a long tradition in paraxial physiological optics. They are especially suited to describe and handle optical systems in a simple and intuitive manner. While these methods are more and more applied to calculate the refractive power(s) of toric intraocular lenses (IOL), they are hardly used in routine IOL power calculations for cataract and refractive surgery, where analytical formulae are commonly utilized. Since these algorithms are also based on paraxial optics, matrix optics can offer rewarding approaches to standard IOL calculation tasks, as will be shown here. Some basic concepts of matrix optics are introduced and the system matrix for the eye is defined, and its application in typical IOL calculation problems is illustrated. Explicit expressions are derived to determine: predicted refraction for a given IOL power; necessary IOL power for a given target refraction; refractive power for a phakic IOL (PIOL); predicted refraction for a thick lens system. Numerical examples with typical clinical values are given for each of these expressions. It is shown that matrix optics can be applied in a straightforward and intuitive way to most problems of modern routine IOL calculation, in thick or thin lens approximation, for aphakic or phakic eyes.
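The 2x2 ray-transfer formalism described above can be sketched directly. A minimal example, assuming rays represented as (height, reduced angle), refraction matrices [[1, 0], [-P, 1]] and translation over a reduced distance d/n; sign conventions vary between texts, and the numbers below are illustrative, not clinical recommendations:

```python
def mat_mul(a, b):
    """Product of two 2x2 matrices."""
    return [[a[0][0]*b[0][0] + a[0][1]*b[1][0], a[0][0]*b[0][1] + a[0][1]*b[1][1]],
            [a[1][0]*b[0][0] + a[1][1]*b[1][0], a[1][0]*b[0][1] + a[1][1]*b[1][1]]]

def refraction(power_d):
    """Thin-element refraction matrix for power P in diopters."""
    return [[1.0, 0.0], [-power_d, 1.0]]

def translation(reduced_dist_m):
    """Translation matrix over a reduced distance d/n in meters."""
    return [[1.0, reduced_dist_m], [0.0, 1.0]]

def equivalent_power(p1_d, p2_d, reduced_dist_m):
    """Equivalent power of two thin elements separated by a reduced distance:
    the -C element of the system matrix S = R2 * T * R1."""
    s = mat_mul(refraction(p2_d),
                mat_mul(translation(reduced_dist_m), refraction(p1_d)))
    return -s[1][0]
```

Multiplying the matrices reproduces the familiar thick-system formula P = P1 + P2 - (d/n)·P1·P2, which is one way to see that the matrix and analytical approaches agree.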

  11. The computation of lipophilicities of ⁶⁴Cu PET systems based on a novel approach for fluctuating charges.

    PubMed

    Comba, Peter; Martin, Bodo; Sanyal, Avik; Stephan, Holger

    2013-08-21

    A QSPR scheme for the computation of lipophilicities of ⁶⁴Cu complexes was developed with a training set of 24 tetraazamacrocyclic and bispidine-based Cu(II) compounds and their experimentally available 1-octanol-water distribution coefficients. A minimum number of physically meaningful parameters were used in the scheme, and these are primarily based on data available from molecular mechanics calculations, using an established force field for Cu(II) complexes and a recently developed scheme for the calculation of fluctuating atomic charges. The developed model was also applied to an independent validation set and was found to accurately predict distribution coefficients of potential ⁶⁴Cu PET (positron emission tomography) systems. A possible next step would be the development of a QSAR-based biodistribution model to track the uptake of imaging agents in different organs and tissues of the body. It is expected that such simple, empirical models of lipophilicity and biodistribution will be very useful in the design and virtual screening of positron emission tomography (PET) imaging agents.
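The fitting step of such a QSPR scheme is ordinary least squares against computed descriptors. A minimal single-descriptor sketch; the paper's model uses several physically meaningful parameters, and the data points below are hypothetical:

```python
def fit_slr(xs, ys):
    """Ordinary least-squares fit y = a + b*x; returns (a, b, r_squared)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    b = sxy / sxx
    a = my - b * mx
    ss_res = sum((y - (a + b * x)) ** 2 for x, y in zip(xs, ys))
    ss_tot = sum((y - my) ** 2 for y in ys)
    return a, b, 1.0 - ss_res / ss_tot
```

In practice one would regress experimental logD values against a descriptor (e.g. a charge-derived quantity) for the training set, then apply the fitted line to the validation set.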

  12. A Quantum Chemical and Statistical Study of Phenolic Schiff Bases with Antioxidant Activity against DPPH Free Radical

    PubMed Central

    Anouar, El Hassane

    2014-01-01

    Phenolic Schiff bases are known as powerful antioxidants. To select the electronic, 2D and 3D descriptors responsible for the free radical scavenging ability of a series of 30 phenolic Schiff bases, a set of molecular descriptors were calculated by using B3P86 (Becke’s three parameter hybrid functional with Perdew 86 correlation functional) combined with the 6-31+G(d,p) basis set (i.e., at the B3P86/6-31+G(d,p) level of theory). The chemometric methods, simple and multiple linear regressions (SLR and MLR), principal component analysis (PCA) and hierarchical cluster analysis (HCA) were employed to reduce the dimensionality and to investigate the relationship between the calculated descriptors and the antioxidant activity. The results showed that the antioxidant activity mainly depends on the first and second bond dissociation enthalpies of phenolic hydroxyl groups, the dipole moment and the hydrophobicity descriptors. The antioxidant activity is inversely proportional to the main descriptors. The selected descriptors discriminate the Schiff bases into active and inactive antioxidants. PMID:26784873

  13. A simple kinetic model of a Ne-H2 Penning-plasma laser

    NASA Astrophysics Data System (ADS)

    Petrov, G. M.; Stefanova, M. S.; Pramatarov, P. M.

    1995-09-01

    A simple kinetic model of the Ne-H2 Penning-Plasma Laser (PPL) (NeI 585.3 nm) is proposed. The negative glow of a hollow cathode discharge at intermediate pressures is considered as the active medium. The balance equations for the upper and lower laser levels, electrons, ions and electron energy are solved. The dependences of the laser gain on the discharge conditions (Ne and H2 partial pressures, discharge current) are calculated and measured. The calculated values are in good agreement with the experimental data.

  14. A Simple Formula to Calculate Shallow-Water Transmission Loss by Means of a Least-Squares Surface Fit Technique.

    DTIC Science & Technology

    1980-09-01

    SACLANTCEN Memorandum SM-139: "A Simple Formula to Calculate Shallow-Water Transmission Loss by Means of a Least-Squares Surface Fit Technique," by Ole F. Hastrup and Tuncay Akal, SACLANT ASW Research Centre, September 1980.

  15. Correlation between cystatin C-based formulas, Schwartz formula and urinary creatinine clearance for glomerular filtration rate estimation in children with kidney disease.

    PubMed

    Safaei-Asl, Afshin; Enshaei, Mercede; Heydarzadeh, Abtin; Maleknejad, Shohreh

    2016-01-01

    Assessment of glomerular filtration rate (GFR) is an important tool for monitoring renal function. Given the limitations of the available methods, we aimed to calculate GFR with cystatin C (Cys C)-based formulas and determine their correlation with current methods. We studied 72 children (38 boys and 34 girls) with renal disorders. The 24-hour urinary creatinine (Cr) clearance was the gold standard method. GFR was measured with the Schwartz formula and Cys C-based formulas (Grubb, Hoek, Larsson and Simple). The correlation rates of these formulas were then determined. Using the Pearson correlation coefficient, a significant positive correlation between all formulas and the standard method was seen (R(2) for the Schwartz, Hoek, Larsson, Grubb and Simple formulas was 0.639, 0.722, 0.705, 0.712 and 0.722, respectively) (P<0.001). Cys C-based formulas could predict the variance of the standard method results with high power. These formulas correlated with the Schwartz formula with R(2) of 0.62-0.65 (intermediate correlation). Linear regression with a constant (y-intercept) revealed that the Larsson, Hoek and Grubb formulas can estimate GFR with no statistical difference compared with the standard method, but the Schwartz and Simple formulas overestimate GFR. This study shows that Cys C-based formulas have a strong relationship with the 24-hour urinary Cr clearance. Hence, they can determine GFR in children with kidney injury more easily and with sufficient accuracy. This helps the physician diagnose renal disease at an early stage and improves the prognosis.
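The formulas compared in this study have simple closed forms. A minimal sketch using commonly published coefficients (bedside Schwartz k = 0.413; Hoek; Larsson); treat the coefficients as assumptions to verify against the paper, since several variants of each formula exist:

```python
def gfr_schwartz(height_cm, serum_cr_mg_dl, k=0.413):
    """Schwartz estimate: GFR = k * height / SCr (mL/min/1.73 m^2).
    k = 0.413 is the updated bedside coefficient; older k values differ."""
    return k * height_cm / serum_cr_mg_dl

def gfr_hoek(cys_c_mg_l):
    """Hoek formula: GFR = -4.32 + 80.35 / CysC."""
    return -4.32 + 80.35 / cys_c_mg_l

def gfr_larsson(cys_c_mg_l):
    """Larsson formula: GFR = 77.24 * CysC^(-1.2623)."""
    return 77.24 * cys_c_mg_l ** -1.2623
```

For example, a cystatin C of 1.0 mg/L gives about 76 mL/min/1.73 m² by Hoek and about 77 by Larsson, while a 140 cm child with SCr 0.7 mg/dL gets about 83 by bedside Schwartz.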

  16. Room temperature ionic liquids: A simple model. Effect of chain length and size of intermolecular potential on critical temperature.

    PubMed

    Chapela, Gustavo A; Guzmán, Orlando; Díaz-Herrera, Enrique; del Río, Fernando

    2015-04-21

    A model of a room temperature ionic liquid can be represented as an ion attached to an aliphatic chain mixed with a counter ion. The simple model used in this work is based on a short rigid tangent square well chain with an ion, represented by a hard sphere interacting with a Yukawa potential at the head of the chain, mixed with a counter ion represented as well by a hard sphere interacting with a Yukawa potential of the opposite sign. The length of the chain and the depth of the intermolecular forces are investigated in order to understand which of these factors are responsible for the lowering of the critical temperature. It is the large difference between the ionic and the dispersion potentials which explains this lowering of the critical temperature. Calculation of liquid-vapor equilibrium orthobaric curves is used to estimate the critical points of the model. Vapor pressures are used to obtain an estimate of the triple point of the different models in order to calculate the span of temperatures where they remain a liquid. Surface tensions and interfacial thicknesses are also reported.
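The ionic heads of the model interact through opposite-sign hard-core Yukawa potentials. A minimal sketch of the standard hard-core Yukawa pair potential (the tangent square-well chain segments are omitted, and the screening parameter below is arbitrary):

```python
import math

def hard_core_yukawa(r, sigma=1.0, epsilon=1.0, kappa=1.8, attractive=True):
    """Hard-core Yukawa pair potential in units of sigma and epsilon:
    u(r) = +inf                                        for r <= sigma
    u(r) = -/+ epsilon * exp(-kappa*(r - sigma)) / (r/sigma)  otherwise,
    with the sign set by whether the pair of charges attracts or repels."""
    if r <= sigma:
        return math.inf
    sign = -1.0 if attractive else 1.0
    return sign * epsilon * math.exp(-kappa * (r - sigma)) / (r / sigma)
```

At contact (r slightly above σ) the attractive branch approaches -ε, and flipping the sign models the like-charge (counter-ion versus co-ion) interaction.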

  17. A NEW METHOD FOR ENVIRONMENTAL FLOW ASSESSMENT BASED ON BASIN GEOLOGY. APPLICATION TO EBRO BASIN.

    PubMed

    2018-02-01

    The determination of environmental flows is one of the commonest practical actions implemented on European rivers to promote their good ecological status. In Mediterranean rivers, groundwater inflows are a decisive factor in streamflow maintenance. This work examines the relationship between the lithological composition of the Ebro basin (Spain) and dry season flows in order to establish a model that can assist in the calculation of environmental flow rates.Due to the lack of information on the hydrogeological characteristics of the studied basin, the variable representing groundwater inflows has been estimated in a very simple way. The explanatory variable used in the proposed model is easy to calculate and is sufficiently powerful to take into account all the required characteristics.The model has a high coefficient of determination, indicating that it is accurate for the intended purpose. The advantage of this method compared to other methods is that it requires very little data and provides a simple estimate of environmental flow. It is also independent of the basin area and the river section order.The results of this research also contribute to knowledge of the variables that influence low flow periods and low flow rates on rivers in the Ebro basin.

  18. Size and Shape of Protein Molecules at the Nanometer Level Determined by Sedimentation, Gel Filtration, and Electron Microscopy

    PubMed Central

    2009-01-01

    An important part of characterizing any protein molecule is to determine its size and shape. Sedimentation and gel filtration are hydrodynamic techniques that can be used for this medium resolution structural analysis. This review collects a number of simple calculations that are useful for thinking about protein structure at the nanometer level. Readers are reminded that the Perrin equation is generally not a valid approach to determine the shape of proteins. Instead, a simple guideline is presented, based on the measured sedimentation coefficient and a calculated maximum S, to estimate if a protein is globular or elongated. It is recalled that a gel filtration column fractionates proteins on the basis of their Stokes radius, not molecular weight. The molecular weight can be determined by combining gradient sedimentation and gel filtration, techniques available in most biochemistry laboratories, as originally proposed by Siegel and Monty. Finally, rotary shadowing and negative stain electron microscopy are powerful techniques for resolving the size and shape of single protein molecules and complexes at the nanometer level. A combination of hydrodynamics and electron microscopy is especially powerful. PMID:19495910
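The combination of sedimentation and gel filtration, and the maximum-S guideline, can be sketched numerically. The coefficients 4205 (for M in Da, s in Svedbergs, Rs in nm) and 0.00361 (for Smax from M in Da) are commonly quoted values from this line of work; treat them, and the shape thresholds, as assumptions to verify against the review itself:

```python
def mass_from_s_and_rs(s_svedberg, rs_nm):
    """Siegel-Monty-style combination of sedimentation and gel filtration:
    M (Da) ~ 4205 * s * Rs, with s in Svedbergs and Rs in nm. The constant
    bundles solvent viscosity, density and a typical partial specific volume."""
    return 4205.0 * s_svedberg * rs_nm

def smax(mass_da):
    """Sedimentation coefficient of a smooth unhydrated sphere of mass M:
    Smax ~ 0.00361 * M^(2/3)."""
    return 0.00361 * mass_da ** (2.0 / 3.0)

def shape_ratio(s_svedberg, rs_nm):
    """Smax / s: values near 1.2-1.3 suggest a globular protein,
    larger values an increasingly elongated one (guideline only)."""
    m = mass_from_s_and_rs(s_svedberg, rs_nm)
    return smax(m) / s_svedberg
```

Because Smax is the fastest a molecule of that mass could possibly sediment, a measured s far below Smax signals an elongated, high-drag shape.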

  19. Simple calculator to estimate the medical cost of diabetes in sub-Saharan Africa

    PubMed Central

    Alouki, Koffi; Delisle, Hélène; Besançon, Stéphane; Baldé, Naby; Sidibé-Traoré, Assa; Drabo, Joseph; Djrolo, François; Mbanya, Jean-Claude; Halimi, Serge

    2015-01-01

    AIM: To design a medical cost calculator and show that diabetes care is beyond reach of the majority particularly patients with complications. METHODS: Out-of-pocket expenditures of patients for medical treatment of type-2 diabetes were estimated based on price data collected in Benin, Burkina Faso, Guinea and Mali. A detailed protocol for realistic medical care of diabetes and its complications in the African context was defined. Care components were based on existing guidelines, published data and clinical experience. Prices were obtained in public and private health facilities. The cost calculator used Excel. The cost for basic management of uncomplicated diabetes was calculated per person and per year. Incremental costs were also computed per annum for chronic complications and per episode for acute complications. RESULTS: Wide variations of estimated care costs were observed among countries and between the public and private healthcare system. The minimum estimated cost for the treatment of uncomplicated diabetes (in the public sector) would amount to 21%-34% of the country’s gross national income per capita, 26%-47% in the presence of retinopathy, and above 70% for nephropathy, the most expensive complication. CONCLUSION: The study provided objective evidence for the exorbitant medical cost of diabetes considering that no medical insurance is available in the study countries. Although the calculator only estimates the cost of inaction, it is innovative and of interest for several stakeholders. PMID:26617974
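The calculator's spreadsheet logic reduces to summing annual component costs and expressing the total as a share of gross national income per capita. A minimal sketch; all figures below are hypothetical placeholders, not the paper's collected prices:

```python
def annual_cost(component_costs):
    """Total yearly out-of-pocket cost from per-component annual costs."""
    return sum(component_costs.values())

def share_of_gni(total_cost, gni_per_capita):
    """Treatment cost as a fraction of gross national income per capita."""
    return total_cost / gni_per_capita

# Hypothetical annual cost components (USD) for uncomplicated type-2 diabetes.
base_care = {
    "consultations": 40.0,
    "glucose_monitoring": 60.0,
    "oral_drugs": 110.0,
    "lab_tests": 30.0,
}
total = annual_cost(base_care)
```

With these placeholder numbers, a total of 240 USD against a GNI per capita of 800 USD corresponds to 30% of income, in the range the study reports for uncomplicated diabetes.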

  20. Nuclear Data Uncertainties for Typical LWR Fuel Assemblies and a Simple Reactor Core

    NASA Astrophysics Data System (ADS)

    Rochman, D.; Leray, O.; Hursin, M.; Ferroukhi, H.; Vasiliev, A.; Aures, A.; Bostelmann, F.; Zwermann, W.; Cabellos, O.; Diez, C. J.; Dyrda, J.; Garcia-Herranz, N.; Castro, E.; van der Marck, S.; Sjöstrand, H.; Hernandez, A.; Fleming, M.; Sublet, J.-Ch.; Fiorito, L.

    2017-01-01

    The impact of current nuclear data library covariances, such as those in ENDF/B-VII.1, JEFF-3.2, JENDL-4.0, SCALE and TENDL, on relevant current reactors is presented in this work. The uncertainties due to nuclear data are calculated for existing PWR and BWR fuel assemblies (with burn-up up to 40 GWd/tHM, followed by 10 years of cooling time) and for a simplified PWR full core model (without burn-up) for quantities such as k∞, macroscopic cross sections, pin power or isotope inventory. In this work, the method of propagation of uncertainties is based on random sampling of nuclear data, either from covariance files or directly from basic parameters. Additionally, possible biases on calculated quantities are investigated, such as the self-shielding treatment. Different calculation schemes are used, based on CASMO, SCALE, DRAGON, MCNP or FISPACT-II, thus simulating real-life assignments for technical-support organizations. The outcome of such a study is a comparison of uncertainties with two consequences. One: although this study is not expected to yield identical results across the calculation schemes involved, it provides insight into what can happen when calculating uncertainties and gives some perspective on the range of validity of these uncertainties. Two: it paints a picture of the state of knowledge as of today, using existing nuclear data library covariances and current methods.
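The random-sampling propagation idea can be sketched with a toy model: draw correlated perturbations from a covariance matrix (here a 2x2 Cholesky factorization) and observe the spread of a response. In the real workflow the samples are full perturbed nuclear-data files run through lattice codes such as CASMO or SCALE; the covariance and response below are entirely hypothetical:

```python
import math
import random

def sample_correlated_pair(cov, n, seed=0):
    """Draw n samples from a zero-mean bivariate normal with covariance cov,
    via its 2x2 Cholesky factor L (cov = L L^T)."""
    l11 = math.sqrt(cov[0][0])
    l21 = cov[1][0] / l11
    l22 = math.sqrt(cov[1][1] - l21 ** 2)
    rng = random.Random(seed)
    return [(l11 * (z1 := rng.gauss(0, 1)),
             l21 * z1 + l22 * rng.gauss(0, 1)) for _ in range(n)]

def propagated_std(cov, model, n=20000, seed=0):
    """Standard deviation of a response under sampled data perturbations."""
    ys = [model(d1, d2) for d1, d2 in sample_correlated_pair(cov, n, seed)]
    mean = sum(ys) / n
    return math.sqrt(sum((y - mean) ** 2 for y in ys) / (n - 1))

# Toy linear response y = 1 + 0.5*d1 - 0.2*d2 under a hypothetical covariance.
cov = [[0.04, 0.01], [0.01, 0.09]]
sigma_y = propagated_std(cov, lambda d1, d2: 1.0 + 0.5 * d1 - 0.2 * d2)
```

For this linear case the analytic standard deviation is sqrt(0.25·0.04 + 0.04·0.09 - 2·0.1·0.01) ≈ 0.108, so the sampled estimate can be checked directly.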

  1. CAD-based Automatic Modeling Method for Geant4 geometry model Through MCAM

    NASA Astrophysics Data System (ADS)

    Wang, Dong; Nie, Fanzhi; Wang, Guozhong; Long, Pengcheng; LV, Zhongliang

    2014-06-01

    Geant4 is a widely used Monte Carlo transport simulation package. Before a Geant4 calculation can run, the geometry model must be established, described either in the Geometry Description Markup Language (GDML) or in C++. However, it is time-consuming and error-prone to describe models in GDML manually. Automatic modeling methods have been developed recently, but most existing modeling programs have shortcomings; in particular, some are inaccurate or tied to a specific CAD format. To convert complex CAD geometry models into GDML accurately, a CAD-based modeling method for Geant4 was developed. The essence of this method is the conversion between the boundary representation (B-REP) used by CAD models and the constructive solid geometry (CSG) used by GDML. First, the CAD model is decomposed into simple solids, each with a single closed shell. Each simple solid is then decomposed into a set of convex shells, and the corresponding GDML convex basic solids are generated from the boundary surfaces obtained from the topological characteristics of each convex shell. After these solids are generated, the GDML model is assembled through a series of Boolean operations. This method was adopted in the CAD/Image-based Automatic Modeling Program for Neutronics & Radiation Transport (MCAM) and tested with several models, including the examples in the Geant4 installation package. The results showed that the method converts standard CAD models accurately and can be used for Geant4 automatic modeling.

  2. Assessment of Growth Problems in Adolescents

    PubMed Central

    Baker, F.W.

    1986-01-01

    Investigation of an adolescent growth problem consists of taking an adequate history and doing a complete physical examination. This procedure, along with a calculation of bone age and height/weight age, will allow family physicians to decide on the cause of the growth variance in most patients. Relatively simple studies can be done in the family physician's office to delineate the major causes of growth problems; the majority will be unrelated to the endocrine system. Further studies may be needed in a hospital-based setting. PMID:21267222

  3. First-principles study of point defects at a semicoherent interface

    DOE PAGES

    Metsanurk, E.; Tamm, A.; Caro, A.; ...

    2014-12-19

    Most of the atomistic modeling of semicoherent metal-metal interfaces has so far been based on the use of semiempirical interatomic potentials. Here, we show that key conclusions drawn from previous studies are in contradiction with more precise ab initio calculations. In particular, we find that single point defects do not delocalize, but remain compact near the interfacial plane in Cu-Nb multilayers. Lastly, we give a simple qualitative explanation for this difference on the basis of the well-known limited transferability of empirical potentials.

  4. Generalized Tavis-Cummings models and quantum networks

    NASA Astrophysics Data System (ADS)

    Gorokhov, A. V.

    2018-04-01

    The properties of quantum networks based on generalized Tavis-Cummings models are theoretically investigated. We have calculated the information transfer success rate from one node to another in a simple model of a quantum network realized with two-level atoms placed in cavities and interacting with an external laser field and cavity photons. The dynamical-group method for the Hamiltonian and the corresponding coherent-state technique were used to investigate the temporal dynamics of the two-node model.

  5. BPERM version 3.0: A 2-D wakepotential/impedance code

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Barts, T.; Chou, W.

    1996-10-01

    BPERM 3.0 is an improved version of a previous release. The main purpose of this version is to make it more user friendly. Following a simple 1-2-3 procedure, one obtains both text and graphical output of the wakepotential and impedance for a given geometry. The calculation is based on a boundary perturbation method, which is significantly faster than numerical simulations. It is accurate when the discontinuities are small. In particular, it works well for tapered structures. 5 refs., 3 figs.

  6. Dynamic stability of maglev systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cai, Y.; Chen, S.S.; Mulcahy, T.M.

    1994-05-01

    Because dynamic instabilities are not acceptable in any commercial maglev system, it is important to consider dynamic instability in the development of all maglev systems. This study considers the stability of maglev systems based on experimental data, scoping calculations, and simple mathematical models. Divergence and flutter are obtained for coupled vibration of a three-degree-of-freedom maglev vehicle on a guideway consisting of double L-shaped aluminum segments. The theory and analysis developed in this study provide basic stability characteristics and identify future research needs for maglev systems.

  7. Bulk strain solitons as a tool for determination of the third order elastic moduli of composite materials

    NASA Astrophysics Data System (ADS)

    Semenova, I. V.; Belashov, A. V.; Garbuzov, F. E.; Samsonov, A. M.; Semenov, A. A.

    2017-06-01

    We demonstrate an alternative approach to the determination of the third-order elastic moduli of materials, based on registration of nonlinear bulk strain waves in three basic structural waveguides (rod, plate and shell) and subsequent calculation of the Murnaghan moduli from the recorded wave parameters via simple algebra. These elastic moduli are available in the literature for only a limited number of materials and are measured with considerable errors, which demonstrates the demand for novel approaches to their determination.

  8. Engineering topological edge states in two dimensional magnetic photonic crystal

    NASA Astrophysics Data System (ADS)

    Yang, Bing; Wu, Tong; Zhang, Xiangdong

    2017-01-01

    Based on a perturbative approach, we propose a simple and efficient method to engineer the topological edge states in two dimensional magnetic photonic crystals. The topological edge states in the microstructures can be constructed and varied by altering the parameters of the microstructure according to the field-energy distributions of the Bloch states at the related Bloch wave vectors. The validity of the proposed method has been demonstrated by exact numerical calculations through three concrete examples. Our method makes the topological edge states "designable."

  9. Measuring Viscosities of Gases at Atmospheric Pressure

    NASA Technical Reports Server (NTRS)

    Singh, Jag J.; Mall, Gerald H.; Hoshang, Chegini

    1987-01-01

    Variant of general capillary method for measuring viscosities of unknown gases based on use of thermal mass-flowmeter section for direct measurement of pressure drops. In technique, flowmeter serves dual role, providing data for determining volume flow rates and serving as well-characterized capillary-tube section for measurement of differential pressures across it. New method simple, sensitive, and adaptable for absolute or relative viscosity measurements of low-pressure gases. Suited for very complex hydrocarbon mixtures where limitations of classical theory and compositional errors make theoretical calculations less reliable.
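
For a capillary section in laminar flow, the viscosity follows from the Hagen-Poiseuille relation between pressure drop and volume flow rate. A short sketch of that relation (the readings below are hypothetical, not the flowmeter data from the report):

```python
# Hagen-Poiseuille: mu = pi * r^4 * dp / (8 * L * Q), valid for
# steady laminar flow through a straight circular capillary.
import math

def viscosity_poiseuille(radius_m, length_m, dp_pa, q_m3_s):
    """Dynamic viscosity (Pa*s) from capillary radius, length,
    measured pressure drop and volume flow rate."""
    return math.pi * radius_m**4 * dp_pa / (8.0 * length_m * q_m3_s)

# Hypothetical reading: construct dp from a known viscosity and check
# that the formula recovers it (round-trip consistency).
mu = 1.8e-5                      # Pa*s, roughly air at room temperature
r, L, Q = 2.0e-4, 0.1, 1.0e-7    # m, m, m^3/s
dp = 8.0 * mu * L * Q / (math.pi * r**4)
mu_back = viscosity_poiseuille(r, L, dp, Q)
```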

  10. Low loss fusion splicing polarization-maintaining photonic crystal fiber and conventional polarization-maintaining fiber

    NASA Astrophysics Data System (ADS)

    Zuoming, Sun; Ningfang, Song; Jing, Jin; Jingming, Song; Pan, Ma

    2012-12-01

    An efficient and simple method of fusion splicing a Polarization-Maintaining Photonic Crystal Fiber (PM-PCF) and a conventional Polarization-Maintaining Fiber (PMF), with a low measured loss of 0.65 dB, is reported. The minimum bending diameter of the joint can reach 2 cm. A theoretical calculation of the splicing loss based on the mode-field-diameter (MFD) mismatch of the two kinds of fibers is given. All parameters affecting the splicing loss were studied.
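
A common Gaussian-mode approximation gives the MFD-mismatch contribution to splice loss, one ingredient of the kind of calculation the abstract mentions. The MFD values below are hypothetical:

```python
# Gaussian-mode approximation of MFD-mismatch splice loss:
# L(dB) = -20 * log10( 2*w1*w2 / (w1^2 + w2^2) ).
import math

def splice_loss_db(w1_um, w2_um):
    """Mismatch loss in dB for two fibers with Gaussian mode fields
    of mode-field diameters w1 and w2 (same units)."""
    return -20.0 * math.log10(2.0 * w1_um * w2_um / (w1_um**2 + w2_um**2))

loss_equal = splice_loss_db(6.0, 6.0)      # identical MFDs: no mismatch loss
loss_mismatch = splice_loss_db(4.0, 6.5)   # hypothetical PM-PCF vs PMF MFDs
```

With identical MFDs the mismatch term vanishes; in practice the remainder of a measured loss could come from other mechanisms, such as air-hole collapse in the PM-PCF during arcing.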

  11. Time dependent density functional calculation of plasmon response in clusters

    NASA Astrophysics Data System (ADS)

    Wang, Feng; Zhang, Feng-Shou; Eric, Suraud

    2003-02-01

    We have introduced a theoretical scheme for the efficient description of the optical response of a cluster based on the time-dependent density functional theory. The practical implementation is done by means of the fully fledged time-dependent local density approximation scheme, which is solved directly in the time domain without any linearization. As an example we consider the simple Na2 cluster and compute its surface plasmon photoabsorption cross section, which is in good agreement with the experiments.

  12. Figure of merit studies of beam power concepts for advanced space exploration

    NASA Technical Reports Server (NTRS)

    Miller, Gabriel; Kadiramangalam, Murali N.

    1990-01-01

    Surface-to-surface, millimeter-wavelength beam power systems for power transmission at a lunar base were investigated. Qualitative/quantitative analyses and technology assessment of 35, 110 and 140 GHz beam power systems were conducted. System characteristics including mass, stowage volume, cost and efficiency as a function of range and power level were calculated. A simple figure-of-merit analysis indicates that the 35 GHz system would be the preferred choice for lunar base applications, followed closely by the 110 GHz system. System parameters of a 35 GHz beam power system appropriate for power transmission at a recent lunar base concept studied by NASA-Johnson, along with the necessary deployment sequence, are suggested.

  13. METALLURGICAL PROGRAMS: CALCULATION OF MASS FROM VOLUME, DENSITY OF MIXTURES, AND CONVERSION OF ATOMIC TO WEIGHT PERCENT

    NASA Technical Reports Server (NTRS)

    Degroh, H.

    1994-01-01

    The Metallurgical Programs include three simple programs which calculate solutions to problems common to metallurgical engineers and persons making metal castings. The first program calculates the mass of a binary ideal mixture (alloy) given the weight fractions and densities of the pure components and the total volume. The second program calculates the density of a binary ideal mixture. The third program converts the atomic percentages of a binary mixture to weight percentages. The programs use simple equations to assist the materials staff with routine calculations. The Metallurgical Programs are written in Microsoft QuickBASIC for interactive execution and have been implemented on an IBM PC-XT/AT operating MS-DOS 2.1 or higher with 256K bytes of memory. All instructions needed by the user appear as prompts as the software is used. Data is input using the keyboard only and output is via the monitor. The Metallurgical Programs were written in 1987.
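
The three calculations rest on the ideal-mixture relations 1/rho = w1/rho1 + w2/rho2, m = rho*V, and w1 = x1*M1/(x1*M1 + x2*M2). A sketch of the same arithmetic in Python rather than the original QuickBASIC; the Al-Cu example values are illustrative:

```python
def mixture_density(w1, rho1, rho2):
    """Density of a binary ideal mixture from weight fraction w1:
    1/rho = w1/rho1 + (1 - w1)/rho2."""
    return 1.0 / (w1 / rho1 + (1.0 - w1) / rho2)

def mass_from_volume(volume, w1, rho1, rho2):
    """Mass of a binary ideal mixture occupying a given volume."""
    return mixture_density(w1, rho1, rho2) * volume

def atomic_to_weight_percent(x1, m1, m2):
    """Convert atomic fraction x1 of component 1 (molar masses m1, m2)
    to its weight fraction."""
    return x1 * m1 / (x1 * m1 + (1.0 - x1) * m2)

# Example: 60 wt% Al (2.70 g/cm^3) / 40 wt% Cu (8.96 g/cm^3), 10 cm^3
rho = mixture_density(0.60, 2.70, 8.96)
m = mass_from_volume(10.0, 0.60, 2.70, 8.96)
w_cu = atomic_to_weight_percent(0.5, 63.55, 26.98)  # 50 at% Cu in Al
```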

  14. Hierarchical emotion calculation model for virtual human modelling - biomed 2010.

    PubMed

    Zhao, Yue; Wright, David

    2010-01-01

    This paper introduces a new emotion generation method for virtual human modelling. The method includes a novel hierarchical emotion structure, a group of emotion calculation equations and a simple heuristic decision-making mechanism, which enables virtual humans to perform emotionally in real time according to their internal and external factors. The emotion calculation equations used in this research were derived from psychological emotion measurements. Virtual humans can utilise the information in virtual memory and the emotion calculation equations to generate their own numerical emotion states within the hierarchical emotion structure. Those emotion states are important internal references for virtual humans to adopt appropriate behaviours and also key cues for their decision making. A simple heuristics theory is introduced and integrated into the decision-making process in order to make the virtual humans' decision making more like a real human's. A data interface which connects the emotion calculation and the decision-making structure has also been designed and simulated to test the method in the Virtools environment.

  15. Piezoelectric coupling factor calculations for plates of langatate driven in simple thickness modes by lateral-field-excitation.

    PubMed

    Khan, Ajmal; Ballato, Arthur

    2002-07-01

    Piezoelectric coupling factors for langatate (La3Ga5.5Ta0.5O14) single crystals driven by lateral-field-excitation have been calculated using the extended Christoffel-Bechmann method. Calculations were made using published material constants. The results are presented in terms of the lateral piezoelectric coupling factor as functions of in-plane (azimuthal) rotation angle for the three simple thickness vibration modes of some non-rotated, singly-rotated, and doubly-rotated orientations. It is shown that lateral-field-excitation offers the potential to eliminate unwanted vibration modes and to achieve considerably greater piezoelectric coupling versus thickness-field-excitation for the rotated cuts considered and for a doubly-rotated cut that is of potential technological interest.

  16. An inductance Fourier decomposition-based current-hysteresis control strategy for switched reluctance motors

    NASA Astrophysics Data System (ADS)

    Hua, Wei; Qi, Ji; Jia, Meng

    2017-05-01

    Switched reluctance machines (SRMs) have attracted extensive attention due to their inherent advantages, including a simple and robust structure, low cost, excellent fault tolerance and a wide speed range. However, one of the bottlenecks limiting SRMs in further applications is their unfavorable torque ripple, and consequently noise and vibration, due to the unique doubly-salient structure and pulse-current-based power supply method. In this paper, an inductance Fourier decomposition-based current-hysteresis-control (IFD-CHC) strategy is proposed to reduce the torque ripple of SRMs. After obtaining a nonlinear inductance-current-position model based on Fourier decomposition, reference currents can be calculated from the reference torque and the derived inductance model. Both simulations and experimental results confirm the effectiveness of the proposed strategy.
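
The paper's controller details are not reproduced here, but in the magnetically linear case the underlying relation is the textbook SRM torque expression T = 0.5 * i^2 * dL/dtheta, which can be inverted for a reference current once a Fourier inductance model supplies dL/dtheta. A hedged sketch with a first-harmonic inductance model and hypothetical parameters (not the paper's IFD-CHC algorithm):

```python
# First-harmonic Fourier inductance model and the linear-regime torque
# relation T = 0.5 * i^2 * dL/dtheta (saturation neglected).
# L0, L1 and NR are hypothetical machine parameters.
import math

L0, L1 = 0.012, 0.004   # H, hypothetical Fourier coefficients
NR = 8                  # rotor pole number (hypothetical)

def inductance(theta):
    return L0 + L1 * math.cos(NR * theta)

def dL_dtheta(theta):
    return -NR * L1 * math.sin(NR * theta)

def reference_current(t_ref, theta):
    """Current producing torque t_ref at position theta where dL/dtheta
    has the productive sign; None in a non-productive zone."""
    slope = dL_dtheta(theta)
    if t_ref * slope <= 0:
        return None
    return math.sqrt(2.0 * t_ref / slope)

theta = -0.1                               # dL/dtheta > 0 here (motoring)
i_ref = reference_current(1.5, theta)
torque_back = 0.5 * i_ref**2 * dL_dtheta(theta)   # recovers 1.5 N*m
```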

  17. A Simple and Convenient Method of Multiple Linear Regression to Calculate Iodine Molecular Constants

    ERIC Educational Resources Information Center

    Cooper, Paul D.

    2010-01-01

    A new procedure using a student-friendly least-squares multiple linear-regression technique utilizing a function within Microsoft Excel is described that enables students to calculate molecular constants from the vibronic spectrum of iodine. This method is advantageous pedagogically as it calculates molecular constants for ground and excited…
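
The same least-squares fit can be reproduced outside Excel. A numpy sketch: fit band positions to nu(v') = a0 + a1*(v'+1/2) + a2*(v'+1/2)^2, so that a1 estimates omega_e' and -a2 estimates omega_e'x_e'. The band positions below are synthetic, generated from round-number constants, not real iodine measurements:

```python
# Multiple linear regression for vibronic constants, a numpy analogue
# of the Excel LINEST approach. Synthetic data with known constants.
import numpy as np

omega_e, omega_e_xe, nu00 = 125.0, 0.75, 15600.0   # hypothetical, cm^-1
v = np.arange(10, 31)
x = v + 0.5
nu = nu00 + omega_e * x - omega_e_xe * x**2        # synthetic band positions

# Design matrix: columns for the constant, linear and quadratic terms
A = np.column_stack([np.ones_like(x), x, x**2])
coef, *_ = np.linalg.lstsq(A, nu, rcond=None)
omega_fit, omega_xe_fit = coef[1], -coef[2]        # recovered constants
```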

  18. A Simple Formula for Quantiles on the TI-82/83 Calculator.

    ERIC Educational Resources Information Center

    Eisner, Milton P.

    1997-01-01

    The concept of percentile is a fundamental part of every course in basic statistics. Many such courses are now taught to students and require them to have TI-82 or TI-83 calculators. The functions defined in these calculators enable students to easily find the percentiles of a discrete data set. (PVD)

  19. Sensitivity calculations for iteratively solved problems

    NASA Technical Reports Server (NTRS)

    Haftka, R. T.

    1985-01-01

    The calculation of sensitivity derivatives of solutions of iteratively solved systems of algebraic equations is investigated. A modified finite difference procedure is presented which improves the accuracy of the calculated derivatives. The procedure is demonstrated for a simple algebraic example as well as an element-by-element preconditioned conjugate gradient iterative solution technique applied to truss examples.
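
The accuracy issue and the fix can be seen in a scalar toy problem: when the solution of x = cos(p*x) is obtained iteratively, a forward-difference derivative is cleanest if the perturbed solve is warm-started from the converged baseline. This is only a loose analogue of the modified procedure; the equation and tolerances are illustrative:

```python
# Finite-difference sensitivity of an iteratively solved equation,
# with the perturbed solve warm-started at the converged baseline.
import math

def solve(p, x0=0.5, tol=1e-12, max_iter=10000):
    """Fixed-point iteration for x = cos(p*x)."""
    x = x0
    for _ in range(max_iter):
        x_new = math.cos(p * x)
        if abs(x_new - x) < tol:
            return x_new
        x = x_new
    return x

p, dp = 0.8, 1e-6
x_base = solve(p)
x_pert = solve(p + dp, x0=x_base)      # warm start at converged solution
dxdp_fd = (x_pert - x_base) / dp

# Analytic sensitivity from implicit differentiation of x = cos(p*x):
# dx/dp = -x*sin(p*x) / (1 + p*sin(p*x))
s = math.sin(p * x_base)
dxdp_exact = -x_base * s / (1.0 + p * s)
```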

  20. eQuilibrator--the biochemical thermodynamics calculator.

    PubMed

    Flamholz, Avi; Noor, Elad; Bar-Even, Arren; Milo, Ron

    2012-01-01

    The laws of thermodynamics constrain the action of biochemical systems. However, thermodynamic data on biochemical compounds can be difficult to find and is cumbersome to perform calculations with manually. Even simple thermodynamic questions like 'how much Gibbs energy is released by ATP hydrolysis at pH 5?' are complicated excessively by the search for accurate data. To address this problem, eQuilibrator couples a comprehensive and accurate database of thermodynamic properties of biochemical compounds and reactions with a simple and powerful online search and calculation interface. The web interface to eQuilibrator (http://equilibrator.weizmann.ac.il) enables easy calculation of Gibbs energies of compounds and reactions given arbitrary pH, ionic strength and metabolite concentrations. The eQuilibrator code is open-source and all thermodynamic source data are freely downloadable in standard formats. Here we describe the database characteristics and implementation and demonstrate its use.
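
The kind of calculation the interface performs follows the standard transform dG' = dG'^o + RT ln Q. A sketch with an illustrative standard value and metabolite concentrations, not numbers from eQuilibrator's database:

```python
# Transformed reaction Gibbs energy at given metabolite concentrations:
# dG' = dG'^o + R*T*ln(Q). The dG'^o value and concentrations below are
# illustrative placeholders.
import math

R = 8.314462618e-3   # kJ/(mol*K)
T = 298.15           # K

def delta_g_prime(dg0_prime_kj, product_conc, substrate_conc):
    """Reaction Gibbs energy (kJ/mol) at given concentrations (M), with
    water and protons folded into the transformed standard value."""
    q = math.prod(product_conc) / math.prod(substrate_conc)
    return dg0_prime_kj + R * T * math.log(q)

# ATP + H2O -> ADP + Pi at 1 mM ATP/ADP and 10 mM Pi (hypothetical dG'^o)
dg = delta_g_prime(-30.0, product_conc=[1e-3, 1e-2], substrate_conc=[1e-3])
```

Diluting the products below standard state makes the reaction more exergonic, which is the effect the web interface lets users explore.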

  2. A Simple and Reliable Setup for Monitoring Corrosion Rate of Steel Rebars in Concrete

    PubMed Central

    Jibran, Mohammed Abdul Azeem; Azad, Abul Kalam

    2014-01-01

    The accuracy in the measurement of the rate of corrosion of steel in concrete depends on many factors. The high resistivity of concrete makes the polarization data erroneous due to the Ohmic drop. The other source of error is the use of an arbitrarily assumed value of the Stern-Geary constant for calculating corrosion current density. This paper presents the outcomes of a research work conducted to develop a reliable and low-cost experimental setup and a simple calculation procedure that can be utilised to calculate the corrosion current density considering the Ohmic drop compensation and the actual value of the Stern-Geary constants calculated using the polarization data. The measurements conducted on specimens corroded to different levels indicate the usefulness of the developed setup to determine the corrosion current density with and without Ohmic drop compensation. PMID:24526907
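
The two corrections the paper addresses enter a short calculation: the Stern-Geary constant B computed from measured Tafel slopes rather than assumed, and the Ohmic resistance subtracted from the measured polarization resistance. A sketch with illustrative numbers:

```python
# Stern-Geary constant from Tafel slopes and corrosion current density
# from polarization resistance, with optional Ohmic-drop compensation.
# All numerical values are illustrative.

def stern_geary_b(beta_a, beta_c):
    """B = beta_a*beta_c / (2.303*(beta_a + beta_c)), slopes in V/decade."""
    return beta_a * beta_c / (2.303 * (beta_a + beta_c))

def i_corr(b, rp_measured, r_ohmic=0.0):
    """Corrosion current density (A/cm^2) from polarization resistance
    (ohm*cm^2), subtracting the Ohmic contribution if provided."""
    return b / (rp_measured - r_ohmic)

B = stern_geary_b(0.12, 0.12)                          # V/decade
i_raw = i_corr(B, rp_measured=5000.0)                  # no compensation
i_comp = i_corr(B, rp_measured=5000.0, r_ohmic=800.0)  # compensated
```

Ignoring the Ohmic drop inflates the apparent polarization resistance and therefore underestimates the corrosion rate, which is why the compensated value comes out higher.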

  3. A Simple Approach to the Landau-Zener Formula

    ERIC Educational Resources Information Center

    Vutha, Amar C.

    2010-01-01

    The Landau-Zener formula provides the probability of non-adiabatic transitions occurring when two energy levels are swept through an avoided crossing. The formula is derived here in a simple calculation that emphasizes the physics responsible for non-adiabatic population transfer. (Contains 2 figures.)
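
For reference, the standard form of the result the abstract discusses: for two diabatic levels with constant coupling H12 whose energy separation is swept linearly in time, the probability of a non-adiabatic transition is

```latex
P_{\mathrm{LZ}} = \exp\!\left( -\,\frac{2\pi \, |H_{12}|^{2}}{\hbar \left| \tfrac{d}{dt}\,(E_{1} - E_{2}) \right|} \right)
```

so a slow sweep or a strong coupling suppresses the non-adiabatic transfer exponentially.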

  4. SIMPLE METHOD FOR THE REPRESENTATION, QUANTIFICATION, AND COMPARISON OF THE VOLUMES AND SHAPES OF CHEMICAL COMPOUNDS

    EPA Science Inventory

    A conceptually and computationally simple method for the definition, display, quantification, and comparison of the shapes of three-dimensional mathematical molecular models is presented. Molecular or solvent-accessible volume and surface area can also be calculated. Algorithms, ...

  5. Gibbs Sampler-Based λ-Dynamics and Rao-Blackwell Estimator for Alchemical Free Energy Calculation.

    PubMed

    Ding, Xinqiang; Vilseck, Jonah Z; Hayes, Ryan L; Brooks, Charles L

    2017-06-13

    λ-dynamics is a generalized ensemble method for alchemical free energy calculations. In traditional λ-dynamics, the alchemical switch variable λ is treated as a continuous variable ranging from 0 to 1 and an empirical estimator is utilized to approximate the free energy. In the present article, we describe an alternative formulation of λ-dynamics that utilizes the Gibbs sampler framework, which we call Gibbs sampler-based λ-dynamics (GSLD). GSLD, like traditional λ-dynamics, can be readily extended to calculate free energy differences between multiple ligands in one simulation. We also introduce a new free energy estimator, the Rao-Blackwell estimator (RBE), for use in conjunction with GSLD. Compared with the current empirical estimator, the RBE has the advantage of being unbiased, and its variance is usually smaller than that of the empirical estimator. We also show that the multistate Bennett acceptance ratio equation or the unbinned weighted histogram analysis method equation can be derived using the RBE. We illustrate the use and performance of this new free energy computational framework by application to a simple harmonic system as well as relevant calculations of small-molecule relative free energies of solvation and binding to a protein receptor. Our findings demonstrate consistent and improved performance compared with conventional alchemical free energy methods.

  6. NMR Shielding in Metals Using the Augmented Plane Wave Method

    PubMed Central

    2015-01-01

    We present calculations of solid state NMR magnetic shielding in metals, which includes both the orbital and the complete spin response of the system in a consistent way. The latter contains an induced spin-polarization of the core states and needs an all-electron self-consistent treatment. In particular, for transition metals, the spin hyperfine field originates not only from the polarization of the valence s-electrons, but the induced magnetic moment of the d-electrons polarizes the core s-states in opposite direction. The method is based on DFT and the augmented plane wave approach as implemented in the WIEN2k code. A comparison between calculated and measured NMR shifts indicates that first-principle calculations can obtain converged results and are more reliable than initially concluded based on previous publications. Nevertheless large k-meshes (up to 2 000 000 k-points in the full Brillouin-zone) and some Fermi-broadening are necessary. Our results show that, in general, both spin and orbital components of the NMR shielding must be evaluated in order to reproduce experimental shifts, because the orbital part cancels the shift of the usually highly ionic reference compound only for simple sp-elements but not for transition metals. This development paves the way for routine NMR calculations of metallic systems. PMID:26322148

  7. Look Before You Leap: What Are the Obstacles to Risk Calculation in the Equestrian Sport of Eventing?

    PubMed Central

    O’Brien, Denzil

    2016-01-01

    Simple Summary This paper examines a number of methods for calculating injury risk for riders in the equestrian sport of eventing, and suggests that the primary locus of risk is the action of the horse jumping, and the jump itself. The paper argues that risk calculation should therefore focus first on this locus. Abstract All horse-riding is risky. In competitive horse sports, eventing is considered the riskiest, and is often characterised as very dangerous. But based on what data? There has been considerable research on the risks and unwanted outcomes of horse-riding in general, and on particular subsets of horse-riding such as eventing. However, there can be problems in accessing accurate, comprehensive and comparable data on such outcomes, and in using different calculation methods which cannot compare like with like. This paper critically examines a number of risk calculation methods used in estimating risk for riders in eventing, including one method which calculates risk based on hours spent in the activity and in one case concludes that eventing is more dangerous than motorcycle racing. This paper argues that the primary locus of risk for both riders and horses is the jump itself, and the action of the horse jumping. The paper proposes that risk calculation in eventing should therefore concentrate primarily on this locus, and suggests that eventing is unlikely to be more dangerous than motorcycle racing. The paper proposes avenues for further research to reduce the likelihood and consequences of rider and horse falls at jumps. PMID:26891334

  8. A Simple Method to Estimate Photosynthetic Radiation Use Efficiency of Canopies

    PubMed Central

    ROSATI, A.; METCALF, S. G.; LAMPINEN, B. D.

    2004-01-01

    • Background and Aims Photosynthetic radiation use efficiency (PhRUE) over the course of a day has been shown to be constant for leaves throughout a general canopy where nitrogen content (and thus photosynthetic properties) of leaves is distributed in relation to the light gradient. It has been suggested that this daily PhRUE can be calculated simply from the photosynthetic properties of a leaf at the top of the canopy and from the PAR incident on the canopy, which can be obtained from weather‐station data. The objective of this study was to investigate whether this simple method allows estimation of PhRUE of different crops and with different daily incident PAR, and also during the growing season. • Methods The PhRUE calculated with this simple method was compared with that calculated with a more detailed model, for different days in May, June and July in California, on almond (Prunus dulcis) and walnut (Juglans regia) trees. Daily net photosynthesis of 50 individual leaves was calculated as the daylight integral of the instantaneous photosynthesis. The latter was estimated for each leaf from its photosynthetic response to PAR and from the PAR incident on the leaf during the day. • Key Results Daily photosynthesis of individual leaves of both species was linearly related to the daily PAR incident on the leaves (which implies constant PhRUE throughout the canopy), but the slope (i.e. the PhRUE) differed between the species, over the growing season due to changes in photosynthetic properties of the leaves, and with differences in daily incident PAR. When PhRUE was estimated from the photosynthetic light response curve of a leaf at the top of the canopy and from the incident radiation above the canopy, obtained from weather‐station data, the values were within 5 % of those calculated with the more detailed model, except in five out of 34 cases. 
• Conclusions The simple method of estimating PhRUE is valuable as it simplifies calculation of canopy photosynthesis to a multiplication between the PAR intercepted by the canopy, which can be obtained with remote sensing, and the PhRUE calculated from incident PAR, obtained from standard weather‐station data, and from the photosynthetic properties of leaves at the top of the canopy. The latter properties are the sole crop parameters needed. While being simple, this method describes the differences in PhRUE related to crop, season, nutrient status and daily incident PAR. PMID:15044212
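
The "constant PhRUE" result means daily leaf photosynthesis is linear in the daily PAR incident on the leaf, so the efficiency is just the slope of that line, and canopy photosynthesis becomes slope times intercepted PAR. A least-squares sketch with synthetic leaf values, not the study's data:

```python
# PhRUE as the least-squares slope of daily leaf photosynthesis vs
# daily incident PAR; synthetic, exactly linear leaf data.
leaf_par = [5.0, 10.0, 20.0, 30.0, 40.0]   # mol photons m^-2 day^-1
leaf_a   = [0.15, 0.30, 0.60, 0.90, 1.20]  # mol CO2 m^-2 day^-1

n = len(leaf_par)
mx = sum(leaf_par) / n
my = sum(leaf_a) / n
phrue = sum((x - mx) * (y - my) for x, y in zip(leaf_par, leaf_a)) \
        / sum((x - mx) ** 2 for x in leaf_par)

# Canopy photosynthesis then scales with intercepted PAR:
canopy_par = 25.0   # intercepted PAR, e.g. from remote sensing
canopy_a = phrue * canopy_par
```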

  9. The American Heart Association Life's Simple 7 and Incident Cognitive Impairment: The REasons for Geographic And Racial Differences in Stroke (REGARDS) Study

    PubMed Central

    Thacker, Evan L.; Gillett, Sarah R.; Wadley, Virginia G.; Unverzagt, Frederick W.; Judd, Suzanne E.; McClure, Leslie A.; Howard, Virginia J.; Cushman, Mary

    2014-01-01

    Background Life's Simple 7 is a new metric based on modifiable health behaviors and factors that the American Heart Association uses to promote improvements to cardiovascular health (CVH). We hypothesized that better Life's Simple 7 scores are associated with lower incidence of cognitive impairment. Methods and Results For this prospective cohort study, we included REasons for Geographic And Racial Differences in Stroke (REGARDS) participants aged 45+ who had normal global cognitive status at baseline and no history of stroke (N=17 761). We calculated baseline Life's Simple 7 score (range, 0 to 14) based on smoking, diet, physical activity, body mass index, blood pressure, total cholesterol, and fasting glucose. We identified incident cognitive impairment using a 3‐test measure of verbal learning, memory, and fluency obtained a mean of 4 years after baseline. Relative to the lowest tertile of Life's Simple 7 score (0 to 6 points), odds ratios of incident cognitive impairment were 0.65 (0.52, 0.81) in the middle tertile (7 to 8 points) and 0.63 (0.51, 0.79) in the highest tertile (9 to 14 points). The association was similar in blacks and whites, as well as outside and within the Southeastern stroke belt region of the United States. Conclusions Compared with low CVH, intermediate and high CVH were both associated with substantially lower incidence of cognitive impairment. We did not observe a dose‐response pattern; people with intermediate and high levels of CVH had similar incidence of cognitive impairment. This suggests that even when high CVH is not achieved, intermediate levels of CVH are preferable to low CVH. PMID:24919926

  10. A Recursive Method for Calculating Certain Partition Functions.

    ERIC Educational Resources Information Center

    Woodrum, Luther; And Others

    1978-01-01

    Describes a simple recursive method for calculating the partition function and average energy of a system consisting of N electrons and L energy levels. Also, presents an efficient APL computer program to utilize the recursion relation. (Author/GA)
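
One recursion of the kind the article describes: with at most one electron per level, the canonical partition function obeys Z(N, L) = Z(N, L-1) + exp(-beta*e_L) * Z(N-1, L-1), since the L-th level is either empty or occupied. A memoized Python sketch (not the original APL program; the level energies are arbitrary examples):

```python
# Recursive canonical partition function for N electrons over L levels,
# each level holding at most one electron.
import math
from functools import lru_cache

def partition_function(levels, n, beta):
    """levels: tuple of single-particle energies; n: electron count."""
    @lru_cache(maxsize=None)
    def z(n_left, l):
        if n_left == 0:
            return 1.0              # nothing left to place
        if l == 0 or n_left > l:
            return 0.0              # not enough levels remain
        # level l is either empty or holds one electron
        return z(n_left, l - 1) + math.exp(-beta * levels[l - 1]) * z(n_left - 1, l - 1)
    return z(n, len(levels))

# Two electrons in three levels: Z = e^-1 + e^-2 + e^-3 over the 3 pairs
levels = (0.0, 1.0, 2.0)
z = partition_function(levels, 2, beta=1.0)
```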

  11. Simple nonlinear modelling of earthquake response in torsionally coupled R/C structures: A preliminary study

    NASA Astrophysics Data System (ADS)

    Saiidi, M.

    1982-07-01

    The equivalent single-degree-of-freedom (SDOF) nonlinear model, the Q-model-13, was examined. The study intended to: (1) determine the seismic response of a torsionally coupled building based on multidegree-of-freedom (MDOF) and SDOF nonlinear models; and (2) develop a simple SDOF nonlinear model to calculate the displacement history of structures with eccentric centers of mass and stiffness. It is shown that planar models are able to yield qualitative estimates of the response of the building. The model is used to estimate the response of a hypothetical six-story frame-wall reinforced concrete building with torsional coupling, using two different earthquake intensities. It is shown that the Q-model-13 can lead to a satisfactory estimate of the response of the structure in both cases.

  12. Simple circular odor chart for characterization of trace amounts of odorants discharged from thirteen odor sources

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hoshika, Y.; Nihei, Y.; Muto, G.

    1981-04-01

    A simple circular odor chart is proposed to explain the relationship between sensory responses (odor quality and intensity) and chemical analysis data for the odorants responsible for each odor discharged from thirteen odor sources. The odorants were classified into eight odorant groups and were analyzed by a systematic gas chromatographic (GC) technique. The characterization of the trace amounts of the odorants was carried out using the values of a newly proposed unit (pOU) based on the ratio of the detected concentration to the recognition threshold value. The calculated pOU values of the eight groups were plotted in circular charts. It was found that the shape and size of each circular odor chart represent the quality and the intensity of each odor.

  13. A simple method to calculate first-passage time densities with arbitrary initial conditions

    NASA Astrophysics Data System (ADS)

    Nyberg, Markus; Ambjörnsson, Tobias; Lizana, Ludvig

    2016-06-01

    Numerous applications, from biology and physics to economics, depend on the density of first crossings over a boundary. Motivated by the lack of general-purpose analytical tools for computing first-passage time densities (FPTDs) for complex problems, we propose a new simple method based on the independent interval approximation (IIA). We generalise previous formulations of the IIA to include arbitrary initial conditions and to deal with discrete-time and non-smooth continuous-time processes. We derive a closed-form expression for the FPTD to a boundary in one dimension, in z-transform and Laplace-transform space. Two classes of problems are analysed in detail: discrete-time symmetric random walks (Markovian) and continuous-time Gaussian stationary processes (Markovian and non-Markovian). Our results are in good agreement with Langevin dynamics simulations.
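
    The discrete-time symmetric random walk analysed in the record above lends itself to a quick numerical check. The sketch below (a plain Monte Carlo illustration, not the IIA method itself) estimates the FPTD to an absorbing boundary at the origin; the starting position and walker count are arbitrary choices.

```python
import random

def fptd_random_walk(x0, n_steps, n_walkers=20000, seed=1):
    """Monte Carlo estimate of the first-passage time density of a
    discrete-time symmetric random walk started at x0 > 0, with an
    absorbing boundary at the origin.  Entry t of the returned list is
    the fraction of walkers whose first visit to x <= 0 was at step t."""
    rng = random.Random(seed)
    hits = [0] * (n_steps + 1)
    for _ in range(n_walkers):
        x = x0
        for t in range(1, n_steps + 1):
            x += 1 if rng.random() < 0.5 else -1
            if x <= 0:
                hits[t] += 1
                break
    return [h / n_walkers for h in hits]

density = fptd_random_walk(x0=3, n_steps=200)
```

    From x0 = 3 the earliest possible crossing is at step 3 (three consecutive down-steps, probability 1/8), and parity forbids crossings at even steps; both features show up directly in the estimated density.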

  14. Chemical consequences of the initial diffusional growth of cloud droplets - A clean marine case

    NASA Technical Reports Server (NTRS)

    Twohy, C. H.; Charlson, R. J.; Austin, P. H.

    1989-01-01

    A simple microphysical cloud parcel model and a simple representation of the background marine aerosol are used to predict the concentrations and compositions of droplets of various sizes near cloud base. The aerosol consists of an externally-mixed ammonium bisulfate accumulation mode and a sea-salt coarse particle mode. The difference in diffusional growth rates between the small and large droplets as well as the differences in composition between the two aerosol modes result in substantial differences in solute concentration and composition with size of droplets in the parcel. The chemistry of individual droplets is not, in general, representative of the bulk (volume-weighted mean) cloud water sample. These differences, calculated to occur early in the parcel's lifetime, should have important consequences for chemical reactions such as aqueous phase sulfate production.

  15. Simple economic evaluation and applications experiments for photovoltaic systems for remote sites

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rios, M. Jr.

    1980-01-01

    A simple evaluation of the cost effectiveness of photovoltaic systems is presented. The evaluation is based on a comparison of the breakeven costs of photovoltaic (PV) arrays with the levelized costs of two alternative energy sources: (1) extension of the utility grid and (2) diesel generators. A selected number of PV applications experiments in progress in remote areas of the US are summarized. These applications experiments range from a 23 watt insect survey trap to a 100 kW PV system for a national park complex. It is concluded that PV systems are now cost effective in small remote applications with commercially available technology and will be cost competitive for intermediate-scale systems (approx. 10 kW) in the 1980s if the DOE 1986 Commercial Readiness Goals are achieved.

  16. A simple procedure for the estimation of neutron skyshine from proton accelerators.

    PubMed

    Stevenson, G R; Thomas, R H

    1984-01-01

    Recent calculations of neutron diffusion at an air/ground interface have enabled the establishment of a very simple procedure for estimating neutron dose equivalent at large distances from proton accelerators in the energy range 10 MeV to several tens of GeV.

  17. Calculation of the electric field resulting from human body rotation in a magnetic field

    NASA Astrophysics Data System (ADS)

    Cobos Sánchez, Clemente; Glover, Paul; Power, Henry; Bowtell, Richard

    2012-08-01

    A number of recent studies have shown that the electric field and current density induced in the human body by movement in and around magnetic resonance imaging installations can exceed regulatory levels. Although it is possible to measure the induced electric fields at the surface of the body, it is usually more convenient to use numerical models to predict likely exposure under well-defined movement conditions. Whilst the accuracy of these models is not in doubt, this paper shows that modelling of particular rotational movements should be treated with care. In particular, we show that v × B rather than −(v · ∇)A should be used as the driving term in potential-based modelling of induced fields. Although for translational motion the two driving terms are equivalent, specific examples of rotational rigid-body motion are given where incorrect results are obtained when −(v · ∇)A is employed. In addition, we show that it is important to take into account the space charge which can be generated by rotations and we also consider particular cases where neglecting the space charge generates erroneous results. Along with analytic calculations based on simple models, boundary-element-based numerical calculations are used to illustrate these findings.
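
    The distinction between the two driving terms can be made explicit with the standard vector identity for the motional field (a sketch; here v is the local velocity of the moving tissue and A the magnetic vector potential):

```latex
\mathbf{v}\times\mathbf{B}
  = \mathbf{v}\times(\nabla\times\mathbf{A})
  = \nabla(\mathbf{v}\cdot\mathbf{A})
    - (\mathbf{v}\cdot\nabla)\mathbf{A}
    - (\mathbf{A}\cdot\nabla)\mathbf{v}
    - \mathbf{A}\times(\nabla\times\mathbf{v}).
```

    For uniform translation (v independent of position) the last two terms vanish and the gradient term can be absorbed into the scalar potential, so the two driving terms are equivalent. For a rigid rotation v = ω × r, however, ∇ × v = 2ω and (A · ∇)v = ω × A, so the extra terms no longer cancel, consistent with the paper's warning.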

  18. An automatic alignment system for measuring optical path of transmissometer based on light beam scanning

    NASA Astrophysics Data System (ADS)

    Zhou, Shudao; Ma, Zhongliang; Wang, Min; Peng, Shuling

    2018-05-01

    This paper proposes a novel system for aligning the measurement optical path of a transmissometer, based on a light-beam scanning mode. The system controls both the probe beam and the receiving field of view while scanning in two perpendicular directions. It then calculates the azimuth angles of the transmitter and the receiver to determine the precise alignment of the optical path. Experiments show that this method can determine the alignment angles in less than 10 min with azimuth errors smaller than 66 μrad. The system also features high collimation precision, process automation and simple installation.

  19. Model for large magnetoresistance effect in p–n junctions

    NASA Astrophysics Data System (ADS)

    Cao, Yang; Yang, Dezheng; Si, Mingsu; Shi, Huigang; Xue, Desheng

    2018-06-01

    We present a simple model based on the classic Shockley model to explain magnetotransport in nonmagnetic p–n junctions. Under a magnetic field, carriers redistribute to compensate the Lorentz force, establishing the necessary space-charge region distribution. The calculated current–voltage (I–V) characteristics under various magnetic fields demonstrate that a conventional nonmagnetic p–n junction can exhibit an extremely large magnetoresistance effect, even larger than that in magnetic materials. Because the large magnetoresistance effect discussed here arises in a conventional p–n junction device, our model provides new insight into the development of semiconductor magnetoelectronics.

  20. Self-match based on polling scheme for passive optical network monitoring

    NASA Astrophysics Data System (ADS)

    Zhang, Xuan; Guo, Hao; Jia, Xinhong; Liao, Qinghua

    2018-06-01

    We propose a self-matching, polling-based scheme for passive optical network monitoring. Each end-user is equipped with an optical matcher that uses only a patchcord of a specific length and two different fiber Bragg gratings with 100% reflectivity. The simple and low-cost scheme can greatly simplify the final recognition processing of the network link status and reduce the sensitivity required of the photodetector. We analyze the time-domain relation between reflected pulses and establish a calculation model to evaluate the false alarm rate. The feasibility of the proposed scheme and the validity of the time-domain analysis are experimentally demonstrated.

  1. Predicting the stability of nanodevices

    NASA Astrophysics Data System (ADS)

    Lin, Z. Z.; Yu, W. F.; Wang, Y.; Ning, X. J.

    2011-05-01

    A simple model based on the statistics of single atoms is developed to predict the stability, or lifetime, of nanodevices without empirical parameters. Under certain conditions, the model reproduces the Arrhenius law and the Meyer-Neldel compensation rule. Compared with classical molecular-dynamics simulations of the stability of a monatomic carbon chain at high temperature, the model proves much more accurate than transition state theory. Based on ab initio calculations of the static potential, the model yields corrected lifetimes of monatomic carbon and gold chains at high temperature, and predicts that these monatomic chains are very stable at room temperature.
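
    The Arrhenius law the model reproduces is easy to evaluate numerically. The sketch below uses assumed, illustrative values (a 10^13 Hz attempt frequency and a 1 eV barrier), not parameters from the paper:

```python
import math

K_B_EV = 8.617e-5  # Boltzmann constant in eV/K

def arrhenius_lifetime(barrier_ev, temp_k, attempt_freq_hz=1e13):
    """Mean lifetime tau = (1 / nu) * exp(Ea / (kB * T)).  The 1e13 Hz
    attempt frequency is an assumed typical phonon-scale value."""
    return math.exp(barrier_ev / (K_B_EV * temp_k)) / attempt_freq_hz

# Assumed 1 eV barrier: hours-scale lifetime at 300 K, sub-nanosecond at 1500 K
tau_room = arrhenius_lifetime(1.0, 300.0)
tau_hot = arrhenius_lifetime(1.0, 1500.0)
```

    The steep exponential dependence on temperature is exactly why room-temperature stability and rapid high-temperature decay can coexist for the same barrier.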

  2. Validation of a simple method for predicting the disinfection performance in a flow-through contactor.

    PubMed

    Pfeiffer, Valentin; Barbeau, Benoit

    2014-02-01

    Despite its shortcomings, the T10 method introduced by the United States Environmental Protection Agency (USEPA) in 1989 is currently the method most frequently used in North America to calculate disinfection performance. Other methods (e.g., the Integrated Disinfection Design Framework, IDDF) have been advanced as replacements, and more recently, the USEPA suggested the Extended T10 and Extended CSTR (Continuous Stirred-Tank Reactor) methods to improve the inactivation calculations within ozone contactors. To develop a method that fully considers the hydraulic behavior of the contactor, two models (Plug Flow with Dispersion and N-CSTR) were successfully fitted to five tracer test results from four water treatment plants and a pilot-scale contactor. A new method based on the N-CSTR model was defined as the Partially Segregated (Pseg) method. The predictions of all these methods were compared under conditions of poor and good hydraulic performance, low and high disinfectant decay, and different levels of inactivation. The methods were also compared with experimental results from a chlorine pilot-scale contactor used for Escherichia coli inactivation. The T10 and Extended T10 methods led to large over- and under-estimations. The Segregated Flow Analysis (used in the IDDF) also considerably overestimated inactivation under high disinfectant decay. Only the Extended CSTR and Pseg methods produced realistic and conservative predictions in all cases. Finally, a simple implementation procedure for the Pseg method is suggested for calculating disinfection performance. Copyright © 2013 Elsevier Ltd. All rights reserved.
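
    As a rough illustration of why the hydraulic model matters, the textbook tanks-in-series survival for first-order (Chick-Watson) inactivation at constant disinfectant concentration is S = (1 + kCτ/N)^(−N). This is a sketch of the N-CSTR idea only, not the authors' Pseg method, and it ignores disinfectant decay:

```python
import math

def log_inactivation_ncstr(k, c, tau, n_tanks):
    """Log10 inactivation for first-order (Chick-Watson) kinetics in a
    chain of n_tanks equal CSTRs with total residence time tau, at a
    constant disinfectant concentration c (disinfectant decay ignored)."""
    survival = (1.0 + k * c * tau / n_tanks) ** (-n_tanks)
    return -math.log10(survival)

# More tanks in series -> behaviour approaches the plug-flow limit k*c*tau/ln(10)
results = {n: log_inactivation_ncstr(k=1.0, c=1.0, tau=10.0, n_tanks=n)
           for n in (1, 5, 50)}
```

    For the same CT product, a single CSTR predicts far less inactivation than a near-plug-flow contactor, which is the hydraulic effect the T10 shortcut fails to capture.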

  3. Aging Wire Insulation Assessment by Phase Spectrum Examination of Ultrasonic Guided Waves

    NASA Technical Reports Server (NTRS)

    Anastasi, Robert F.; Madaras, Eric I.

    2003-01-01

    Wire integrity has become an area of concern to the aerospace community including DoD, NASA, FAA, and Industry. Over time and changing environmental conditions, wire insulation can become brittle and crack. The cracks expose the wire conductor and can be a source of equipment failure, short circuits, smoke, and fire. The technique of using the ultrasonic phase spectrum to extract material properties of the insulation is being examined. Ultrasonic guided waves will propagate in both the wire conductor and insulation. Assuming the condition of the conductor remains constant then the stiffness of the insulator can be determined by measuring the ultrasonic guided wave velocity. In the phase spectrum method the guided wave velocity is obtained by transforming the time base waveform to the frequency domain and taking the phase difference between two waveforms. The result can then be correlated with a database, derived by numerical model calculations, to extract material properties of the wire insulator. Initial laboratory tests were performed on a simple model consisting of a solid cylinder and then a solid cylinder with a polymer coating. For each sample the flexural mode waveform was identified. That waveform was then transformed to the frequency domain and a phase spectrum was calculated from a pair of waveforms. Experimental results on the simple model compared well to numerical calculations. Further tests were conducted on aircraft or mil-spec wire samples, to see if changes in wire insulation stiffness can be extracted using the phase spectrum technique.
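
    The velocity-extraction step described above can be sketched as follows. The synthetic tone, propagation distance and sampling values are invented for illustration; real dispersive waveforms would additionally need phase unwrapping whenever |Δφ| exceeds π.

```python
import cmath
import math

def phase_velocity(wave1, wave2, distance, dt, k):
    """Phase velocity at DFT bin k from the phase difference between two
    waveforms sampled with time step dt and recorded `distance` apart."""
    n = len(wave1)
    w = [cmath.exp(-2j * math.pi * k * i / n) for i in range(n)]
    s1 = sum(a * b for a, b in zip(wave1, w))  # single-bin DFT of record 1
    s2 = sum(a * b for a, b in zip(wave2, w))  # single-bin DFT of record 2
    dphi = cmath.phase(s2 / s1)                # phase lag of wave2 vs wave1
    freq = k / (n * dt)
    return 2.0 * math.pi * freq * distance / (-dphi)

# Synthetic check: a tone on an exact DFT bin, delayed by d / v_true
n, dt, d, v_true = 1024, 1e-6, 0.1, 2000.0
f0 = 8 / (n * dt)                  # bin 8 -> no spectral leakage
delay = d / v_true
w1 = [math.sin(2.0 * math.pi * f0 * i * dt) for i in range(n)]
w2 = [math.sin(2.0 * math.pi * f0 * (i * dt - delay)) for i in range(n)]
v_est = phase_velocity(w1, w2, d, dt, 8)
```

    The recovered velocity matches the value used to build the delayed record, which is the essence of comparing the guided-wave measurement against model-derived dispersion curves.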

  4. Payback as an investment criterion for sawmill improvement projects

    Treesearch

    G. B. Harpole

    1983-01-01

    Methods other than the one presented here should be used to assess projects for likely return on investment; however, payback is simple to calculate and can indicate the relative attractiveness of alternative improvement projects. This paper illustrates how payback ratios are calculated and how they can be used to rank alternative improvement...
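
    A payback ratio is simply the investment divided by the annual savings it generates. The sketch below ranks three hypothetical sawmill projects (all figures invented):

```python
def payback_years(investment, annual_savings):
    """Simple (undiscounted) payback: years to recover the investment."""
    return investment / annual_savings

# Hypothetical sawmill improvement projects (all figures invented)
projects = {
    "edger upgrade": payback_years(120_000, 48_000),
    "kiln controls": payback_years(60_000, 15_000),
    "debarker": payback_years(90_000, 45_000),
}
ranking = sorted(projects, key=projects.get)  # shortest payback first
```

    As the record notes, payback ignores discounting and cash flows beyond the recovery point, so the ranking is only a screening tool, not a substitute for a proper return-on-investment analysis.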

  5. Simplified Calculation Of Solar Fluxes In Solar Receivers

    NASA Technical Reports Server (NTRS)

    Bhandari, Pradeep

    1990-01-01

    The computer program Simplified Calculation of Solar Flux Distribution on Side Wall of Cylindrical Cavity Solar Receivers employs a simple solar-flux-calculation algorithm for a cylindrical-cavity-type solar receiver. Results compare favorably with those of more complicated programs. Applications include studies of solar energy and heat transfer, and space-power/solar-dynamics engineering. Written in FORTRAN 77.

  6. TK Modeler version 1.0, a Microsoft® Excel®-based modeling software for the prediction of diurnal blood/plasma concentration for toxicokinetic use.

    PubMed

    McCoy, Alene T; Bartels, Michael J; Rick, David L; Saghir, Shakil A

    2012-07-01

    TK Modeler 1.0 is a Microsoft® Excel®-based pharmacokinetic (PK) modeling program created to aid in the design of toxicokinetic (TK) studies. TK Modeler 1.0 predicts the diurnal blood/plasma concentrations of a test material after single, multiple bolus or dietary dosing using known PK information. Fluctuations in blood/plasma concentrations based on test material kinetics are calculated using one- or two-compartment PK model equations and the principle of superposition. This information can be utilized for the determination of appropriate dosing regimens based on reaching a specific desired C(max), maintaining steady-state blood/plasma concentrations, or other exposure target. This program can also aid in the selection of sampling times for accurate calculation of AUC(24h) (diurnal area under the blood concentration time curve) using sparse-sampling methodologies (one, two or three samples). This paper describes the construction, use and validation of TK Modeler. TK Modeler accurately predicted blood/plasma concentrations of test materials and provided optimal sampling times for the calculation of AUC(24h) with improved accuracy using sparse-sampling methods. TK Modeler is therefore a validated, unique and simple modeling program that can aid in the design of toxicokinetic studies. Copyright © 2012 Elsevier Inc. All rights reserved.
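
    The superposition step can be sketched for the simplest case, a one-compartment model with repeated IV bolus dosing. The dose, volume and half-life below are assumed illustrative values, not TK Modeler defaults:

```python
import math

def conc_multiple_bolus(t, dose, v_d, k_e, tau, n_doses):
    """One-compartment blood concentration at time t (h) after repeated
    IV bolus doses given every tau hours, built by superposition: each
    past dose contributes (dose / v_d) * exp(-k_e * (t - t_dose))."""
    c = 0.0
    for n in range(n_doses):
        t_dose = n * tau
        if t >= t_dose:
            c += (dose / v_d) * math.exp(-k_e * (t - t_dose))
    return c

# Assumed illustrative regimen: 100 mg every 12 h, V = 50 L, half-life 6 h
k_e = math.log(2.0) / 6.0
c24 = conc_multiple_bolus(24.0, 100.0, 50.0, k_e, 12.0, n_doses=3)
```

    Evaluating the curve on a fine time grid is what allows diurnal peaks and troughs to be located, and hence sampling times for a sparse AUC(24h) estimate to be chosen sensibly.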

  7. A framework for standardized calculation of weather indices in Germany

    NASA Astrophysics Data System (ADS)

    Möller, Markus; Doms, Juliane; Gerstmann, Henning; Feike, Til

    2018-05-01

    Climate change has been recognized as a main driver of the increasing occurrence of extreme weather. Weather indices (WIs) are used to assess extreme weather conditions with regard to their impact on crop yields. Designing WIs is challenging, since complex and dynamic crop-climate relationships have to be considered. As a consequence, geodata for WI calculations have to represent both the spatio-temporal dynamic of crop development and the corresponding weather conditions. In this study, we introduce a WI design framework for Germany, which is based on public and open raster data of long-term spatio-temporal availability. The operational process chain enables the dynamic and automatic definition of relevant phenological phases for the main cultivated crops in Germany. Within the temporal bounds, WIs can be calculated for any year and test site in Germany in a reproducible and transparent manner. The workflow is demonstrated on the example of a simple cumulative rainfall index for the phenological phase shooting of winter wheat using 16 test sites and the period between 1994 and 2014. Compared to station-based approaches, the major advantage of our approach is the possibility to design spatial WIs based on raster data characterized by accuracy metrics. Raster data and WIs which fulfill data quality standards can contribute to increased acceptance and farmers' trust in WI products for crop yield modeling or weather index-based insurances (WIIs).
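
    For the cumulative rainfall index used in the demonstration, the calculation itself is a windowed sum over daily values. The series and window bounds below are invented; in the framework described, the bounds come from the phenological model:

```python
from datetime import date

def cumulative_rainfall_index(daily_mm, start, end):
    """Cumulative rainfall (mm) over a phenological window (inclusive),
    e.g. the shooting phase of winter wheat."""
    return sum(mm for day, mm in daily_mm.items() if start <= day <= end)

# Toy daily series and an assumed shooting-phase window (invented values)
series = {date(2014, 4, 1): 2.0, date(2014, 4, 15): 10.5,
          date(2014, 5, 3): 7.5, date(2014, 6, 20): 30.0}
wi = cumulative_rainfall_index(series, date(2014, 4, 1), date(2014, 5, 31))
```

    The framework's contribution is not this arithmetic but the automatic, site- and year-specific derivation of the window bounds from raster data.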

  8. Boundary condition determined wave functions for the ground states of one- and two-electron homonuclear molecules

    NASA Astrophysics Data System (ADS)

    Patil, S. H.; Tang, K. T.; Toennies, J. P.

    1999-10-01

    Simple analytical wave functions satisfying appropriate boundary conditions are constructed for the ground states of one- and two-electron homonuclear molecules. Both the asymptotic condition when one electron is far away and the cusp condition when the electron coalesces with a nucleus are satisfied by the proposed wave function. For H2+, the resulting wave function is almost identical to the Guillemin-Zener wave function which is known to give very good energies. For the two electron systems H2 and He2++, the additional electron-electron cusp condition is rigorously accounted for by a simple analytic correlation function which has the correct behavior not only for r12→0 and r12→∞ but also for R→0 and R→∞, where r12 is the interelectronic distance and R the internuclear distance. Energies obtained from these simple wave functions agree within 2×10⁻³ a.u. with the results of the most sophisticated variational calculations for all R and for all systems studied. This demonstrates that rather simple physical considerations can be used to derive very accurate wave functions for simple molecules, thereby avoiding laborious numerical variational calculations.
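
    The coalescence conditions referred to are the standard Kato cusp conditions (in atomic units, with \(\bar{\psi}\) the spherical average of the wave function about the coalescence point):

```latex
\left.\frac{\partial \bar{\psi}}{\partial r_{i}}\right|_{r_{i}=0} = -Z\,\psi(r_{i}=0),
\qquad
\left.\frac{\partial \bar{\psi}}{\partial r_{12}}\right|_{r_{12}=0} = +\tfrac{1}{2}\,\psi(r_{12}=0),
```

    the first for electron-nucleus coalescence with a nucleus of charge Z, the second for electron-electron coalescence; these are the constraints the proposed wave functions are built to satisfy.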

  9. Calculation of local skin doses with ICRP adult mesh-type reference computational phantoms

    NASA Astrophysics Data System (ADS)

    Yeom, Yeon Soo; Han, Haegin; Choi, Chansoo; Nguyen, Thang Tat; Lee, Hanjin; Shin, Bangho; Kim, Chan Hyeong; Han, Min Cheol

    2018-01-01

    Recently, Task Group 103 of the International Commission on Radiological Protection (ICRP) developed new mesh-type reference computational phantoms (MRCPs) for adult males and females in order to address the limitations of the current voxel-type reference phantoms described in ICRP Publication 110 due to their limited voxel resolutions and the nature of the voxel geometry. One of the substantial advantages of the MRCPs over the ICRP-110 reference phantoms is the inclusion of a 50-μm-thick radiosensitive skin basal-cell layer; however, a methodology for calculating the local skin dose (LSD), i.e., the maximum dose to the basal layer averaged over a 1 cm² area, has yet to be developed. In the present study, a dedicated program for the LSD calculation with the MRCPs was developed based on the mean shift algorithm and the Geant4 Monte Carlo code. The developed program was used to calculate local skin dose coefficients (LSDCs) for electrons and alpha particles, which were then compared with the values given in ICRP Publication 116 that were produced with a simple tissue-equivalent cube model. The results of the present study show that the LSDCs of the MRCPs are generally in good agreement with the ICRP-116 values for alpha particles, but for electrons, significant differences are found at energies higher than 0.15 MeV. The LSDCs of the MRCPs are greater than the ICRP-116 values by as much as 2.7 times at 10 MeV, which is due mainly to the different curvature of the realistic MRCPs (i.e., curved) and the simple cube model (i.e., flat).

  10. SLTCAP: A Simple Method for Calculating the Number of Ions Needed for MD Simulation.

    PubMed

    Schmit, Jeremy D; Kariyawasam, Nilusha L; Needham, Vince; Smith, Paul E

    2018-04-10

    An accurate depiction of electrostatic interactions in molecular dynamics requires the correct number of ions in the simulation box to capture screening effects. However, the number of ions that should be added to the box is seldom given by the bulk salt concentration because a charged biomolecule solute will perturb the local solvent environment. We present a simple method for calculating the number of ions that requires only the total solute charge, solvent volume, and bulk salt concentration as inputs. We show that the most commonly used method for adding salt to a simulation results in an effective salt concentration that is too high. These findings are confirmed using simulations of lysozyme. We have established a web server where these calculations can be readily performed to aid simulation setup.
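
    A sketch of such a calculation for a monovalent salt: require the box to be neutral while Boltzmann-weighting both ion species with a common dimensionless potential. The inputs match those stated in the abstract (total solute charge, solvent volume, bulk concentration), but this is my reading of the approach, not necessarily the exact published SLTCAP expression:

```python
import math

AVOGADRO = 6.022e23

def ion_counts(solute_charge, solvent_volume_l, conc_molar):
    """Monovalent cation/anion counts that neutralise a solute of charge
    `solute_charge` (in units of e) while weighting both species by a
    common Boltzmann factor relative to the bulk pair count n0 = c*V*N_A."""
    n0 = conc_molar * solvent_volume_l * AVOGADRO
    phi = math.asinh(solute_charge / (2.0 * n0))  # dimensionless potential
    return n0 * math.exp(-phi), n0 * math.exp(phi)

# Assumed example: a +8e solute in 3e-22 L of solvent at 0.1 M salt
n_plus, n_minus = ion_counts(8.0, 3e-22, 0.1)
```

    By construction, solute_charge + n_plus − n_minus = 0, and a neutral solute recovers equal counts n0 for both species; in practice the returned values would be rounded to whole ions. Note that a positive solute depresses the cation count as well as raising the anion count, unlike the common add-counter-ions-only recipe the paper identifies as overshooting the effective salt concentration.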

  11. The estimation of tree posterior probabilities using conditional clade probability distributions.

    PubMed

    Larget, Bret

    2013-07-01

    In this article I introduce the idea of conditional independence of separated subtrees as a principle by which to estimate the posterior probability of trees using conditional clade probability distributions rather than simple sample relative frequencies. I describe an algorithm for these calculations and software which implements these ideas. I show that these alternative calculations are very similar to simple sample relative frequencies for high probability trees but are substantially more accurate for relatively low probability trees. The method allows the posterior probability of unsampled trees to be calculated when these trees contain only clades that are in other sampled trees. Furthermore, the method can be used to estimate the total probability of the set of sampled trees which provides a measure of the thoroughness of a posterior sample.
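
    The idea can be sketched with a toy implementation on rooted binary trees written as nested 2-tuples of leaf names (an illustrative reconstruction, not Larget's software):

```python
from collections import Counter

def leaves(tree):
    """Leaf set of a rooted binary tree given as nested 2-tuples of names."""
    if isinstance(tree, str):
        return frozenset([tree])
    return leaves(tree[0]) | leaves(tree[1])

def splits(tree):
    """Yield (clade, split) pairs for every internal node of the tree."""
    if isinstance(tree, str):
        return
    left, right = leaves(tree[0]), leaves(tree[1])
    yield left | right, frozenset((left, right))
    yield from splits(tree[0])
    yield from splits(tree[1])

def ccd_probability(tree, sample):
    """Estimate P(tree) as a product of conditional clade probabilities
    (split given parent clade) tallied from the sampled trees."""
    clade_counts, split_counts = Counter(), Counter()
    for t in sample:
        for clade, split in splits(t):
            clade_counts[clade] += 1
            split_counts[(clade, split)] += 1
    prob = 1.0
    for clade, split in splits(tree):
        if clade_counts[clade] == 0:
            return 0.0
        prob *= split_counts[(clade, split)] / clade_counts[clade]
    return prob

# Two sampled trees whose left and right subtrees vary independently
t1 = ((("a", "b"), "c"), (("d", "e"), "f"))
t2 = ((("a", "c"), "b"), (("d", "f"), "e"))
# A recombination of their subtrees that was never itself sampled
t_new = ((("a", "b"), "c"), (("d", "f"), "e"))
p_new = ccd_probability(t_new, [t1, t2])
```

    With this two-tree sample, the method assigns probability 0.25 to the recombined tree even though it never appears in the sample, which is exactly the behaviour the article describes for unsampled trees containing only clades seen elsewhere.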

  12. A Simple Interactive Software Package for Plotting, Animating, and Calculating

    ERIC Educational Resources Information Center

    Engelhardt, Larry

    2012-01-01

    We introduce a new open source (free) software package that provides a simple, highly interactive interface for carrying out certain mathematical tasks that are commonly encountered in physics. These tasks include plotting and animating functions, solving systems of coupled algebraic equations, and basic calculus (differentiating and integrating…

  13. A Novel Optical/digital Processing System for Pattern Recognition

    NASA Technical Reports Server (NTRS)

    Boone, Bradley G.; Shukla, Oodaye B.

    1993-01-01

    This paper describes two processing algorithms that can be implemented optically: the Radon transform and angular correlation. These two algorithms can be combined in one optical processor to extract all the basic geometric and amplitude features from objects embedded in video imagery. We show that the internal amplitude structure of objects is recovered by the Radon transform, which is a well-known result, but, in addition, we show simulation results that calculate angular correlation, a simple but unique algorithm that extracts object boundaries from suitably thresholded images, from which length, width, area, aspect ratio, and orientation can be derived. In addition to circumventing scale and rotation distortions, these simulations indicate that the features derived from the angular correlation algorithm are relatively insensitive to tracking shifts and image noise. Some optical architecture concepts, including one based on micro-optical lenslet arrays, have been developed to implement these algorithms. Simulation test and evaluation using simple synthetic object data will be described, including results of a study that uses object boundaries (derivable from angular correlation) to classify simple objects using a neural network.

  14. TU-F-CAMPUS-T-05: A Cloud-Based Monte Carlo Dose Calculation for Electron Cutout Factors

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mitchell, T; Bush, K

    Purpose: For electron cutouts of smaller sizes, it is necessary to verify electron cutout factors due to perturbations in electron scattering. Often, this requires a physical measurement using a small ion chamber, diode, or film. The purpose of this study is to develop a fast Monte Carlo based dose calculation framework that requires only a smart phone photograph of the cutout and specification of the SSD and energy to determine the electron cutout factor, with the ultimate goal of making this cloud-based calculation widely available to the medical physics community. Methods: The algorithm uses a pattern recognition technique to identify the corners of the cutout in the photograph as shown in Figure 1. It then corrects for variations in perspective, scaling, and translation of the photograph introduced by the user’s positioning of the camera. Blob detection is used to identify the portions of the cutout which comprise the aperture and the portions which are cutout material. This information is then used to define the physical densities of the voxels used in the Monte Carlo dose calculation algorithm, as shown in Figure 2, and to select a particle source from a pre-computed library of phase-spaces scored above the cutout. The electron cutout factor is obtained by taking the ratio of the maximum dose delivered with the cutout in place to the dose delivered under calibration/reference conditions. Results: The algorithm has been shown to successfully identify all necessary features of the electron cutout to perform the calculation. Subsequent testing will be performed to compare the Monte Carlo results with a physical measurement. Conclusion: A simple, cloud-based method of calculating electron cutout factors could eliminate the need for physical measurements and substantially reduce the time required to properly assure accurate dose delivery.

  15. Analytical model for ion stopping power and range in the therapeutic energy interval for beams of hydrogen and heavier ions

    NASA Astrophysics Data System (ADS)

    Donahue, William; Newhauser, Wayne D.; Ziegler, James F.

    2016-09-01

    Many different approaches exist to calculate stopping power and range of protons and heavy charged particles. These methods may be broadly categorized as physically complete theories (widely applicable and complex) or semi-empirical approaches (narrowly applicable and simple). However, little attention has been paid in the literature to approaches that are both widely applicable and simple. We developed simple analytical models of stopping power and range for ions of hydrogen, carbon, iron, and uranium that spanned intervals of ion energy from 351 keV u-1 to 450 MeV u-1 or wider. The analytical models typically reproduced the best-available evaluated stopping powers within 1% and ranges within 0.1 mm. The computational speed of the analytical stopping power model was 28% faster than a full-theoretical approach. The calculation of range using the analytic range model was 945 times faster than a widely-used numerical integration technique. The results of this study revealed that the new, simple analytical models are accurate, fast, and broadly applicable. The new models require just 6 parameters to calculate stopping power and range for a given ion and absorber. The proposed model may be useful as an alternative to traditional approaches, especially in applications that demand fast computation speed, small memory footprint, and simplicity.

  16. Analytical model for ion stopping power and range in the therapeutic energy interval for beams of hydrogen and heavier ions.

    PubMed

    Donahue, William; Newhauser, Wayne D; Ziegler, James F

    2016-09-07

    Many different approaches exist to calculate stopping power and range of protons and heavy charged particles. These methods may be broadly categorized as physically complete theories (widely applicable and complex) or semi-empirical approaches (narrowly applicable and simple). However, little attention has been paid in the literature to approaches that are both widely applicable and simple. We developed simple analytical models of stopping power and range for ions of hydrogen, carbon, iron, and uranium that spanned intervals of ion energy from 351 keV u(-1) to 450 MeV u(-1) or wider. The analytical models typically reproduced the best-available evaluated stopping powers within 1% and ranges within 0.1 mm. The computational speed of the analytical stopping power model was 28% faster than a full-theoretical approach. The calculation of range using the analytic range model was 945 times faster than a widely-used numerical integration technique. The results of this study revealed that the new, simple analytical models are accurate, fast, and broadly applicable. The new models require just 6 parameters to calculate stopping power and range for a given ion and absorber. The proposed model may be useful as an alternative to traditional approaches, especially in applications that demand fast computation speed, small memory footprint, and simplicity.
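
    The authors' 6-parameter model is not reproduced here, but the class of simple analytical range models it belongs to can be illustrated with the classic Bragg-Kleeman power law, using commonly quoted fit values for protons in water (α ≈ 0.0022 cm·MeV⁻ᵖ, p ≈ 1.77):

```python
def bragg_kleeman_range(energy_mev, alpha=0.0022, p=1.77):
    """CSDA range (cm) in water from the Bragg-Kleeman rule R = alpha * E**p.
    alpha and p are commonly quoted fit values for protons in water; this
    is an illustration of the model class, not the paper's own model."""
    return alpha * energy_mev ** p

# A 150 MeV proton, a typical therapy energy (range in water ~15-16 cm)
r150 = bragg_kleeman_range(150.0)
```

    Two parameters per ion/absorber pair already give clinically useful ranges over the therapeutic interval; the appeal of such closed forms is precisely the speed and tiny memory footprint the abstract emphasizes.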

  17. MadDM: Computation of dark matter relic abundance

    NASA Astrophysics Data System (ADS)

    Backović, Mihailo; Kong, Kyoungchul; McCaskey, Mathew

    2017-12-01

    MadDM computes the dark matter relic abundance and dark matter–nucleus scattering rates in a generic model. The code is based on the existing MadGraph 5 architecture and as such is easily integrable into any MadGraph collider study. A simple Python interface offers a level of user-friendliness characteristic of MadGraph 5 without sacrificing functionality. MadDM is able to calculate the dark matter relic abundance in models which include a multi-component dark sector, resonance annihilation channels and co-annihilations. The direct detection module of MadDM calculates spin-independent and spin-dependent dark matter–nucleon cross sections and differential recoil rates as a function of recoil energy, angle and time. The code provides a simplified simulation of detector effects for a wide range of target materials and volumes.

  18. Sum-rule corrections: a route to error cancellations in correlation matrix renormalisation theory

    NASA Astrophysics Data System (ADS)

    Liu, C.; Liu, J.; Yao, Y. X.; Wang, C. Z.; Ho, K. M.

    2017-03-01

    We recently proposed the correlation matrix renormalisation (CMR) theory to efficiently and accurately calculate the ground-state total energy of molecular systems, based on the Gutzwiller variational wavefunction (GWF) to treat electronic correlation effects. To help reduce numerical complications and better adapt the CMR to infinite lattice systems, we need to further refine the way the error originating from the approximations in the theory is minimised. This conference proceeding reports our recent progress on this key issue: we obtained a simple analytical functional form for the one-electron renormalisation factors, and introduced a novel sum-rule correction for a more accurate description of the intersite electron correlations. Benchmark calculations performed on a set of molecules show the reasonable accuracy of the method.

  19. Optical coatings for improved contrast in longitudinal magneto-optic Kerr effect measurements

    NASA Astrophysics Data System (ADS)

    Cantwell, P. R.; Gibson, U. J.; Allwood, D. A.; Macleod, H. A. M.

    2006-11-01

    We have studied the increases in the longitudinal magneto-optic Kerr effect signal contrast that can be achieved by the application of optical overlayers on magnetic films. For simple coatings, a factor of ~3 improvement in signal contrast is possible. Matching the optical impedance of the magnetic material improves the raw Kerr signal and also reduces the sample reflectivity, yielding a large Kerr angle. The contrast can be optimized by increasing the rotated Kerr reflectivity component while maintaining enough of the base reflectivity Fresnel component to produce a strong signal. Calculations and experimental results are presented for single-layer ZrO2 dielectric coatings on Ni, along with calculations for a three-layer Au-ZrO2-Ni structure. Incidence angle effects are also presented.

  20. A suggested periodic table up to Z ≤ 172, based on Dirac-Fock calculations on atoms and ions.

    PubMed

    Pyykkö, Pekka

    2011-01-07

    Extended Average Level (EAL) Dirac-Fock calculations on atoms and ions agree with earlier work in that a rough shell-filling order for the elements 119-172 is 8s < 5g ≤ 8p(1/2) < 6f < 7d < 9s < 9p(1/2) < 8p(3/2). The present Periodic Table develops further that of Fricke, Greiner and Waber [Theor. Chim. Acta 1971, 21, 235] by formally assigning the elements 121-164 to (nlj) slots on the basis of the electron configurations of their ions. Simple estimates are made for the likely maximum oxidation states, i, of these elements M in their MX(i) compounds, such as i = 6 for UF(6). Particularly high i values are predicted for the 6f elements.

  1. Rational choice and the political bases of changing Israeli counterinsurgency strategy.

    PubMed

    Brym, Robert J; Andersen, Robert

    2011-09-01

    Israeli counterinsurgency doctrine holds that the persistent use of credible threat and disproportionate military force results in repeated victories that eventually teach the enemy the futility of aggression. The doctrine thus endorses classical rational choice theory's claim that narrow cost-benefit calculations shape fixed action rationales. This paper assesses whether Israel's strategic practice reflects its counterinsurgency doctrine by exploring the historical record and the association between Israeli and Palestinian deaths due to low-intensity warfare. In contrast to the expectations of classical rational choice theory, the evidence suggests that institutional, cultural and historical forces routinely override simple cost-benefit calculations. Changing domestic and international circumstances periodically cause revisions in counterinsurgency strategy. Credible threat and disproportionate military force lack the predicted long-term effect. © London School of Economics and Political Science 2011.

  2. Safety performance factor.

    PubMed

    Venkataraman, Naray

    2008-01-01

    Workplace safety performance is computed using the frequency rate (FR) and severity rate (SR). Only work time lost due to occupational incidents that must be reported is counted. FR and SR are the two most important safety performance indicators and are applied universally; however, the calculations differ from country to country. All injuries and all time lost should be considered when calculating safety performance; the extent of severity should not matter, as every incident is counted. So a new factor has to be defined, based on the hours or days lost due to each occupational incident, irrespective of its severity. The new safety performance factor is defined as the average number of human-hours lost per occupational accident/incident, including fatalities, first-aid incidents, bruises and cuts. The formula is simple and easy to apply.
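
    As a rough illustration of the indicators the abstract contrasts, the sketch below computes a conventional frequency rate and the proposed average-hours-lost factor. The 200,000-hour base and the sample incident data are assumptions for illustration only; as the abstract notes, national conventions differ.

```python
# Conventional indicators versus the proposed factor. The 200,000-hour base
# follows one common (US) convention; other countries use 1,000,000 hours.

def frequency_rate(reportable_injuries, hours_worked, base=200_000):
    """Reportable injuries per `base` hours worked."""
    return reportable_injuries * base / hours_worked

def severity_rate(days_lost, hours_worked, base=200_000):
    """Days lost per `base` hours worked (reportable incidents only)."""
    return days_lost * base / hours_worked

def safety_performance_factor(hours_lost_per_incident):
    """Proposed factor: mean human-hours lost per incident, counting *every*
    incident (fatalities, first aid, bruises and cuts), not just reportables."""
    n = len(hours_lost_per_incident)
    return sum(hours_lost_per_incident) / n if n else 0.0

# Invented incident log: hours lost, from minor cuts to one lost-time injury.
incidents = [0.5, 1.0, 16.0, 0.25, 120.0]
print(frequency_rate(2, 400_000))           # -> 1.0
print(safety_performance_factor(incidents))
```

    Unlike FR and SR, the proposed factor does not vanish when incidents fall below the reporting threshold, which is the point the abstract makes.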

  3. High-frequency CAD-based scattering model: SERMAT

    NASA Astrophysics Data System (ADS)

    Goupil, D.; Boutillier, M.

    1991-09-01

    Specifications for an industrial radar cross section (RCS) calculation code are given: it must be able to exchange data with many computer-aided design (CAD) systems, it must be fast, and it must have powerful graphic tools. Classical physical optics (PO) and equivalent currents (EC) techniques have long proven their efficiency on simple objects. Difficult geometric problems occur when objects with very complex shapes have to be computed; only a specific geometric code can solve them. We have established that, once these problems have been solved: (1) PO and EC give good results on complex objects that are large compared to the wavelength; and (2) the implementation of these techniques in a software package (SERMAT) allows RCS calculations fast and precise enough to meet industry requirements in the domain of stealth.

  4. Calculation of the change in corneal astigmatism following cataract extraction.

    PubMed

    Cravy, T V

    1979-01-01

    Obtaining a minimal amount of postoperative astigmatism following cataract surgery is becoming increasingly important. One aspect of the patient's surgery which should not be overlooked is the preoperative keratometry which provides a basis for preoperative planning of surgical technique to be used and a point of reference for determining the amount of change in astigmatism produced by the surgery. Analysis of the surgically induced change in astigmatism using the calculations described in this paper will allow the surgeon to evaluate his own techniques and to maximize his potential for obtaining consistently good postoperative astigmatic results without the need for suture removal. The method presented is based upon concepts in common use in surgical ophthalmology and requires only simple mathematical procedures, familiar to all with a background in algebra and trigonometry.
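
    The abstract does not reproduce its formulas, but surgically induced astigmatism is conventionally computed by vector subtraction in a double-angle representation, which needs only the algebra and trigonometry the author mentions. The sketch below shows that standard calculation with invented keratometry values; it is not a transcription of the paper's exact method.

```python
import math

def to_xy(magnitude, axis_deg):
    """Double-angle representation: cylinder axes repeat every 180 degrees,
    so the axis is doubled to map astigmatism onto an ordinary 2D vector."""
    theta = math.radians(2 * axis_deg)
    return magnitude * math.cos(theta), magnitude * math.sin(theta)

def induced_astigmatism(pre_mag, pre_axis, post_mag, post_axis):
    """Surgically induced astigmatism = postop minus preop, as vectors;
    magnitudes in diopters, axes in degrees."""
    x = to_xy(post_mag, post_axis)[0] - to_xy(pre_mag, pre_axis)[0]
    y = to_xy(post_mag, post_axis)[1] - to_xy(pre_mag, pre_axis)[1]
    mag = math.hypot(x, y)
    axis = math.degrees(math.atan2(y, x)) / 2 % 180
    return mag, axis

# Invented example: 1.0 D at 90 before surgery, 2.0 D at 90 after,
# i.e. the surgery induced 1.0 D of steepening at the 90-degree meridian.
mag, axis = induced_astigmatism(1.0, 90, 2.0, 90)
```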

  5. Quantifying Environmental Effects on the Decay of Hole Transfer Couplings in Biosystems.

    PubMed

    Ramos, Pablo; Pavanello, Michele

    2014-06-10

    In the past two decades, many research groups worldwide have tried to understand and categorize simple regimes in the charge transfer of such biological systems as DNA. Theoretically speaking, the lack of exact theories for electron-nuclear dynamics on one side and poor quality of the parameters needed by model Hamiltonians and nonadiabatic dynamics alike (such as couplings and site energies) on the other are the two main difficulties for an appropriate description of the charge transfer phenomena. In this work, we present an application of a previously benchmarked and linear-scaling subsystem density functional theory (DFT) method for the calculation of couplings, site energies, and superexchange decay factors (β) of several biological donor-acceptor dyads, as well as double stranded DNA oligomers composed of up to five base pairs. The calculations are all-electron and provide a clear view of the role of the environment on superexchange couplings in DNA-they follow experimental trends and confirm previous semiempirical calculations. The subsystem DFT method is proven to be an excellent tool for long-range, bridge-mediated coupling and site energy calculations of embedded molecular systems.
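
    Superexchange decay factors of the kind reported here are conventionally obtained by fitting |H_DA|² ∝ exp(−βR) against donor-acceptor distance. The minimal regression sketch below uses synthetic couplings with β = 0.8 Å⁻¹ built in; the numbers are invented, not taken from the paper.

```python
import math

def fit_beta(distances, couplings):
    """Least-squares slope of 2*ln|H_DA| versus R; returns beta (positive)."""
    y = [2 * math.log(abs(h)) for h in couplings]
    n = len(distances)
    mx = sum(distances) / n
    my = sum(y) / n
    slope = (sum((x - mx) * (v - my) for x, v in zip(distances, y)) /
             sum((x - mx) ** 2 for x in distances))
    return -slope

# Synthetic data: couplings decaying as exp(-beta*R/2) with beta = 0.8,
# at roughly base-pair stacking distances (Angstrom).
R = [3.4, 6.8, 10.2, 13.6]
H = [0.05 * math.exp(-0.4 * r) for r in R]
print(round(fit_beta(R, H), 3))  # -> 0.8
```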

  6. Comparison between ray-tracing and physical optics for the computation of light absorption in capillaries--the influence of diffraction and interference.

    PubMed

    Qin, Yuan; Michalowski, Andreas; Weber, Rudolf; Yang, Sen; Graf, Thomas; Ni, Xiaowu

    2012-11-19

    Ray-tracing is the technique commonly used to calculate the absorption of light in laser deep-penetration welding or drilling. Since new lasers with high brilliance enable small capillaries with high aspect ratios, diffraction might become important. To examine the applicability of the ray-tracing method, we studied the total absorptance and the absorbed intensity of polarized beams in several capillary geometries. The ray-tracing results are compared with more sophisticated simulations based on physical optics. The comparison shows that simple ray-tracing is applicable for calculating the total absorptance in triangular grooves and in conical capillaries, but not in rectangular grooves. For the distribution of the absorbed intensity, ray-tracing fails due to the neglected interference, diffraction, and effects of beam propagation in capillaries with sub-wavelength diameter. If diffraction is avoided, e.g. with beams smaller than the entrance pupil of the capillary or with very shallow capillaries, the distribution of the absorbed intensity calculated by ray-tracing corresponds to the local average of the interference pattern found by physical optics.

  7. Model for Vortex Ring State Influence on Rotorcraft Flight Dynamics

    NASA Technical Reports Server (NTRS)

    Johnson, Wayne

    2005-01-01

    The influence of vortex ring state (VRS) on rotorcraft flight dynamics is investigated, specifically the vertical velocity drop of helicopters and the roll-off of tiltrotors encountering VRS. The available wind tunnel and flight test data for rotors in vortex ring state are reviewed. Test data for axial flow, non-axial flow, two rotors, unsteadiness, and vortex ring state boundaries are described and discussed. Based on the available measured data, a VRS model is developed. The VRS model is a parametric extension of momentum theory for calculation of the mean inflow of a rotor, hence suitable for simple calculations and real-time simulations. This inflow model is primarily defined in terms of the stability boundary of the aircraft motion. Calculations of helicopter response during VRS encounter were performed, and good correlation is shown with the vertical velocity drop measured in flight tests. Calculations of tiltrotor response during VRS encounter were performed, showing the roll-off behavior characteristic of tiltrotors. Hence it is possible, using a model of the mean inflow of an isolated rotor, to explain the basic behavior of both helicopters and tiltrotors in vortex ring state.
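
    The parametric VRS extension itself is not given in the abstract, but the momentum-theory baseline it extends can be sketched as follows. Velocities are normalized by the hover induced velocity, and the damped fixed-point solver is an illustrative choice; momentum theory is invalid inside the VRS branch, which is exactly why the paper introduces the parametric model.

```python
import math

def induced_velocity(vx, vz, vh=1.0, iters=200):
    """Baseline momentum-theory induced velocity v, from
    v * sqrt(vx^2 + (vz + v)^2) = vh^2, solved by damped fixed-point
    iteration. vx, vz: in-plane and axial (climb positive) rotor
    velocities, normalized by the hover induced velocity vh.
    Valid for normal working states only (not inside VRS)."""
    v = 1.0
    for _ in range(iters):
        v_new = vh**2 / math.sqrt(vx**2 + (vz + v)**2)
        v = 0.5 * v + 0.5 * v_new   # damping for convergence
    return v

print(round(induced_velocity(0.0, 0.0), 3))  # hover: v = vh -> 1.0
```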

  8. Approximate solution of the mode-mode coupling integral: Application to cytosine and its deuterated derivative.

    PubMed

    Rasheed, Tabish; Ahmad, Shabbir

    2010-10-01

    Ab initio Hartree-Fock (HF), density functional theory (DFT) and second-order Møller-Plesset (MP2) methods were used to perform harmonic and anharmonic calculations for the biomolecule cytosine and its deuterated derivative. The anharmonic vibrational spectra were computed using the vibrational self-consistent field (VSCF) and correlation-corrected vibrational self-consistent field (CC-VSCF) methods. Calculated anharmonic frequencies have been compared with the argon matrix spectra reported in the literature. The results were analyzed with a focus on the properties of anharmonic couplings between pairs of modes, and a simple, easy-to-use formula for calculating mode-mode coupling magnitudes has been derived. The key element of the present approach is the approximation that only interactions between pairs of normal modes are taken into account, while interactions among triples or more of modes are neglected. FTIR and Raman spectra of solid-state cytosine have been recorded in the regions 400-4000 cm(-1) and 60-4000 cm(-1), respectively. Vibrational analysis and assignments are based on calculated potential energy distribution (PED) values. Copyright 2010 Elsevier B.V. All rights reserved.

  9. GOssTo: a stand-alone application and a web tool for calculating semantic similarities on the Gene Ontology.

    PubMed

    Caniza, Horacio; Romero, Alfonso E; Heron, Samuel; Yang, Haixuan; Devoto, Alessandra; Frasca, Marco; Mesiti, Marco; Valentini, Giorgio; Paccanaro, Alberto

    2014-08-01

    We present GOssTo, the Gene Ontology semantic similarity Tool, a user-friendly software system for calculating semantic similarities between gene products according to the Gene Ontology. GOssTo is bundled with six semantic similarity measures, including both term- and graph-based measures, and has extension capabilities to allow the user to add new similarities. Importantly, for any measure, GOssTo can also calculate the Random Walk Contribution that has been shown to greatly improve the accuracy of similarity measures. GOssTo is very fast, easy to use, and it allows the calculation of similarities on a genomic scale in a few minutes on a regular desktop machine. GOssTo is available both as a stand-alone application running on GNU/Linux, Windows and MacOS from www.paccanarolab.org/gossto and as a web application from www.paccanarolab.org/gosstoweb. The stand-alone application features a simple and concise command line interface for easy integration into high-throughput data processing pipelines. Contact: alberto@cs.rhul.ac.uk. © The Author 2014. Published by Oxford University Press.

  10. Inversion of Attributes and Full Waveforms of Ground Penetrating Radar Data Using PEST

    NASA Astrophysics Data System (ADS)

    Jazayeri, S.; Kruse, S.; Esmaeili, S.

    2015-12-01

    We seek to establish a method, based on freely available software, for inverting GPR signals for the underlying physical properties (electrical permittivity, magnetic permeability, target geometries). Such a procedure should be useful for classroom instruction and for analyzing surface GPR surveys over simple targets. We explore the applicability of the PEST parameter estimation software package for GPR inversion (www.pesthomepage.org). PEST is designed to invert data sets with large numbers of parameters and offers a variety of inversion methods. Although primarily used in hydrogeology, the code has been applied to a wide variety of physical problems. PEST requires a forward model; the forward modeling of the GPR signal is done with the GPRMax package (www.gprmax.com). The problem of extracting the physical characteristics of a subsurface anomaly from GPR data is highly nonlinear. For synthetic models of simple targets in homogeneous backgrounds, we find PEST's nonlinear Gauss-Marquardt-Levenberg algorithm preferable. This method requires an initial model, for which the weighted differences between model-generated data and those of the "true" synthetic model (the objective function) are calculated. To do this, the Jacobian matrix of derivatives of the observation data with respect to the model parameters is computed using a finite-difference method. Next, an iterative process updates the initial values, building new models that minimize the objective function. Another measure of the goodness of the final accepted model is the correlation coefficient, calculated following the method of Cooley and Naff; an accepted final model satisfies both of these conditions. Models to date show that the physical properties of simple isolated targets against homogeneous backgrounds can be obtained from multiple traces from common-offset surface surveys. Ongoing work examines the inversion capabilities with more complex target geometries and heterogeneous soils.
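
    The Gauss-Marquardt-Levenberg scheme with a finite-difference Jacobian can be sketched on a stand-in forward model (GPRMax itself is not reproduced here). The exponential-decay model, starting values and damping schedule below are invented for illustration; the loop structure is the generic Levenberg-Marquardt algorithm, not PEST's exact implementation.

```python
import numpy as np

def lm_invert(forward, p0, data, iters=60, fd_step=1e-6):
    """Toy Levenberg-Marquardt loop: finite-difference Jacobian, damped
    normal equations, accept/reject step with adaptive damping."""
    p = np.asarray(p0, float)
    lam = 1e-2
    cost = np.sum((data - forward(p)) ** 2)        # objective function
    for _ in range(iters):
        r = data - forward(p)                      # residuals
        J = np.empty((r.size, p.size))
        for j in range(p.size):                    # finite-difference Jacobian
            dp = np.zeros_like(p)
            dp[j] = fd_step
            J[:, j] = (forward(p + dp) - forward(p)) / fd_step
        step = np.linalg.solve(J.T @ J + lam * np.eye(p.size), J.T @ r)
        trial = p + step
        trial_cost = np.sum((data - forward(trial)) ** 2)
        if trial_cost < cost:                      # accept, relax damping
            p, cost, lam = trial, trial_cost, lam * 0.5
        else:                                      # reject, increase damping
            lam *= 10.0
    return p

# Stand-in "forward model": exponential decay with unknown amplitude and rate.
t = np.linspace(0, 1, 20)
obs = 2.0 * np.exp(-3.0 * t)                       # synthetic "true" data
est = lm_invert(lambda p: p[0] * np.exp(-p[1] * t), [1.0, 1.0], obs)
```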

  11. The GHG and Land Demand Consequences of the US Animal-Based Food Consumption

    NASA Astrophysics Data System (ADS)

    Martin, P. A.; Eshel, G.

    2008-12-01

    While the environmental burdens exerted by food production are addressed by several recent publications, the contributions of animal-based food production, and in particular red meat (by far the most environmentally exacting of all large-scale animal-based foods), are less well quantified. We present several simple calculations that quantify some environmental costs of animal- and cattle-based food production. First, we show that American red meat is, on average, 350% more GHG-intensive per edible calorie than the national food system's mean. Second, we show that the per-calorie land-use efficiencies of fruit and beans are 5 and 3 times that of animal-based foods; that is, an animal-based edible calorie requires the same amount of land as 5 fruit calories or 3 bean calories. We conclude by highlighting the importance of these results to policy makers, calculating the mass flux into the environment of fertilizer and herbicide that would be averted by reducing or eliminating animal-based foods from the mean US diet. This also enables us to make preliminary quantitative statements about the expected changes in the size and probability of Gulf of Mexico anoxic events, at given O2 depletion levels, that are likely to accompany specific dietary shifts.

  12. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Klüter, Sebastian, E-mail: sebastian.klueter@med.uni-heidelberg.de; Schubert, Kai; Lissner, Steffen

    Purpose: The dosimetric verification of treatment plans in helical tomotherapy is usually carried out via verification measurements. In this study, a method for independent dose calculation of tomotherapy treatment plans is presented that uses a conventional treatment planning system with a pencil-kernel dose calculation algorithm to generate verification dose distributions based on patient CT data. Methods: A pencil beam algorithm that directly uses measured beam data was configured for dose calculation for a tomotherapy machine. Tomotherapy treatment plans were converted into a format readable by an in-house treatment planning system by assigning each projection to one static treatment field and shifting the calculation isocenter for each field in order to account for the couch movement. The modulation of the fluence for each projection is read out of the delivery sinogram, and with the kernel-based dose calculation this information can be used directly, without the need for decomposition of the sinogram; the sinogram values are only corrected for leaf output and leaf latency. Using the converted treatment plans, dose was recalculated with the independent treatment planning system. Multiple treatment plans, ranging from simple static fields to real patient treatment plans, were calculated using the new approach and either compared to actual measurements or to the 3D dose distribution calculated by the tomotherapy treatment planning system. In addition, dose-volume histograms were calculated for the patient plans. Results: Except for minor deviations at the maximum field size, the pencil beam dose calculation for static beams agreed with measurements in a water tank within 2%/2 mm. A mean deviation from point dose measurements in the cheese phantom of 0.89% ± 0.81% was found for unmodulated helical plans. A mean voxel-based deviation of −0.67% ± 1.11% for all voxels in the respective high-dose region (dose values >80%), and a mean local voxel-based deviation of −2.41% ± 0.75% for all voxels with dose values >20%, were found for 11 modulated plans in the cheese phantom. Averaged over nine patient plans, the deviations amounted to −0.14% ± 1.97% (voxels >80%) and −0.95% ± 2.27% (>20%, local deviations). For a lung case, mean voxel-based deviations of more than 4% were found, while for all other patient plans all mean voxel-based deviations were within ±2.4%. Conclusions: The presented method is suitable for independent dose calculation for helical tomotherapy within the known limitations of the pencil beam algorithm. It can serve as verification of the primary dose calculation and thereby reduce the need for time-consuming measurements. By using the patient anatomy and generating full 3D dose data, combined with measurements of additional machine parameters, it can substantially contribute to overall patient safety.
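
    The projection-to-static-field conversion the abstract describes can be sketched as a simple mapping from projection index to gantry angle and couch shift. The constants below (51 projections per gantry rotation, a per-projection couch advance) are assumptions for illustration, not the paper's machine data.

```python
# Each sinogram projection becomes one static field: its gantry angle follows
# from the projection index, and its calculation isocenter is shifted along
# the couch axis to account for the couch movement.

PROJECTIONS_PER_ROTATION = 51   # assumed machine constant

def projection_to_field(k, couch_cm_per_proj, start_angle_deg=0.0):
    """Return (gantry_angle_deg, isocenter_z_shift_cm) for projection k."""
    gantry = (start_angle_deg + k * 360.0 / PROJECTIONS_PER_ROTATION) % 360.0
    z_shift = k * couch_cm_per_proj
    return gantry, z_shift

# Two full rotations with an invented 0.05 cm couch advance per projection.
fields = [projection_to_field(k, 0.05) for k in range(102)]
```

    After 51 projections the gantry angle repeats while the isocenter keeps advancing, which is what turns a helical delivery into a stack of static fields.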

  13. Conformational analysis of α-helical polypeptide including L-proline residue by high-resolution solid-state NMR measurement and quantum chemical calculation

    NASA Astrophysics Data System (ADS)

    Souma, Hiroyuki; Shoji, Akira; Kurosu, Hiromichi

    2008-10-01

    We addressed the problem of the stabilization mechanism of α-helix formation in polypeptides containing an L-proline (Pro) residue. We computed the optimized structure of α-helical poly(L-alanine) molecules including a Pro residue, H-(Ala)8-Pro-(Ala)9-OH, using molecular orbital calculations with density functional theory (B3LYP/6-31G(d)), and the 13C and 15N chemical shift values using the GIAO-CHF method with B3LYP/6-311G(d,p). It was found that two kinds of optimized structures, a 'Bent structure' and an 'Included α-helix structure', were the preferred structures of H-(Ala)8-Pro-(Ala)9-OH. In addition, based on the precise 13C and 15N chemical shift data of the simple model, we successfully analyzed the secondary structure of the well-defined synthetic polypeptide H-(Phe-Leu-Ala)3-PheC-Pro-AlaN-(Phe-Leu-Ala)2-OH (FLA-11P), which was shown to adopt the 'Included α-helix structure'.

  14. SU-G-201-15: Nomogram as an Efficient Dosimetric Verification Tool in HDR Prostate Brachytherapy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Liang, J; Todor, D

    Purpose: A nomogram as a simple QA tool for HDR prostate brachytherapy treatment planning has been developed and validated clinically. Reproducibility, including patient-to-patient and physician-to-physician variability, was assessed. Methods: The study was performed on HDR prostate implants from physicians A (n=34) and B (n=15), using different implant techniques and planning methodologies. The nomogram was implemented as an independent QA of computer-based treatment planning before plan execution. Normalized implant strength (total air kerma strength Sk*t in cGy cm² divided by prescribed dose in cGy) was plotted as a function of PTV volume and of total V100. A quadratic equation was used to fit the data, with R² denoting the model predictive power. Results: All plans showed good target coverage while OARs met the dose constraint guidelines. Vastly different implant and planning styles were reflected in the conformity index (entire dose matrix V100/PTV volume; physician A implants: 1.27±0.14, physician B: 1.47±0.17) and in the PTV V150/PTV volume ratio (physician A: 0.34±0.09, physician B: 0.24±0.07). The quadratic model provided a better fit for the curved relationship between normalized implant strength and total V100 (or PTV volume) than a simple linear function. Unlike the normalized implant strength versus PTV volume nomogram, which differed between physicians, a single quadratic-model nomogram, (Sk*t)/D = −0.0008V² + 0.0542V + 1.1185 (R² = 0.9977), described the dependence of normalized implant strength on total V100 over all patients from both physicians, despite the two different implant and planning philosophies. The normalized implant strength versus total V100 model also generated fewer deviant points and a significantly higher correlation. Conclusion: A simple and universal Excel-based nomogram was created as an independent calculation tool for HDR prostate brachytherapy. Unlike similar attempts, our nomogram is insensitive to implant style and does not rely on reproducing dose calculations using the TG-43 formalism, thus making it a truly independent check.
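
    The reported quadratic can be used directly as a bedside check. The sketch below evaluates it and flags deviant plans; the coefficients are those quoted in the abstract, but the 5% tolerance band is an assumed illustration, not the authors' action level.

```python
def nomogram_strength(v100_cc):
    """Normalized implant strength (Sk*t / prescribed dose, cGy cm^2 per cGy)
    predicted from total V100 (cm^3), per the reported quadratic fit."""
    return -0.0008 * v100_cc**2 + 0.0542 * v100_cc + 1.1185

def plan_check(v100_cc, planned_strength, tolerance=0.05):
    """Flag plans whose normalized strength deviates from the nomogram by
    more than `tolerance` (relative); tolerance here is an assumption."""
    expected = nomogram_strength(v100_cc)
    return abs(planned_strength - expected) / expected <= tolerance

# Invented plan: total V100 of 40 cc, normalized strength of 2.0.
print(round(nomogram_strength(40.0), 4))  # -> 2.0065
print(plan_check(40.0, 2.0))              # -> True
```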

  15. Validation of PC-based Sound Card with Biopac for Digitalization of ECG Recording in Short-term HRV Analysis.

    PubMed

    Maheshkumar, K; Dilara, K; Maruthy, K N; Sundareswaren, L

    2016-07-01

    Heart rate variability (HRV) analysis is a simple and noninvasive technique capable of assessing autonomic nervous system modulation of heart rate (HR) in healthy as well as disease conditions. The aim of the present study was to validate HRV measures computed from a temporal series of electrocardiograms (ECG) obtained by a simple analog amplifier with a PC-based sound card (Audacity) against the Biopac MP36 module. Based on the inclusion criteria, 120 healthy participants, 72 males and 48 females, took part in the study. Following a standard protocol, a 5-min ECG was recorded after 10 min of supine rest, simultaneously by the portable analog amplifier with PC-based sound card and by the Biopac module, with surface electrodes in the lead II position. All ECG data were visually screened and found to be free of ectopic beats and noise. RR intervals from both ECG recordings were analyzed separately in the Kubios software, using short-term HRV indices in both the time and frequency domains. The unpaired Student's t-test and Pearson correlation coefficient were used for the analysis in the R statistical software. No statistically significant differences were observed between the values from the two devices, and correlation analysis revealed a near-perfect positive correlation (r = 0.99, P < 0.001) between the time- and frequency-domain values obtained by the two devices. On the basis of these results, we suggest that HRV values in the time and frequency domains calculated from RR series obtained with the PC-based sound card are probably as reliable as those obtained with the gold-standard Biopac MP36.
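
    A minimal sketch of two standard time-domain indices and the correlation test used in the comparison. The RR series below are invented stand-ins for the recorded data; the study itself used Kubios and R rather than hand-rolled code.

```python
import math

def sdnn(rr_ms):
    """Standard deviation of RR intervals (ms), a basic time-domain index."""
    m = sum(rr_ms) / len(rr_ms)
    return math.sqrt(sum((x - m) ** 2 for x in rr_ms) / (len(rr_ms) - 1))

def rmssd(rr_ms):
    """Root mean square of successive RR differences (ms)."""
    d = [b - a for a, b in zip(rr_ms, rr_ms[1:])]
    return math.sqrt(sum(x * x for x in d) / len(d))

def pearson_r(x, y):
    """Pearson correlation, as used to compare the two devices."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    return cov / math.sqrt(sum((a - mx) ** 2 for a in x) *
                           sum((b - my) ** 2 for b in y))

# Invented, near-identical RR series (ms) standing in for the two recordings.
rr_sound_card = [812, 798, 830, 845, 801, 790, 822]
rr_biopac     = [810, 800, 831, 843, 800, 792, 821]
print(round(pearson_r(rr_sound_card, rr_biopac), 3))
```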

  16. Calculation and Specification of the Multiple Chirality Displayed by Sugar Pyranoid Ring Structures.

    ERIC Educational Resources Information Center

    Shallenberger, Robert S.; And Others

    1981-01-01

    Describes a method, using simple algebraic notation, for calculating the salient features of a sugar pyranoid ring: the steric disposition of substituents about the reference and anomeric carbon atoms contained within the ring. (CS)

  17. Dose calculation of dynamic trajectory radiotherapy using Monte Carlo.

    PubMed

    Manser, P; Frauchiger, D; Frei, D; Volken, W; Terribilini, D; Fix, M K

    2018-04-06

    With the volumetric modulated arc therapy (VMAT) delivery technique, the gantry position, the multi-leaf collimator (MLC) and the dose rate change dynamically during the application. However, additional components, such as the collimator or the couch, can also be dynamically altered throughout the dose delivery. The degrees of freedom thus increase, allowing almost arbitrary dynamic trajectories for the beam. While the dose delivery of such dynamic trajectories is technically possible on linear accelerators, there is currently no dose calculation and validation tool available. The aim of this work is therefore to develop a dose calculation and verification tool for dynamic trajectories using Monte Carlo (MC) methods. The dose calculation for dynamic trajectories is implemented in the previously developed Swiss Monte Carlo Plan (SMCP). SMCP interfaces the treatment planning system Eclipse with an MC dose calculation algorithm and is already able to handle dynamic MLC and gantry rotations. Hence, the additional dynamic components, namely the collimator and the couch, are described similarly to the dynamic MLC, by defining data pairs of positions of the dynamic component and the corresponding MU fractions. For validation purposes, measurements were performed with the Delta4 phantom and with film, using the developer mode on a TrueBeam linear accelerator. These measured dose distributions were then compared with the corresponding SMCP calculations. First, simple academic cases applying one-dimensional movements were investigated; second, more complex dynamic trajectories with several simultaneously moving components were compared, considering academic cases as well as a clinically motivated prostate case. The dose calculation for dynamic trajectories was successfully implemented in SMCP. The comparisons between the measured and calculated dose distributions for the simple as well as the more complex situations show agreement generally within 3% of the maximum dose or 3 mm. The required computation time for the dose calculation remains the same when the additional dynamically moving components are included. The results obtained for the simple and complex situations suggest that the extended SMCP is an accurate dose calculation and efficient verification tool for dynamic trajectory radiotherapy. This work was supported by Varian Medical Systems. Copyright © 2018. Published by Elsevier GmbH.
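
    The data-pair description of a dynamic component maps naturally onto linear interpolation between control points. The sketch below shows that representation with invented couch and collimator trajectories; it illustrates the general idea, not SMCP's internal format.

```python
from bisect import bisect_right

def position_at(control_points, mu_frac):
    """Position of a dynamic component (MLC leaf, collimator, couch axis) at
    a given delivered MU fraction, by linear interpolation in a sorted list
    of (mu_fraction, position) data pairs. Clamped outside the range."""
    mus = [mu for mu, _ in control_points]
    i = bisect_right(mus, mu_frac)
    if i == 0:
        return control_points[0][1]
    if i == len(control_points):
        return control_points[-1][1]
    (mu0, p0), (mu1, p1) = control_points[i - 1], control_points[i]
    return p0 + (p1 - p0) * (mu_frac - mu0) / (mu1 - mu0)

# Invented trajectories: couch longitudinal position (cm) and collimator
# angle (degrees) as data pairs against MU fraction.
couch_long = [(0.0, 0.0), (0.5, 25.0), (1.0, 60.0)]
collimator = [(0.0, 0.0), (1.0, 90.0)]
print(position_at(couch_long, 0.25))  # -> 12.5
print(position_at(collimator, 0.5))   # -> 45.0
```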

  18. An expert system for the design of heating, ventilating, and air-conditioning systems

    NASA Astrophysics Data System (ADS)

    Camejo, Pedro Jose

    1989-12-01

    Expert systems are computer programs that seek to mimic human reasoning. An expert system shell, a software tool commonly used for developing expert systems in a relatively short time, was used to develop a prototypical expert system for the design of heating, ventilating, and air-conditioning (HVAC) systems in buildings. Because HVAC design involves several related knowledge domains, developing an expert system for HVAC design requires the integration of several smaller expert systems known as knowledge bases. A menu program and several auxiliary programs for gathering data, completing calculations, printing project reports, and passing data between the knowledge bases are needed and have been developed to join the separate knowledge bases into one simple-to-use program unit.

  19. Vehicle Lateral State Estimation Based on Measured Tyre Forces

    PubMed Central

    Tuononen, Ari J.

    2009-01-01

    Future active safety systems need more accurate information about the state of vehicles. This article proposes a method to evaluate the lateral state of a vehicle based on measured tyre forces. The tyre forces of two tyres are estimated from optically measured tyre carcass deflections and transmitted wirelessly to the vehicle body. The two remaining tyres are so-called virtual tyre sensors, the forces of which are calculated from the real tyre sensor estimates. The Kalman filter estimator for lateral vehicle state based on measured tyre forces is presented, together with a simple method to define adaptive measurement error covariance depending on the driving condition of the vehicle. The estimated yaw rate and lateral velocity are compared with the validation sensor measurements. PMID:22291535
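
    The paper's estimator is not specified in the abstract, but the general shape of such a Kalman filter can be sketched on a planar single-track model: propagate lateral velocity and yaw rate from (measured) tyre lateral forces, then correct with a yaw-rate measurement. All vehicle parameters and noise covariances below are assumed for illustration.

```python
import numpy as np

# Assumed vehicle parameters: mass, yaw inertia, axle distances, step, speed.
m, Iz, a, b = 1500.0, 2500.0, 1.2, 1.4
dt, vx = 0.01, 20.0

H = np.array([[0.0, 1.0]])       # we observe yaw rate only
Q = np.diag([0.05, 0.001])       # process noise (assumed)
R = np.array([[0.0005]])         # gyro measurement noise (assumed)

def predict(x, P, fyf, fyr):
    """Propagate state [vy, r] one step from front/rear tyre lateral forces."""
    vy, r = x
    vy_dot = (fyf + fyr) / m - vx * r        # lateral force balance
    r_dot = (a * fyf - b * fyr) / Iz         # yaw moment balance
    x = np.array([vy + dt * vy_dot, r + dt * r_dot])
    F = np.array([[1.0, -dt * vx], [0.0, 1.0]])   # Jacobian of the step
    return x, F @ P @ F.T + Q

def update(x, P, r_meas):
    """Standard Kalman correction with a yaw-rate measurement."""
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x = x + (K @ (np.array([[r_meas]]) - H @ x.reshape(2, 1))).ravel()
    return x, (np.eye(2) - K @ H) @ P

x, P = np.zeros(2), np.eye(2)
x, P = predict(x, P, fyf=2000.0, fyr=1500.0)  # tyre-sensor force estimates
x, P = update(x, P, r_meas=0.01)
```

    In the paper, the measurement error covariance is additionally made adaptive to the driving condition; here R is held fixed for brevity.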

  20. A probabilistic model for deriving soil quality criteria based on secondary poisoning of top predators. I. Model description and uncertainty analysis.

    PubMed

    Traas, T P; Luttik, R; Jongbloed, R H

    1996-08-01

    In previous studies, the risk of toxicant accumulation in food chains was used to calculate quality criteria for surface water and soil. A simple algorithm was used to calculate maximum permissible concentrations [MPC = no-observed-effect concentration/bioconcentration factor (NOEC/BCF)]. These studies were limited to simple food chains. This study presents a method to calculate MPCs for the more complex food webs of predators, expanding the previous method. First, toxicity data (NOECs) for several compounds were corrected for differences between laboratory animals and animals in the wild. Second, for each compound, these NOECs were assumed to be a sample from a log-logistic distribution of mammalian and avian NOECs. Third, bioaccumulation factors (BAFs) for major food items of predators were collected and assumed to derive from different log-logistic distributions of BAFs. Fourth, MPCs for each compound were calculated using Monte Carlo sampling from the NOEC and BAF distributions. An uncertainty analysis for cadmium was performed to identify the most uncertain parameters of the model. The analysis indicated that most of the prediction uncertainty of the model can be ascribed to the uncertainty in species sensitivity as expressed by the NOECs; a very small proportion of model uncertainty is contributed by the BAFs from food webs. Correction factors for the conversion of NOECs from laboratory conditions to the field have some influence on the final value of the MPC5, but the total prediction uncertainty of the MPC is quite large. It is concluded that the uncertainty in species sensitivity is large, and since unethical toxicity testing with mammalian or avian predators must be avoided, using this uncertainty in the proposed method for calculating MPC distributions cannot be avoided. The fifth percentile of the MPC distribution (MPC5) is suggested as a safe value for top predators.
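
    The four-step Monte Carlo procedure can be sketched as follows. All distribution parameters are invented stand-ins for the fitted NOEC and BAF distributions; the paper derives them from corrected toxicity and bioaccumulation data.

```python
import math
import random

def log_logistic(median, shape, rng):
    """Sample X whose log follows a logistic(log median, shape) distribution,
    by inverse-transform sampling."""
    u = rng.random()
    return math.exp(math.log(median) + shape * math.log(u / (1 - u)))

def mpc_distribution(n, rng=None):
    """Monte Carlo MPC samples: draw a NOEC and a BAF independently from
    their log-logistic distributions and form MPC = NOEC / BAF."""
    rng = rng or random.Random(42)
    noec_med, noec_shape = 10.0, 0.5   # mg/kg diet (invented parameters)
    baf_med, baf_shape = 2.0, 0.3      # dimensionless (invented parameters)
    return sorted(log_logistic(noec_med, noec_shape, rng) /
                  log_logistic(baf_med, baf_shape, rng) for _ in range(n))

mpcs = mpc_distribution(10_000)
mpc5 = mpcs[int(0.05 * len(mpcs))]     # 5th percentile, the suggested criterion
```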
