Sample records for performing basic calculations

  1. Basic and Exceptional Calculation Abilities in a Calculating Prodigy: A Case Study.

    ERIC Educational Resources Information Center

    Pesenti, Mauro; Seron, Xavier; Samson, Dana; Duroux, Bruno

    1999-01-01

    Describes the basic and exceptional calculation abilities of a calculating prodigy whose performances were investigated in single- and multi-digit number multiplication, numerical comparison, raising of powers, and short-term memory tasks. Shows how his highly efficient long-term memory storage and retrieval processes, knowledge of calculation…

  2. Data Analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Powell, Danny H; Elwood Jr, Robert H

    2011-01-01

    Analysis of the material protection, control, and accountability (MPC&A) system is necessary to understand the limits and vulnerabilities of the system to internal threats. A self-appraisal helps the facility be prepared to respond to internal threats and reduce the risk of theft or diversion of nuclear material. The material control and accountability (MC&A) system effectiveness tool (MSET) fault tree was developed to depict the failure of the MPC&A system as a result of poor practices and random failures in the MC&A system. It can also be employed as a basis for assessing deliberate threats against a facility. MSET uses fault tree analysis, which is a top-down approach to examining system failure. The analysis starts with identifying a potential undesirable event called a 'top event' and then determining the ways it can occur (e.g., 'Fail To Maintain Nuclear Materials Under The Purview Of The MC&A System'). The analysis proceeds by determining how the top event can be caused by individual or combined lower level faults or failures. These faults, which are the causes of the top event, are 'connected' through logic gates. The MSET model uses AND-gates and OR-gates and propagates the effect of event failure using Boolean algebra. To enable the fault tree analysis calculations, the basic events in the fault tree are populated with probability risk values derived by conversion of questionnaire data to numeric values. The basic events are treated as independent variables. This assumption affects the Boolean algebraic calculations used to calculate results. All the necessary calculations are built into the fault tree codes, but it is often useful to estimate the probabilities manually as a check on code functioning. The probability of failure of a given basic event is the probability that the basic event primary question fails to meet the performance metric for that question.
    The failure probability is related to how well the facility performs the task identified in that basic event over time (not just one performance or exercise). Fault tree calculations provide a failure probability for the top event in the fault tree. The basic fault tree calculations establish a baseline relative risk value for the system. This probability depicts relative risk, not absolute risk. Subsequent calculations are made to evaluate the change in relative risk that would occur if system performance is improved or degraded. During the development effort of MSET, the fault tree analysis program used was SAPHIRE. SAPHIRE is an acronym for 'Systems Analysis Programs for Hands-on Integrated Reliability Evaluations.' Version 1 of the SAPHIRE code was sponsored by the Nuclear Regulatory Commission in 1987 as an innovative way to draw, edit, and analyze graphical fault trees primarily for safe operation of nuclear power reactors. When the fault tree calculations are performed, the fault tree analysis program will produce several reports that can be used to analyze the MPC&A system. SAPHIRE produces reports showing risk importance factors for all basic events in the operational MC&A system. The risk importance information is used to examine the potential impacts when performance of certain basic events increases or decreases. The initial results produced by the SAPHIRE program are considered relative risk values. None of the results can be interpreted as absolute risk values since the basic event probability values represent estimates of risk associated with the performance of MPC&A tasks throughout the material balance area (MBA). The risk reduction ratio (RRR) for a basic event represents the decrease in total system risk that would result from improvement of that one event to a perfect performance level. Improvement of the basic event with the greatest RRR value produces a greater decrease in total system risk than improvement of any other basic event.
    Basic events with the greatest potential for system risk reduction are assigned performance improvement values, and new fault tree calculations show the improvement in total system risk. The operational impact or cost-effectiveness from implementing the performance improvements can then be evaluated. The improvements being evaluated can be system performance improvements, or they can be potential, or actual, upgrades to the system. The risk increase ratio (RIR) for a basic event represents the increase in total system risk that would result from failure of that one event. Failure of the basic event with the greatest RIR value produces a greater increase in total system risk than failure of any other basic event. Basic events with the greatest potential for system risk increase are assigned failure performance values, and new fault tree calculations show the increase in total system risk. This evaluation shows the importance of preventing performance degradation of the basic events. SAPHIRE identifies combinations of basic events where concurrent failure of the events results in failure of the top event.
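
    The gate algebra and importance measures described above can be checked by hand, as the abstract suggests. Below is a minimal sketch: the two-gate tree and its basic-event probabilities are invented for illustration, and expressing RRR and RIR as ratios of top-event probabilities is an assumption about the tool's conventions.

```python
# Minimal fault tree sketch: basic events assumed independent, as in MSET.

def and_gate(probs):
    """AND-gate: the output fails only if every input fails."""
    p = 1.0
    for x in probs:
        p *= x
    return p

def or_gate(probs):
    """OR-gate: the output fails if any input fails: 1 - prod(1 - p_i)."""
    q = 1.0
    for x in probs:
        q *= 1.0 - x
    return 1.0 - q

# Illustrative (invented) basic-event failure probabilities a, b, c.
def top_event(p_a, p_b, p_c):
    return or_gate([and_gate([p_a, p_b]), p_c])

baseline = top_event(0.2, 0.1, 0.05)

# Risk importance for event A, in the spirit of the RRR/RIR reports:
# improve one event to perfect (p -> 0) or fail it outright (p -> 1).
rrr = baseline / top_event(0.0, 0.1, 0.05)  # risk reduction ratio
rir = top_event(1.0, 0.1, 0.05) / baseline  # risk increase ratio
```

    In this toy tree, perfecting event A lowers the top-event probability from 0.069 to 0.05 (RRR 1.38), while its outright failure raises it to 0.145 (RIR about 2.1).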

  3. 6 CFR 1001.10 - Fees.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... in processing your FOIA request. Fees may be charged for search, review or duplication. As a matter... searches for records, we will charge the salary rate(s) (calculated as the basic rate of pay plus 16 percent of that basic rate to cover benefits) of the employee(s) performing the search. (c) In calculating...
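
    The quoted fee rule (the basic rate of pay plus 16 percent for benefits, applied to search time) amounts to a one-line calculation. The sketch below uses an invented hourly rate and search duration purely for illustration:

```python
# Hedged sketch of the search-fee rule quoted above: salary rate plus 16
# percent to cover benefits, multiplied by the hours spent searching.

def search_fee(basic_hourly_rate, hours):
    return basic_hourly_rate * 1.16 * hours

fee = search_fee(30.00, 2.5)  # $30/h basic rate, 2.5 hours of search
```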

  4. BASIC Programming In Water And Wastewater Analysis

    NASA Technical Reports Server (NTRS)

    Dreschel, Thomas

    1988-01-01

    Collection of computer programs assembled for use in water-analysis laboratories. First program calculates quality-control parameters used in routine water analysis. Second calculates line of best fit for standard concentrations and absorbances entered. Third calculates specific conductance from conductivity measurement and temperature at which measurement taken. Fourth calculates any one of four types of residue measured in water. Fifth, sixth, and seventh calculate results of titrations commonly performed on water samples. Eighth converts measurements to actual dissolved-oxygen concentration using oxygen-saturation values for fresh and salt water. Ninth and tenth perform calculations of two other common titrimetric analyses. Eleventh calculates oil and grease residue from water sample. Last two use spectrophotometric measurements of absorbance at different wavelengths and residue measurements. Programs included in collection written for Hewlett-Packard 2647F in H-P BASIC.
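
    As an example of the kind of routine the collection automates, the third program's temperature correction can be sketched as below. The linear coefficient of about 1.91 % per °C is a common convention for referencing conductivity to 25 °C, not a detail taken from the listing:

```python
# Hedged sketch: convert a conductivity reading at temperature t (deg C) to
# specific conductance at the 25 deg C reference, using a linear correction.

def specific_conductance(ec_measured, temp_c, alpha=0.0191):
    return ec_measured / (1.0 + alpha * (temp_c - 25.0))

sc = specific_conductance(500.0, 20.0)  # 500 uS/cm measured at 20 deg C
```

    A reading taken below 25 °C is corrected upward, here to roughly 553 µS/cm.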

  5. Patient safety: numerical skills and drug calculation abilities of nursing students and registered nurses.

    PubMed

    McMullan, Miriam; Jones, Ray; Lea, Susan

    2010-04-01

    This paper is a report of a correlational study of the relations of age, status, experience and drug calculation ability to numerical ability of nursing students and Registered Nurses. Competent numerical and drug calculation skills are essential for nurses as mistakes can put patients' lives at risk. A cross-sectional study was carried out in 2006 in one United Kingdom university. Validated numerical and drug calculation tests were given to 229 second year nursing students and 44 Registered Nurses attending a non-medical prescribing programme. The numeracy test was failed by 55% of students and 45% of Registered Nurses, while 92% of students and 89% of nurses failed the drug calculation test. Independent of status or experience, older participants (> or = 35 years) were statistically significantly more able to perform numerical calculations. There was no statistically significant difference between nursing students and Registered Nurses in their overall drug calculation ability, but nurses were statistically significantly more able than students to perform basic numerical calculations and calculations for solids, oral liquids and injections. Both nursing students and Registered Nurses were statistically significantly more able to perform calculations for solids, oral liquids and injections than calculations for drug percentages, drip and infusion rates. To prevent deskilling, Registered Nurses should continue to practise and refresh all the different types of drug calculations as often as possible with regular (self)-testing of their ability. Time should be set aside in curricula for nursing students to learn how to perform basic numerical and drug calculations. This learning should be reinforced through regular practice and assessment.
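
    One of the calculation types such tests cover, the dose volume for an oral liquid or injection, commonly follows the (desired dose / stock strength) × stock volume formula; the numbers below are illustrative, not from the study:

```python
# Hedged sketch of a basic drug calculation of the kind tested in the study.

def dose_volume(desired_mg, stock_mg, stock_ml):
    """Volume to give: (desired dose / stock strength) * stock volume."""
    return (desired_mg / stock_mg) * stock_ml

vol = dose_volume(125.0, 250.0, 5.0)  # 125 mg ordered; stock is 250 mg in 5 mL
```

    This order works out to 2.5 mL.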

  6. Calculating Student Grades.

    ERIC Educational Resources Information Center

    Allswang, John M.

    1986-01-01

    This article provides two short microcomputer gradebook programs. The programs, written in BASIC for the IBM-PC and Apple II, provide statistical information about class performance and calculate grades either on a normal distribution or based on teacher-defined break points. (JDH)
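
    The break-point mode described in the article reduces to mapping each score against a teacher-supplied list of cut-offs. A minimal sketch in Python (rather than the original BASIC), with invented cut-offs:

```python
# Hedged sketch of teacher-defined break-point grading.

def letter_grade(score, breaks=((90, "A"), (80, "B"), (70, "C"), (60, "D"))):
    for cutoff, grade in breaks:  # cut-offs listed from highest down
        if score >= cutoff:
            return grade
    return "F"

grades = [letter_grade(s) for s in (95, 84, 71, 58)]
```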

  7. Remedial Instruction to Enhance Mathematical Ability of Dyscalculics

    ERIC Educational Resources Information Center

    Kumar, S. Praveen; Raja, B. William Dharma

    2012-01-01

    The ability to do arithmetic calculations is essential to school-based learning and skill development in an information-rich society. Arithmetic is a basic academic skill needed for learning; it includes skills such as counting, calculating and reasoning that are used in performing mathematical calculations. Unfortunately, many…

  8. Principles and application of shock-tubes and shock tunnels

    NASA Technical Reports Server (NTRS)

    Ried, R. C.; Clauss, H. G., Jr.

    1963-01-01

    The principles, theoretical flow equations, calculation techniques, limitations and practical performance characteristics of basic and high performance shock tubes and shock tunnels are presented. Selected operating curves are included.

  9. Evaluation of a Numeracy Intervention Program Focusing on Basic Numerical Knowledge and Conceptual Knowledge: A Pilot Study.

    ERIC Educational Resources Information Center

    Kaufmann, Liane; Handl, Pia; Thony, Brigitte

    2003-01-01

    In this study, six elementary grade children with developmental dyscalculia were trained individually and in small group settings with a one-semester program stressing basic numerical knowledge and conceptual knowledge. All the children showed considerable and partly significant performance increases on all calculation components. Results suggest…

  10. Extreme Basicity of Biguanide Drugs in Aqueous Solutions: Ion Transfer Voltammetry and DFT Calculations.

    PubMed

    Langmaier, Jan; Pižl, Martin; Samec, Zdeněk; Záliš, Stanislav

    2016-09-22

    Ion transfer voltammetry is used to estimate the acid dissociation constants Ka1 and Ka2 of the mono- and diprotonated forms of the biguanide drugs metformin (MF), phenformin (PF), and 1-phenylbiguanide (PB) in an aqueous solution. Measurements gave the pKa1 values for MFH(+), PFH(+), and PBH(+) characterizing the basicity of MF, PF, and PB, which are significantly higher than those reported in the literature. As a result, the monoprotonated forms of these biguanides should prevail in a considerably broader range of pH 1-15 (MFH(+), PFH(+)) and 2-13 (PBH(+)). DFT calculations with solvent correction were performed for possible tautomeric forms of neutral, monoprotonated, and diprotonated species. Extreme basicity of all drugs is confirmed by DFT calculations of pKa1 for the most stable tautomers of the neutral and protonated forms with explicit water molecules in the first solvation sphere included.
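
    The claim that the monoprotonated form prevails over such a wide pH range follows from the speciation of a diprotic base: with pKa2 << pH << pKa1, the monoprotonated fraction is near one. The sketch below uses placeholder pKa values, not the paper's measured ones:

```python
# Hedged sketch: fraction of a biguanide present as the monoprotonated form
# BH+ given the two acid dissociation constants (Ka1 for BH+ <-> B + H+,
# Ka2 for BH2(2+) <-> BH+ + H+). The pKa values used here are illustrative.

def mono_fraction(ph, pka1, pka2):
    h = 10.0 ** (-ph)
    ka1 = 10.0 ** (-pka1)
    ka2 = 10.0 ** (-pka2)
    # [B]/[BH+] = Ka1/h and [BH2(2+)]/[BH+] = h/Ka2
    return 1.0 / (ka1 / h + 1.0 + h / ka2)

f_mid = mono_fraction(7.0, 14.0, 2.0)    # mid-range pH: BH+ dominates
f_high = mono_fraction(16.0, 14.0, 2.0)  # above pKa1: neutral base dominates
```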

  11. BASIC Data Manipulation And Display System (BDMADS)

    NASA Technical Reports Server (NTRS)

    Szuch, J. R.

    1983-01-01

    BDMADS, a BASIC Data Manipulation and Display System, is a collection of software programs that run on an Apple II Plus personal computer. BDMADS provides a user-friendly environment for the engineer in which to perform scientific data processing. The computer programs and their use are described. Jet engine performance calculations are used to illustrate the use of BDMADS. Source listings of the BDMADS programs are provided and should permit users to customize the programs for their particular applications.

  12. TI-59 PROGRAMMABLE CALCULATOR PROGRAMS FOR IN-STACK OPACITY, VENTURI SCRUBBERS, AND ELECTROSTATIC PRECIPITATORS

    EPA Science Inventory

    The report explains the basic concepts of in-stack opacity as measured by in-stack opacity monitors. Also included are calculator programs that model the performance of venturi scrubbers and electrostatic precipitators. The effect of particulate control devices on in-stack opacit...

  13. Radiation treatment of pharmaceuticals

    NASA Astrophysics Data System (ADS)

    Dám, A. M.; Gazsó, L. G.; Kaewpila, S.; Maschek, I.

    1996-03-01

    Product-specific doses were calculated for pharmaceuticals to be radiation treated. Radio-pasteurization doses were determined for some heat-sensitive pharmaceutical basic materials (pancreaton, neopancreatin, neopancreatin USP, duodenum extract). Using the new recommendation (ISO standards, Method 1), dose calculations were performed and radiation sterilization doses were determined for aprotinine and heparine Na.

  14. Multi-GPU hybrid programming accelerated three-dimensional phase-field model in binary alloy

    NASA Astrophysics Data System (ADS)

    Zhu, Changsheng; Liu, Jieqiong; Zhu, Mingfang; Feng, Li

    2018-03-01

    In simulations of dendritic growth, computational efficiency and problem scale strongly influence the usefulness of a three-dimensional phase-field model. Seeking a high-performance calculation method that improves computational efficiency and expands the tractable problem scale is therefore of great significance for research on material microstructure. A high-performance calculation method based on an MPI+CUDA hybrid programming model is introduced. Multiple GPUs are used to implement quantitative numerical simulations of a three-dimensional phase-field model in a binary alloy under coupled multi-physical processes. The acceleration achieved by different numbers of GPU nodes at different calculation scales is explored. On the foundation of the introduced multi-GPU calculation model, two optimization schemes are proposed: non-blocking communication, and overlap of MPI communication with GPU computing. The results of the two optimization schemes and the basic multi-GPU model are compared. The calculation results show that the multi-GPU model clearly improves the computational efficiency of the three-dimensional phase-field simulation, achieving a 13-fold speedup over a single GPU, while the problem scale has been expanded to 8193. Both optimization schemes are shown to be feasible, and the overlap of MPI and GPU computing performs better, running 1.7 times faster than the basic multi-GPU model when 21 GPUs are used.
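
    The paper's model is not reproduced here, but the kind of stencil update a phase-field solver distributes across GPUs can be illustrated with a deliberately simplified one-dimensional Allen-Cahn-style step in plain Python; the equation, parameters, and grid are all invented for illustration:

```python
# Hedged 1-D sketch of the stencil work a phase-field solver parallelizes:
# each point needs only its neighbors, which is what makes domain
# decomposition across GPUs (with halo exchange) effective.

def step(phi, dx=1.0, dt=0.1, eps2=1.0):
    new = phi[:]
    for i in range(1, len(phi) - 1):
        lap = (phi[i - 1] - 2.0 * phi[i] + phi[i + 1]) / (dx * dx)
        # double-well driving term pushes phi toward the pure phases 0 and 1
        drive = phi[i] * (1.0 - phi[i]) * (phi[i] - 0.5)
        new[i] = phi[i] + dt * (eps2 * lap + drive)
    return new

phi = [1.0] * 8 + [0.0] * 8  # sharp interface between two phases
for _ in range(10):
    phi = step(phi)
```

    In the multi-GPU setting, each device would own a slab of the grid and exchange one-cell halos per step; overlapping that exchange with interior computation is what the second optimization scheme exploits.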

  15. Space-Plane Spreadsheet Program

    NASA Technical Reports Server (NTRS)

    Mackall, Dale

    1993-01-01

    Basic Hypersonic Data and Equations (HYPERDATA) spreadsheet computer program provides data gained from three analyses of performance of space plane. Equations used to perform analyses derived from Newton's second law of physics, derivation included. First analysis is parametric study of some basic factors affecting ability of space plane to reach orbit. Second includes calculation of thickness of spherical fuel tank. Third produces ratio between volume of fuel and total mass for each of various aircraft. HYPERDATA intended for use on Macintosh(R) series computers running Microsoft Excel 3.0.
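
    The second analysis, sizing a spherical fuel tank wall, presumably rests on the standard thin-wall membrane-stress result t = p r / (2 σ); the sketch below uses that formula with invented inputs:

```python
# Hedged sketch: minimum wall thickness of a thin-walled spherical pressure
# vessel, t = p * r / (2 * sigma_allow). Inputs are illustrative only.

def sphere_wall_thickness(pressure_pa, radius_m, allowable_stress_pa):
    return pressure_pa * radius_m / (2.0 * allowable_stress_pa)

t = sphere_wall_thickness(300e3, 2.0, 150e6)  # 300 kPa, 2 m radius, 150 MPa
```

    These inputs give a 2 mm wall; doubling the pressure doubles the required thickness.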

  16. Undergraduate paramedic students cannot do drug calculations.

    PubMed

    Eastwood, Kathryn; Boyle, Malcolm J; Williams, Brett

    2012-01-01

    Previous investigation of drug calculation skills of qualified paramedics has highlighted poor mathematical ability with no published studies having been undertaken on undergraduate paramedics. There are three major error classifications. Conceptual errors involve an inability to formulate an equation from information given, arithmetical errors involve an inability to operate a given equation, and finally computation errors are simple errors of addition, subtraction, division and multiplication. The objective of this study was to determine if undergraduate paramedics at a large Australian university could accurately perform common drug calculations and basic mathematical equations normally required in the workplace. A cross-sectional study methodology using a paper-based questionnaire was administered to undergraduate paramedic students to collect demographic data, student attitudes regarding their drug calculation performance, and answers to a series of basic mathematical and drug calculation questions. Ethics approval was granted. The mean score of correct answers was 39.5% with one student scoring 100%, 3.3% of students (n=3) scoring greater than 90%, and 63% (n=58) scoring 50% or less, despite 62% (n=57) of the students stating they 'did not have any drug calculations issues'. On average those who completed a minimum of year 12 Specialist Maths achieved scores over 50%. Conceptual errors made up 48.5%, arithmetical 31.1% and computational 17.4%. This study suggests undergraduate paramedics have deficiencies in performing accurate calculations, with conceptual errors indicating a fundamental lack of mathematical understanding. The results suggest an unacceptable level of mathematical competence to practice safely in the unpredictable prehospital environment.
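
    A representative prehospital drug calculation of the kind such questionnaires include is the infusion drip rate, drops/min = (volume in mL × drop factor) / time in minutes; the values below are invented, and this exact item's presence on the study's paper is an assumption:

```python
# Hedged sketch of a common infusion drip-rate calculation.

def drip_rate(volume_ml, drop_factor_gtt_per_ml, time_min):
    return volume_ml * drop_factor_gtt_per_ml / time_min

rate = drip_rate(1000.0, 20.0, 480.0)  # 1 L over 8 h with a 20 gtt/mL set
```

    This works out to about 42 drops per minute.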

  17. Basic numerical processing, calculation, and working memory in children with dyscalculia and/or ADHD symptoms.

    PubMed

    Kuhn, Jörg-Tobias; Ise, Elena; Raddatz, Julia; Schwenk, Christin; Dobel, Christian

    2016-09-01

    Deficits in basic numerical skills, calculation, and working memory have been found in children with developmental dyscalculia (DD) as well as children with attention-deficit/hyperactivity disorder (ADHD). This paper investigates cognitive profiles of children with DD and/or ADHD symptoms (AS) in a double dissociation design to obtain a better understanding of the comorbidity of DD and ADHD. Children with DD-only (N = 33), AS-only (N = 16), comorbid DD+AS (N = 20), and typically developing controls (TD, N = 40) were assessed on measures of basic numerical processing, calculation, working memory, processing speed, and neurocognitive measures of attention. Children with DD (DD, DD+AS) showed deficits in all basic numerical skills, calculation, working memory, and sustained attention. Children with AS (AS, DD+AS) displayed more selective difficulties in dot enumeration, subtraction, verbal working memory, and processing speed. Also, they generally performed more poorly in neurocognitive measures of attention, especially alertness. Children with DD+AS mostly showed an additive combination of the deficits associated with DD-only and AS-only, except for subtraction tasks, in which they were less impaired than expected. DD and AS appear to be related to largely distinct patterns of cognitive deficits, which are present in combination in children with DD+AS.

  18. Types of Learning Problems

    MedlinePlus

    ... Dyscalculia is defined as difficulty performing mathematical calculations. Math is problematic for many students, but dyscalculia may prevent a teenager from grasping even basic math concepts. Auditory Memory and Processing Disabilities Auditory memory ...

  19. MCBooster: a library for fast Monte Carlo generation of phase-space decays on massively parallel platforms.

    NASA Astrophysics Data System (ADS)

    Alves Júnior, A. A.; Sokoloff, M. D.

    2017-10-01

    MCBooster is a header-only, C++11-compliant library that provides routines to generate and perform calculations on large samples of phase space Monte Carlo events. To achieve superior performance, MCBooster is capable of performing most of its calculations in parallel using CUDA- and OpenMP-enabled devices. MCBooster is built on top of the Thrust library and runs on Linux systems. This contribution summarizes the main features of MCBooster. A basic description of the user interface and some examples of applications are provided, along with measurements of performance in a variety of environments.
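
    The phase-space calculations MCBooster parallelizes build on standard two-body kinematics; for instance, the daughter momentum in the rest frame of a decay M → m1 + m2 follows from the Källén triangle function. A small sketch in plain Python (rather than the library's CUDA/Thrust backends), with an illustrative D0 → K− π+ mass assignment:

```python
import math

# Hedged sketch: magnitude of either daughter's momentum in a two-body decay
# M -> m1 + m2, p = sqrt(lambda(M^2, m1^2, m2^2)) / (2 M), using the factored
# form of the Kallen triangle function. Masses in GeV/c^2.

def two_body_momentum(M, m1, m2):
    lam = (M * M - (m1 + m2) ** 2) * (M * M - (m1 - m2) ** 2)
    return math.sqrt(lam) / (2.0 * M)

p = two_body_momentum(1.86484, 0.49368, 0.13957)  # D0 -> K- pi+
```

    Here p comes out near 0.861 GeV/c; at threshold (m1 + m2 = M) the momentum vanishes.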

  20. Undergraduate paramedic students cannot do drug calculations

    PubMed Central

    Eastwood, Kathryn; Boyle, Malcolm J; Williams, Brett

    2012-01-01

    BACKGROUND: Previous investigation of drug calculation skills of qualified paramedics has highlighted poor mathematical ability with no published studies having been undertaken on undergraduate paramedics. There are three major error classifications. Conceptual errors involve an inability to formulate an equation from information given, arithmetical errors involve an inability to operate a given equation, and finally computation errors are simple errors of addition, subtraction, division and multiplication. The objective of this study was to determine if undergraduate paramedics at a large Australian university could accurately perform common drug calculations and basic mathematical equations normally required in the workplace. METHODS: A cross-sectional study methodology using a paper-based questionnaire was administered to undergraduate paramedic students to collect demographic data, student attitudes regarding their drug calculation performance, and answers to a series of basic mathematical and drug calculation questions. Ethics approval was granted. RESULTS: The mean score of correct answers was 39.5% with one student scoring 100%, 3.3% of students (n=3) scoring greater than 90%, and 63% (n=58) scoring 50% or less, despite 62% (n=57) of the students stating they ‘did not have any drug calculations issues’. On average those who completed a minimum of year 12 Specialist Maths achieved scores over 50%. Conceptual errors made up 48.5%, arithmetical 31.1% and computational 17.4%. CONCLUSIONS: This study suggests undergraduate paramedics have deficiencies in performing accurate calculations, with conceptual errors indicating a fundamental lack of mathematical understanding. The results suggest an unacceptable level of mathematical competence to practice safely in the unpredictable prehospital environment. PMID:25215067

  1. DOE Fundamentals Handbook: Mathematics, Volume 1

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Not Available

    1992-06-01

    The Mathematics Fundamentals Handbook was developed to assist nuclear facility operating contractors in providing operators, maintenance personnel, and the technical staff with the fundamentals training necessary to ensure a basic understanding of mathematics and its application to facility operation. The handbook includes a review of introductory mathematics and the concepts and functional use of algebra, geometry, trigonometry, and calculus. Word problems, equations, calculations, and practical exercises that require the use of each of the mathematical concepts are also presented. This information will provide personnel with a foundation for understanding and performing basic mathematical calculations that are associated with various DOE nuclear facility operations.

  2. DOE Fundamentals Handbook: Mathematics, Volume 2

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Not Available

    1992-06-01

    The Mathematics Fundamentals Handbook was developed to assist nuclear facility operating contractors in providing operators, maintenance personnel, and the technical staff with the fundamentals training necessary to ensure a basic understanding of mathematics and its application to facility operation. The handbook includes a review of introductory mathematics and the concepts and functional use of algebra, geometry, trigonometry, and calculus. Word problems, equations, calculations, and practical exercises that require the use of each of the mathematical concepts are also presented. This information will provide personnel with a foundation for understanding and performing basic mathematical calculations that are associated with various DOE nuclear facility operations.

  3. Numeracy skills of undergraduate entry level nurse, midwife and pharmacy students.

    PubMed

    Arkell, Sharon; Rutter, Paul M

    2012-07-01

    That healthcare professionals should be able to perform basic numeracy, and therefore dose calculations, competently is without question. Research has primarily focused on nurses', and to a lesser extent doctors', ability to perform this function, with findings highlighting poor aptitude. Studies involving pharmacists are few, but findings are more positive than for other healthcare staff. To determine first-year nursing, midwifery and pharmacy students' ability to perform basic numeracy calculations. All new undergraduate entrants to nursing, midwifery and pharmacy sat a formative numeracy test within the first two weeks of their first year of study. Test results showed that pharmacy students significantly outperformed midwifery and nursing students on all questions. In turn, midwifery students outperformed nursing students, although this difference did not reach significance. Regarding each cohort's general attitude towards mathematics, pharmacy students were more positive and confident than midwifery and nursing students. Pharmacy students expressed greater levels of enjoyment and confidence in performing mathematics and correspondingly showed the greatest proficiency. In contrast, nursing students and, to a lesser extent, midwifery students showed poor performance and low confidence levels. Copyright © 2012 Elsevier Ltd. All rights reserved.

  4. Calculated performance, stability and maneuverability of high-speed tilting-prop-rotor aircraft

    NASA Technical Reports Server (NTRS)

    Johnson, Wayne; Lau, Benton H.; Bowles, Jeffrey V.

    1986-01-01

    The feasibility of operating tilting-prop-rotor aircraft at high speeds is examined by calculating the performance, stability, and maneuverability of representative configurations. The rotor performance is examined in high-speed cruise and in hover. The whirl-flutter stability of the coupled-wing and rotor motion is calculated in the cruise mode. Maneuverability is examined in terms of the rotor-thrust limit during turns in helicopter configuration. Rotor airfoils, rotor-hub configuration, wing airfoil, and airframe structural weights representing demonstrated advanced technology are discussed. Key rotor and airframe parameters are optimized for high-speed performance and stability. The basic aircraft-design parameters are optimized for minimum gross weight. To provide a focus for the calculations, two high-speed tilt-rotor aircraft are considered: a 46-passenger civil transport and an air-combat/escort fighter, both with design speeds of about 400 knots. It is concluded that such high-speed tilt-rotor aircraft are quite practical.

  5. Workbook, Basic Mathematics and Wastewater Processing Calculations.

    ERIC Educational Resources Information Center

    New York State Dept. of Environmental Conservation, Albany.

    This workbook serves as a self-learning guide to basic mathematics and treatment plant calculations and also as a reference and source book for the mathematics of sewage treatment and processing. In addition to basic mathematics, the workbook discusses processing and process control, laboratory calculations and efficiency calculations necessary in…

  6. Assessing pediatrics residents' mathematical skills for prescribing medication: a need for improved training.

    PubMed

    Glover, Mark L; Sussmane, Jeffrey B

    2002-10-01

    To evaluate residents' skills in performing basic mathematical calculations used for prescribing medications to pediatric patients. In 2001, a test of ten questions on basic calculations was given to first-, second-, and third-year residents at Miami Children's Hospital in Florida. Four additional questions were included to obtain the residents' levels of training, specific pediatrics intensive care unit (PICU) experience, and whether or not they routinely double-checked doses and adjusted them for each patient's weight. The test was anonymous and calculators were permitted. The overall score and the score for each resident class were calculated. Twenty-one residents participated. The overall average test score and the mean test score of each resident class was less than 70%. Second-year residents had the highest mean test scores, although there was no significant difference between the classes of residents (p =.745) or relationship between the residents' PICU experiences and their exam scores (p =.766). There was no significant difference between residents' levels of training and whether they double-checked their calculations (p =.633) or considered each patient's weight relative to the dose prescribed (p =.869). Seven residents committed tenfold dosing errors, and one resident committed a 1,000-fold dosing error. Pediatrics residents need to receive additional education in performing the calculations needed to prescribe medications. In addition, residents should be required to demonstrate these necessary mathematical skills before they are allowed to prescribe medications.
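
    The weight-based arithmetic behind such test items, plus the kind of sanity check that catches the tenfold errors reported here, can be sketched as below; the numbers and the plausible-dose band are invented for illustration, not clinical guidance:

```python
# Hedged sketch: weight-based pediatric dose plus a crude range check of the
# kind that would flag a tenfold (decimal-point) error.

def weight_based_dose(mg_per_kg, weight_kg):
    return mg_per_kg * weight_kg

def plausible(dose_mg, low_mg, high_mg):
    return low_mg <= dose_mg <= high_mg

dose = weight_based_dose(15.0, 12.0)              # 15 mg/kg for a 12 kg child
ok = plausible(dose, 50.0, 500.0)                 # invented plausibility band
tenfold_ok = plausible(dose * 10.0, 50.0, 500.0)  # the classic slip: 10x dose
```

    The correct 180 mg order passes the band check, while the tenfold slip (1800 mg) fails it.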

  7. Using an Intelligent Tutor and Math Fluency Training to Improve Math Performance

    ERIC Educational Resources Information Center

    Arroyo, Ivon; Royer, James M.; Woolf, Beverly P.

    2011-01-01

    This article integrates research in intelligent tutors with psychology studies of memory and math fluency (the speed to retrieve or calculate answers to basic math operations). It describes the impact of computer software designed to improve either strategic behavior or math fluency. Both competencies are key to improved performance and both…

  8. A Visual Basic program for analyzing oedometer test results and evaluating intergranular void ratio

    NASA Astrophysics Data System (ADS)

    Monkul, M. Murat; Önal, Okan

    2006-06-01

    A Visual Basic program (POCI) is proposed and explained for analyzing oedometer test results. Oedometer test results are of vital importance from a geotechnical point of view, since settlement requirements usually control the design of foundations. The software POCI was developed to perform the necessary calculations for the conventional oedometer test. The change in global void ratio and the stress-strain characteristics can be observed both numerically and graphically. It enables users to calculate parameters such as the coefficient of consolidation, compression index, recompression index, and preconsolidation pressure, depending on the type and stress history of the soil. Moreover, it adopts the concept of the intergranular void ratio, which may be important especially in the compression behavior of sandy soils. POCI shows the variation of the intergranular void ratio and also enables users to calculate a granular compression index.
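
    The intergranular void ratio that POCI reports is commonly defined by treating the fines as if they were part of the void space, e_g = (e + f_c) / (1 - f_c), with f_c the fines content as a fraction of total solids. Using this common definition (an assumption here, not a detail confirmed by the abstract):

```python
# Hedged sketch of the intergranular void ratio: fines are counted as voids,
# so only the coarse-grain skeleton is treated as solids.

def intergranular_void_ratio(e_global, fines_fraction):
    return (e_global + fines_fraction) / (1.0 - fines_fraction)

e_g = intergranular_void_ratio(0.70, 0.20)  # global e = 0.70 with 20 % fines
```

    The same soil looks considerably looser (e_g = 1.125) when judged by its coarse skeleton alone, which is why the measure matters for sandy soils with fines.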

  9. Neutronic Calculation Analysis for CN HCCB TBM-Set

    NASA Astrophysics Data System (ADS)

    Cao, Qixiang; Zhao, Fengchao; Zhao, Zhou; Wu, Xinghua; Li, Zaixin; Wang, Xiaoyu; Feng, Kaiming

    2015-07-01

    Using the Monte Carlo transport code MCNP, neutronic calculation analysis for China helium cooled ceramic breeder test blanket module (CN HCCB TBM) and the associated shield block (together called TBM-set) has been carried out based on the latest design of HCCB TBM-set and C-lite model. Key nuclear responses of HCCB TBM-set, such as the neutron flux, tritium production rate, nuclear heating and radiation damage, have been obtained and discussed. These nuclear performance data can be used as the basic input data for other analyses of HCCB TBM-set, such as thermal-hydraulics, thermal-mechanics and safety analysis. This work was supported by the Major State Basic Research Development Program of China (973 Program) (No. 2013GB108000).

  10. Spatial structure and electronic spectrum of TiSi_n^- clusters (n = 6-18)

    NASA Astrophysics Data System (ADS)

    Borshch, N. A.; Pereslavtseva, N. S.; Kurganskii, S. I.

    2014-10-01

    Results from optimizing the spatial structure and calculated electronic spectra of anion clusters TiSi_n^- (n = 6-18) are presented. Calculations are performed within density functional theory. Spatial structures of clusters detected experimentally are established by comparing the calculated and experimental data. It is shown that prismatic and fullerene-like structures are the most energetically favorable ones for TiSi_n^- clusters. It is concluded that these structures are basic when building clusters with close numbers of silicon atoms.

  11. Numerical analysis of ion wind flow using space charge for optimal design

    NASA Astrophysics Data System (ADS)

    Ko, Han Seo; Shin, Dong Ho; Baek, Soo Hong

    2014-11-01

    Ion wind flow has been widely studied for its advantages as a microfluidic device. However, it is very difficult to predict the performance of ion wind flow under various conditions because of the complicated electrohydrodynamic phenomena involved. Thus, reliable numerical modeling is required to design an optimal ion wind generator and to calculate the velocity of the ion wind for the proper performance. In this study, the numerical modeling of the ion wind has been modified and newly defined to calculate the velocity of the ion wind flow by combining three basic models: electrostatics, electrodynamics, and fluid dynamics. The model includes the presence of initial space charges in order to calculate the energy transferred between space charges and air molecules using a developed space-charge correlation. The simulation has been performed for a pin-to-parallel-plate electrode geometry. Finally, the results of the simulation have been compared with experimental data for the ion wind velocity to confirm the accuracy of the modified numerical modeling and to obtain the optimal design of the ion wind generator. This work was supported by the Basic Science Research Program through the National Research Foundation of Korea (NRF) funded by the Korean government (MEST) (No. 2013R1A2A2A01068653).

  12. Basic Performance Test of a Prototype PET Scanner Using CdTe Semiconductor Detectors

    NASA Astrophysics Data System (ADS)

    Ueno, Y.; Morimoto, Y.; Tsuchiya, K.; Yanagita, N.; Kojima, S.; Ishitsu, T.; Kitaguchi, H.; Kubo, N.; Zhao, S.; Tamaki, N.; Amemiya, K.

    2009-02-01

    A prototype positron emission tomography (PET) scanner using CdTe semiconductor detectors was developed, and its initial evaluation was conducted. The scanner was configured to form a single detector ring with six separated detector units, each having 96 detectors arranged in three detector layers. The field of view (FOV) size was 82 mm in diameter. Basic physical performance indicators of the scanner were measured through phantom studies and confirmed by rat imaging. The system-averaged energy resolution and timing resolution were 5.4% and 6.0 ns (each in FWHM) respectively. Spatial resolution measured at FOV center was 2.6 mm FWHM. Scatter fraction was measured and calculated in a National Electrical Manufacturers Association (NEMA)-fashioned manner using a 3-mm diameter hot capillary in a water-filled 80-mm diameter acrylic cylinder. The calculated result was 3.6%. Effect of depth of interaction (DOI) measurement was demonstrated by comparing hot-rod phantom images reconstructed with and without DOI information. Finally, images of a rat myocardium and an implanted tumor were visually assessed, and the imaging performance was confirmed.

  13. General airplane performance

    NASA Technical Reports Server (NTRS)

    Rockefeller, W. C.

    1939-01-01

    Equations have been developed for the analysis of the performance of the ideal airplane, leading to an approximate physical interpretation of the performance problem. The basic sea-level airplane parameters have been generalized to altitude parameters, and a new parameter has been introduced and physically interpreted. The performance analysis for actual airplanes has been obtained in terms of the equivalent ideal airplane so that the charts developed for use in practical calculations will for the most part apply to any type of engine-propeller combination and system of control, the only additional material required being the actual engine and propeller curves for the propulsion unit. Finally, a more exact method for the calculation of the climb characteristics for the constant-speed controllable propeller is presented in the appendix.

  14. CALCMIN - an EXCEL™ Visual Basic application for calculating mineral structural formulae from electron microprobe analyses

    NASA Astrophysics Data System (ADS)

    Brandelik, Andreas

    2009-07-01

    CALCMIN, an open source Visual Basic program, was implemented in EXCEL™. The program was primarily developed to support geoscientists in their routine task of calculating structural formulae of minerals on the basis of chemical analysis mainly obtained by electron microprobe (EMP) techniques. Calculation programs for various minerals are already included in the form of sub-routines. These routines are arranged in separate modules containing a minimum of code. The architecture of CALCMIN allows the user to easily develop new calculation routines or modify existing routines with little knowledge of programming techniques. By means of a simple mouse-click, the program automatically generates a rudimentary framework of code using the object model of the Visual Basic Editor (VBE). Within this framework simple commands and functions, which are provided by the program, can be used, for example, to perform various normalization procedures or to output the results of the computations. For the clarity of the code, element symbols are used as variables initialized by the program automatically. CALCMIN does not set any boundaries in complexity of the code used, resulting in a wide range of possible applications. Thus, matrix and optimization methods can be included, for instance, to determine end member contents for subsequent thermodynamic calculations. Diverse input procedures are provided, such as the automated read-in of output files created by the EMP. Furthermore, a subsequent filter routine enables the user to extract specific analyses in order to use them for a corresponding calculation routine. An event-driven, interactive operating mode was selected for easy application of the program. CALCMIN leads the user from the beginning to the end of the calculation process.

  15. GaAs Solar Cell Radiation Handbook

    NASA Technical Reports Server (NTRS)

    Anspaugh, B. E.

    1996-01-01

    The handbook discusses the history of GaAs solar cell development, presents equations useful for working with GaAs solar cells, describes commonly used instrumentation techniques for assessing radiation effects in solar cells and fundamental processes occurring in solar cells exposed to ionizing radiation, and explains why radiation decreases the electrical performance of solar cells. Three basic elements required to perform solar array degradation calculations are developed and then used to carry out a sample calculation: degradation data for GaAs solar cells after irradiation with 1 MeV electrons at normal incidence; relative damage coefficients for omnidirectional electron and proton exposure; and the definition of the space radiation environment for the orbit of interest.
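
    In outline, the three-element procedure combines environment fluences and damage coefficients into an equivalent 1 MeV electron fluence, then evaluates a degradation curve. The sketch below is only a schematic: the damage coefficients, fluences, and semi-log fit constants are placeholders, not handbook values.

```python
import math

def equivalent_fluence(components):
    """Sum (fluence * relative damage coefficient) over environment
    components to get an equivalent 1 MeV electron fluence."""
    return sum(phi * d for phi, d in components)

def remaining_power(phi_eq, C=0.05, phi_x=5e13):
    """A common semi-log degradation fit, P/P0 = 1 - C*log10(1 + phi/phi_x);
    the constants C and phi_x here are illustrative only."""
    return 1.0 - C * math.log10(1.0 + phi_eq / phi_x)

# electrons plus protons (proton damage weighted ~1000x in this toy case)
phi_eq = equivalent_fluence([(1e14, 1.0), (1e11, 1000.0)])
p_over_p0 = remaining_power(phi_eq)
```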

  16. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cheng, Jing-Jy; Flood, Paul E.; LePoire, David

    In this report, the results generated by RESRAD-RDD version 2.01 are compared with those produced by RESRAD-RDD version 1.7 for different scenarios with different sets of input parameters. RESRAD-RDD version 1.7 is spreadsheet-driven, performing calculations with Microsoft Excel spreadsheets. RESRAD-RDD version 2.01 revamped version 1.7 by using command-driven programs designed with Visual Basic.NET to direct calculations with data saved in a Microsoft Access database, and by re-facing the graphical user interface (GUI) to provide more flexibility and choices in guideline derivation. Because version 1.7 and version 2.01 perform the same calculations, the comparison of their results serves as verification of both versions. The verification covered calculation results for 11 radionuclides included in both versions: Am-241, Cf-252, Cm-244, Co-60, Cs-137, Ir-192, Po-210, Pu-238, Pu-239, Ra-226, and Sr-90. First, all nuclide-specific data used in both versions were compared to ensure that they are identical. Then generic operational guidelines and measurement-based radiation doses or stay times associated with a specific operational guideline group were calculated with both versions using different sets of input parameters, and the results obtained with the same set of input parameters were compared. A total of 12 sets of input parameters were used for the verification, and the comparison was performed for each operational guideline group, from A to G, sequentially. The verification shows that RESRAD-RDD version 1.7 and RESRAD-RDD version 2.01 generate almost identical results; the slight differences could be attributed to differences in numerical precision between Microsoft Excel and Visual Basic.NET. RESRAD-RDD version 2.01 allows the selection of different units for use in reporting calculation results. The results in SI units were obtained and compared with the base results (in traditional units) used for comparison with version 1.7. The comparison shows that RESRAD-RDD version 2.01 correctly reports calculation results in the unit specified in the GUI.

  17. An approximate methods approach to probabilistic structural analysis

    NASA Technical Reports Server (NTRS)

    Mcclung, R. C.; Millwater, H. R.; Wu, Y.-T.; Thacker, B. H.; Burnside, O. H.

    1989-01-01

    A major research and technology program in Probabilistic Structural Analysis Methods (PSAM) is currently being sponsored by the NASA Lewis Research Center with Southwest Research Institute as the prime contractor. This program is motivated by the need to accurately predict structural response in an environment where the loadings, the material properties, and even the structure may be considered random. The heart of PSAM is a software package which combines advanced structural analysis codes with a fast probability integration (FPI) algorithm for the efficient calculation of stochastic structural response. The basic idea of PSAM is simple: make an approximate calculation of system response, including calculation of the associated probabilities, with minimal computation time and cost, based on a simplified representation of the geometry, loads, and material. The resulting deterministic solution should give a reasonable and realistic description of performance-limiting system responses, although some error will be inevitable. If the simple model has correctly captured the basic mechanics of the system, however, including the proper functional dependence of stress, frequency, etc. on design parameters, then the response sensitivities calculated may be of significantly higher accuracy.
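
    The "approximate response plus probabilities" idea can be illustrated with a crude Monte Carlo stand-in. Note the caveats: PSAM itself uses fast probability integration rather than sampling, and the stress model and load/strength distributions below are invented for illustration.

```python
import random

def stress(load_kn, area_m2):
    """Simplified response model: uniaxial stress = load / area."""
    return load_kn / area_m2

def failure_probability(n=200_000, seed=1):
    """Probability that the simplified stress exceeds a random strength,
    with load ~ N(100, 15) kN and strength ~ N(150, 10) kN/m^2 over a
    unit area (toy numbers, not from the program)."""
    rng = random.Random(seed)
    failures = sum(
        stress(rng.gauss(100.0, 15.0), 1.0) > rng.gauss(150.0, 10.0)
        for _ in range(n)
    )
    return failures / n
```

    For these toy distributions the exact answer is P(Z > 50/sqrt(15^2 + 10^2)), roughly 0.3%; the sampled estimate converges to it as n grows.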

  18. Findings of Studies on Dyscalculia--A Synthesis

    ERIC Educational Resources Information Center

    Raja, B. William Dharma; Kumar, S. Praveen

    2012-01-01

    Children with learning disabilities face problems in acquiring the basic skills needed for learning. Dyscalculia is one among those learning disorders which affects the ability to acquire arithmetic skills that are needed to perform mathematical calculations. However this is a learning difficulty which is often not recognized. The objectives of…

  19. Quantum Computation Using Optically Coupled Quantum Dot Arrays

    NASA Technical Reports Server (NTRS)

    Pradhan, Prabhakar; Anantram, M. P.; Wang, K. L.; Roychowdhury, V. P.; Saini, Subhash (Technical Monitor)

    1998-01-01

    A solid state model for quantum computation has potential advantages in terms of the ease of fabrication, characterization, and integration. The fundamental requirements for a quantum computer involve the realization of basic processing units (qubits), and a scheme for controlled switching and coupling among the qubits, which enables one to perform controlled operations on qubits. We propose a model for quantum computation based on optically coupled quantum dot arrays, which is computationally similar to the atomic model proposed by Cirac and Zoller. In this model, individual qubits consist of two coupled quantum dots, and an array of these basic units is placed in an optical cavity. Switching among the states of the individual units is done by controlled laser pulses via near-field interaction using NSOM technology. Controlled rotations involving two or more qubits are performed via a common cavity-mode photon. We have calculated critical times, including the spontaneous emission and switching times, and show that they are comparable to the best times projected for other proposed models of quantum computation. We have also shown the feasibility of accessing individual quantum dots using NSOM technology by calculating the photon density at the tip and estimating the power necessary to perform the basic controlled operations. We are currently in the process of estimating the decoherence times for this system; however, we have formulated initial arguments which seem to indicate that the decoherence times will be comparable to, if not longer than, those of many other proposed models.

  20. Stratification of complexity in congenital heart surgery: comparative study of the Risk Adjustment for Congenital Heart Surgery (RACHS-1) method, Aristotle basic score and Society of Thoracic Surgeons-European Association for Cardio- Thoracic Surgery (STS-EACTS) mortality score.

    PubMed

    Cavalcanti, Paulo Ernando Ferraz; Sá, Michel Pompeu Barros de Oliveira; Santos, Cecília Andrade dos; Esmeraldo, Isaac Melo; Chaves, Mariana Leal; Lins, Ricardo Felipe de Albuquerque; Lima, Ricardo de Carvalho

    2015-01-01

    To determine whether stratification of complexity models in congenital heart surgery (RACHS-1, Aristotle basic score and STS-EACTS mortality score) fit to our center and determine the best method of discriminating hospital mortality. Surgical procedures in congenital heart diseases in patients under 18 years of age were allocated to the categories proposed by the stratification of complexity methods currently available. The outcome hospital mortality was calculated for each category from the three models. Statistical analysis was performed to verify whether the categories presented different mortalities. The discriminatory ability of the models was determined by calculating the area under the ROC curve and a comparison between the curves of the three models was performed. 360 patients were allocated according to the three methods. There was a statistically significant difference between the mortality categories: RACHS-1 (1) - 1.3%, (2) - 11.4%, (3) - 27.3%, (4) - 50%, (P<0.001); Aristotle basic score (1) - 1.1%, (2) - 12.2%, (3) - 34%, (4) - 64.7%, (P<0.001); and STS-EACTS mortality score (1) - 5.5%, (2) - 13.6%, (3) - 18.7%, (4) - 35.8%, (P<0.001). The three models had similar accuracy by calculating the area under the ROC curve: RACHS-1 - 0.738; STS-EACTS - 0.739; Aristotle - 0.766. The three models of stratification of complexity currently available in the literature are useful with different mortalities between the proposed categories with similar discriminatory capacity for hospital mortality.
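
    The area-under-the-ROC-curve comparison used here reduces to a rank statistic: the probability that a randomly chosen death received a higher risk score than a randomly chosen survivor. A minimal sketch (the toy scores below are invented, not the study's data):

```python
def auc(scores_dead, scores_alive):
    """Area under the ROC curve as the Mann-Whitney probability that a
    death receives a higher risk score than a survivor (ties count 0.5)."""
    pairs = 0.0
    for d in scores_dead:
        for a in scores_alive:
            if d > a:
                pairs += 1.0
            elif d == a:
                pairs += 0.5
    return pairs / (len(scores_dead) * len(scores_alive))

# toy risk categories 1-4: deaths tend to score higher than survivors
# auc([3, 4, 4, 2], [1, 1, 2, 3, 1]) -> 0.9
```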

  1. Sensitivity analysis of TRX-2 lattice parameters with emphasis on epithermal U-238 capture. Final report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tomlinson, E.T.; deSaussure, G.; Weisbin, C.R.

    1977-03-01

    The main purpose of the study is the determination of the sensitivity of TRX-2 thermal lattice performance parameters to nuclear cross section data, particularly the epithermal resonance capture cross section of U-238. An energy-dependent sensitivity profile was generated for each of the performance parameters, to the most important cross sections of the various isotopes in the lattice. Uncertainties in the calculated values of the performance parameters due to estimated uncertainties in the basic nuclear data, deduced in this study, were shown to be small compared to the uncertainties in the measured values of the performance parameter and compared to differences among calculations based upon the same data but with different methodologies.

  2. Frequency of Home Numeracy Activities Is Differentially Related to Basic Number Processing and Calculation Skills in Kindergartners.

    PubMed

    Mutaf Yıldız, Belde; Sasanguie, Delphine; De Smedt, Bert; Reynvoet, Bert

    2018-01-01

    Home numeracy has been shown to play an important role in children's mathematical performance. However, findings are inconsistent as to which home numeracy activities are related to which mathematical skills. The present study disentangled between various mathematical abilities that were previously masked by the use of composite scores of mathematical achievement. Our aim was to shed light on the specific associations between home numeracy and various mathematical abilities. The relationships between kindergartners' home numeracy activities, their basic number processing and calculation skills were investigated. Participants were 128 kindergartners ( M age = 5.43 years, SD = 0.29, range: 4.88-6.02 years) and their parents. The children completed non-symbolic and symbolic comparison tasks, non-symbolic and symbolic number line estimation tasks, mapping tasks (enumeration and connecting), and two calculation tasks. Their parents completed a home numeracy questionnaire. Results indicated small but significant associations between formal home numeracy activities that involved more explicit teaching efforts (i.e., identifying numerals, counting) and children's enumeration skills. There was no correlation between formal home numeracy activities and non-symbolic number processing. Informal home numeracy activities that involved more implicit teaching attempts , such as "playing games" and "using numbers in daily life," were (weakly) correlated with calculation and symbolic number line estimation, respectively. The present findings suggest that disentangling between various basic number processing and calculation skills in children might unravel specific relations with both formal and informal home numeracy activities. This might explain earlier reported contradictory findings on the association between home numeracy and mathematical abilities.

  3. Frequency of Home Numeracy Activities Is Differentially Related to Basic Number Processing and Calculation Skills in Kindergartners

    PubMed Central

    Mutaf Yıldız, Belde; Sasanguie, Delphine; De Smedt, Bert; Reynvoet, Bert

    2018-01-01

    Home numeracy has been shown to play an important role in children’s mathematical performance. However, findings are inconsistent as to which home numeracy activities are related to which mathematical skills. The present study disentangled between various mathematical abilities that were previously masked by the use of composite scores of mathematical achievement. Our aim was to shed light on the specific associations between home numeracy and various mathematical abilities. The relationships between kindergartners’ home numeracy activities, their basic number processing and calculation skills were investigated. Participants were 128 kindergartners (Mage = 5.43 years, SD = 0.29, range: 4.88–6.02 years) and their parents. The children completed non-symbolic and symbolic comparison tasks, non-symbolic and symbolic number line estimation tasks, mapping tasks (enumeration and connecting), and two calculation tasks. Their parents completed a home numeracy questionnaire. Results indicated small but significant associations between formal home numeracy activities that involved more explicit teaching efforts (i.e., identifying numerals, counting) and children’s enumeration skills. There was no correlation between formal home numeracy activities and non-symbolic number processing. Informal home numeracy activities that involved more implicit teaching attempts, such as “playing games” and “using numbers in daily life,” were (weakly) correlated with calculation and symbolic number line estimation, respectively. The present findings suggest that disentangling between various basic number processing and calculation skills in children might unravel specific relations with both formal and informal home numeracy activities. This might explain earlier reported contradictory findings on the association between home numeracy and mathematical abilities. PMID:29623055

  4. Band Structures and Transport Properties of High-Performance Half-Heusler Thermoelectric Materials by First Principles.

    PubMed

    Fang, Teng; Zhao, Xinbing; Zhu, Tiejun

    2018-05-19

    Half-Heusler (HH) compounds, with a valence electron count of 8 or 18, have gained popularity as promising high-temperature thermoelectric (TE) materials due to their excellent electrical properties, robust mechanical capabilities, and good high-temperature thermal stability. With the help of first-principles calculations, great progress has been made in half-Heusler thermoelectric materials. In this review, we summarize some representative theoretical work on band structures and transport properties of HH compounds. We introduce how basic band-structure calculations are used to investigate the atomic disorder in n-type MNiSb (M = Ti, Zr, Hf) compounds and guide the band engineering to enhance TE performance in p-type FeRSb (R = V, Nb) based systems. The calculations on electrical transport properties, especially the scattering time, and lattice thermal conductivities are also demonstrated. The outlook for future research directions of first-principles calculations on HH TE materials is also discussed.

  5. Band Structures and Transport Properties of High-Performance Half-Heusler Thermoelectric Materials by First Principles

    PubMed Central

    Fang, Teng; Zhao, Xinbing

    2018-01-01

    Half-Heusler (HH) compounds, with a valence electron count of 8 or 18, have gained popularity as promising high-temperature thermoelectric (TE) materials due to their excellent electrical properties, robust mechanical capabilities, and good high-temperature thermal stability. With the help of first-principles calculations, great progress has been made in half-Heusler thermoelectric materials. In this review, we summarize some representative theoretical work on band structures and transport properties of HH compounds. We introduce how basic band-structure calculations are used to investigate the atomic disorder in n-type MNiSb (M = Ti, Zr, Hf) compounds and guide the band engineering to enhance TE performance in p-type FeRSb (R = V, Nb) based systems. The calculations on electrical transport properties, especially the scattering time, and lattice thermal conductivities are also demonstrated. The outlook for future research directions of first-principles calculations on HH TE materials is also discussed. PMID:29783759

  6. Evaluation of RayXpert® for shielding design of medical facilities

    NASA Astrophysics Data System (ADS)

    Derreumaux, Sylvie; Vecchiola, Sophie; Geoffray, Thomas; Etard, Cécile

    2017-09-01

    In a context of growing demand for expert evaluation concerning medical, industrial and research facilities, the French Institute for Radiation Protection and Nuclear Safety (IRSN) considered it necessary to acquire new software for efficient shielding-dimensioning calculations. The selected software is RayXpert®. Before putting this software into routine use, exposure and transmission calculations for some basic configurations were validated. The validation was performed by calculating gamma dose constants and tenth-value layers (TVLs) for usual shielding materials and for the radioisotopes most used in therapy (Ir-192, Co-60 and I-131). Calculated values were compared with results obtained using MCNPX as a reference code and with published values. The impact of different calculation parameters, such as the source emission rays considered for the calculation and the use of biasing techniques, was evaluated.
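
    The tenth-value-layer bookkeeping behind such shielding checks is compact: each TVL of material attenuates the beam by a factor of ten. The 6 cm TVL below is a placeholder, not a value from the study or from any published table.

```python
import math

def transmission(thickness_cm, tvl_cm):
    """Broad-beam attenuation in TVL terms: T = 10**(-x / TVL)."""
    return 10.0 ** (-thickness_cm / tvl_cm)

def required_thickness(attenuation_factor, tvl_cm):
    """Thickness achieving a given attenuation: x = TVL * log10(factor)."""
    return tvl_cm * math.log10(attenuation_factor)

t = transmission(12.0, 6.0)          # two TVLs -> transmission of 0.01
x = required_thickness(1000.0, 6.0)  # three TVLs -> 18 cm of shielding
```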

  7. Conversion of Questionnaire Data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Powell, Danny H; Elwood Jr, Robert H

    During the survey, respondents are asked to provide qualitative answers (well, adequate, needs improvement) on how well material control and accountability (MC&A) functions are being performed. These responses can be used to develop failure probabilities for basic events performed during routine operation of the MC&A systems. The failure frequencies for individual events may be used to estimate total system effectiveness using a fault tree in a probabilistic risk analysis (PRA). Numeric risk values are required for the PRA fault tree calculations that are performed to evaluate system effectiveness. So, the performance ratings in the questionnaire must be converted to relative risk values for all of the basic MC&A tasks performed in the facility. If a specific material protection, control, and accountability (MPC&A) task is being performed at the 'perfect' level, the task is considered to have a near zero risk of failure. If the task is performed at a less than perfect level, the deficiency in performance represents some risk of failure for the event. As the degree of deficiency in performance increases, the risk of failure increases. If a task that should be performed is not being performed, that task is in a state of failure. The failure probabilities of all basic events contribute to the total system risk. Conversion of questionnaire MPC&A system performance data to numeric values is a separate function from the process of completing the questionnaire. When specific questions in the questionnaire are answered, the focus is on correctly assessing and reporting, in an adjectival manner, the actual performance of the related MC&A function. Prior to conversion, consideration should not be given to the numeric value that will be assigned during the conversion process. In the conversion process, adjectival responses to questions on system performance are quantified based on a log normal scale typically used in human error analysis (see A.D. Swain and H.E. Guttmann, 'Handbook of Human Reliability Analysis with Emphasis on Nuclear Power Plant Applications,' NUREG/CR-1278). This conversion produces the basic event risk-of-failure values required for the fault tree calculations. The fault tree is a deductive logic structure that corresponds to the operational nuclear MC&A system at a nuclear facility. The conventional Delphi process is a time-honored approach commonly used in the risk assessment field to extract numerical values for the failure rates of actions or activities when statistically significant data is absent.
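
    The adjectival-to-numeric conversion step might look like the table below. These particular probabilities are invented for illustration, spaced a decade apart on a log scale in the spirit of the NUREG/CR-1278 human-reliability scales; they are not MSET's calibrated values.

```python
# Illustrative mapping from adjectival MC&A performance ratings to
# basic-event failure probabilities (decade-spaced; NOT MSET's values).
RATING_TO_FAILURE_PROB = {
    "perfect": 1e-5,
    "well": 1e-3,
    "adequate": 1e-2,
    "needs improvement": 1e-1,
    "not performed": 1.0,   # a task not performed is in a state of failure
}

def basic_event_probability(rating):
    """Look up the failure probability for one basic event."""
    return RATING_TO_FAILURE_PROB[rating.strip().lower()]
```

    Under AND/OR gate propagation, the decade spacing keeps a single weak task from being masked by many well-performed ones.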

  8. Math anxiety, self-efficacy, and ability in British undergraduate nursing students.

    PubMed

    McMullan, Miriam; Jones, Ray; Lea, Susan

    2012-04-01

    Nurses need to be able to make drug calculations competently. In this study, involving 229 second year British nursing students, we explored the influence of mathematics anxiety, self-efficacy, and numerical ability on drug calculation ability and determined which factors would best predict this skill. Strong significant relationships (p < .001) existed between anxiety, self-efficacy, and ability. Students who failed the numerical and/or drug calculation ability tests were more anxious (p < .001) and less confident (p ≤ .002) in performing calculations than those who passed. Numerical ability made the strongest unique contribution in predicting drug calculation ability (beta = 0.50, p < .001) followed by drug calculation self-efficacy (beta = 0.16, p = .04). Early testing is recommended for basic numerical skills. Faculty are advised to refresh students' numerical skills before introducing drug calculations. Copyright © 2012 Wiley Periodicals, Inc.

  9. A School Experiment in Kinematics: Shooting from a Ballistic Cart

    ERIC Educational Resources Information Center

    Kranjc, T.; Razpet, N.

    2011-01-01

    Many physics textbooks start with kinematics. In the lab, students observe the motions, describe and make predictions, and get acquainted with basic kinematics quantities and their meaning. Then they can perform calculations and compare the results with experimental findings. In this paper we describe an experiment that is not often done, but is…

  10. Stopping Power for Degenerate Electrons

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Singleton, Jr., Robert

    2016-05-16

    This is a first attempt at calculating the BPS stopping power with electron degeneracy corrections. Section I establishes some notation and basic facts, Section II outlines the basics of the calculation, and Section III contains some brief notes on how to proceed with the details of the calculation; the remaining work starts from Section III.

  11. Mass Properties for Space Systems Standards Development

    NASA Technical Reports Server (NTRS)

    Beech, Geoffrey

    2013-01-01

    Current verbiage in S-120 applies to dry mass. Mass margin is the difference between required mass and predicted mass; performance margin is the difference between predicted performance and required performance. Performance estimates and the corresponding margin should be based on predicted mass (and other inputs). Contractor mass margin is reserved from performance margin, and the remaining performance margin is allocated according to mass partials. Compliance can be evaluated effectively by comparison of three areas (preferably on a single sheet): basic and predicted mass (including historical trend); aggregate potential changes (threats and opportunities), which gives the mass forecast; and mass maturity by category (estimated/calculated/actual).

  12. Reexamination of the calculation of two-center, two-electron integrals over Slater-type orbitals. II. Neumann expansion of the exchange integrals

    NASA Astrophysics Data System (ADS)

    Lesiuk, Michał; Moszynski, Robert

    2014-12-01

    In this paper we consider the calculation of two-center exchange integrals over Slater-type orbitals (STOs). We apply the Neumann expansion of the Coulomb interaction potential and consider calculation of all basic quantities which appear in the resulting expression. Analytical closed-form equations for all auxiliary quantities are already known, but they suffer from large digital erosion when some of the parameters are large or small. We derive two differential equations which are obeyed by the most difficult basic integrals. Taking them as a starting point, useful series expansions for small parameter values or asymptotic expansions for large parameter values are systematically derived. The resulting expansions replace the corresponding analytical expressions when the latter introduce significant cancellations. Additionally, we reconsider numerical integration of some necessary quantities and present a new way to calculate the integrand with a controlled precision. All proposed methods are combined to lead to a general, stable algorithm. We perform extensive numerical tests of the introduced expressions to verify their validity and usefulness. Advances reported here provide methodology to compute two-electron exchange integrals over STOs for a broad range of the nonlinear parameters and large angular momenta.
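
    The cancellation problem the authors work around is generic in floating-point evaluation of analytically exact formulas. A minimal illustration, unrelated to the STO integrals themselves, of replacing an exact expression with a series-based stable reformulation:

```python
import math

def one_minus_cos_naive(x):
    # exact algebra, but for small x the subtraction cancels all
    # significant digits ("digital erosion")
    return 1.0 - math.cos(x)

def one_minus_cos_stable(x):
    # identical function, rewritten via the half-angle identity so that
    # no cancellation occurs
    return 2.0 * math.sin(x / 2.0) ** 2

# at x = 1e-8 the naive form returns 0.0 in double precision,
# while the stable form returns the correct value, about 5e-17
```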

  13. Stratification of complexity in congenital heart surgery: comparative study of the Risk Adjustment for Congenital Heart Surgery (RACHS-1) method, Aristotle basic score and Society of Thoracic Surgeons-European Association for Cardio-Thoracic Surgery (STS-EACTS) mortality score

    PubMed Central

    Cavalcanti, Paulo Ernando Ferraz; Sá, Michel Pompeu Barros de Oliveira; dos Santos, Cecília Andrade; Esmeraldo, Isaac Melo; Chaves, Mariana Leal; Lins, Ricardo Felipe de Albuquerque; Lima, Ricardo de Carvalho

    2015-01-01

    Objective: To determine whether stratification-of-complexity models in congenital heart surgery (RACHS-1, Aristotle basic score and STS-EACTS mortality score) fit our center, and to determine the best method of discriminating hospital mortality. Methods: Surgical procedures for congenital heart diseases in patients under 18 years of age were allocated to the categories proposed by the stratification-of-complexity methods currently available. Hospital mortality was calculated for each category of the three models. Statistical analysis was performed to verify whether the categories presented different mortalities. The discriminatory ability of the models was determined by calculating the area under the ROC curve, and the curves of the three models were compared. Results: 360 patients were allocated according to the three methods. There was a statistically significant difference in mortality between categories: RACHS-1 (1) 1.3%, (2) 11.4%, (3) 27.3%, (4) 50% (P<0.001); Aristotle basic score (1) 1.1%, (2) 12.2%, (3) 34%, (4) 64.7% (P<0.001); and STS-EACTS mortality score (1) 5.5%, (2) 13.6%, (3) 18.7%, (4) 35.8% (P<0.001). The three models showed similar accuracy by area under the ROC curve: RACHS-1, 0.738; STS-EACTS, 0.739; Aristotle, 0.766. Conclusion: The three stratification-of-complexity models currently available in the literature are useful, with different mortalities between the proposed categories and similar discriminatory capacity for hospital mortality. PMID:26107445
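    The discriminatory ability compared in this study is the area under the ROC curve. A minimal sketch of that statistic (not the authors' code) using the Mann-Whitney formulation, i.e. the probability that a randomly chosen positive case is scored higher than a randomly chosen negative case:

```python
def roc_auc(scores, labels):
    """Area under the ROC curve via the Mann-Whitney formulation.
    labels: 1 for the event (e.g. hospital death), 0 otherwise."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = 0.0
    for p in pos:
        for n in neg:
            if p > n:
                wins += 1.0      # positive ranked above negative
            elif p == n:
                wins += 0.5      # ties count half
    return wins / (len(pos) * len(neg))
```

    A perfectly discriminating score gives 1.0, a useless one 0.5; the 0.73-0.77 values reported above sit between these extremes.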

  14. Performance (Off-Design) Cycle Analysis for a Turbofan Engine With Interstage Turbine Burner

    NASA Technical Reports Server (NTRS)

    Liew, K. H.; Urip, E.; Yang, S. L.; Mattingly, J. D.; Marek, C. J.

    2005-01-01

    This report presents the performance of a steady-state, dual-spool, separate-exhaust turbofan engine with an interstage turbine burner (ITB) serving as a secondary combustor. The ITB, which is located in the transition duct between the high- and the low-pressure turbines, is a relatively new concept for increasing specific thrust and lowering pollutant emissions in modern jet-engine propulsion. A detailed off-design performance analysis of ITB engines is written as a Microsoft(Registered Trademark) Excel (Redmond, Washington) macro with Visual Basic for Applications to calculate engine performance over the entire operating envelope. Several design-point engine cases are pre-selected for off-design analysis using a parametric cycle-analysis code developed previously in Microsoft(Registered Trademark) Excel. The off-design code calculates engine performance (i.e., thrust and thrust-specific fuel consumption) at various flight conditions and throttle settings.

  15. Software and Hardware System for Fast Processes Study When Preparing Foundation Beds of Oil and Gas Facilities

    NASA Astrophysics Data System (ADS)

    Gruzin, A. V.; Gruzin, V. V.; Shalay, V. V.

    2018-04-01

    Analysis of existing technologies for preparing foundation beds of oil and gas buildings and structures has revealed a lack of reasoned recommendations on the selection of rational technical and technological parameters of compaction. To study the dynamics of fast processes during compaction of foundation beds of oil and gas facilities, a specialized software and hardware system was developed. A method for calculating the basic technical parameters of the equipment for recording fast processes is presented, as well as an algorithm for processing the experimental data. Preliminary studies confirmed the soundness of the design decisions and of the calculations performed.

  16. Progesterone and testosterone studies by neutron scattering and nuclear magnetic resonance methods and quantum chemistry calculations

    NASA Astrophysics Data System (ADS)

    Szyczewski, A.; Hołderna-Natkaniec, K.; Natkaniec, I.

    2004-05-01

    Inelastic incoherent neutron scattering spectra of progesterone and testosterone measured at 20 and 290 K were compared with the IR spectra measured at 290 K. The phonon density of states spectra display well resolved peaks of low-frequency internal vibration modes up to 1200 cm-1. Quantum chemistry calculations were performed by the semiempirical PM3 method and by the density functional theory method with different basis sets for the isolated molecule, as well as for the dimer system of testosterone. The proposed assignment of internal vibrations of normal modes allows conclusions about the sequence of onset of the torsional motions of the CH3 groups. These conclusions were correlated with the results of proton molecular dynamics studies performed by the NMR method. The GAUSSIAN program was used for the calculations.

  17. Towards a Definition of Basic Numeracy

    ERIC Educational Resources Information Center

    Girling, Michael

    1977-01-01

    The author redefines basic numeracy as the ability to use a four-function calculator sensibly. He then defines "sensibly" and considers the place of algorithms in the scheme of mathematical calculations. (MN)

  18. Calculation of streamflow statistics for Ontario and the Great Lakes states

    USGS Publications Warehouse

    Piggott, Andrew R.; Neff, Brian P.

    2005-01-01

    Basic, flow-duration, and n-day frequency statistics were calculated for 779 current and historical streamflow gages in Ontario and 3,157 streamflow gages in the Great Lakes states with length-of-record daily mean streamflow data ending on December 31, 2000 and September 30, 2001, respectively. The statistics were determined using the U.S. Geological Survey’s SWSTAT and IOWDM, ANNIE, and LIBANNE software and Linux shell and PERL programming that enabled the mass processing of the data and calculation of the statistics. Verification exercises were performed to assess the accuracy of the processing and calculations. The statistics and descriptions, longitudes and latitudes, and drainage areas for each of the streamflow gages are summarized in ASCII text files and ESRI shapefiles.
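    The flow-duration and n-day statistics named above can be illustrated with a simplified sketch (the exact SWSTAT conventions, e.g. plotting positions, differ in detail):

```python
def flow_duration(daily_flows, exceedance_pct):
    """Flow equaled or exceeded exceedance_pct percent of the time
    (simple empirical quantile of the ranked record)."""
    ranked = sorted(daily_flows, reverse=True)
    i = min(len(ranked) - 1, int(exceedance_pct / 100.0 * len(ranked)))
    return ranked[i]

def n_day_low_flow(daily_flows, n):
    """Minimum n-day mean flow, the quantity underlying
    n-day frequency statistics such as the 7-day low flow."""
    return min(sum(daily_flows[i:i + n]) / n
               for i in range(len(daily_flows) - n + 1))
```

    For a 100-day record of flows 1..100 m3/s, the 50-percent-exceedance flow is 50 m3/s, and the 90-percent-exceedance flow is 10 m3/s.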

  19. Protracted Low-Dose Ionizing Radiation Effects upon Primate Performance

    DTIC Science & Technology

    1977-12-01

    G. Dosimetry ... AECL facility. Standard dosimetry techniques were utilized during radiation exposure. In addition, extensive preexposure calibration was conducted. ... During each of the epochs, the five basic variables were determined. These calculations were accomplished on an analog computer (Electronics Associates).

  20. Tools for Basic Statistical Analysis

    NASA Technical Reports Server (NTRS)

    Luz, Paul L.

    2005-01-01

    Statistical Analysis Toolset is a collection of eight Microsoft Excel spreadsheet programs, each of which performs calculations pertaining to an aspect of statistical analysis. These programs present input and output data in user-friendly, menu-driven formats, with automatic execution. The following types of calculations are performed: Descriptive statistics are computed for a set of data x(i) (i = 1, 2, 3 . . . ) entered by the user. Normal Distribution Estimates will calculate the statistical value that corresponds to cumulative probability values, given a sample mean and standard deviation of the normal distribution. Normal Distribution from Two Data Points will extend and generate a cumulative normal distribution for the user, given two data points and their associated probability values. Two programs perform two-way analysis of variance (ANOVA) with no replication, or generalized ANOVA for two factors with four levels and three repetitions. Linear Regression-ANOVA will curve-fit data to the linear equation y = f(x) and will perform an ANOVA to check its significance.
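    Two of the calculations described, descriptive statistics and the normal-distribution value corresponding to a cumulative probability, can be sketched in a few lines (an illustration of the computations, not the Excel toolset itself):

```python
from statistics import NormalDist, mean, stdev

def descriptive(data):
    """Basic descriptive statistics for a user-entered data set x(i)."""
    return {"n": len(data), "mean": mean(data), "stdev": stdev(data)}

def normal_estimate(mu, sigma, p):
    """Value x such that P(X <= x) = p for X ~ N(mu, sigma),
    i.e. the statistical value for a given cumulative probability."""
    return NormalDist(mu, sigma).inv_cdf(p)
```

    For example, with a sample mean of 100 and standard deviation of 15, the value at cumulative probability 0.5 is the mean itself.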

  1. GENENG 2: A program for calculating design and off-design performance of two- and three-spool turbofans with as many as three nozzles

    NASA Technical Reports Server (NTRS)

    Fishbach, L. H.; Koenig, R. W.

    1972-01-01

    A computer program which calculates steady-state design and off-design jet engine performance for two- or three-spool turbofans with one, two, or three nozzles is described. Included in the report are complete FORTRAN 4 listings of the program with sample results for nine basic turbofan engines that can be calculated: (1) three-spool, three-stream engine; (2) two-spool, three-stream, boosted-fan engine; (3) two-spool, three-stream, supercharged-compressor engine; (4) three-spool, two-stream engine; (5) two-spool, two-stream engine; (6) three-spool, three-stream, aft-fan engine; (7) two-spool, three-stream, aft-fan engine; (8) two-spool, two-stream, aft-engine; and (9) three-spool, two-stream, aft-fan engine. The simulation of other engines by using logical variables built into the program is also described.

  2. The EUCLID/V1 Integrated Code for Safety Assessment of Liquid Metal Cooled Fast Reactors. Part 1: Basic Models

    NASA Astrophysics Data System (ADS)

    Mosunova, N. A.

    2018-05-01

    The article describes the basic models included in the EUCLID/V1 integrated code intended for safety analysis of liquid metal (sodium, lead, and lead-bismuth) cooled fast reactors using fuel rods with a gas gap and pellet dioxide, mixed oxide or nitride uranium-plutonium fuel under normal operation, under anticipated operational occurrences and accident conditions by carrying out interconnected thermal-hydraulic, neutronics, and thermal-mechanical calculations. Information about the Russian and foreign analogs of the EUCLID/V1 integrated code is given. Modeled objects, equation systems in differential form solved in each module of the EUCLID/V1 integrated code (the thermal-hydraulic, neutronics, fuel rod analysis module, and the burnup and decay heat calculation modules), the main calculated quantities, and also the limitations on application of the code are presented. The article also gives data on the scope of functions performed by the integrated code's thermal-hydraulic module, using which it is possible to describe both one- and two-phase processes occurring in the coolant. It is shown that, owing to the availability of the fuel rod analysis module in the integrated code, it becomes possible to estimate the performance of fuel rods in different regimes of the reactor operation. It is also shown that the models implemented in the code for calculating neutron-physical processes make it possible to take into account the neutron field distribution over the fuel assembly cross section as well as other features important for the safety assessment of fast reactors.

  3. A computational study on the electronic and nonlinear optical properties of graphyne subunit

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bahat, Mehmet, E-mail: bahat@gazi.edu.tr; Güney, Merve Nurhan, E-mail: merveng87@gmail.com; Özbay, Akif, E-mail: aozbay@gazi.edu.tr

    2016-03-25

    Since its discovery, graphene has been considered a basic material for future nanoelectronic devices. Graphyne is a two-dimensional carbon allotrope, like graphene, whose electronic properties are expected to be potentially superior to those of graphene. The compound C24H12 (tribenzocyclyne; TBC) is a substructure of graphyne. The electronic and nonlinear optical properties of C24H12 and some of its fluoro derivatives were calculated. The calculated properties are the electric dipole moment, the highest occupied molecular orbital (HOMO) and lowest unoccupied molecular orbital (LUMO) energies, the polarizability, and the first hyperpolarizability. All calculations were performed at the B3LYP/6-31+G(d,p) level.

  4. Injection Molding Parameters Calculations by Using Visual Basic (VB) Programming

    NASA Astrophysics Data System (ADS)

    Tony, B. Jain A. R.; Karthikeyen, S.; Alex, B. Jeslin A. R.; Hasan, Z. Jahid Ali

    2018-03-01

    Nowadays the manufacturing industry plays a vital role in production sectors. Fabricating a component requires many design calculations, and there is a chance of human error during these calculations. The aim of this project is to create a special module using Visual Basic (VB) programming to calculate injection molding parameters and thus avoid human errors. To create an injection mold for a spur gear component, the following parameters have to be calculated: cooling capacity, cooling channel diameter, cooling channel length, runner length, runner diameter, gate diameter, and gate pressure. To calculate these injection molding parameters, a separate module has been created using Visual Basic (VB) programming to reduce human errors. The outcome of the module is the dimensions of the injection molding components, such as the mold cavity and core design and the ejector plate design.

  5. Effects of photographic distance on tree crown attributes calculated using UrbanCrowns image analysis software

    Treesearch

    Mason F. Patterson; P. Eric Wiseman; Matthew F. Winn; Sang-mook Lee; Philip A. Araman

    2011-01-01

    UrbanCrowns is a software program developed by the USDA Forest Service that computes crown attributes using a side-view digital photograph and a few basic field measurements. From an operational standpoint, it is not known how well the software performs under varying photographic conditions for trees of diverse size, which could impact measurement reproducibility and...

  6. Establishment and verification of three-dimensional dynamic model for heavy-haul train-track coupled system

    NASA Astrophysics Data System (ADS)

    Liu, Pengfei; Zhai, Wanming; Wang, Kaiyun

    2016-11-01

    For the long heavy-haul train, the basic principles of the inter-vehicle interaction and train-track dynamic interaction are analysed first. Based on the theories of train longitudinal dynamics and vehicle-track coupled dynamics, a three-dimensional (3-D) dynamic model of the heavy-haul train-track coupled system is established through a modularised method. Specifically, this model includes subsystems such as the train control, the vehicle, the wheel-rail relation and the line geometries. For the calculation of the wheel-rail interaction force under driving or braking conditions, the large creep phenomenon that may occur within the wheel-rail contact patch is considered. For the coupler and draft gear system, the coupler forces in three directions and the coupler lateral tilt angles in curves are calculated. Then, according to the characteristics of the long heavy-haul train, an efficient solving method is developed to improve the computational efficiency for such a large system. Some basic principles which should be followed to meet the requirement of calculation accuracy are determined. Finally, the 3-D train-track coupled model is verified by comparing the calculated results with running test results. It is indicated that the proposed dynamic model can simulate the dynamic performance of the heavy-haul train well.

  7. On a thermal analysis of a second stripper for rare isotope accelerator.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Momozaki, Y.; Nolen, J.; Nuclear Engineering Division

    2008-08-04

    This memo summarizes simple calculations and results of the thermal analysis on the second stripper to be used in the driver linac of the Rare Isotope Accelerator (RIA). Both liquid (sodium) and solid (titanium and vanadium) stripper concepts were considered. These calculations were intended to provide basic information to evaluate the feasibility of liquid (thick film) and solid (rotating wheel) second strippers. Nuclear physics calculations to estimate the volumetric heat generation in the stripper material were performed with 'LISE for Excel'. In the thermal calculations, the strippers were modeled as a thin 2-D plate with uniform heat generation within the beam spot. Temperature distributions were then computed by assuming that the heat spreads conductively in the plate in the radial direction, without radiative heat losses to the surroundings.
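    The geometry described, beam power deposited uniformly within a spot and spreading radially by conduction with no radiation, admits a standard steady-state solution, sketched below as an illustration (not the memo's actual calculation; all parameter values are assumptions): beam power Q within spot radius a, edge at radius R held at T_edge, plate thickness t.

```python
import math

def plate_temperature(r, Q, k, t, a, R, T_edge):
    """Steady radial conduction in a thin plate of thickness t and
    conductivity k: beam power Q (W) deposited uniformly within spot
    radius a, edge at radius R held at T_edge, radiation neglected."""
    # Temperature at the spot rim: all of Q conducts through the annulus a..R
    T_a = T_edge + Q * math.log(R / a) / (2.0 * math.pi * k * t)
    if r >= a:
        # Outside the spot: logarithmic profile of pure radial conduction
        return T_edge + Q * math.log(R / r) / (2.0 * math.pi * k * t)
    # Inside the spot: parabolic profile from uniform heat generation
    return T_a + Q * (a * a - r * r) / (4.0 * math.pi * k * t * a * a)
```

    The profile is hottest at the beam-spot center and decreases monotonically to T_edge at the plate rim, which is the qualitative behavior such a stripper analysis needs to quantify.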

  8. Calculating Equilibrium Constants in the SnCl2-H2O-NaOH System According to Potentiometric Titration Data

    NASA Astrophysics Data System (ADS)

    Maskaeva, L. N.; Fedorova, E. A.; Yusupov, R. A.; Markov, V. F.

    2018-05-01

    The potentiometric titration of tin chloride SnCl2 is performed in the concentration range of 0.00009-1.1 mol/L with a solution of sodium hydroxide NaOH. According to potentiometric titration data based on modeling equilibria in the SnCl2-H2O-NaOH system, basic equations are generated for the main processes, and instability constants are calculated for the resulting hydroxo complexes and equilibrium constants of low-soluble tin(II) compounds. The data will be of interest for specialists in the field of theory of solutions.

  9. Monte Carlo simulation of the nuclear-electromagnetic cascade development and the energy response of ionization spectrometers

    NASA Technical Reports Server (NTRS)

    Jones, W. V.

    1973-01-01

    Modifications to the basic computer program for performing the simulations are reported. The major changes include: (1) extension of the calculations to include the development of cascades initiated by heavy nuclei, (2) improved treatment of the nuclear disintegrations which occur during the interactions of hadrons in heavy absorbers, (3) incorporation of accurate multi-pion final-state cross sections for various interactions at accelerator energies, (4) restructuring of the program logic so that calculations can be made for sandwich-type detectors, and (5) logic modifications related to execution of the program.

  10. Calculation and measurement of the influence of flow parameters on rotordynamic coefficients in labyrinth seals

    NASA Technical Reports Server (NTRS)

    Kwanka, K.; Ortinger, W.; Steckel, J.

    1994-01-01

    First experimental investigations performed on a new test rig are presented. For a staggered labyrinth seal with fourteen cavities, the stiffness coefficient and the leakage flow are measured. The experimental results are compared to calculated results obtained with a one-volume bulk-flow theory. A perturbation analysis is made for seven terms. It is found that the friction factors have a great impact on the dynamic coefficients. They are obtained by a turbulent flow computation using a finite-volume model with the Reynolds equations as the basic equations.

  11. Visual Basic programs for spreadsheet analysis.

    PubMed

    Hunt, Bruce

    2005-01-01

    A collection of Visual Basic programs, entitled Function.xls, has been written for ground water spreadsheet calculations. This collection includes programs for calculating mathematical functions and for evaluating analytical solutions in ground water hydraulics and contaminant transport. Several spreadsheet examples are given to illustrate their use.
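    Function.xls itself is not reproduced here. As a hedged illustration of the kind of analytical solution such a spreadsheet collection evaluates, the following Python sketch computes the Theis well function and drawdown, a standard ground-water hydraulics result (not necessarily Hunt's implementation):

```python
import math

def well_function(u, terms=60):
    """Theis well function W(u) = E1(u), via its convergent series:
    E1(u) = -gamma - ln(u) + sum_{n>=1} (-1)^(n+1) u^n / (n * n!).
    Suitable for moderate u; large u needs an asymptotic form."""
    gamma = 0.5772156649015329  # Euler-Mascheroni constant
    total = -gamma - math.log(u)
    for n in range(1, terms + 1):
        total += ((-1) ** (n + 1)) * u ** n / (n * math.factorial(n))
    return total

def theis_drawdown(Q, T, S, r, t):
    """Drawdown s = Q/(4*pi*T) * W(u), with u = r^2 * S / (4*T*t):
    pumping rate Q, transmissivity T, storativity S, radius r, time t."""
    u = r * r * S / (4.0 * T * t)
    return Q / (4.0 * math.pi * T) * well_function(u)
```

    Spreadsheet users would call such a function cell-by-cell; tabulated values (e.g. W(1) = 0.2194) provide a quick check of the implementation.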

  12. The cost of preoperative urodynamics: A secondary analysis of the ValUE trial.

    PubMed

    Norton, Peggy A; Nager, Charles W; Brubaker, Linda; Lemack, Gary E; Sirls, Larry T; Holley, Robert; Chai, Toby C; Kraus, Stephen R; Zyczynski, Halina; Smith, Bridget; Stoddard, Anne

    2016-01-01

    Urodynamic studies (UDS) are generally recommended prior to surgical treatment for stress urinary incontinence (SUI), despite insufficient evidence that it impacts treatment plans or outcomes in patients with uncomplicated SUI. This analysis aimed to calculate the cost incurred when UDS was performed as a supplement to a basic office evaluation and to extrapolate the potential savings of not doing UDS in this patient population on a national basis. This is a secondary analysis from the Value of Urodynamic Evaluation (ValUE) trial, a multicenter non-inferiority randomized trial to determine whether a basic office evaluation (OE) is non-inferior in terms of SUI surgery outcomes to office evaluation with addition of urodynamic studies (UDS). All participants underwent an OE; those patients who randomized to supplementary UDS underwent non-instrumented uroflowmetry, filling cystometry, and a pressure flow study. Costs associated with UDS were calculated using 2014 U.S. Medicare allowable fees. Models using various patient populations and payor mixes were created to obtain a range of potential costs of performing UDS in patients undergoing SUI surgery annually in the United States. Six hundred thirty women were randomized to OE or OE plus UDS. There was no difference in surgical outcomes between the two groups. The per patient cost of UDS varied from site to site, and included complex cystometrogram $314-$343 (CPT codes 51728-51729) plus complex uroflowmetry $16 (CPT code 51741). Extrapolating these costs for US women similar to our study population, 13-33 million US dollars could be saved annually by not performing preoperative urodynamics. For women with uncomplicated SUI and a confirmatory preoperative basic office evaluation, tens of millions of dollars US could be saved annually by not performing urodynamic testing. In the management of such women, eliminating this preoperative test has a major economic benefit. © 2014 Wiley Periodicals, Inc.
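    The per-patient arithmetic reported above is simple to restate; the annual procedure count used for the national extrapolation is not given in this excerpt, so it is left as an assumed input in this sketch:

```python
def uds_cost_per_patient(cystometrogram_fee, uroflowmetry_fee=16):
    """Per-patient UDS cost: complex cystometrogram (CPT 51728-51729,
    $314-$343 in the trial) plus complex uroflowmetry (CPT 51741, $16)."""
    return cystometrogram_fee + uroflowmetry_fee

def projected_annual_savings(n_procedures, per_patient_cost):
    """National extrapolation; n_procedures is an assumed input,
    not stated in this excerpt."""
    return n_procedures * per_patient_cost
```

    The low and high per-patient costs work out to $330 and $359; multiplied by an assumed national procedure volume, they bracket the $13-33 million annual savings range quoted above.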

  13. Simulation of electromagnetic scattering on bodies through integral equation and neural network methods

    NASA Astrophysics Data System (ADS)

    Lvovich, I. Ya; Preobrazhenskiy, A. P.; Choporov, O. N.

    2018-05-01

    The paper deals with electromagnetic scattering from a perfectly conducting diffractive body of complex shape. The scattering from the body is calculated with the integral equation method: a Fredholm equation of the second kind is used to calculate the electric current density. In solving the integral equation by the method of moments, the singularity of the kernel is treated properly, and piecewise-constant functions are chosen as basis functions. Within the Kirchhoff integral approach it is possible to define the scattered electromagnetic field in terms of the obtained electric currents. The observation angles belong to the front hemisphere of the diffractive body. To improve the characteristics of the diffractive body, the authors used a neural network. All the neurons contained a log-sigmoid activation function and weighted sums as discriminant functions. The paper presents the matrix of weighting factors of the connectionist model, as well as the resulting optimized dimensions of the diffractive body. The paper also presents the basic steps in the calculation technique for diffractive bodies, based on the combination of integral equation and neural network methods.

  14. Economic optimization of the energy transport component of a large distributed solar power plant

    NASA Technical Reports Server (NTRS)

    Turner, R. H.

    1976-01-01

    A solar thermal power plant with a field of collectors, each locally heating some transport fluid, requires a pipe network system for eventual delivery of energy to power generation equipment. For a given collector distribution and pipe network geometry, a technique is herein developed which manipulates basic cost information and physical data in order to design an energy transport system with minimized cost, constrained by a calculated technical performance. For a given transport fluid and collector conditions, the method determines the network pipe diameter, pipe thickness, and insulation thickness distributions associated with minimum system cost; these relative distributions are unique. Transport losses, including pump work and heat leak, are calculated operating expenses and impact the total system cost. The minimum-cost system is readily selected. The technique is demonstrated on six candidate transport fluids to emphasize which parameters dominate the system cost and to provide basic decision data. Three different power plant output sizes are evaluated in each case to determine the severity of the diseconomy of scale.

  15. A magnetic and electronic circular dichroism study of azurin, plastocyanin, cucumber basic protein, and nitrite reductase based on time-dependent density functional theory calculations.

    PubMed

    Zhekova, Hristina R; Seth, Michael; Ziegler, Tom

    2010-06-03

    The excitation, circular dichroism, magnetic circular dichroism (MCD) and electron paramagnetic resonance (EPR) spectra of small models of four blue copper proteins are simulated at the TDDFT/BP86 level. X-ray diffraction geometries are used for the modeling of the blue copper sites in azurin, plastocyanin, cucumber basic protein, and nitrite reductase. Comparison with experimental data reveals that the calculations reproduce most of the qualitative trends of the observed experimental spectra, with some discrepancies in the orbital decompositions and in the values of the excitation energies, the g(parallel) components of the g tensor, and the components of the A tensor. These discrepancies are discussed relative to deficiencies in the time-dependent density functional theory (TDDFT) methodology, as opposed to previous studies which attribute them to insufficient model size or poor performance of the BP86 functional. In addition, attempts are made to elucidate the correlation between the MCD and EPR signals.

  16. A random distribution reacting mixing layer model

    NASA Technical Reports Server (NTRS)

    Jones, Richard A.; Marek, C. John; Myrabo, Leik N.; Nagamatsu, Henry T.

    1994-01-01

    A methodology for simulation of molecular mixing and the resulting velocity and temperature fields has been developed. The ideas are applied to the flow conditions present in the NASA Lewis Research Center Planar Reacting Shear Layer (PRSL) facility, and the results are compared to experimental data. A Gaussian transverse turbulent velocity distribution is used in conjunction with a linearly increasing time scale to describe the mixing of different regions of the flow. Equilibrium reaction calculations are then performed on the mix to arrive at a new species composition and temperature. Velocities are determined through summation of momentum contributions. The analysis indicates a combustion efficiency of the order of 80 percent for the reacting mixing layer and a turbulent Schmidt number of 2/3. The success of the model is attributed to the simulation of large-scale transport of fluid. The favorable comparison shows that a relatively quick and simple PC calculation is capable of simulating the basic flow structure in the reacting and nonreacting shear layers present in the facility, given basic assumptions about turbulence properties.

  17. Approaches in highly parameterized inversion: TSPROC, a general time-series processor to assist in model calibration and result summarization

    USGS Publications Warehouse

    Westenbroek, Stephen M.; Doherty, John; Walker, John F.; Kelson, Victor A.; Hunt, Randall J.; Cera, Timothy B.

    2012-01-01

    The TSPROC (Time Series PROCessor) computer software uses a simple scripting language to process and analyze time series. It was developed primarily to assist in the calibration of environmental models. The software is designed to perform calculations on time-series data commonly associated with surface-water models, including calculation of flow volumes, transformation by means of basic arithmetic operations, and generation of seasonal and annual statistics and hydrologic indices. TSPROC can also be used to generate some of the key input files required to perform parameter optimization by means of the PEST (Parameter ESTimation) computer software. Through the use of TSPROC, the objective function for use in the model-calibration process can be focused on specific components of a hydrograph.
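    Two of the operations named above, flow-volume calculation and basic series statistics, can be sketched directly (Python for illustration; this is not TSPROC's scripting syntax):

```python
SECONDS_PER_DAY = 86400

def flow_volume(daily_mean_flows):
    """Total volume (m^3) from a series of daily mean discharges (m^3/s),
    one of the basic transformations applied during model calibration."""
    return sum(q * SECONDS_PER_DAY for q in daily_mean_flows)

def series_mean(daily_mean_flows):
    """Mean of the series, a building block for seasonal/annual statistics."""
    return sum(daily_mean_flows) / len(daily_mean_flows)
```

    In a PEST-style calibration, such derived quantities (volumes, seasonal means) become the observations that the objective function compares against model output.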

  18. Numerical studies on sizing/rating of plate fin heat exchangers for a modified Claude cycle based helium liquefier/refrigerator

    NASA Astrophysics Data System (ADS)

    Goyal, M.; Chakravarty, A.; Atrey, M. D.

    2017-02-01

    Performance of modern helium refrigeration/liquefaction systems depends significantly on the effectiveness of heat exchangers. Generally, compact plate fin heat exchangers (PFHE) having very high effectiveness (>0.95) are used in such systems. Apart from the basic fluid film resistances, various secondary parameters influence the sizing/rating of these heat exchangers. In the present paper, sizing calculations are performed, using in-house developed numerical models/codes, for a set of high-effectiveness PFHEs for a modified Claude cycle based helium liquefier/refrigerator operating in refrigeration mode without liquid nitrogen (LN2) pre-cooling. The combined effects of secondary parameters such as axial heat conduction through the heat exchanger metal matrix, parasitic heat in-leak from the surroundings, and variation in the fluid/metal properties are accounted for in the sizing calculation. Numerical studies are carried out to predict the off-design performance of the PFHEs in refrigeration mode with LN2 pre-cooling. Iterative process cycle calculations are also carried out to obtain the inlet/exit state points of the heat exchangers.
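    The effectiveness figure quoted (>0.95) can be related to exchanger size through the standard epsilon-NTU relation for a counterflow core, sketched below. This is the textbook relation only; the authors' sizing model additionally accounts for the secondary effects listed above.

```python
import math

def counterflow_effectiveness(ntu, cr):
    """epsilon-NTU relation for a counterflow heat exchanger:
    ntu = UA / C_min, cr = C_min / C_max (capacity-rate ratio)."""
    if abs(cr - 1.0) < 1e-12:
        # Balanced-flow limit (cr = 1)
        return ntu / (1.0 + ntu)
    e = math.exp(-ntu * (1.0 - cr))
    return (1.0 - e) / (1.0 - cr * e)
```

    The relation makes the design difficulty concrete: pushing effectiveness above 0.95 at near-balanced capacity rates requires a very large NTU, which is why secondary effects like axial conduction can no longer be neglected.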

  19. Analytical Study of 90Sr Betavoltaic Nuclear Battery Performance Based on p-n Junction Silicon

    NASA Astrophysics Data System (ADS)

    Rahastama, Swastya; Waris, Abdul

    2016-08-01

    Previously, an analytical calculation of a 63Ni p-n junction betavoltaic battery has been published. As a basic approach, we reproduced the analytical simulation of the 63Ni betavoltaic battery and compared it to previous results using the same battery design. Furthermore, we calculated its maximum power output and radiation-electricity conversion efficiency using a semiconductor analysis method. The same method was then applied to calculate and analyse the performance of a 90Sr betavoltaic battery. The aim of this project is to compare the analytical performance results of the 90Sr betavoltaic battery to the 63Ni betavoltaic battery, and to examine the influence of source activity on performance. Since it has a higher power density, the 90Sr betavoltaic battery yields more power than the 63Ni betavoltaic battery, but lower radiation-electricity conversion efficiency. However, beta particles emitted from the 90Sr source travel further inside the silicon, corresponding to the stopping range of the beta particles; thus the 90Sr betavoltaic battery could be designed thicker than the 63Ni betavoltaic battery to achieve higher conversion efficiency.
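    The source-power comparison underlying the abstract can be sketched from first principles (an illustration, not the authors' semiconductor analysis; the mean beta energies used below, roughly 17.4 keV for 63Ni and 195.8 keV for 90Sr, are representative literature values, not taken from this abstract):

```python
EV_TO_J = 1.602176634e-19  # joules per electronvolt

def beta_source_power(activity_bq, mean_beta_energy_ev):
    """Power released by a beta source (W): decays per second
    times mean beta energy per decay."""
    return activity_bq * mean_beta_energy_ev * EV_TO_J

def electrical_output(activity_bq, mean_beta_energy_ev, conversion_eff):
    """Electrical output for an assumed radiation-electricity
    conversion efficiency (a free parameter here)."""
    return beta_source_power(activity_bq, mean_beta_energy_ev) * conversion_eff
```

    At equal activity, the roughly tenfold higher mean beta energy of 90Sr explains the higher power output noted above, even before the 90Y daughter contribution is counted.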

  20. Basic requirements for a 1000-MW(electric) class tokamak fusion-fission hybrid reactor and its blanket concept

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hatayama, Ariyoshi; Ogasawara, Masatada; Yamauchi, Michinori

    1994-08-01

    Plasma size and other basic performance parameters for 1000-MW(electric) power production are calculated with the blanket energy multiplication factor, the M value, as a parameter. The calculational model is based on the International Thermonuclear Experimental Reactor (ITER) physics design guidelines and includes overall plant power flow. Plasma size decreases as the M value increases. However, the improvement in plasma compactness and other basic performance parameters, such as the total plant power efficiency, becomes saturated above the M = 5 to 7 range. Thus, a value in the M = 5 to 7 range is a reasonable choice for 1000-MW(electric) hybrids. Typical plasma parameters for 1000-MW(electric) hybrids with M = 7 are a major radius of R = 5.2 m, minor radius of a = 1.7 m, plasma current of I_p = 15 MA, and toroidal field on the axis of B_0 = 5 T. The concept of a thermal fission blanket that uses light water as a coolant is selected as an attractive candidate for electricity-producing hybrids. An optimization study is carried out for this blanket concept. The result shows that a compact, simple structure with a uniform fuel composition for the fissile region is sufficient to obtain optimal conditions for suppressing the thermal power increase caused by fuel burnup. The maximum increase in the thermal power is +3.2%. The M value estimated from the neutronics calculations is approximately 7.0, which is confirmed to be compatible with the plasma requirement. These studies show that it is possible to use a tokamak fusion core with design requirements similar to those of ITER for a 1000-MW(electric) power reactor that uses existing thermal reactor technology for the blanket. 30 refs., 22 figs., 4 tabs.

  1. Model for Vortex Ring State Influence on Rotorcraft Flight Dynamics

    NASA Technical Reports Server (NTRS)

    Johnson, Wayne

    2005-01-01

    The influence of vortex ring state (VRS) on rotorcraft flight dynamics is investigated, specifically the vertical velocity drop of helicopters and the roll-off of tiltrotors encountering VRS. The available wind tunnel and flight test data for rotors in vortex ring state are reviewed. Test data for axial flow, non-axial flow, two rotors, unsteadiness, and vortex ring state boundaries are described and discussed. Based on the available measured data, a VRS model is developed. The VRS model is a parametric extension of momentum theory for calculation of the mean inflow of a rotor, hence suitable for simple calculations and real-time simulations. This inflow model is primarily defined in terms of the stability boundary of the aircraft motion. Calculations of helicopter response during VRS encounter were performed, and good correlation is shown with the vertical velocity drop measured in flight tests. Calculations of tiltrotor response during VRS encounter were performed, showing the roll-off behavior characteristic of tiltrotors. Hence it is possible, using a model of the mean inflow of an isolated rotor, to explain the basic behavior of both helicopters and tiltrotors in vortex ring state.
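    The baseline that the paper's parametric VRS model extends is classical momentum theory for the mean induced inflow of a rotor. A minimal sketch of that baseline (illustrative thrust, density, and radius values; not the paper's model or data):

```python
import math

def hover_induced_velocity(thrust, rho, radius):
    """Momentum-theory mean induced velocity in hover: v_h = sqrt(T / (2 * rho * A))."""
    area = math.pi * radius ** 2
    return math.sqrt(thrust / (2.0 * rho * area))

def climb_induced_velocity(v_h, climb_rate):
    """Momentum theory in axial climb (Vc >= 0): v_i = -Vc/2 + sqrt((Vc/2)^2 + v_h^2).
    This branch is invalid in descent near the vortex ring state, which is
    exactly where parametric extensions like the paper's model are needed."""
    return -climb_rate / 2.0 + math.sqrt((climb_rate / 2.0) ** 2 + v_h ** 2)

# Illustrative numbers: ~50 kN of thrust, sea-level air, 8 m rotor radius
v_h = hover_induced_velocity(50e3, 1.225, 8.0)
```

    Because this closed form breaks down in descent, real-time simulations replace or blend it with an empirically fitted inflow curve there, which is the role of the VRS model described above.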

  2. [Effects of sexual maturation on body composition, dermatoglyphics, somatotype and basic physical qualities of adolescents].

    PubMed

    Linhares, Renato Vidal; Matta, Marcelo de Oliveira; Lima, Jorge R P; Dantas, Paulo M Silva; Costa, Mônica Barros; Fernandes Filho, José

    2009-02-01

    To describe the characteristics of body composition, somatotype, basic physical qualities, dermatoglyphics, and bone age across the sexual maturation stages of boys. A cross-sectional study was carried out in 136 boys between 10 and 14 years of age. Clinical assessment, physical examination, and radiography of the wrists and hands to calculate bone age were performed. A tendency of increasing total body mass, stature, body mass index, body bone diameters, muscle circumferences, and basic physical qualities was found with the advance of puberty. No differences were found in dermatoglyphics or somatotype between the different stages of pubertal maturation. Given the changes in important parameters of physical training that occur during puberty, it can be concluded that the selection of children and adolescents for sports training and competition should be based not only on chronological age but also, and mainly, on sexual maturation, for better physical assessment and appropriate training for this population.

  3. Variable displacement alpha-type Stirling engine

    NASA Astrophysics Data System (ADS)

    Homutescu, V. M.; Bălănescu, D. T.; Panaite, C. E.; Atanasiu, M. V.

    2016-08-01

    The basic design and construction of an alpha-type Stirling engine with on-load variable displacement is presented. The variable displacement is obtained through a planar quadrilateral linkage with one ground link movable on load. The physico-mathematical model used for analyzing the behavior of the variable-displacement alpha-type Stirling engine is an isothermal model that takes into account the real movement of the pistons. The performance and power-adjustment capabilities of such an alpha-type Stirling engine are calculated and analyzed, and are illustrated by numerical simulation.

  4. Analysis of recruitment and industrial human resources management for optimal productivity in the presence of the HIV/AIDS epidemic.

    PubMed

    Okosun, Kazeem O; Makinde, Oluwole D; Takaidza, Isaac

    2013-01-01

    The aim of this paper is to analyze the recruitment effects of susceptible and infected individuals in order to assess the productivity of an organizational labor force in the presence of HIV/AIDS, with preventive and HAART treatment measures enhancing the workforce output. We consider constant controls as well as time-dependent controls. In the constant control case, we calculate the basic reproduction number and investigate the existence and stability of equilibria. The model is found to exhibit backward and Hopf bifurcations, implying that for the disease to be eradicated, the basic reproductive number must be below a critical value of less than one. We also investigate, by calculating sensitivity indices, the sensitivity of the basic reproductive number to the model's parameters. In the time-dependent control case, we use Pontryagin's maximum principle to derive necessary conditions for the optimal control of the disease. Finally, numerical simulations are performed to illustrate the analytical results. The cost-effectiveness analysis results show that optimal efforts on recruitment (HIV screening of applicants, etc.) are not the most cost-effective strategy to enhance productivity in the organizational labor force. Hence, to enhance employees' productivity, effective education programs and strict adherence to preventive measures should be promoted.
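    The paper's HIV/AIDS model is far richer, but the two quantities named here can be illustrated on a plain SIR model, where R0 = beta/gamma and the normalized forward-sensitivity index of R0 to a parameter p is (dR0/dp)(p/R0). A hedged sketch with hypothetical rate values, not the paper's model:

```python
def r0_sir(beta, gamma):
    """Basic reproduction number of a plain SIR model: R0 = beta / gamma."""
    return beta / gamma

def sensitivity_index(f, params, name, h=1e-8):
    """Normalized forward-sensitivity index (df/dp) * (p / f),
    with the derivative taken by a forward finite difference."""
    base = f(**params)
    bumped = dict(params, **{name: params[name] + h})
    deriv = (f(**bumped) - base) / h
    return deriv * params[name] / base

params = {"beta": 0.3, "gamma": 0.1}  # hypothetical infection and recovery rates
```

    For R0 = beta/gamma the indices are exactly +1 for beta and -1 for gamma: a 10% rise in the transmission rate raises R0 by 10%, and a 10% rise in the recovery rate lowers it by the same proportion.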

  5. Does the Aristotle Score predict outcome in congenital heart surgery?

    PubMed

    Kang, Nicholas; Tsang, Victor T; Elliott, Martin J; de Leval, Marc R; Cole, Timothy J

    2006-06-01

    The Aristotle Score has been proposed as a measure of 'complexity' in congenital heart surgery, and a tool for comparing performance amongst different centres. To date, however, it remains unvalidated. We examined whether the Basic Aristotle Score was a useful predictor of mortality following open-heart surgery, and compared it to the Risk Adjustment in Congenital Heart Surgery (RACHS-1) system. We also examined the ability of the Aristotle Score to measure performance. The Basic Aristotle Score and RACHS-1 risk categories were assigned retrospectively to 1085 operations involving cardiopulmonary bypass in children less than 18 years of age. Multiple logistic regression analysis was used to determine the significance of the Aristotle Score and RACHS-1 category as independent predictors of in-hospital mortality. Operative performance was calculated using the Aristotle equation: performance = complexity x survival. Multiple logistic regression identified RACHS-1 category to be a powerful predictor of mortality (Wald 17.7, p < 0.0001), whereas Aristotle Score was only weakly associated with mortality (Wald 4.8, p = 0.03). Age at operation and bypass time were also highly significant predictors of postoperative death (Wald 13.7 and 33.8, respectively, p < 0.0001 for both). Operative performance was measured at 7.52 units. The Basic Aristotle Score was only weakly associated with postoperative mortality in this series. Operative performance appeared to be inflated by the fact that the overall complexity of cases was relatively high in this series. An alternative equation (performance = complexity/mortality) is proposed as a fairer and more logical method of risk-adjustment.
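    The two performance formulas under discussion are simple arithmetic. A sketch using illustrative numbers (a mean complexity of 8.0 and 94% survival, chosen only because their product reproduces the reported 7.52 units; they are not values stated in the abstract):

```python
def aristotle_performance(mean_complexity, survival_rate):
    """Aristotle equation as used in the study: performance = complexity x survival."""
    return mean_complexity * survival_rate

def proposed_performance(mean_complexity, mortality_rate):
    """The authors' proposed alternative: performance = complexity / mortality.
    Unlike the product form, it is not inflated by high mean complexity alone."""
    return mean_complexity / mortality_rate

performance = aristotle_performance(8.0, 0.94)  # -> 7.52 units
```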

  6. Preliminary results of the calculated and experimental studies of the basic aerothermodynamic parameters of the ExoMars landing module

    NASA Astrophysics Data System (ADS)

    Finchenko, V. S.; Ivankov, A. A.; Shmatov, S. I.; Mordvinkin, A. S.

    2015-12-01

    The article presents the initial data for the aerothermodynamic calculations of the ExoMars landing module, the calculation methods used, the calculated aerodynamic characteristics of the landing module shape, and the structural parameters of the thermal protection selected during the conceptual design phase. Test results on the destruction of the thermal protection material are also presented, together with a comparison of the basic characteristics of landing modules with a front shield in the form of a cone and of a spherical segment.

  7. Response of space shuttle insulation panels to acoustic noise pressure

    NASA Technical Reports Server (NTRS)

    Vaicaitis, R.

    1976-01-01

    The response of reusable space shuttle insulation panels to random acoustic pressure fields is studied. The basic analytical approach in formulating the governing equations of motion uses a Rayleigh-Ritz technique. The input pressure field is modeled as a stationary Gaussian random process for which the cross-spectral density function is known empirically from experimental measurements. The response calculations are performed in both the frequency and time domains.

  8. Computer-aided linear-circuit design.

    NASA Technical Reports Server (NTRS)

    Penfield, P.

    1971-01-01

    Usually computer-aided design (CAD) refers to programs that analyze circuits conceived by the circuit designer. Among the services such programs should perform are direct network synthesis, analysis, optimization of network parameters, formatting, storage of miscellaneous data, and related calculations. The program should be embedded in a general-purpose conversational language such as BASIC, JOSS, or APL. Such a program is MARTHA, a general-purpose linear-circuit analyzer embedded in APL.

  9. Processing Infrared Images For Fire Management Applications

    NASA Astrophysics Data System (ADS)

    Warren, John R.; Pratt, William K.

    1981-12-01

    The USDA Forest Service has used airborne infrared systems for forest fire detection and mapping for many years. The transfer of the images from plane to ground and the transposition of fire spots and perimeters to maps has been performed manually. A new system has been developed which uses digital image processing, transmission, and storage. Interactive graphics, high resolution color display, calculations, and computer model compatibility are featured in the system. Images are acquired by an IR line scanner and converted to 1024 x 1024 x 8 bit frames for transmission to the ground at a 1.544-Mbit/s rate over a 14.7-GHz carrier. Individual frames are received and stored, then transferred to a solid state memory to refresh the display at a conventional 30 frames per second rate. Line length and area calculations, false color assignment, X-Y scaling, and image enhancement are available. Fire spread can be calculated for display and fire perimeters plotted on maps. The performance requirements, basic system, and image processing are described.

  10. Estimating Basic Preliminary Design Performances of Aerospace Vehicles

    NASA Technical Reports Server (NTRS)

    Luz, Paul L.; Alexander, Reginald

    2004-01-01

    Aerodynamics and Performance Estimation Toolset is a collection of four software programs for rapidly estimating the preliminary design performance of aerospace vehicles, using simplified calculations based on ballistic trajectories, the ideal rocket equation, and supersonic wedges in a standard atmosphere. The toolset consists of a set of Microsoft Excel worksheet subprograms. The input and output data are presented in a user-friendly format, and calculations are performed rapidly enough that the user can iterate among different trajectories and/or shapes to perform "what-if" studies. Estimates that can be computed by these programs include:
    1. Ballistic trajectories as a function of departure angles, initial velocities, initial positions, and target altitudes, assuming point masses and no atmosphere. The program plots the trajectory in two dimensions and outputs the position, pitch, and velocity along the trajectory.
    2. The "Rocket Equation" program calculates and plots the trade space for a vehicle's propellant mass fraction over a range of specific impulse and mission velocity values.
    3. "Standard Atmosphere" estimates the temperature, speed of sound, pressure, and air density as functions of altitude in a standard atmosphere.
    4. "Supersonic Wedges" calculates the free-stream, normal-shock, oblique-shock, and isentropic flow properties for a wedge-shaped body flying supersonically through a standard atmosphere. It also calculates the maximum angle for which a shock remains attached, and the minimum Mach number for which a shock becomes attached, as functions of the wedge angle, altitude, and Mach number.
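    The "Rocket Equation" trade space follows directly from the ideal rocket equation, dv = Isp * g0 * ln(m0/mf). A minimal sketch of that calculation (not the toolset's Excel implementation; the example point is illustrative):

```python
import math

G0 = 9.80665  # standard gravity, m/s^2

def propellant_mass_fraction(delta_v, isp):
    """Ideal rocket equation rearranged for the propellant mass fraction:
    pmf = 1 - mf/m0 = 1 - exp(-dv / (Isp * g0))."""
    return 1.0 - math.exp(-delta_v / (isp * G0))

# Illustrative point in the trade space: 9.3 km/s mission velocity, Isp = 450 s
pmf = propellant_mass_fraction(9300.0, 450.0)  # roughly 0.88
```

    Sweeping delta_v and isp over ranges reproduces the kind of trade-space plot the program generates.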

  11. Discrete Element Modeling (DEM) of Triboelectrically Charged Particles: Revised Experiments

    NASA Technical Reports Server (NTRS)

    Hogue, Michael D.; Calle, Carlos I.; Curry, D. R.; Weitzman, P. S.

    2008-01-01

    In a previous work, the addition of basic screened Coulombic electrostatic forces to an existing commercial discrete element modeling (DEM) software was reported. Triboelectric experiments were performed to charge glass spheres rolling on inclined planes of various materials. Charge generation constants and the Q/m ratios for the test materials were calculated from the experimental data and compared to the simulation output of the DEM software. In this paper, we will discuss new values of the charge generation constants calculated from improved experimental procedures and data. Also, planned work to include dielectrophoretic, Van der Waals forces, and advanced mechanical forces into the software will be discussed.

  12. Combining Basic Business Math and Electronic Calculators.

    ERIC Educational Resources Information Center

    Merchant, Ronald

    As a means of alleviating math anxiety among business students and of improving their business machine skills, Spokane Falls Community College offers a course in which basic business math skills are mastered through the use of desk top calculators. The self-paced course, which accommodates varying student skill levels, requires students to: (1)…

  13. Basic Mathematics Machine Calculator Course.

    ERIC Educational Resources Information Center

    Windsor Public Schools, CT.

    This series of four text-workbooks was designed for tenth grade mathematics students who have exhibited lack of problem-solving skills. Electric desk calculators are to be used with the text. In the first five chapters of the series, students learn how to use the machine while reviewing basic operations with whole numbers, decimals, fractions, and…

  14. SHARP User Manual

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yu, Y. Q.; Shemon, E. R.; Thomas, J. W.

    SHARP is an advanced modeling and simulation toolkit for the analysis of nuclear reactors. It comprises several components, including physical modeling tools, tools to integrate the physics codes for multi-physics analyses, and a set of tools to couple the codes within the MOAB framework. Physics modules currently include the neutronics code PROTEUS, the thermal-hydraulics code Nek5000, and the structural mechanics code Diablo. This manual focuses on performing multi-physics calculations with the SHARP toolkit. Manuals for the three individual physics modules are available with the SHARP distribution to help the user either carry out the primary multi-physics calculation with basic knowledge or perform further advanced development with in-depth knowledge of these codes. This manual provides step-by-step instructions on employing SHARP, including how to download and install the code, how to build the drivers for a test case, how to perform a calculation, and how to visualize the results. Since SHARP has some specific library and environment dependencies, it is highly recommended that the user read this manual prior to installing SHARP. Verification test cases are included to check proper installation of each module. It is suggested that the new user first follow the step-by-step instructions provided for a test problem in this manual to understand the basic procedure of using SHARP before using SHARP for his/her own analysis. Both reference output and scripts are provided along with the test cases in order to verify correct installation and execution of the SHARP package. At the end of this manual, detailed instructions are provided on how to create a new test case so that the user can perform novel multi-physics calculations with SHARP. Frequently asked questions are listed at the end of this manual to help the user troubleshoot issues.

  15. GRAPE project

    NASA Astrophysics Data System (ADS)

    Makino, Junichiro

    2002-12-01

    We overview our GRAvity PipE (GRAPE) project to develop special-purpose computers for astrophysical N-body simulations. The basic idea of GRAPE is to attach a custom-built computer dedicated to the calculation of gravitational interaction between particles to a general-purpose programmable computer. With this hybrid architecture, we can achieve both a wide range of applications and very high peak performance. Our newest machine, GRAPE-6, achieved a peak speed of 32 Tflops, and sustained performance of 11.55 Tflops, for a total budget of about 4 million USD. We also discuss the relative advantages of special-purpose and general-purpose computers and the future of high-performance computing for science and technology.
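    The interaction that GRAPE pipelines hard-wire is plain direct-summation gravity. A pure-Python sketch of that O(N^2) kernel (G = 1 units, Plummer softening eps; illustrative only, with none of GRAPE-6's performance):

```python
def accelerations(masses, positions, eps=1e-3):
    """Direct-summation gravitational accelerations (G = 1): the O(N^2)
    pairwise kernel that GRAPE hardware evaluates. eps is a softening
    length that regularizes close encounters."""
    n = len(masses)
    acc = [[0.0, 0.0, 0.0] for _ in range(n)]
    for i in range(n):
        xi, yi, zi = positions[i]
        for j in range(n):
            if i == j:
                continue
            dx = positions[j][0] - xi
            dy = positions[j][1] - yi
            dz = positions[j][2] - zi
            r2 = dx * dx + dy * dy + dz * dz + eps * eps
            inv_r3 = r2 ** -1.5
            acc[i][0] += masses[j] * dx * inv_r3
            acc[i][1] += masses[j] * dy * inv_r3
            acc[i][2] += masses[j] * dz * inv_r3
    return acc
```

    Offloading exactly this loop to dedicated silicon, while the host handles everything else, is the hybrid division of labor described above.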

  16. An Overview of Demise Calculations, Conceptual Design Studies, and Hydrazine Compatibility Testing for the GPM Core Spacecraft Propellant Tank

    NASA Technical Reports Server (NTRS)

    Estes, Robert H.; Moore, N. R.

    2007-01-01

    NASA's Global Precipitation Measurement (GPM) mission is an ongoing Goddard Space Flight Center (GSFC) project whose basic objective is to improve global precipitation measurements. It has been decided that the GPM spacecraft is to be a "design for demise" spacecraft. This requirement resulted in the need for a propellant tank that would also demise or ablate to an appropriate degree upon re-entry. This paper will describe GSFC-performed spacecraft and tankage demise analyses, vendor conceptual design studies, and vendor performed hydrazine compatibility and wettability tests performed on 6061 and 2219 aluminum alloys.

  17. The Test of Logical Thinking as a predictor of first-year pharmacy students' performance in required first-year courses.

    PubMed

    Etzler, Frank M; Madden, Michael

    2014-08-15

    To investigate the correlation of scores on the Test of Logical Thinking (TOLT) with first-year pharmacy students' performance in selected courses. The TOLT was administered to 130 first-year pharmacy students. The examination was administered during the first quarter in a single session. The TOLT scores correlated with grades earned in Pharmaceutical Calculations, Physical Pharmacy, and Basic Pharmacokinetics courses. Performance on the TOLT has been correlated to performance in courses that required the ability to use quantitative reasoning to complete required tasks. In the future, it may be possible to recommend remediation, retention, and/or admission based in part on the results from the TOLT.

  18. Theoretical and experimental studies on vibrational and nonlinear optic properties of guanidinium 3-nitrobenzoate. Differences and similarity between guanidinium 3-nitrobenzoate and guanidinium 4-nitrobenzoate complexes

    NASA Astrophysics Data System (ADS)

    Drozd, Marek

    2018-03-01

    According to literature data, two structures of guanidine with nitrobenzoic acids are known. For guanidinium 4-nitrobenzoate, detailed studies of the X-ray structure and of the vibrational and theoretical properties have been performed; this compound was classified as a second-harmonic generator with an efficiency 3.3 times that of KDP, the standard crystal. For guanidinium 3-nitrobenzoate, in contrast, only a basic X-ray diffraction study had been performed. Starting from the established crystallographic results, a detailed investigation of the geometry and vibrational properties was carried out by theoretical calculation, yielding the equilibrium geometry of the investigated molecule. On this basis, detailed computational studies of the vibrational properties were performed, and the theoretical IR and Raman frequencies, intensities, and PED analysis are presented. Additionally, the NBO charges, HOMO and LUMO shapes, and NLO properties of the title crystal were calculated. From these results, the crystal is classified as a second-order NLO generator with higher efficiency than the guanidinium 4-nitrobenzoate compound. The obtained data are compared with experimental crystallographic and vibrational results for a real crystal of guanidinium 3-nitrobenzoate, and the theoretical vibrational spectra are compared with literature calculations for the guanidinium 4-nitrobenzoate compound.

  19. GW Calculations of Materials on the Intel Xeon-Phi Architecture

    NASA Astrophysics Data System (ADS)

    Deslippe, Jack; da Jornada, Felipe H.; Vigil-Fowler, Derek; Biller, Ariel; Chelikowsky, James R.; Louie, Steven G.

    Intel Xeon-Phi processors are expected to power a large number of High-Performance Computing (HPC) systems around the United States and the world in the near future. We evaluate the ability of GW and prerequisite Density Functional Theory (DFT) calculations for materials to utilize the Xeon-Phi architecture. We describe the optimization process and the performance improvements achieved. We find that the GW method, like other higher-level many-body methods beyond standard local/semilocal approximations to Kohn-Sham DFT, is particularly well suited for many-core architectures due to the ability to exploit a large amount of parallelism over plane-waves, band-pairs, and frequencies. Support provided by the SciDAC program, Department of Energy, Office of Science, Advanced Scientific Computing Research and Basic Energy Sciences. Grant Numbers DE-SC0008877 (Austin) and DE-AC02-05CH11231 (LBNL).

  20. Aerodynamic performances of three fan stator designs operating with rotor having tip speed of 337 meters per second and pressure ratio of 1.54. Relation of analytical code calculations to experimental performance

    NASA Technical Reports Server (NTRS)

    Gelder, T. F.; Schmidt, J. F.; Esgar, G. M.

    1980-01-01

    A hub-to-shroud and a blade-to-blade internal-flow analysis code, both inviscid and basically subsonic, were used to calculate the flow parameters within four stator-blade rows. The computed ratios of maximum suction-surface velocity to trailing-edge velocity correlated well, in the midspan region, with the measured loss parameters over the minimum-loss to near-stall operating range for all stators and speeds studied. The potential benefits of a blade designed with the aid of these flow-analysis codes are illustrated by a proposed redesign of one of the four stators studied. An overall efficiency improvement of 1.6 points above the peak measured for that stator is predicted for the redesign.

  1. Implementation and performance of FDPS: a framework for developing parallel particle simulation codes

    NASA Astrophysics Data System (ADS)

    Iwasawa, Masaki; Tanikawa, Ataru; Hosono, Natsuki; Nitadori, Keigo; Muranushi, Takayuki; Makino, Junichiro

    2016-08-01

    We present the basic idea, implementation, measured performance, and performance model of FDPS (Framework for Developing Particle Simulators). FDPS is an application-development framework which helps researchers to develop simulation programs using particle methods for large-scale distributed-memory parallel supercomputers. A particle-based simulation program for distributed-memory parallel computers needs to perform domain decomposition, exchange of particles which are not in the domain of each computing node, and gathering of the particle information in other nodes which are necessary for interaction calculation. Also, even if distributed-memory parallel computers are not used, in order to reduce the amount of computation, algorithms such as the Barnes-Hut tree algorithm or the Fast Multipole Method should be used in the case of long-range interactions. For short-range interactions, some methods to limit the calculation to neighbor particles are required. FDPS provides all of these functions which are necessary for efficient parallel execution of particle-based simulations as "templates," which are independent of the actual data structure of particles and the functional form of the particle-particle interaction. By using FDPS, researchers can write their programs with the amount of work necessary to write a simple, sequential and unoptimized program of O(N^2) calculation cost, and yet the program, once compiled with FDPS, will run efficiently on large-scale parallel supercomputers. A simple gravitational N-body program can be written in around 120 lines. We report the actual performance of these programs and the performance model. The weak scaling performance is very good, and almost linear speed-up was obtained for up to the full system of the K computer. The minimum calculation time per timestep is in the range of 30 ms (N = 10^7) to 300 ms (N = 10^9). These are currently limited by the time for the calculation of the domain decomposition and communication necessary for the interaction calculation. We discuss how we can overcome these bottlenecks.

  2. DYNGEN: A program for calculating steady-state and transient performance of turbojet and turbofan engines

    NASA Technical Reports Server (NTRS)

    Sellers, J. F.; Daniele, C. J.

    1975-01-01

    The DYNGEN, a digital computer program for analyzing the steady state and transient performance of turbojet and turbofan engines, is described. The DYNGEN is based on earlier computer codes (SMOTE, GENENG, and GENENG 2) which are capable of calculating the steady state performance of turbojet and turbofan engines at design and off-design operating conditions. The DYNGEN has the combined capabilities of GENENG and GENENG 2 for calculating steady state performance; to these the further capability for calculating transient performance was added. The DYNGEN can be used to analyze one- and two-spool turbojet engines or two- and three-spool turbofan engines without modification to the basic program. A modified Euler method is used by DYNGEN to solve the differential equations which model the dynamics of the engine. This new method frees the programmer from having to minimize the number of equations which require iterative solution. As a result, some of the approximations normally used in transient engine simulations can be eliminated. This tends to produce better agreement when answers are compared with those from purely steady state simulations. The modified Euler method also permits the user to specify large time steps (about 0.10 sec) to be used in the solution of the differential equations. This saves computer execution time when long transients are run. Examples of the use of the program are included, and program results are compared with those from an existing hybrid-computer simulation of a two-spool turbofan.
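    The abstract does not spell out DYNGEN's "modified Euler method"; one common reading of that name is the Heun predictor-corrector. A hedged sketch on a toy first-order spool-speed lag (hypothetical dynamics, not DYNGEN's engine model):

```python
def heun_step(f, t, y, dt):
    """One step of the modified Euler (Heun) method: average the slope at the
    start of the step with the slope at the Euler-predicted endpoint."""
    k1 = f(t, y)
    k2 = f(t + dt, y + dt * k1)
    return y + 0.5 * dt * (k1 + k2)

# Toy dynamics: spool speed n lagging a commanded value with time constant tau
tau, n_cmd = 1.0, 100.0
f = lambda t, n: (n_cmd - n) / tau

n, t, dt = 50.0, 0.0, 0.1  # a fairly large step, as the abstract emphasizes
for _ in range(100):
    n = heun_step(f, t, n, dt)
    t += dt
```

    The second-order accuracy is what lets a scheme like this tolerate the large (about 0.10 s) time steps mentioned above while still tracking the transient.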

  3. High-speed assembly language (80386/80387) programming for laser spectra scan control and data acquisition providing improved resolution water vapor spectroscopy

    NASA Technical Reports Server (NTRS)

    Allen, Robert J.

    1988-01-01

    An assembly language program using the Intel 80386 CPU and 80387 math co-processor chips was written to increase the speed of data gathering and processing, and to provide control of a scanning CW ring dye laser system. This laser system is used in high resolution (better than 0.001 cm-1) water vapor spectroscopy experiments. Laser beam power is sensed at the input and output of white cells and the output of a Fabry-Perot. The assembly language subroutine is called from BASIC, acquires the data, and performs various calculations at rates more than 150 times faster than could be achieved by the higher-level language. The width of the output control pulses generated in assembly language is 3 to 4 microseconds, compared to 2 to 3.7 milliseconds for those generated in BASIC (about 500 to 1000 times faster). Included are a block diagram and brief description of the spectroscopy experiment, a flow diagram of the BASIC and assembly language programs, listings of the programs, scope photographs of the computer-generated 5-volt pulses used for control and timing analysis, and representative water spectrum curves obtained using these programs.

  4. Fracture mechanics life analytical methods verification testing

    NASA Technical Reports Server (NTRS)

    Favenesi, J. A.; Clemmons, T. G.; Lambert, T. J.

    1994-01-01

    Verification and validation of the basic information capabilities in NASCRAC have been completed. The basic information includes computation of K versus a, J versus a, and crack opening area versus a. These quantities represent building blocks which NASCRAC uses in its other computations, such as fatigue crack life and tearing instability. Several methods were used to verify and validate the basic information capabilities. The simple configurations, such as the compact tension specimen and a crack in a finite plate, were verified and validated versus handbook solutions for simple loads. For general loads using weight functions, offline integration using standard FORTRAN routines was performed. For more complicated configurations, such as corner cracks and semielliptical cracks, NASCRAC solutions were verified and validated versus published results and finite element analyses. A few minor problems were identified in the basic information capabilities for the simple configurations. In the more complicated configurations, significant differences between NASCRAC and reference solutions were observed because NASCRAC calculates its solutions as averaged values across the entire crack front, whereas the reference solutions were computed for a single point.
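    The "K versus a" building block can be illustrated with a textbook handbook solution, here the center-cracked plate with the Feddersen secant finite-width correction. This is a generic sketch for comparison purposes, not NASCRAC's implementation:

```python
import math

def k_center_crack(stress, half_length, width=None):
    """Mode-I stress intensity factor for a center crack of half-length a under
    remote stress sigma: K = sigma * sqrt(pi * a) in an infinite plate, with
    the finite-width correction sqrt(sec(pi * a / W)) applied when W is given."""
    k = stress * math.sqrt(math.pi * half_length)
    if width is not None:
        k *= math.sqrt(1.0 / math.cos(math.pi * half_length / width))
    return k

# Illustrative case: 100 MPa remote stress, a = 10 mm, 100 mm wide plate
k_inf = k_center_crack(100e6, 0.010)
k_fin = k_center_crack(100e6, 0.010, width=0.100)
```

    Sweeping half_length produces the K-versus-a curve that downstream fatigue-life and tearing-instability calculations consume.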

  5. As-Built documentation of programs to implement the Robertson and Doraiswamy/Thompson models

    NASA Technical Reports Server (NTRS)

    Valenziano, D. J. (Principal Investigator)

    1981-01-01

    The software which implements two spring wheat phenology models is described. The main program routines for the Doraiswamy/Thompson crop phenology model and the basic Robertson crop phenology model are DTMAIN and BRMAIN. These routines read meteorological data files and coefficient files, accept the planting date information and other information from the user, and initiate processing. Daily processing for the basic Robertson program consists only of calculation of the basic Robertson increment of crop development. Additional processing in the Doraiswamy/Thompson program includes the calculation of a moisture stress index and correction of the basic increment of development. Output for both consists of listings of the daily results.

  6. Understanding the Microphysical Properties of Developing Cloud Clusters during TCS-08

    DTIC Science & Technology

    2011-09-30

    resolution (1.67-km) sensitivity simulations have been performed using Typhoon Mawar (2005) from the western North Pacific to demonstrate considerable...cloud-resolving) scheme is used in the model. Initial calculations of some basic cloud properties from infrared imagery for Typhoon Mawar indicate that...Figure 4: Intensity traces of simulated Typhoon Mawar (2005) showing sea-level pressure on the left axis and maximum wind speed on the right axis

  7. Microwave noise temperature and attenuation of clouds - Statistics of these effects at various sites in the United States, Alaska, and Hawaii

    NASA Technical Reports Server (NTRS)

    Slobin, S. D.

    1982-01-01

    The microwave attenuation and noise temperature effects of clouds can result in serious degradation of telecommunications link performance, especially for low-noise systems presently used in deep-space communications. Although cloud effects are generally less than rain effects, the frequent presence of clouds will cause some amount of link degradation a large portion of the time. This paper presents a general review of cloud types and their water particle densities, attenuation and noise temperature calculations, and basic link signal-to-noise ratio calculations. Tabular results of calculations for 12 different cloud models are presented for frequencies in the range 10-50 GHz. Curves of average-year attenuation and noise temperature statistics at frequencies ranging from 10 to 90 GHz, calculated from actual surface and radiosonde observations, are given for 15 climatologically distinct regions in the contiguous United States, Alaska, and Hawaii. Nonuniform sky cover is considered in these calculations.
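    The link between attenuation and noise temperature in such calculations is the standard absorbing-medium relation T = T_m * (1 - 10^(-A/10)). A sketch with an assumed effective medium temperature (the abstract does not give the paper's T_m value):

```python
def sky_noise_temperature(attenuation_db, t_medium=280.0):
    """Noise temperature contributed by an absorbing medium of effective
    physical temperature t_medium (kelvin) and total attenuation A (dB):
    T = t_medium * (1 - 10**(-A/10))."""
    loss = 10.0 ** (attenuation_db / 10.0)
    return t_medium * (1.0 - 1.0 / loss)

# A 1-dB cloud at an assumed 280 K effective medium temperature
t_sky = sky_noise_temperature(1.0)  # roughly 58 K
```

    Even modest cloud attenuation therefore adds tens of kelvins of noise, which is why the frequent presence of clouds matters so much for low-noise deep-space links.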

  8. Study on the variable cycle engine modeling techniques based on the component method

    NASA Astrophysics Data System (ADS)

    Zhang, Lihua; Xue, Hui; Bao, Yuhai; Li, Jijun; Yan, Lan

    2016-01-01

Based on the structure platform of the gas turbine engine, the components of the variable cycle engine were simulated using the component method. The mathematical model of nonlinear equations corresponding to each component of the gas turbine engine was established. Based on Matlab programming, the nonlinear equations were solved using a Newton-Raphson steady-state algorithm, and the performance of the engine components was calculated. The numerical simulation results showed that the model built can describe the basic performance of the gas turbine engine, which verifies the validity of the model.
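A minimal sketch of the Newton-Raphson iteration used for such component-matching equations, here applied to a toy 2x2 system with a finite-difference Jacobian (the actual engine model solves a larger set of matching equations, and the toy residuals below are purely illustrative):

```python
def newton_raphson(f, x0, tol=1e-10, max_iter=50, h=1e-7):
    """Solve f(x) = 0 for a 2-vector x by Newton-Raphson with a
    finite-difference Jacobian (plain lists, no external libraries)."""
    x = list(x0)
    for _ in range(max_iter):
        fx = f(x)
        if max(abs(v) for v in fx) < tol:
            return x
        # Finite-difference Jacobian J[i][j] = d f_i / d x_j
        J = [[0.0, 0.0], [0.0, 0.0]]
        for j in range(2):
            xp = list(x)
            xp[j] += h
            fp = f(xp)
            for i in range(2):
                J[i][j] = (fp[i] - fx[i]) / h
        # Solve J * dx = -fx by Cramer's rule
        det = J[0][0] * J[1][1] - J[0][1] * J[1][0]
        x[0] += (-fx[0] * J[1][1] + fx[1] * J[0][1]) / det
        x[1] += (-J[0][0] * fx[1] + J[1][0] * fx[0]) / det
    return x

# Toy matching equations standing in for component compatibility conditions:
# x^2 + y^2 = 4 and x*y = 1
def residuals(x):
    return [x[0] ** 2 + x[1] ** 2 - 4.0, x[0] * x[1] - 1.0]
```

In the engine model, the residuals would be the flow, work, and speed compatibility conditions between components, and the unknowns the component operating parameters.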

  9. Luminol modified polycarbazole and poly(o-anisidine): Theoretical insights compared with experimental data.

    PubMed

    Jadoun, Sapana; Verma, Anurakshee; Riaz, Ufana

    2018-06-07

With the aim of exploring the effect of luminol as a multifunctional dopant for conjugated polymers, the present study reports the ultrasound-assisted doping of polycarbazole (PCz) and poly(o-anisidine) (PAnis) with luminol in basic, acidic, and neutral media. The synthesized homopolymers and luminol-doped polymers were characterized using FT-IR, UV-visible, and XRD studies, while the photo-physical properties were investigated via fluorescence spectroscopy. Density functional theory (DFT) calculations were performed to gain insight into the structural, optical, and electronic properties of the homopolymers of polycarbazole (PCz) and poly(o-anisidine) (PAnis). Vibrational bands at the B3LYP/6-311G(d,p) level, UV-vis spectral bands, and electronic properties such as ionization potentials (IP), electron affinities (EA), and HOMO-LUMO band gap energies of the homopolymers and doped polymers were calculated and compared. Results revealed that luminol-doped polymers showed different photo-physical characteristics in acidic, basic and neutral media, which could be tuned to obtain near infrared (NIR) emitting polymers. Copyright © 2018 Elsevier B.V. All rights reserved.

  10. Indicator methods to evaluate the hygienic performance of industrial scale operating Biowaste Composting Plants.

    PubMed

    Martens, Jürgen

    2005-01-01

The hygienic performance of biowaste composting plants in ensuring compost quality is of high importance. Existing compost quality assurance systems reflect this importance through intensive testing of hygienic parameters. In many countries, compost quality assurance systems are under development, and it is necessary to check and optimize the methods used to assess the hygienic performance of composting plants. A set of indicator methods to evaluate the hygienic performance of normally operating biowaste composting plants was developed. The indicator methods were developed by investigating temperature measurements from indirect process tests at 23 composting plants belonging to 11 design types of the Hygiene Design Type Testing System of the German Compost Quality Association (BGK e.V.). The presented indicator methods are the grade of hygienization, the basic curve shape, and the hygienic risk area. The temperature courses of single plants are not normally distributed, but cluster analysis grouped them into normally distributed subgroups; this was a precondition for developing the indicator methods. For each plant, the grade of hygienization was calculated through transformation into the standard normal distribution. It gives the percentage of the entire data set that meets the legal temperature requirements. The hygienization grade differs widely within the design types and falls below 50% for about one fourth of the plants. The subgroups are divided visually into basic curve shapes, which stand for different process courses. For each plant, the composition of the entire data set from the various basic curve shapes can be used as an indicator of the basic process conditions. Some basic curve shapes indicate abnormal process courses, which can be corrected through process optimization. A hygienic risk area concept using the 90% range of variation of the normal temperature courses was introduced. Comparing the design-type range of variation with the legal temperature defaults revealed hygienic risk areas over the temperature courses that could be minimized through process optimization. The hygienic risk areas of four design types indicate suboptimal hygienic performance.
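The grade-of-hygienization calculation described above, a transformation into the standard normal distribution, can be sketched as follows. The 55 °C requirement used in the usage line is an illustrative assumption, not the paper's legal threshold:

```python
import math

def hygienization_grade(mean_temp, std_temp, required_temp):
    """Percentage of the temperature data expected to meet the legal
    requirement, assuming the (sub)group is normally distributed:
    100 * P(T >= T_required) from the standard normal CDF."""
    z = (required_temp - mean_temp) / std_temp
    cdf = 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))
    return 100.0 * (1.0 - cdf)

# Illustration: a subgroup averaging 60 degC (SD 5 K) against an assumed
# 55 degC requirement meets it about 84% of the time.
grade = hygienization_grade(60.0, 5.0, 55.0)
```

A plant whose mean temperature sits exactly at the requirement scores 50%, matching the paper's observation that grades below 50% flag poor hygienic performance.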

  11. Adsorption of 6-mercaptopurine and 6-mercaptopurine riboside on silver colloid: a pH dependent surface enhanced Raman spectroscopy and density functional theory study. Part I. 6-Mercaptopurine

    NASA Astrophysics Data System (ADS)

    Szeghalmi, A. V.; Leopold, L.; Pînzaru, S.; Chis, V.; Silaghi-Dumitrescu, I.; Schmitt, M.; Popp, J.; Kiefer, W.

    2005-02-01

Surface enhanced Raman spectroscopy (SERS) on silver colloid has been applied to characterize the interaction of 6-mercaptopurine (6MP), an active drug used in chemotherapy of acute lymphoblastic leukemia, with a model biological substrate at therapeutic concentrations and as a function of pH. The adsorption active sites and molecular orientation on the metal surface have been determined on the basis of SERS 'surface selection rules' subsequent to a detailed vibrational analysis of the 6MP tautomeric forms. To this end, DFT calculations (vibrational wavenumbers, Raman scattering activities, partial atomic charges) of the optimized tautomers and potential energy distribution calculations have been performed. Around neutral pH, reorientation of the molecule has been observed. Under basic conditions the 6MP molecule is probably adsorbed on the silver colloid through the N1 atom of the purine ring and possibly the S atom, and adopts a tilted orientation to the surface. A reduction in the number of adsorbed molecules under basic conditions is proposed, since the SERS spectrum recorded at 10^-6 M concentration at neutral pH resembles the SERS spectra obtained under basic conditions at 10^-5 M concentration. At acidic pH values a stronger interaction through the N9 and N3 atoms is suggested, with an end-on orientation.

  12. Calculating with light using a chip-scale all-optical abacus.

    PubMed

    Feldmann, J; Stegmaier, M; Gruhler, N; Ríos, C; Bhaskaran, H; Wright, C D; Pernice, W H P

    2017-11-02

    Machines that simultaneously process and store multistate data at one and the same location can provide a new class of fast, powerful and efficient general-purpose computers. We demonstrate the central element of an all-optical calculator, a photonic abacus, which provides multistate compute-and-store operation by integrating functional phase-change materials with nanophotonic chips. With picosecond optical pulses we perform the fundamental arithmetic operations of addition, subtraction, multiplication, and division, including a carryover into multiple cells. This basic processing unit is embedded into a scalable phase-change photonic network and addressed optically through a two-pulse random access scheme. Our framework provides first steps towards light-based non-von Neumann arithmetic.

  13. Validation of the analytical methods in the LWR code BOXER for gadolinium-loaded fuel pins

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Paratte, J.M.; Arkuszewski, J.J.; Kamboj, B.K.

    1990-01-01

Due to the very high absorption occurring in gadolinium-loaded fuel pins, calculations of lattices with such pins present are a demanding test of the analysis methods in light water reactor (LWR) cell and assembly codes. Considerable effort has, therefore, been devoted to the validation of code methods for gadolinia fuel. The goal of the work reported in this paper is to check the analysis methods in the LWR cell/assembly code BOXER and its associated cross-section processing code ETOBOX, by comparison of BOXER results with those from a very accurate Monte Carlo calculation for a gadolinium benchmark problem. Initial results of such a comparison have been previously reported. However, the Monte Carlo calculations, done with the MCNP code, were performed at Los Alamos National Laboratory using ENDF/B-V data, while the BOXER calculations were performed at the Paul Scherrer Institute using JEF-1 nuclear data. This difference in the basic nuclear data used for the two calculations, caused by the restricted nature of these evaluated data files, led to associated uncertainties in a comparison of the results for methods validation. In the joint investigations at the Georgia Institute of Technology and PSI, such uncertainty in this comparison was eliminated by using ENDF/B-V data for BOXER calculations at Georgia Tech.

  14. Structural response calculations for a reverse ballistics test of an earth penetrator

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Alves, D.F.; Goudreau, G.L.

    1976-08-01

A dynamic response calculation has been performed on a half-scale earth penetrator to be tested on a reverse ballistics test in Aug. 1976. In this test a 14 in. dia sandstone target is fired at the EP at 1800 ft/sec at normal impact. Basically two types of calculations were made. The first utilized an axisymmetric, finite element code DTVIS2 in the dynamic mode and with materials having linear elastic properties. CRT's radial and axial force histories were smoothed to eliminate grid encounter frequency and applied to the nodal points along the nose of the penetrator. Given these inputs DTVIS2 then calculated the internal dynamic response. Secondly, SAP4, a structural analysis code, is utilized to calculate axial frequencies and mode shapes of the structure. A special one dimensional display facilitates interpretation of the mode shape. DTVIS2 and SAP4 use a common mesh description. Special considerations in the calculation are the assessment of the effect of gaps and preload and the internal axial sliding of components.

  15. GPU computing in medical physics: a review.

    PubMed

    Pratx, Guillem; Xing, Lei

    2011-05-01

    The graphics processing unit (GPU) has emerged as a competitive platform for computing massively parallel problems. Many computing applications in medical physics can be formulated as data-parallel tasks that exploit the capabilities of the GPU for reducing processing times. The authors review the basic principles of GPU computing as well as the main performance optimization techniques, and survey existing applications in three areas of medical physics, namely image reconstruction, dose calculation and treatment plan optimization, and image processing.

  16. Vector 33: A reduce program for vector algebra and calculus in orthogonal curvilinear coordinates

    NASA Astrophysics Data System (ADS)

    Harper, David

    1989-06-01

This paper describes a package which enables REDUCE 3.3 to perform algebra and calculus operations upon vectors. Basic algebraic operations between vectors and between scalars and vectors are provided, including the scalar (dot) product and vector (cross) product. The vector differential operators curl, divergence, gradient and Laplacian are also defined, and are valid in any orthogonal curvilinear coordinate system. The package is written in RLISP to allow algebra and calculus to be performed using notation identical to that used for scalar operations. Scalars and vectors can be mixed quite freely in the same expression. The package will be of interest to mathematicians, engineers and scientists who need to perform vector calculations in orthogonal curvilinear coordinates.
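The two basic products the package provides can be illustrated numerically. The package itself is symbolic (written in RLISP for REDUCE); this Python sketch is only for orientation:

```python
def dot(a, b):
    """Scalar (dot) product of two 3-vectors."""
    return sum(x * y for x, y in zip(a, b))

def cross(a, b):
    """Vector (cross) product of two 3-vectors."""
    return [a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0]]
```

A quick sanity check is the identity a · (a × b) = 0: the cross product is always perpendicular to both of its arguments.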

  17. Development and Application of Collaborative Optimization Software for Plate - fin Heat Exchanger

    NASA Astrophysics Data System (ADS)

    Chunzhen, Qiao; Ze, Zhang; Jiangfeng, Guo; Jian, Zhang

    2017-12-01

This paper introduces the design ideas behind a calculation software package for plate-fin heat exchangers, together with application examples. Because designing and optimizing heat exchangers involves a large amount of computation, we developed a basic calculation program in Visual Basic 6.0 to reduce this workload. The design case is a plate-fin heat exchanger sized for boiler tail flue gas, and the software is based on the traditional design method for plate-fin heat exchangers. Using the software for the design and calculation of plate-fin heat exchangers effectively reduces the amount of computation while producing results comparable to the traditional methods.
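The abstract does not spell out the traditional design method. As one illustrative ingredient of such a sizing calculation, here is the standard effectiveness-NTU relation for a counterflow exchanger (an assumption about what the method involves, not taken from the paper):

```python
import math

def effectiveness_counterflow(ntu, c_ratio):
    """Effectiveness of a counterflow heat exchanger from the standard
    e-NTU relation, with c_ratio = C_min / C_max."""
    if abs(c_ratio - 1.0) < 1e-12:
        return ntu / (1.0 + ntu)  # balanced-flow limit
    e = math.exp(-ntu * (1.0 - c_ratio))
    return (1.0 - e) / (1.0 - c_ratio * e)

def heat_duty(ntu, c_min, c_max, t_hot_in, t_cold_in):
    """Heat transferred (W) given capacity rates (W/K) and inlet temps (degC)."""
    eps = effectiveness_counterflow(ntu, c_min / c_max)
    return eps * c_min * (t_hot_in - t_cold_in)
```

A design program iterates relations like this together with fin-geometry correlations until the required duty and pressure drops are met, which is exactly the repetitive arithmetic the software automates.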

  18. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Braiman, Yehuda; Neschke, Brendan; Nair, Niketh S.

Here, we study memory states of a circuit consisting of a small inductively coupled Josephson junction array and introduce the basic (write, read, and reset) memory operation logics of the circuit. The presented memory operation paradigm is fundamentally different from conventional single-flux-quantum operation logics. We calculate stability diagrams of the zero-voltage states and outline the memory states of the circuit. We also calculate access times and access energies for the basic memory operations.

  19. Validation of Calculations in a Digital Thermometer Firmware

    NASA Astrophysics Data System (ADS)

    Batagelj, V.; Miklavec, A.; Bojkovski, J.

    2014-04-01

State-of-the-art digital thermometers are arguably remarkable measurement instruments, measuring outputs from resistance thermometers and/or thermocouples. Not only can they readily achieve measuring accuracies in the parts-per-million range, but they also incorporate sophisticated algorithms for converting the measured resistance or voltage to temperature. These algorithms often include high-order polynomials, exponentials, and logarithms, and must be evaluated using both standard coefficients and probe-specific calibration coefficients. The numerical accuracy of these calculations and the associated uncertainty component must be much better than the accuracy of the raw measurement in order to be negligible in the total measurement uncertainty. For the end-user to gain confidence in these calculations, as well as to conform to the formal requirements of ISO/IEC 17025 and other standards, a way to validate these numerical procedures, as performed in the firmware of the instrument, is required. A software architecture which allows simple validation of internal measuring instrument calculations is suggested: the digital thermometer should expose all its internal calculation functions to the communication interface, so the end-user can compare the results of the internal calculations with reference results. The method can be regarded as a variation of black-box software validation. Validation results on a thermometer prototype with this validation ability implemented show that the calculation error of basic arithmetic operations is within the expected rounding error. For conversion functions, the calculation error is at least ten times smaller than the thermometer's effective resolution for the particular probe type.
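The black-box comparison described above can be sketched as follows, using the IEC 60751 Callendar-Van Dusen relation for a Pt100 probe as the reference conversion. The paper does not specify which conversion functions were tested; this probe type is an assumption for illustration:

```python
import math

# IEC 60751 Callendar-Van Dusen coefficients for a Pt100 sensor (t >= 0 degC)
R0, A, B = 100.0, 3.9083e-3, -5.775e-7

def resistance_to_temp(r_ohm):
    """Reference resistance-to-temperature conversion (degC), inverting
    R = R0 * (1 + A*t + B*t^2) with the quadratic formula (t >= 0 degC)."""
    disc = A * A - 4.0 * B * (1.0 - r_ohm / R0)
    return (-A + math.sqrt(disc)) / (2.0 * B)

def validate(firmware_convert, test_resistances, max_error_degc=0.001):
    """Black-box check: compare the instrument's exposed conversion function
    against the reference at a set of test points; return (pass, worst error)."""
    worst = max(abs(firmware_convert(r) - resistance_to_temp(r))
                for r in test_resistances)
    return worst <= max_error_degc, worst
```

Feeding the instrument's exposed conversion function in as `firmware_convert`, over resistances spanning the calibrated range, implements the comparison the authors describe.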

  20. SigrafW: An easy-to-use program for fitting enzyme kinetic data.

    PubMed

    Leone, Francisco Assis; Baranauskas, José Augusto; Furriel, Rosa Prazeres Melo; Borin, Ivana Aparecida

    2005-11-01

SigrafW is Windows-compatible software, developed in Microsoft® Visual Basic, that uses the simplified Hill equation for fitting kinetic data from allosteric and Michaelian enzymes. SigrafW uses a modified Fibonacci search to calculate maximal velocity (V), the Hill coefficient (n), and the enzyme-substrate apparent dissociation constant (K). The estimation of V, K, and the sum of the squares of residuals is performed using a Wilkinson nonlinear regression at any Hill coefficient (n). In contrast to many currently available kinetic analysis programs, SigrafW shows several advantages for the determination of kinetic parameters of both hyperbolic and nonhyperbolic saturation curves. No initial estimates of the kinetic parameters are required, a measure of the goodness of fit is provided for each calculation performed, the nonlinear regression used for the calculations eliminates the statistical bias inherent in linear transformations, and the software can be used for enzyme kinetic simulations for either educational or research purposes. Persons interested in receiving a free copy of the software should contact Dr. F. A. Leone. Copyright © 2005 International Union of Biochemistry and Molecular Biology, Inc.
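A rough sketch of the fitting problem SigrafW solves: the simplified Hill equation v = V·sⁿ/(K + sⁿ). The crude grid search below stands in for SigrafW's modified Fibonacci search and Wilkinson regression, which are not reproduced here; it exploits the fact that, for fixed n and K, the best-fit V is linear and has a closed form:

```python
def hill(s, V, K, n):
    """Simplified Hill equation: v = V * s^n / (K + s^n)."""
    return V * s ** n / (K + s ** n)

def fit_hill(S, v, n_grid, K_grid):
    """Least-squares fit by grid search over n and K; for each pair the
    optimal V is solved in closed form. Returns (V, K, n, sse)."""
    best = None
    for n in n_grid:
        for K in K_grid:
            f = [s ** n / (K + s ** n) for s in S]
            V = sum(vi * fi for vi, fi in zip(v, f)) / sum(fi * fi for fi in f)
            sse = sum((vi - V * fi) ** 2 for vi, fi in zip(v, f))
            if best is None or sse < best[3]:
                best = (V, K, n, sse)
    return best
```

With noise-free data generated from known parameters, the search recovers them exactly when they lie on the grid; a real fitter refines n and K continuously, as SigrafW does.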

  1. Large calculation of the flow over a hypersonic vehicle using a GPU

    NASA Astrophysics Data System (ADS)

    Elsen, Erich; LeGresley, Patrick; Darve, Eric

    2008-12-01

Graphics processing units are capable of impressive computing performance, up to 518 Gflops peak. Various groups have used these processors for general-purpose computing; most efforts have focused on demonstrating relatively basic calculations, e.g. numerical linear algebra, or physical simulations for visualization purposes with limited accuracy. This paper describes the simulation of a hypersonic vehicle configuration with detailed geometry and accurate boundary conditions using the compressible Euler equations. To the authors' knowledge, this is the most sophisticated calculation of this kind in terms of the complexity of the geometry, the physical model, the numerical methods employed, and the accuracy of the solution. The Navier-Stokes Stanford University Solver (NSSUS) was used for this purpose. NSSUS is a multi-block structured code with a provably stable and accurate numerical discretization which uses a vertex-based finite-difference method. A multi-grid scheme is used to accelerate the solution of the system. Based on a comparison of the Intel Core 2 Duo and NVIDIA 8800GTX, speed-ups of over 40× were demonstrated for simple test geometries and 20× for complex geometries.

  2. Locating structures and evolution pathways of reconstructed rutile TiO2(011) using genetic algorithm aided density functional theory calculations.

    PubMed

    Ding, Pan; Gong, Xue-Qing

    2016-05-01

Titanium dioxide (TiO2) is an important metal oxide that has been used in many different applications. TiO2 has also been widely employed as a model system to study basic processes and reactions in surface chemistry and heterogeneous catalysis. In this work, we investigated the (011) surface of rutile TiO2 by focusing on its reconstruction. Density functional theory calculations aided by a genetic algorithm based optimization scheme were performed to extensively sample the potential energy surfaces of reconstructed rutile TiO2 structures that obey (2 × 1) periodicity. Many stable surface configurations were located, including the global-minimum configuration that was proposed previously. The wide variety of surface structures determined through the calculations performed in this work provides insight into the relationship between the atomic configuration of a surface and its stability. More importantly, several analytical schemes were proposed and tested to gauge the differences and similarities among various surface structures, aiding the construction of the complete pathway for the reconstruction process.
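The genetic-algorithm side of such a scheme can be sketched generically. In the paper the fitness evaluation is a DFT energy calculation; here it is replaced by a toy analytic surface, and all operator choices (tournament selection, one-point crossover, Gaussian mutation) are illustrative assumptions:

```python
import random

def genetic_minimize(fitness, dim, pop_size=30, gens=60, seed=1):
    """Minimal real-coded genetic algorithm: elitism, tournament selection,
    one-point crossover, Gaussian mutation. In a structure search the
    expensive DFT energy would replace `fitness`."""
    rng = random.Random(seed)
    pop = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(pop_size)]
    for _ in range(gens):
        scored = sorted(pop, key=fitness)
        nxt = scored[:2]  # elitism: carry the two best forward unchanged
        while len(nxt) < pop_size:
            a = min(rng.sample(scored, 3), key=fitness)  # tournament pick
            b = min(rng.sample(scored, 3), key=fitness)
            cut = rng.randrange(1, dim) if dim > 1 else 0
            child = a[:cut] + b[cut:]  # one-point crossover
            child = [g + rng.gauss(0, 0.1) if rng.random() < 0.2 else g
                     for g in child]  # Gaussian mutation
            nxt.append(child)
        pop = nxt
    return min(pop, key=fitness)

# Toy "energy surface" with its global minimum at (1, -2)
def energy(x):
    return (x[0] - 1.0) ** 2 + (x[1] + 2.0) ** 2
```

Because the elites are carried forward unchanged, the best energy found never worsens from one generation to the next.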

3. Cognitively-Related Basic Activities of Daily Living Impairment Greatly Increases the Risk of Death in Alzheimer's Disease.

    PubMed

    Liang, Fu-Wen; Chan, Wenyaw; Chen, Ping-Jen; Zimmerman, Carissa; Waring, Stephen; Doody, Rachelle

    2016-01-01

Some Alzheimer's disease (AD) patients die without ever developing cognitively impaired basic activities of daily living (basic ADL), which may reflect slower disease progression or better compensatory mechanisms. Although impaired basic ADL is related to disease severity, it may exert an independent risk for death. This study examined the association between impaired basic ADL and survival of AD patients, and proposed a multistate approach for modeling the time to death for patients who demonstrate different patterns of progression of AD that do or do not include basic ADL impairment. 1029 patients with probable AD at the Baylor College of Medicine Alzheimer's Disease and Memory Disorders Center met the criteria for this study. Two complementary definitions were used to define the development of basic ADL impairment using the Physical Self-Maintenance Scale score. A weighted Cox regression model, including a time-dependent covariate (development of basic ADL impairment), and a multistate survival model were applied to examine the effect of basic ADL impairment on survival. As expected, decreased ability to perform basic ADL at baseline, age at initial visit, years of education, and sex were all associated with significantly higher mortality risk. In those unimpaired at baseline, the development of basic ADL impairment was also associated with a much greater risk of death (hazard ratios 1.77-4.06) over and above the risk conferred by loss of MMSE points. A multistate Cox model, controlling for those other variables, quantified the substantive increase in hazard ratios for death conferred by the development of basic ADL impairment by the two definitions, and can be applied to calculate the short-term risk of mortality in individual patients. The current study demonstrates that the presence of basic ADL impairment or the development of such impairments are important predictors of death in AD patients, regardless of severity.

  4. 7 CFR 1940.552 - Definitions.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... REGULATIONS (CONTINUED) GENERAL Methodology and Formulas for Allocation of Loan and Grant Program Funds § 1940..., funds will be controlled by the National Office. (b) Basic formula criteria, data source and weight. Basic formulas are used to calculate a basic state factor as a part of the methodology for allocating...

  5. 7 CFR 1940.552 - Definitions.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... REGULATIONS (CONTINUED) GENERAL Methodology and Formulas for Allocation of Loan and Grant Program Funds § 1940..., funds will be controlled by the National Office. (b) Basic formula criteria, data source and weight. Basic formulas are used to calculate a basic state factor as a part of the methodology for allocating...

  6. 7 CFR 1940.552 - Definitions.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... REGULATIONS (CONTINUED) GENERAL Methodology and Formulas for Allocation of Loan and Grant Program Funds § 1940..., funds will be controlled by the National Office. (b) Basic formula criteria, data source and weight. Basic formulas are used to calculate a basic state factor as a part of the methodology for allocating...

  7. 7 CFR 1940.552 - Definitions.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... REGULATIONS (CONTINUED) GENERAL Methodology and Formulas for Allocation of Loan and Grant Program Funds § 1940..., funds will be controlled by the National Office. (b) Basic formula criteria, data source and weight. Basic formulas are used to calculate a basic state factor as a part of the methodology for allocating...

  8. PITCH-ANGLE SCATTERING: RESONANCE VERSUS NONRESONANCE, A BASIC TEST OF THE QUASILINEAR DIFFUSIVE RESULT

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ragot, B. R.

    2012-01-01

Due to the very broad range of the scales available for the development of turbulence in space and astrophysical plasmas, the energy at the resonant scales of wave-particle interaction often constitutes only a tiny fraction of the total magnetic turbulent energy. Despite the high efficiency of resonant wave-particle interaction, one may therefore question whether resonant interaction really is the determining interaction process between particles and turbulent fields. By evaluating and comparing resonant and nonresonant effects in the framework of a quasilinear calculation, the dominance of resonance is here put to the test. In doing so, a basic test of the classical resonant quasilinear diffusive result for the pitch-angle scattering of charged energetic particles is also performed.

  9. Comorbidity of Arithmetic and Reading Disorder: Basic Number Processing and Calculation in Children with Learning Impairments

    ERIC Educational Resources Information Center

    Raddatz, Julia; Kuhn, Jörg-Tobias; Holling, Heinz; Moll, Kristina; Dobel, Christian

    2017-01-01

    The aim of the present study was to investigate the cognitive profiles of primary school children (age 82-133 months) on a battery of basic number processing and calculation tasks. The sample consisted of four groups matched for age and IQ: arithmetic disorder only (AD; n = 20), reading disorder only (RD; n = 40), a comorbid group (n = 27), and an…

  10. Do different types of school mathematics development depend on different constellations of numerical versus general cognitive abilities?

    PubMed

    Fuchs, Lynn S; Geary, David C; Compton, Donald L; Fuchs, Douglas; Hamlett, Carol L; Seethaler, Pamela M; Bryant, Joan D; Schatschneider, Christopher

    2010-11-01

    The purpose of this study was to examine the interplay between basic numerical cognition and domain-general abilities (such as working memory) in explaining school mathematics learning. First graders (N = 280; mean age = 5.77 years) were assessed on 2 types of basic numerical cognition, 8 domain-general abilities, procedural calculations, and word problems in fall and then reassessed on procedural calculations and word problems in spring. Development was indexed by latent change scores, and the interplay between numerical and domain-general abilities was analyzed by multiple regression. Results suggest that the development of different types of formal school mathematics depends on different constellations of numerical versus general cognitive abilities. When controlling for 8 domain-general abilities, both aspects of basic numerical cognition were uniquely predictive of procedural calculations and word problems development. Yet, for procedural calculations development, the additional amount of variance explained by the set of domain-general abilities was not significant, and only counting span was uniquely predictive. By contrast, for word problems development, the set of domain-general abilities did provide additional explanatory value, accounting for about the same amount of variance as the basic numerical cognition variables. Language, attentive behavior, nonverbal problem solving, and listening span were uniquely predictive.

  11. The flight of a balsa glider

    NASA Astrophysics Data System (ADS)

    Waltham, Chris

    1999-07-01

    A simple analysis is performed on the flight of a small balsa toy glider. All the basic features of flight have to be included in the calculation. Key differences between the flight of small objects like the glider, and full-sized aircraft, are examined. Good agreement with experimental data is obtained when only one parameter, the drag coefficient, is allowed to vary. The experimental drag coefficient is found to be within a factor of 2 of that obtained using the theory of ideal flat plates.
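The basic force balance behind such a flight analysis (lift equals weight in a shallow steady glide, with glide ratio set by lift over drag) can be sketched as follows. The numbers in the usage line are illustrative, not the paper's measured values:

```python
import math

RHO = 1.225  # sea-level air density, kg/m^3

def glide_performance(mass_kg, wing_area_m2, cl, cd):
    """Steady-glide speed and glide ratio from the balance
    0.5 * rho * v^2 * S * CL = m * g, with glide ratio = CL / CD."""
    g = 9.81
    v = math.sqrt(2.0 * mass_kg * g / (RHO * wing_area_m2 * cl))
    return v, cl / cd

# Illustrative small-glider numbers: 20 g mass, 0.025 m^2 wing,
# CL = 0.8, CD = 0.2 (assumed, not from the paper)
speed, glide_ratio = glide_performance(0.02, 0.025, 0.8, 0.2)
```

The same balance shows why the glider flies so much slower than a full-sized aircraft: flight speed scales with the square root of wing loading (mass over wing area).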

  12. Fast Fourier Tranformation Algorithms: Experiments with Microcomputers.

    DTIC Science & Technology

    1986-07-01

that is, functions with a known discrete Fourier transform. Such functions are given in [1]. The functions TF1, TF2, and TF3 were used and are...the IBM PC, all with TF1 (Eq. 1). The compilers provided options to improve performance, as noted, for which a penalty in compiling time has to be...BASIC only. Series I: In this series the procedures were as follows: (i) calculate the input values for TF1 of a_r and the modulus |a_r| (which is
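For orientation, the standard radix-2 Cooley-Tukey algorithm that such timing experiments exercise can be written compactly. This is a generic textbook implementation, not the report's BASIC code or its TF1-TF3 test functions:

```python
import cmath

def fft(x):
    """Recursive radix-2 Cooley-Tukey FFT; len(x) must be a power of two."""
    n = len(x)
    if n == 1:
        return list(x)
    even = fft(x[0::2])  # transform of even-indexed samples
    odd = fft(x[1::2])   # transform of odd-indexed samples
    out = [0j] * n
    for k in range(n // 2):
        twiddle = cmath.exp(-2j * cmath.pi * k / n) * odd[k]
        out[k] = even[k] + twiddle
        out[k + n // 2] = even[k] - twiddle
    return out
```

Test inputs with a known transform, like those the report uses, make correctness easy to check: a constant input transforms to a single spike at frequency zero, and a unit impulse transforms to all ones.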

  13. Soldier 2020 Injury Rates/Attrition Rates Working Group Medical Recommendations

    DTIC Science & Technology

    2015-06-24

upon arrival and are currently being provided a multivitamin in Basic Military Training. Female-Specific Issues: Pregnancy ...Approximately 5% of female Soldiers are pregnant at any given time. • This calculates to ~0.75% of the total force not available due to pregnancy and postpartum...Pregnancy affects approximately 0.75% of the total Army force at any given time.

  14. Axial pico turbine - construction and experimental research

    NASA Astrophysics Data System (ADS)

    Peczkis, G.; Goryca, Z.; Korczak, A.

    2017-08-01

The paper concerns an axial water turbine with a power of 1 kW. An example of constructional calculations for the axial water turbine is provided, along with the construction of a turbine rotor with NACA-profile blades. The laboratory test rig designed and built to perform measurements on the pico turbine is described. The turbine drove a three-phase electrical generator. On the basis of the highest-efficiency parameters, the basic characteristics of the pico turbine were elaborated. The experimental results indicated that the pico turbine can achieve a maximum efficiency close to the values of larger water turbines.

  15. Development of methods for calculating basic features of the nuclear contribution to single event upsets under the effect of protons of moderately high energy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chechenin, N. G., E-mail: chechenin@sinp.msu.ru; Chuvilskaya, T. V.; Shirokova, A. A.

    2015-10-15

As a continuation and a development of previous studies of our group that were devoted to the investigation of nuclear reactions induced by protons of moderately high energy (between 10 and 400 MeV) in silicon, aluminum, and tungsten atoms, the results obtained by exploring nuclear reactions on atoms of copper, which is among the most important components in materials for contact pads and pathways in modern and future ultralarge-scale integration circuits, especially in three-dimensional topology, are reported in the present article. The nuclear reactions in question lead to the formation of the mass and charge spectra of recoil nuclei ranging from heavy target nuclei down to helium and hydrogen. The kinetic-energy spectra of reaction products are calculated. The results of the calculations based on the procedure developed by our group are compared with the results of calculations and experiments performed by other authors.

  16. Data acquisition and real-time control using spreadsheets: interfacing Excel with external hardware.

    PubMed

    Aliane, Nourdine

    2010-07-01

Spreadsheets have become a popular computational tool and a powerful platform for performing engineering calculations. Moreover, spreadsheets include a macro language, which permits the inclusion of standard computer code in worksheets and thereby enables developers to greatly extend spreadsheets' capabilities by designing specific add-ins. This paper describes how to use Excel spreadsheets in conjunction with the Visual Basic for Applications programming language to perform data acquisition and real-time control. The paper then presents two Excel applications with interactive user interfaces, developed for laboratory demonstrations and experiments in an introductory course in control. 2010 ISA. Published by Elsevier Ltd. All rights reserved.

  17. [Tracking study to improve basic academic ability in chemistry for freshmen].

    PubMed

    Sato, Atsuko; Morone, Mieko; Azuma, Yutaka

    2010-08-01

    The aims of this study were to assess the basic academic ability of freshmen with regard to chemistry and implement suitable educational guidance measures. At Tohoku Pharmaceutical University, basic academic ability examinations are conducted in chemistry for freshmen immediately after entrance into the college. From 2003 to 2009, the examination was conducted using the same questions, and the secular changes in the mean percentage of correct response were statistically analyzed. An experience survey was also conducted on 2007 and 2009 freshmen regarding chemical experiments at senior high school. Analysis of the basic academic ability examinations revealed a significant decrease in the mean percentage of correct responses after 2007. With regard to the answers for each question, there was a significant decrease in the percentage of correct answers for approximately 80% of questions. In particular, a marked decrease was observed for calculation questions involving percentages. A significant decrease was also observed in the number of students who had experiences with chemical experiments in high school. However, notable results have been achieved through the implementation of practice incorporating calculation problems in order to improve calculation ability. Learning of chemistry and a lack of experimental experience in high school may be contributory factors in the decrease in chemistry academic ability. In consideration of the professional ability demanded of pharmacists, the decrease in calculation ability should be regarded as a serious issue and suitable measures for improving calculation ability are urgently required.

  18. Generation of a dynamo magnetic field in a protoplanetary accretion disk

    NASA Technical Reports Server (NTRS)

    Stepinski, T.; Levy, E. H.

    1987-01-01

    A new computational technique is developed that allows realistic calculations of dynamo magnetic field generation in disk geometries corresponding to protoplanetary and protostellar accretion disks. The approach is of sufficient generality to allow, in the future, a wide class of accretion disk problems to be solved. Here, the basic modes of a disk dynamo are calculated. Spatially localized oscillatory states are found to occur in Keplerian disks. A physical interpretation is given that argues that spatially localized fields of the type found in these calculations constitute the basic modes of a Keplerian disk dynamo.

  19. Simulation-Based Assessment Identifies Longitudinal Changes in Cognitive Skills in an Anesthesiology Residency Training Program.

    PubMed

    Sidi, Avner; Gravenstein, Nikolaus; Vasilopoulos, Terrie; Lampotang, Samsun

    2017-06-02

    We describe observed improvements in nontechnical or "higher-order" deficiencies and cognitive performance skills in an anesthesia residency cohort over a 1-year interval. Our main objectives were to evaluate higher-order cognitive performance and to demonstrate that simulation can effectively serve as an assessment of cognitive skills and can help detect "higher-order" deficiencies that are not as well identified through more traditional assessment tools. We hypothesized that simulation can identify longitudinal changes in cognitive skills and that cognitive performance deficiencies can then be remediated over time. We used 50 scenarios to evaluate 35 residents during 2 subsequent years; 18 of those 35 residents were evaluated in both years (postgraduate years [PGY] 3 and 4) in the same or similar scenarios. Individual basic knowledge and cognitive performance during simulation-based scenarios were assessed using a 20- to 27-item scenario-specific checklist. Items were labeled as basic knowledge/technical (lower-order cognition) or advanced cognitive/nontechnical (higher-order cognition). For every scenario and item, we calculated the group error rate per scenario (frequency) and individual (resident) item success. Group success rates are presented as mean (SD); item success grades and group error rates are presented as proportions. For all analyses, the α level was 0.05. Overall, PGY4 residents' error rates were lower and success rates higher for the cognitive items than for the technical items in the operating room and resuscitation domains. In all 3 clinical domains, the cognitive error rate of PGY4 residents was fairly low (0.00-0.22) and their cognitive success rate was high (0.83-1.00), significantly better than in the previous annual assessment (P < 0.05).
Overall, error rates decreased annually over the 2 years, driven primarily by decreases in cognitive errors. The most commonly observed cognitive error types remained anchoring, availability bias, premature closure, and confirmation bias. Simulation-based assessments can highlight cognitive performance areas of relative strength, weakness, and progress in a resident or resident cohort. We believe that they can therefore be used to inform curriculum development, including activities that require higher-level cognitive processing.

  20. Tetrahedral cluster and pseudo molecule: New approaches to Calculate Absolute Surface Energy of Zinc Blende (111)/(-1-1-1) Surface

    NASA Astrophysics Data System (ADS)

    Zhang, Yiou; Zhang, Jingzhao; Tse, Kinfai; Wong, Lun; Chan, Chunkai; Deng, Bei; Zhu, Junyi

    Determining accurate absolute surface energies for polar surfaces of semiconductors has been a great challenge for decades. Here, we propose pseudo-hydrogen passivation to calculate them, using density functional theory approaches. By calculating the energy contribution from pseudo-hydrogen using either a pseudo molecule method or a tetrahedral cluster method, we obtained (111)/(-1-1-1) surface energies of Si, GaP, GaAs, and ZnS with high self-consistency. Our findings may greatly enhance the basic understanding of different surfaces and lead to novel strategies in crystal growth. We would like to thank Su-huai Wei for helpful discussions. Computing resources were provided by the High Performance Cluster Computing Centre, Hong Kong Baptist University. This work was supported by start-up funding and a direct grant.

  1. Experimental and numerical investigation of development of disturbances in the boundary layer on sharp and blunted cone

    NASA Astrophysics Data System (ADS)

    Borisov, S. P.; Bountin, D. A.; Gromyko, Yu. V.; Khotyanovsky, D. V.; Kudryavtsev, A. N.

    2016-10-01

    Development of disturbances in the supersonic boundary layer on sharp and blunted cones is studied both experimentally and theoretically. The experiments were conducted at the Transit-M hypersonic wind tunnel of the Institute of Theoretical and Applied Mechanics. Linear stability calculations use the basic flow profiles provided by numerical simulations performed by solving the Navier-Stokes equations with ANSYS Fluent and the in-house CFS3D code. Both the global pseudospectral Chebyshev method and the local iteration procedure are employed to solve the eigenvalue problem and determine linear stability characteristics. The calculated amplification factors for disturbances of various frequencies are compared with the experimentally measured pressure fluctuation spectra at different streamwise positions. It is shown that the linear stability calculations predict quite accurately the frequency of the most amplified disturbances and enable us to estimate reasonably well their relative amplitudes.
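The amplification factors discussed here come from integrating the local spatial growth rate along the streamwise direction (the N-factor of linear stability theory); the relative disturbance amplitude then follows as e^N. A minimal sketch of that integration, using a purely hypothetical growth-rate distribution (the values below are illustrative, not from the study):

```python
import math

def n_factor(x, alpha):
    """Trapezoidal integration of the spatial growth rate alpha(x) to get
    the logarithmic amplification factor N(x) = integral of alpha dx."""
    n = [0.0]
    for i in range(1, len(x)):
        n.append(n[-1] + 0.5 * (alpha[i] + alpha[i - 1]) * (x[i] - x[i - 1]))
    return n

# Hypothetical growth-rate distribution along the cone surface (1/m vs. m).
x = [0.0, 0.1, 0.2, 0.3, 0.4]
alpha = [0.0, 10.0, 25.0, 18.0, 5.0]
N = n_factor(x, alpha)
amplitude_ratio = [math.exp(v) for v in N]  # relative disturbance amplitude e^N
```

Comparing e^N against measured pressure-fluctuation amplitudes at successive streamwise stations is the kind of check the abstract describes.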

  2. Graviton multipoint amplitudes for higher-derivative gravity in anti-de Sitter space

    NASA Astrophysics Data System (ADS)

    Shawa, M. M. W.; Medved, A. J. M.

    2018-04-01

    We calculate graviton multipoint amplitudes in an anti-de Sitter black brane background for higher-derivative gravity of arbitrary order in the number of derivatives. The calculations are performed using tensor graviton modes in a particular regime of comparatively high energies and large scattering angles. The regime simplifies the calculations but, at the same time, is well suited for translating these results into the language of the dually related gauge theory. After considering theories whose Lagrangians consist of contractions of up to four Riemann tensors, we generalize to even higher-derivative theories by constructing a "basis" for the relevant scattering amplitudes. This construction enables one to find the basic form of the n-point amplitude for arbitrary n and any number of derivatives. Additionally, using the four-point amplitudes for theories whose Lagrangians carry contractions of either three or four Riemann tensors, we reexpress the scattering properties in terms of the Mandelstam variables.

  3. Fluidized bed combustor modeling

    NASA Technical Reports Server (NTRS)

    Horio, M.; Rengarajan, P.; Krishnan, R.; Wen, C. Y.

    1977-01-01

    A general mathematical model for the prediction of performance of a fluidized bed coal combustor (FBC) is developed. The basic elements of the model consist of: (1) hydrodynamics of gas and solids in the combustor; (2) description of the gas and solids contacting pattern; (3) kinetics of combustion; and (4) absorption of SO2 by limestone in the bed. The model is capable of calculating the combustion efficiency, axial bed temperature profile, carbon hold-up in the bed, oxygen and SO2 concentrations in the bubble and emulsion phases, sulfur retention efficiency, and particulate carryover by elutriation. The effects of bed geometry, excess air, location of heat transfer coils in the bed, calcium to sulfur ratio in the feeds, etc., are examined. The calculated results are compared with experimental data. Agreement between the calculated results and the observed data is satisfactory in most cases. Recommendations to enhance the prediction accuracy of the model are suggested.

  4. A side-by-side comparison of CPV module and system performance

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Muller, Matthew; Marion, Bill; Kurtz, Sarah

    A side-by-side comparison is made between concentrator photovoltaic module and system direct-current aperture efficiency data, with a focus on quantifying system performance losses. The individual losses measured/calculated, when combined, are in good agreement with the total loss seen between the module and the system. Results indicate that for the given test period, the largest individual loss, 3.7% relative, is due to the baseline performance difference between the individual module and the average for the 200 modules in the system. A basic empirical model is derived based on module spectral performance data and the tabulated losses between the module and the system. The model predicts instantaneous system direct-current aperture efficiency with a root mean square error of 2.3% relative.
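Independent relative losses of this kind are conventionally combined multiplicatively rather than summed. A minimal sketch with hypothetical loss values (only the 3.7% figure comes from the abstract; the others are placeholders):

```python
def combined_relative_loss(losses):
    """Combine independent relative losses multiplicatively:
    total = 1 - prod(1 - l_i). Illustrative only; the paper's actual
    loss budget itemization is not reproduced here."""
    remaining = 1.0
    for l in losses:
        remaining *= (1.0 - l)
    return 1.0 - remaining

# Hypothetical loss budget including the 3.7% module-vs-average term:
losses = [0.037, 0.02, 0.015]
total = combined_relative_loss(losses)
```

Note that the multiplicative total is always slightly below the simple sum of the individual losses.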

  5. Materials Data on BaSiC (SG:107) by Materials Project

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kristin Persson

    2016-09-24

    Computed materials data using density functional theory calculations. These calculations determine the electronic structure of bulk materials by solving approximations to the Schrödinger equation. For more information, see https://materialsproject.org/docs/calculations

  6. [Biometric bases: basic concepts of probability calculation].

    PubMed

    Dinya, E

    1998-04-26

    The author gives an outline of the basic concepts of probability theory. The basics of event algebra, the definition of probability, the classical probability model, and the random variable are presented.
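The classical probability model outlined here (probability as the ratio of favorable outcomes to equally likely outcomes, plus a random variable and its expectation) can be sketched as:

```python
from fractions import Fraction
from itertools import product

# Classical probability model: P(A) = favorable outcomes / total outcomes,
# illustrated with the event "the sum of two dice equals 7".
outcomes = list(product(range(1, 7), repeat=2))  # equally likely elementary events
favorable = [o for o in outcomes if sum(o) == 7]
p = Fraction(len(favorable), len(outcomes))      # 6/36 = 1/6

# A random variable maps each outcome to a number; its expectation is the
# probability-weighted average (here, the expected sum of two dice).
expected_sum = Fraction(sum(sum(o) for o in outcomes), len(outcomes))  # 7
```

Exact rational arithmetic via `Fraction` keeps the classical-model ratios free of floating-point rounding.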

  7. Thermodynamic analysis of onset characteristics in a miniature thermoacoustic Stirling engine

    NASA Astrophysics Data System (ADS)

    Huang, Xin; Zhou, Gang; Li, Qing

    2013-06-01

    This paper analyzes the onset characteristics of a miniature thermoacoustic Stirling heat engine using the thermodynamic analysis method. The governing equations of the components are derived from basic thermodynamic relations and linear thermoacoustic theory. By solving the governing equation group numerically, the oscillation frequencies and onset temperatures are obtained. The dependence of the oscillation frequency on the kind of working gas and on the length and diameter of the resonator tube is calculated. Meanwhile, the influences of hydraulic radius and mean pressure on the onset temperature for different working gases are also presented. The calculation results indicate that there exists an optimal dimensionless hydraulic radius that yields the lowest onset temperature; its value lies in the range of 0.30-0.35 for the different working gases. Furthermore, the amplitude and phase relationships of the pressures and volume flows are analyzed in the time domain. Experiments have been performed to validate the calculations, and the calculated results agree well with the experimental values. Finally, an error analysis is made, identifying the sources of error in the theoretical calculations.

  8. FDTD calculations of SAR for child voxel models in different postures between 10 MHz and 3 GHz.

    PubMed

    Findlay, R P; Lee, A-K; Dimbylow, P J

    2009-08-01

    Calculations of specific energy absorption rate (SAR) have been performed on the rescaled NORMAN 7-y-old voxel model and the Electronics and Telecommunications Research Institute (ETRI) 7-y-old child voxel model in the standing arms-down, arms-up and sitting postures. These calculations were for plane-wave exposure under isolated and grounded conditions between 10 MHz and 3 GHz. It was found that there was little difference at each resonant frequency between the whole-body averaged SAR values calculated for the NORMAN and ETRI 7-y-old models for each of the postures studied. However, when compared with the arms-down posture, raising the arms increased the SAR by up to 25%. Electric field values required to produce the International Commission on Non-Ionizing Radiation Protection (ICNIRP) and Institute of Electrical and Electronics Engineers (IEEE) public basic restrictions were calculated and compared with reference levels for the different child models and postures. These showed that, under certain worst-case exposure conditions, the reference levels may not be conservative.

  9. A Reactor Development Scenario for the FuZE Sheared-Flow Stabilized Z-pinch

    NASA Astrophysics Data System (ADS)

    McLean, Harry S.; Higginson, D. P.; Schmidt, A.; Tummel, K. K.; Shumlak, U.; Nelson, B. A.; Claveau, E. L.; Forbes, E. G.; Golingo, R. P.; Stepanov, A. D.; Weber, T. R.; Zhang, Y.

    2017-10-01

    We present a conceptual design, scaling calculations, and a development path for a pulsed fusion reactor based on a flow-stabilized Z-pinch. Experiments performed on the ZaP and ZaP-HD devices have largely demonstrated the basic physics of sheared-flow stabilization at pinch currents up to 100 kA. Initial experiments on the FuZE device, a high-power upgrade of ZaP, have achieved 20 μs of stability at pinch currents of 100-200 kA and a pinch diameter of a few mm for a pinch length of 50 cm. Scaling calculations based on a quasi-steady-state power balance show that extending the stable duration to 100 μs at a pinch current of 1.5 MA and a pinch length of 50 cm results in a reactor plant Q ≈ 5. Future performance milestones are proposed for pinch currents of 300 kA, where Te and Ti are calculated to exceed 1-2 keV; 700 kA, where DT fusion power would be expected to exceed pinch input power; and 1 MA, where fusion energy per pulse exceeds input energy per pulse. This work was funded by USDOE ARPA-E and performed under the auspices of Lawrence Livermore National Laboratory under Contract DE-AC52-07NA27344. LLNL-ABS-734770.

  10. On the Optimization of Aerospace Plane Ascent Trajectory

    NASA Astrophysics Data System (ADS)

    Al-Garni, Ahmed; Kassem, Ayman Hamdy

    A hybrid heuristic optimization technique based on genetic algorithms and particle swarm optimization has been developed and tested for trajectory optimization problems with multiple constraints and a multi-objective cost function. The technique is used to calculate control settings for two types of ascent trajectory (constant dynamic pressure and minimum-fuel-minimum-heat) for a two-dimensional model of an aerospace plane. A thorough statistical analysis was performed to compare the hybrid technique with both basic genetic algorithms and particle swarm optimization with respect to convergence and execution time. Genetic algorithm optimization showed better execution time, while particle swarm optimization showed better convergence. The hybrid technique, benefiting from both, showed robust performance that balances convergence and execution time.

  11. Development of Advanced Carbon Face Seals for Aircraft Engines

    NASA Astrophysics Data System (ADS)

    Falaleev, S. V.; Bondarchuk, P. V.; Tisarev, A. Yu

    2018-01-01

    Modern aircraft gas turbine engines require the development of seals that can operate for a long time with low leakage. The basic type of seal applied to gas turbine engine rotor supports is the face seal. To meet modern requirements of reliability, leak-tightness and weight, low-leakage gas-static and hydrodynamic seals have to be developed. Dry gas seals use both gas-static and hydrodynamic principles, and often employ microgrooves, which ensure the reverse injection of leakage into the sealed cavity. The authors have developed a calculation technique based on coupled hydrodynamic, thermal and structural calculations. This technique allows the seal performance to be calculated taking into account inertia forces, rupture of the lubricant layer and the real shape of the gap. The authors compared the efficiency of seals with different microgroove shapes. The calculations show that a seal with rectangular microgrooves has a small gap, leading to contact between the seal surfaces and to wear. Reversible microgrooves have a higher oil mass flow rate, whereas HST microgrooves perform well but are difficult to produce. Spiral microgrooves combine acceptable leakage with a high stiffness of the liquid layer, which is important for sealing performance under vibration. Therefore, spiral grooves were chosen for the developed seal. Based on the calculation results, geometric dimensions were chosen to ensure reliable seal operation by creating a guaranteed liquid film, which eliminates wear of the sealing surfaces. The designed seals were tested both on a test rig and in the engine.

  12. Correlation between substratum roughness and wettability, cell adhesion, and cell migration.

    PubMed

    Lampin, M; Warocquier-Clérout; Legris, C; Degrange, M; Sigot-Luizard, M F

    1997-07-01

    Cell adhesion and spreading of chick embryo vascular and corneal explants grown on rough and smooth poly(methyl methacrylate) (PMMA) were analyzed to test the specificity of the cell response to substratum surface properties. Different degrees of roughness were obtained by sand-blasting PMMA with alumina grains. Hydrophilic and hydrophobic components of the surface free energy (SFE) were calculated according to Good-van Oss's model. Contact angles were determined using a computerized angle meter. The apolar component of the SFE, gamma s(LW), increased with slight roughness whereas the basic component, gamma s-, decreased. The acido-basic properties disappeared as roughness increased. Incubation of PMMA in culture medium, performed to test the influence of the biological environment, allowed surface adsorption of medium proteins, which cancelled the roughness effect and restored hydrophilic properties. An organotypic culture assay was carried out in an attempt to relate biocompatibility to the substratum surface state. Cell migration was calculated from the area of the cell layer. Cellular adhesion was determined by measuring the kinetics of release of enzymatically dissociated cells. Slight roughness raised the migration area to a greater extent regardless of cell type. Enhancement of the cell adhesion potential was related to the degree of roughness and to hydrophobicity.

  13. A Simple Geometrical Model for Calculation of the Effective Emissivity in Blackbody Cylindrical Cavities

    NASA Astrophysics Data System (ADS)

    De Lucas, Javier

    2015-03-01

    A simple geometrical model for calculating the effective emissivity in blackbody cylindrical cavities has been developed. The back ray tracing technique and the Monte Carlo method have been employed, making use of a suitable set of coordinates and auxiliary planes. In these planes, the trajectories of individual photons in the successive reflections between the cavity points are followed in detail. The theoretical model is implemented using simple numerical tools, programmed in Microsoft Visual Basic for Applications and Excel. The algorithm is applied to isothermal and non-isothermal diffuse cylindrical cavities with a lid; however, the basic geometrical structure can be generalized to a cylindro-conical shape and specular reflection. Additionally, the numerical algorithm and the program source code can be used, with minor changes, for determining the distribution of the cavity points where photon absorption takes place. This distribution could be applied to the study of the influence of thermal gradients on the effective emissivity profiles, for example. Validation is performed by analyzing the convergence of the Monte Carlo method as a function of the number of trials and by comparison with published results of different authors.
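The back ray tracing idea can be illustrated with a drastically simplified Monte Carlo in which the cavity geometry is lumped into a single per-bounce escape probability; this lumping is an assumption for illustration, not the paper's 3-D algorithm:

```python
import random

def effective_emissivity_mc(eps_wall, p_escape, n_photons=200_000, seed=1):
    """Toy Monte Carlo in the spirit of back ray tracing: each wall interaction
    absorbs a photon with probability eps_wall; a surviving photon escapes
    through the aperture with probability p_escape, otherwise it strikes the
    wall again. The absorbed fraction is the effective emissivity (by
    Kirchhoff's law, absorptance equals emittance). The real calculation
    tracks 3-D geometry; p_escape here lumps that geometry into one number."""
    rng = random.Random(seed)
    absorbed = 0
    for _ in range(n_photons):
        while True:
            if rng.random() < eps_wall:
                absorbed += 1
                break
            if rng.random() < p_escape:
                break  # photon leaves through the aperture
    return absorbed / n_photons

def effective_emissivity_exact(eps, q):
    """Closed-form value for this lumped model, for convergence checking."""
    return eps / (1.0 - (1.0 - eps) * (1.0 - q))

eps_mc = effective_emissivity_mc(0.7, 0.1)
eps_th = effective_emissivity_exact(0.7, 0.1)
```

Comparing the Monte Carlo estimate against the closed-form value as `n_photons` grows mirrors the convergence validation described in the abstract.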

  14. Commissioning and validation of COMPASS system for VMAT patient specific quality assurance

    NASA Astrophysics Data System (ADS)

    Pimthong, J.; Kakanaporn, C.; Tuntipumiamorn, L.; Laojunun, P.; Iampongpaiboon, P.

    2016-03-01

    Pre-treatment patient-specific quality assurance (QA) of advanced treatment techniques such as volumetric modulated arc therapy (VMAT) is an important QA procedure in radiotherapy, and a fast and reliable dosimetric device is required. The objective of this study was to commission and validate the performance of the COMPASS system for dose verification of the VMAT technique. The COMPASS system is composed of an array of ionization detectors (MatriXX) mounted on the gantry using a custom holder, together with software for the analysis and visualization of QA results. We validated the COMPASS software for basic and advanced clinical applications. For the basic clinical study, simple open fields of various sizes were validated in a homogeneous phantom. For the advanced clinical application, fifteen prostate and fifteen nasopharyngeal cancer VMAT plans were studied. The treatment plans were measured with the MatriXX. The doses and dose-volume histograms (DVHs) reconstructed from the fluence measurements were compared with the TPS-calculated plans. In addition, the doses and DVHs computed using the collapsed cone convolution (CCC) algorithm were compared with Eclipse TPS plans calculated using the Analytical Anisotropic Algorithm (AAA), according to the dose specification of ICRU 83 for the PTV.

  15. Kinematic analysis of basic rhythmic movements of hip-hop dance: motion characteristics common to expert dancers.

    PubMed

    Sato, Nahoko; Nunome, Hiroyuki; Ikegami, Yasuo

    2015-02-01

    In hip-hop dance contests, a procedure for evaluating performances has not been clearly defined, and objective criteria for evaluation are necessary. It is assumed that most hip-hop dance techniques have common motion characteristics by which judges determine the dancer's skill level. This study aimed to extract motion characteristics that may be linked to higher evaluations by judges. Ten expert and 12 nonexpert dancers performed basic rhythmic movements at a rate of 100 beats per minute. Their movements were captured using a motion capture system, and eight judges evaluated the performances. Four kinematic parameters, including the amplitude of the body motions and the phase delay, which indicates the phase difference between two joint angles, were calculated. The two groups showed no significant differences in terms of the amplitudes of the body motions. In contrast, the phase delay between the head motion and the other body parts' motions of expert dancers who received higher scores from the judges, which was approximately a quarter cycle, produced a loop-shaped motion of the head. It is suggested that this slight phase delay was related to the judges' evaluations and that these findings may help in constructing an objective evaluation system.
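The phase delay between two joint-angle signals of the same beat frequency can be estimated from sample averages; the following is a simplified stand-in (assuming clean, zero-mean sinusoidal motion) for the study's kinematic parameter, not its actual analysis pipeline:

```python
import math

def phase_delay(a, b):
    """Estimate the phase difference between two equal-frequency, zero-mean
    sinusoidal signals, using the identity
    mean(sin(wt) * sin(wt + phi)) = cos(phi) / 2 for unit amplitudes."""
    n = len(a)
    amp_a = math.sqrt(2.0 * sum(x * x for x in a) / n)
    amp_b = math.sqrt(2.0 * sum(x * x for x in b) / n)
    c = 2.0 * sum(x * y for x, y in zip(a, b)) / (n * amp_a * amp_b)
    return math.acos(max(-1.0, min(1.0, c)))

# Head motion lagging the torso by a quarter cycle (phi = pi/2),
# sampled uniformly over one full movement cycle.
t = [i * 2 * math.pi / 1000 for i in range(1000)]
torso = [math.sin(x) for x in t]
head = [math.sin(x - math.pi / 2) for x in t]
lag = phase_delay(torso, head)  # close to pi/2 (a quarter cycle)
```

A quarter-cycle lag like this is exactly what traces the loop-shaped head trajectory the abstract associates with expert dancers.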

  16. A guided enquiry approach to introduce basic concepts concerning magnetic hysteresis to minimize student misconceptions

    NASA Astrophysics Data System (ADS)

    Wei, Yajun; Zhai, Zhaohui; Gunnarsson, Klas; Svedlindh, Peter

    2014-11-01

    Basic concepts concerning magnetic hysteresis are of vital importance in understanding magnetic materials. However, these concepts are often misinterpreted by many students and even textbooks. We summarize the most common misconceptions and present a new approach to help clarify them and enhance students' understanding of the hysteresis loop. In this approach, students perform an experiment and plot the measured magnetization values, together with the calculated demagnetizing field, internal field, and magnetic induction, as functions of the applied field, point by point on the same graph. The concepts of coercivity, remanence, saturation magnetization, and saturation induction are not introduced until this stage. By plotting this graph, students are able to interlink all the preceding concepts and intuitively visualize the underlying physical relations between them.

  17. Determination of thermal physical properties of alkali fluoride/carbonate eutectic molten salt

    NASA Astrophysics Data System (ADS)

    An, Xue-Hui; Cheng, Jin-Hui; Su, Tao; Zhang, Peng

    2017-06-01

    Molten salts used at high temperatures are of growing interest for concentrated solar power (CSP) because of the higher energy conversion efficiency they allow. Thermophysical properties are the basic engineering data for thermal-hydraulic calculation and safety analysis. Therefore, the thermophysical properties (density, specific heat capacity, viscosity and thermal conductivity) of FLiNaK, (LiNaK)2CO3 and LiF(NaK)2CO3 molten salts were experimentally determined, and general trends were identified by comparison. Density was measured on the basis of the Archimedes principle; specific heat capacity was measured using the DSC technique; viscosity was tested using the rotating method; and thermal conductivity was obtained by the laser flash method, combining the density, specific heat capacity and thermal diffusivity through a formula. Finally, the energy storage capacity and figures of merit were calculated to evaluate the feasibility of the salts as thermal energy storage (TES) and heat transfer fluid (HTF) media. The results show that FLiNaK has the largest energy storage capacity and the best heat transfer performance, LiF(NaK)2CO3 is second, and (LiNaK)2CO3 has the smallest.
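The formula combining density, specific heat capacity and thermal diffusivity into thermal conductivity is the standard laser-flash data reduction λ = αρc_p; a minimal sketch with illustrative (not measured) values:

```python
def thermal_conductivity(diffusivity, density, cp):
    """Laser-flash data reduction: lambda = alpha * rho * c_p."""
    return diffusivity * density * cp

# Illustrative values of the right order of magnitude for a molten salt
# (not the measured data from the study):
alpha = 2.0e-7  # thermal diffusivity, m^2/s (from the flash measurement)
rho = 2000.0    # density, kg/m^3
cp = 1800.0     # specific heat capacity, J/(kg*K)
lam = thermal_conductivity(alpha, rho, cp)  # thermal conductivity, W/(m*K)
```

The flash method measures α directly, so λ inherits the uncertainties of all three inputs, which is why the density and heat-capacity measurements matter here.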

  18. Using a Thyroid Case Study and Error Plausibility to Introduce Basic Lab Skills

    ERIC Educational Resources Information Center

    Browning, Samantha; Urschler, Margaret; Meidl, Katherine; Peculis, Brenda; Milanick, Mark

    2017-01-01

    We describe a 3-hour session that provides students with the opportunity to review basic lab concepts and important techniques using real life scenarios. We began with two separate student-engaged discussions to remind/reinforce some basic concepts in physiology and review calculations with respect to chemical compounds. This was followed by…

  19. [Formal sample size calculation and its limited validity in animal studies of medical basic research].

    PubMed

    Mayer, B; Muche, R

    2013-01-01

    Animal studies are highly relevant for basic medical research, although their use is publicly controversial. Thus, an optimal sample size for these projects should be aimed at from a biometrical point of view. Statistical sample size calculation is usually the appropriate methodology in planning medical research projects. However, the required information is often not valid, or becomes available only during the course of an animal experiment. This article critically discusses the validity of formal sample size calculation for animal studies. Within the discussion, some requirements are formulated to fundamentally regulate the process of sample size determination for animal experiments.
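The formal calculation under discussion is typically the normal-approximation sample size for comparing two group means; a minimal sketch (the effect size δ and standard deviation σ are exactly the inputs that, as the abstract notes, are often not reliably known in advance):

```python
import math
from statistics import NormalDist

def two_sample_n(delta, sigma, alpha=0.05, power=0.80):
    """Per-group sample size for a two-sided, two-sample comparison of means
    (normal approximation): n = 2 * ((z_{1-a/2} + z_{1-b}) * sigma / delta)^2."""
    z = NormalDist()
    z_a = z.inv_cdf(1 - alpha / 2)  # two-sided significance quantile
    z_b = z.inv_cdf(power)          # power quantile
    n = 2 * ((z_a + z_b) * sigma / delta) ** 2
    return math.ceil(n)

# Detecting a difference of one standard deviation (delta = sigma):
n_per_group = two_sample_n(delta=1.0, sigma=1.0)  # the textbook "~16 per group"
```

Because n scales with (σ/δ)², even modest misjudgment of either input changes the answer substantially, which is the core of the article's criticism.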

  20. The Predictive Power of Electronic Polarizability for Tailoring the Refractivity of High Index Glasses: Optical Basicity Versus the Single Oscillator Model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    McCloy, John S.; Riley, Brian J.; Johnson, Bradley R.

    Four compositions of high-density (~8 g/cm3) heavy metal oxide glasses composed of PbO, Bi2O3, and Ga2O3 were produced, and refractivity parameters (refractive index and density) were computed and measured. Optical basicity was computed using three different models – average electronegativity, ionic-covalent parameter, and energy gap – and the basicity results were used to compute oxygen polarizability and subsequently refractive index. Refractive indices were measured in the visible and infrared at 0.633 μm, 1.55 μm, 3.39 μm, 5.35 μm, 9.29 μm, and 10.59 μm using a unique prism coupler setup, and the data were fitted to the Sellmeier expression to obtain an equation for the dispersion of refractive index with wavelength. Using this dispersion relation, the single oscillator energy, dispersion energy, and lattice energy were determined. Oscillator parameters were also calculated for the various glasses from their oxide values as an additional means of predicting index. Calculated dispersion parameters from oxides underestimate the index by 3 to 4%. Predicted glass index from optical basicity, based on component oxide energy gaps, underpredicts the index at 0.633 μm by only 2%, while other basicity scales are less accurate. The predicted energy gap of the glasses based on this optical basicity overpredicts the Tauc optical gap, as determined by transmission measurements, by 6 to 10%. These results show that for this system, density, refractive index in the visible, and energy gap can be reasonably predicted using only the composition, optical basicity values for the constituent oxides, and partial molar volume coefficients. Calculations such as these are useful for a priori prediction of optical properties of glasses.
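A Sellmeier fit of the kind described expresses the dispersion as n² = 1 + Σ B_i λ²/(λ² − C_i); a sketch with hypothetical coefficients (placeholders, not the fitted values for these glasses):

```python
import math

def sellmeier_index(wavelength_um, terms):
    """Sellmeier expression n^2 = 1 + sum_i B_i * L^2 / (L^2 - C_i),
    with the wavelength L in micrometres and terms = [(B_i, C_i), ...]."""
    l2 = wavelength_um ** 2
    n2 = 1.0 + sum(b * l2 / (l2 - c) for b, c in terms)
    return math.sqrt(n2)

# Hypothetical two-term coefficients (B_i dimensionless, C_i in um^2):
# a UV electronic resonance plus a far-IR lattice resonance, illustrative only.
terms = [(4.0, 0.09), (1.0, 400.0)]
n_vis = sellmeier_index(0.633, terms)  # visible measurement wavelength
n_ir = sellmeier_index(1.55, terms)    # infrared measurement wavelength
```

Fitting such an expression to index values measured at the six wavelengths listed above is what yields the dispersion relation from which the single oscillator and dispersion energies are extracted.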

  1. Safety Criticality Standards Using the French CRISTAL Code Package: Application to the AREVA NP UO{sub 2} Fuel Fabrication Plant

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Doucet, M.; Durant Terrasson, L.; Mouton, J.

    2006-07-01

    Criticality safety evaluations implement the requirement to demonstrate sufficient subcritical margins outside the reactor environment, for example in fuel fabrication plants. Basic criticality data (i.e., criticality standards) are used in the determination of subcritical margins for all processes involving plutonium or enriched uranium. There are several international criticality standards, e.g., ARH-600, which is one that the US nuclear industry relies on. The French Nuclear Safety Authority (DGSNR and its advising body IRSN) has requested AREVA NP to review the criticality standards used for the evaluation of its low-enriched uranium fuel fabrication plants with CRISTAL V0, the recently updated French criticality evaluation package. Criticality safety is a concern for every phase of the fabrication process, including UF{sub 6} cylinder storage, UF{sub 6}-UO{sub 2} conversion, powder storage, pelletizing, rod loading, assembly fabrication, and assembly transportation. Until 2003, the accepted criticality standards were based on the French CEA work performed in the late seventies with the APOLLO1 cell/assembly computer code. APOLLO1 is a spectral code, used for evaluating the basic characteristics of fuel assemblies for reactor physics applications, which has been enhanced to perform criticality safety calculations. Throughout the years, CRISTAL, starting with APOLLO1 and MORET 3 (a 3D Monte Carlo code), has been improved to account for the growth of its qualification database and for increasing user requirements. Today, CRISTAL V0 is an up-to-date computational tool incorporating a modern basic microscopic cross section set based on JEF2.2 and the comprehensive APOLLO2 and MORET 4 codes. APOLLO2 is well suited for criticality standards calculations as it includes a sophisticated self-shielding approach, a P{sub ij} flux determination, and a 1D transport (S{sub n}) process.
CRISTAL V0 is the result of more than five years of development work focusing on theoretical approaches and the implementation of user-friendly graphical interfaces. Owing to its comprehensive physical simulation, and thanks to its broad qualification database with more than a thousand benchmark/calculation comparisons, CRISTAL V0 provides outstanding and reliable accuracy for criticality evaluations of configurations covering the entire fuel cycle (i.e., from enrichment, pellet/assembly fabrication, and transportation to fuel reprocessing). After a brief description of the calculation scheme and the physics algorithms used in this code package, results for the various fissile media encountered in a UO{sub 2} fuel fabrication plant are detailed and discussed. (authors)

  2. A new three-dimensional manufacturing service composition method under various structures using improved Flower Pollination Algorithm

    NASA Astrophysics Data System (ADS)

    Zhang, Wenyu; Yang, Yushu; Zhang, Shuai; Yu, Dejian; Chen, Yong

    2018-05-01

    With the growing complexity of customer requirements and the increasing scale of manufacturing services, how to select and combine individual services to meet the complex demand of the customer has become a growing concern. This paper presents a new manufacturing service composition method to solve the multi-objective optimization problem based on quality of service (QoS). The proposed model not only presents different methods for calculating the transportation time and transportation cost under various structures but also solves the three-dimensional composition optimization problem, including service aggregation, service selection, and service scheduling, simultaneously. Further, an improved Flower Pollination Algorithm (IFPA) is proposed to solve the three-dimensional composition optimization problem using a matrix-based representation scheme. The mutation and crossover operators of the Differential Evolution (DE) algorithm are also used to extend the basic Flower Pollination Algorithm (FPA) and improve its performance. Experimental comparisons with the Genetic Algorithm, DE, and basic FPA confirm that the proposed method outperforms these metaheuristic algorithms and obtains better manufacturing service composition solutions.
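    The paper's matrix-based IFPA internals are not given in the abstract; as a rough, hypothetical sketch of the two standard Differential Evolution operators it says were borrowed (DE/rand/1 mutation and binomial crossover), operating on a real-valued population matrix:

```python
import numpy as np

rng = np.random.default_rng(0)

def de_mutate(pop, f=0.5):
    """DE/rand/1 mutation: v = a + F * (b - c), with a, b, c distinct
    members drawn at random from the population."""
    n = len(pop)
    out = np.empty_like(pop)
    for i in range(n):
        a, b, c = pop[rng.choice(n, size=3, replace=False)]
        out[i] = a + f * (b - c)
    return out

def de_crossover(pop, mutants, cr=0.9):
    """Binomial crossover between each target vector and its mutant."""
    mask = rng.random(pop.shape) < cr
    # guarantee at least one mutant gene per individual
    j = rng.integers(pop.shape[1], size=pop.shape[0])
    mask[np.arange(pop.shape[0]), j] = True
    return np.where(mask, mutants, pop)
```

    In a full IFPA/DE loop these trial vectors would be evaluated against the QoS objective and kept only when they improve on their targets.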

  3. A generic testbed for the design of plasma spectrometer control software with application to the THOR-CSW solar wind instrument

    NASA Astrophysics Data System (ADS)

    De Keyser, Johan; Lavraud, Benoit; Neefs, Eddy; Berkenbosch, Sophie; Beeckman, Bram; Maggiolo, Romain; Gamby, Emmanuel; Fedorov, Andrei; Baruah, Rituparna; Wong, King-Wah; Amoros, Carine; Mathon, Romain; Génot, Vincent; Marcucci, Federica; Brienza, Daniele

    2017-04-01

    Modern plasma spectrometers require intelligent software that is able to exploit their capabilities to the fullest. While the low-level control of the instrument and basic tasks such as performing the basic measurement, temperature control, and production of housekeeping data are to be done by software that is executed on an FPGA and/or processor inside the instrument, higher level tasks such as control of measurement sequences, on-board moment calculation, beam tracking decisions, and data compression, may be performed by the instrument or in the payload data processing unit. Such design decisions, as well as an assessment of the workload on the different processing components, require early prototyping. We have developed a generic simulation testbed for the design of plasma spectrometer control software that allows an early evaluation of the level of resources that is needed at each level. Early prototyping can pinpoint bottlenecks in the design allowing timely remediation. We have applied this tool to the THOR Cold Solar Wind (CSW) plasma spectrometer. Some examples illustrating the usefulness of the tool are given.

  4. Theoretical and computational analyses of LNG evaporator

    NASA Astrophysics Data System (ADS)

    Chidambaram, Palani Kumar; Jo, Yang Myung; Kim, Heuy Dong

    2017-04-01

    Theoretical and numerical analyses of the fluid flow and heat transfer inside an LNG evaporator are conducted in this work. Methane is used instead of LNG as the operating fluid because methane constitutes over 80% of natural gas. The analytical calculations are performed using simple mass and energy balance equations to assess the pressure and temperature variations in the steam tube. Multiphase numerical simulations are performed by solving the governing equations (the basic flow equations of continuity, momentum, and energy) in a portion of the evaporator domain consisting of a single steam pipe. The flow equations are solved along with equations of species transport. Multiphase modeling is incorporated using the VOF method. Liquid methane is the primary phase; it vaporizes into the secondary phase, gaseous methane. Steam is another secondary phase, which flows through the heating coils. Turbulence is modeled by a two-equation turbulence model. The theoretical and numerical predictions match well with each other. Further parametric studies are planned based on the current research.

  5. EMP on a NTS experiment

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gilbert, J.; van Lint, V.; Sherwood, S.

    This report is a compilation of two previous sets of pretest calculations, references 1 and 2, and the grounding and shielding report, reference 3. The calculations performed in reference 1 were made for the baseline system, with the instrumentation trailers not isolated from ground, and wider ranges of ground conductivity were considered. This was used to develop the grounding and shielding plan included in the appendix. The final pretest calculations of reference 2 were performed for the modified system with isolated trailers, and with a better knowledge of the ground conductivity. The basic driving mechanism for currents in the model is the motion of Compton electrons, driven by gamma rays, in the air gaps and soil. Most of the Compton current is balanced by conduction current which returns directly along the path of the Compton electron, but a small fraction will return by circuitous paths involving current flow on conductors, including the uphole cables. The calculation of the currents is done in a two-step process: first, the voltages in the ground near the conducting metallic structures are calculated without considering the presence of the structures. These are then used as open-circuit drivers for an electrical model of the conductors which is obtained from loop integrals of Maxwell's equations. The model used is a transmission line model, similar to those which have been used to calculate EMP currents on buried and overhead cables in other situations, including previous underground tests, although on much shorter distance and time scales, and with more controlled geometries. The behavior of air gaps between the conducting structure and the walls of the drift is calculated using an air chemistry model which determines the electron and ion densities and uses them to calculate the air conductivity across the gap.

  6. Mathemagical Computing: Order of Operations and New Software.

    ERIC Educational Resources Information Center

    Ecker, Michael W.

    1989-01-01

    Describes mathematical problems which occur when using the computer as a calculator. Considers errors in BASIC calculation and the order of mathematical operations. Identifies errors in spreadsheet and calculator programs. Comments on sorting programs and provides a source for Mathemagical Black Holes. (MVL)

  7. Easing The Calculation Of Bolt-Circle Coordinates

    NASA Technical Reports Server (NTRS)

    Burley, Richard K.

    1995-01-01

    Bolt Circle Calculation (BOLT-CALC) computer program used to reduce significant time consumed in manually computing trigonometry of rectangular Cartesian coordinates of holes in bolt circle as shown on blueprint or drawing. Eliminates risk of computational errors, particularly in cases involving many holes or in cases in which coordinates expressed to many significant digits. Program assists in many practical situations arising in machine shops. Written in BASIC. Also successfully compiled and implemented by use of Microsoft's QuickBasic v4.0.
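    BOLT-CALC itself is written in BASIC and is not reproduced here; the underlying trigonometry is, however, simple to state. A minimal Python sketch of the same computation (the function name and argument conventions are illustrative, not taken from the program):

```python
import math

def bolt_circle_coords(n_holes, diameter, start_angle_deg=0.0,
                       cx=0.0, cy=0.0):
    """Cartesian (x, y) coordinates of n equally spaced holes on a
    bolt circle of the given diameter, centered at (cx, cy)."""
    r = diameter / 2.0
    coords = []
    for k in range(n_holes):
        # holes are spaced 360/n degrees apart around the circle
        theta = math.radians(start_angle_deg + 360.0 * k / n_holes)
        coords.append((cx + r * math.cos(theta),
                       cy + r * math.sin(theta)))
    return coords
```

    For example, four holes on a 2-inch bolt circle starting at 0 degrees land at (1, 0), (0, 1), (-1, 0), and (0, -1).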

  8. batman: BAsic Transit Model cAlculatioN in Python

    NASA Astrophysics Data System (ADS)

    Kreidberg, Laura

    2015-10-01

    batman provides fast calculation of exoplanet transit light curves and supports calculation of light curves for any radially symmetric stellar limb darkening law. It uses an integration algorithm for models that cannot be quickly calculated analytically, and in typical use, the batman Python package can calculate a million model light curves in well under ten minutes for any limb darkening profile.
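    batman's own API is documented with the package; as a library-independent illustration of the quantity such a code computes, here is the closed-form transit flux for a uniform stellar disk (no limb darkening), following the standard circle-overlap geometry (the function name is illustrative):

```python
import numpy as np

def uniform_transit_flux(z, p):
    """Relative flux for a planet of radius ratio p at projected
    separation z (both in units of the stellar radius), assuming a
    uniformly bright stellar disk."""
    z = np.asarray(z, dtype=float)
    flux = np.ones_like(z)
    # planet disk entirely in front of the star
    inside = z <= 1.0 - p
    flux[inside] = 1.0 - p**2
    # ingress/egress: partial overlap of the two disks
    partial = (np.abs(1.0 - p) < z) & (z < 1.0 + p)
    zp = z[partial]
    k0 = np.arccos((p**2 + zp**2 - 1.0) / (2.0 * p * zp))
    k1 = np.arccos((1.0 - p**2 + zp**2) / (2.0 * zp))
    area = (p**2 * k0 + k1
            - 0.5 * np.sqrt(4.0 * zp**2 - (1.0 + zp**2 - p**2)**2))
    flux[partial] = 1.0 - area / np.pi
    return flux
```

    For nontrivial limb darkening laws this overlap integral has no closed form, which is why batman falls back on a fast numerical integration scheme.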

  9. The role of language in mathematical development: evidence from children with specific language impairments.

    PubMed

    Donlan, Chris; Cowan, Richard; Newton, Elizabeth J; Lloyd, Delyth

    2007-04-01

    A sample (n=48) of eight-year-olds with specific language impairments (SLI) is compared with age-matched (n=55) and language-matched (n=55) controls on a range of tasks designed to test the interdependence of language and mathematical development. Performance across tasks varies substantially in the SLI group, showing profound deficits in production of the count word sequence and basic calculation and significant deficits in understanding of the place-value principle in Hindu-Arabic notation. Only in understanding of arithmetic principles does SLI performance approximate that of age-matched controls, indicating that principled understanding can develop even where number sequence production and other aspects of number processing are severely compromised.

  10. Large Scale GW Calculations on the Cori System

    NASA Astrophysics Data System (ADS)

    Deslippe, Jack; Del Ben, Mauro; da Jornada, Felipe; Canning, Andrew; Louie, Steven

    The NERSC Cori system, powered by 9000+ Intel Xeon Phi processors, represents one of the largest HPC systems for open science in the United States and the world. We discuss the optimization of the GW methodology for this system, including both node-level and system-scale optimizations. We highlight multiple large-scale (thousands of atoms) case studies and discuss both absolute application performance and comparison to calculations on more traditional HPC architectures. We find that the GW method is particularly well suited for many-core architectures due to the ability to exploit a large amount of parallelism across many layers of the system. This work was supported by the U.S. Department of Energy, Office of Science, Basic Energy Sciences, Materials Sciences and Engineering Division, as part of the Computational Materials Sciences Program.

  11. On the prediction of swirling flowfields found in axisymmetric combustor geometries

    NASA Technical Reports Server (NTRS)

    Rhode, D. L.; Lilley, D. G.; Mclaughlin, D. K.

    1981-01-01

    The paper reports research restricted to steady turbulent flow in axisymmetric geometries under low-speed and nonreacting conditions. Numerical computations are performed for a basic two-dimensional axisymmetric flowfield similar to that found in a conventional gas turbine combustor. Calculations include a stairstep boundary representation of the expansion flow, a conventional k-epsilon turbulence model, and realistic accommodation of swirl effects. A preliminary evaluation of the accuracy of computed flowfields is accomplished by comparisons with flow visualizations using neutrally buoyant helium-filled soap bubbles as tracer particles. The comparisons show good agreement, and it is found that a remaining problem in swirling flows is the accuracy with which the sizes and shapes of the recirculation zones may be predicted, which may be attributed to the quality of the turbulence model.

  12. Improved sensitivity by post-column chemical environment modification of CE-ESI-MS using a flow-through microvial interface.

    PubMed

    Risley, Jessica May; Chen, David Da Yong

    2017-06-01

    Post-column chemical environment modification can affect detection sensitivity and signal appearance when capillary electrophoresis is coupled through electrospray ionization to mass spectrometry (CE-ESI-MS). In this study, changes in the signal intensity and peak shape of N-acetylneuraminic acid (Neu5Ac) were examined when the modifier solution used in a flow-through microvial interface for CE-ESI-MS was prepared using an acidic or basic background electrolyte (BGE) composition. In negative ion mode, the use of a basic modifier resulted in improved detection compared to the results obtained with an acidic modifier: increased sensitivity and more symmetrical peak shape were obtained. Using an acidic modifier, the LOD of Neu5Ac was 47.7 nM, whereas for a basic modifier, the LOD of Neu5Ac was 5.20 nM. The calculated asymmetry factor at 100 nM of Neu5Ac ranged from 0.71 to 1.5 when an acidic modifier was used, while the factor ranged from 1.0 to 1.1 when a basic modifier was used. Properly chosen post-column chemical modification can have a significant effect on the performance of the CE-MS system. © 2017 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  13. Gas-Phase Lithium Cation Basicity: Revisiting the High Basicity Range by Experiment and Theory

    NASA Astrophysics Data System (ADS)

    Mayeux, Charly; Burk, Peeter; Gal, Jean-Francois; Kaljurand, Ivari; Koppel, Ilmar; Leito, Ivo; Sikk, Lauri

    2014-09-01

    According to high-level calculations, the upper part of the previously published FT-ICR lithium cation basicity (LiCB at 373 K) scale appeared to be biased by a systematic downward shift. The purpose of this work was to determine the source of this systematic difference. New experimental LiCB values at 373 K have been measured for 31 ligands by proton-transfer equilibrium techniques, ranging from tetrahydrofuran (137.2 kJ mol-1) to 1,2-dimethoxyethane (202.7 kJ mol-1). The relative basicities (ΔLiCB) were included in a single self-consistent ladder anchored to the absolute LiCB value of pyridine (146.7 kJ mol-1). This new LiCB scale exhibits good agreement with theoretical values obtained at the G2(MP2) level. By means of kinetic modeling, it was also shown that equilibrium measurements can be performed in spite of the formation of Li+ bound dimers. The key feature for achieving accurate equilibrium measurements is the ion trapping time. The potential causes of discrepancies between the new data and previous experimental measurements were analyzed. It was concluded that the disagreement essentially finds its origin in the estimation of temperature and the calibration of the Cooks kinetic method.

  14. Student understanding of pH: "I don't know what the log actually is, I only know where the button is on my calculator".

    PubMed

    Watters, Dianne J; Watters, James J

    2006-07-01

    In foundation biochemistry and biological chemistry courses, a major problem area that has been identified is students' lack of understanding of pH, acids, bases, and buffers and their inability to apply their knowledge in solving acid/base problems. The aim of this study was to explore students' conceptions of pH and their ability to solve problems associated with the behavior of biological acids to understand the source of student difficulties. The responses given by most students are characteristic of an atomistic approach in which they pay no attention to the structure of the problem and concentrate only on juggling the elements together until they get a solution. Many students reported difficulty in understanding what the question was asking and were unable to interpret a simple graph showing the pH activity profile of an enzyme. The most startling finding was the lack of basic understanding of logarithms and the inability of all except one student to perform a simple calculation on logs without a calculator. This deficiency in high school mathematical skills severely hampered their understanding of pH. This study has highlighted a widespread deficiency in basic mathematical skills among first year undergraduates and a fragmented understanding of acids and bases. Implications for the way in which the concepts of pH and buffers are taught are discussed. Copyright © 2006 International Union of Biochemistry and Molecular Biology, Inc.
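    The calculation the students could not perform without a calculator is just the base-10 logarithm in the definition pH = -log10[H+]. A minimal sketch (function names are illustrative):

```python
import math

def ph_from_h(h_conc_mol_per_l):
    """pH is minus the base-10 log of the H+ concentration (mol/L)."""
    return -math.log10(h_conc_mol_per_l)

def h_from_ph(ph):
    """Invert the definition: [H+] = 10^(-pH)."""
    return 10.0 ** (-ph)
```

    For example, neutral water at [H+] = 1e-7 mol/L gives pH 7, and a 1 mM strong monoprotic acid gives pH 3; the students' difficulty was exactly this mapping between exponents and log values.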

  15. Electrodynamics panel presentation

    NASA Technical Reports Server (NTRS)

    Mccoy, J.

    1986-01-01

    The Plasma Motor Generator (PMG) concept is explained in detail. The PMG tether system used to calculate the estimated performance data is described. The voltage drops and current contact geometries involved in the operation of an electrodynamic tether are displayed, illustrating the comparative behavior of hollow cathodes, electron guns, and passive collectors for current coupling into the ionosphere. The basic PMG design, involving a massive tether cable with little or no satellite mass at the far end(s), is also described. The Jupiter mission and its use of electrodynamic tethers are discussed. The need for demonstration experiments is stressed.

  16. Basic forest cover mapping using digitized remote sensor data and automated data processing techniques

    NASA Technical Reports Server (NTRS)

    Coggeshall, M. E.; Hoffer, R. M.

    1973-01-01

    Remote sensing equipment and automatic data processing techniques were employed as aids in the institution of improved forest resource management methods. On the basis of automatically calculated statistics derived from manually selected training samples, the feature selection processor of LARSYS selected, upon consideration of various groups of the four available spectral regions, a series of channel combinations whose automatic classification performances (for six cover types, including both deciduous and coniferous forest) were tested, analyzed, and further compared with automatic classification results obtained from digitized color infrared photography.

  17. 47 CFR 76.611 - Cable television basic signal leakage performance criteria.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... 47 Telecommunication 4 2010-10-01 2010-10-01 false Cable television basic signal leakage... television basic signal leakage performance criteria. (a) No cable television system shall commence or... one of the following cable television basic signal leakage performance criteria: (1) prior to carriage...

  18. Examining student performance in an introductory Physics for engineering course: A quantitative case study.

    NASA Astrophysics Data System (ADS)

    Valente, Diego; Savkar, Amit; Mokaya, Fridah; Wells, James

    The Force Concept Inventory (FCI) has been analyzed and studied in various ways with regard to students' understanding of basic physics concepts. We present normalized learning gains and effect size calculations of FCI scores, taken in the context of large-scale classes in a 4-year public university and course instruction that incorporates elements of Just-In-Time teaching and active learning components. In addition, we present a novel way of using the FCI pre- and post-tests as predictors of students' performance on midterm and final exams. Utilizing a taxonomy table of physics concepts, we look at student performance broken down by topic, while also examining possible correlations between FCI post-test scores and other course assessments. College of Liberal Arts and Sciences (CLAS), UConn.

  19. Calculations of critical misfit and thickness: An overview

    NASA Technical Reports Server (NTRS)

    Vandermerwe, Jan H.; Jesser, W. A.

    1988-01-01

    This overview stresses the equilibrium/nonequilibrium nature of the physical properties, as well as the basic properties of the models, used to calculate critical misfit and critical thickness in epitaxy.

  20. Revision workshops in elementary mathematics enhance student performance in routine laboratory calculations.

    PubMed

    Sawbridge, Jenny L; Qureshi, Haseeb K; Boyd, Matthew J; Brown, Angus M

    2014-09-01

    The ability to understand and implement calculations required for molarity and dilution computations that are routinely undertaken in the laboratory are essential skills that should be possessed by all students entering an undergraduate Life Sciences degree. However, it is increasingly recognized that the majority of these students are ill equipped to reliably carry out such calculations. There are several factors that conspire against students' understanding of this topic, with the alien concept of the mole in relation to the mass of compounds and the engineering notation required when expressing the relatively small quantities typically involved being two key examples. In this report, we highlight teaching methods delivered via revision workshops to undergraduate Life Sciences students at the University of Nottingham. Workshops were designed to 1) expose student deficiencies in basic numeracy skills and remedy these deficiencies, 2) introduce molarity and dilution calculations and illustrate their workings in a step-by-step manner, and 3) allow students to appreciate the magnitude of numbers. Preworkshop to postworkshop comparisons demonstrated a considerable improvement in students' performance, which attenuated with time. The findings of our study suggest that an ability to carry out laboratory calculations cannot be assumed in students entering Life Sciences degrees in the United Kingdom but that explicit instruction in the form of workshops improves proficiency to a level of competence that allows students to prosper in the laboratory environment. Copyright © 2014 The American Physiological Society.
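    The two calculations the workshops target reduce to three relations: n = m/M (moles from mass), c = n/V (molarity), and C1·V1 = C2·V2 (dilution). A minimal sketch (function names are illustrative):

```python
def moles(mass_g, molar_mass_g_per_mol):
    """Amount of substance n = m / M, in mol."""
    return mass_g / molar_mass_g_per_mol

def molarity(n_mol, volume_l):
    """Concentration c = n / V, in mol/L."""
    return n_mol / volume_l

def diluted_concentration(c1, v1, v2):
    """Dilution relation C1*V1 = C2*V2, solved for C2."""
    return c1 * v1 / v2
```

    For example, 58.44 g of NaCl (M = 58.44 g/mol) dissolved to 1 L gives a 1 M solution, and diluting 10 mL of that stock to 100 mL gives 0.1 M; the engineering-notation volumes (mL as 1e-3 L) are exactly the step students stumble on.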

  1. Learning Mathematics in a Visuospatial Format: A Randomized, Controlled Trial of Mental Abacus Instruction.

    PubMed

    Barner, David; Alvarez, George; Sullivan, Jessica; Brooks, Neon; Srinivasan, Mahesh; Frank, Michael C

    2016-07-01

    Mental abacus (MA) is a technique of performing fast, accurate arithmetic using a mental image of an abacus; experts exhibit astonishing calculation abilities. Over 3 years, 204 elementary school students (age range at outset: 5-7 years old) participated in a randomized, controlled trial to test whether MA expertise (a) can be acquired in standard classroom settings, (b) improves students' mathematical abilities (beyond standard math curricula), and (c) is related to changes in basic cognitive capacities like working memory. MA students outperformed controls on arithmetic tasks, suggesting that MA expertise can be achieved by children in standard classrooms. MA training did not alter basic cognitive abilities; instead, differences in spatial working memory at the beginning of the study mediated MA learning. © 2016 The Authors. Child Development © 2016 Society for Research in Child Development, Inc.

  2. The Calculator in the Classroom: Revolution or Revelation?

    ERIC Educational Resources Information Center

    Etlinger, Leonard E.; And Others

    The use of calculators and computers in the schools is promoted. It is stated that calculators should be used in the mathematics classroom as soon as basic operations are understood. A point is made that calculators are no greater a threat to "learning the fundamentals" than slide rules, which have been available for over 350 years. It is…

  3. Radiation Parameters of High Dose Rate Iridium -192 Sources

    NASA Astrophysics Data System (ADS)

    Podgorsak, Matthew B.

    A lack of physical data for high dose rate (HDR) Ir-192 sources has necessitated the use of basic radiation parameters measured with low dose rate (LDR) Ir-192 seeds and ribbons in HDR dosimetry calculations. A rigorous examination of the radiation parameters of several HDR Ir-192 sources has shown that this extension of physical data from LDR to HDR Ir-192 may be inaccurate. Uncertainty in any of the basic radiation parameters used in dosimetry calculations compromises the accuracy of the calculated dose distribution and the subsequent dose delivery. Dose errors of up to 0.3%, 6%, and 2% can result from the use of currently accepted values for the half-life, exposure rate constant, and dose buildup effect, respectively. Since an accuracy of 5% in the delivered dose is essential to prevent severe complications or tumor regrowth, the use of basic physical constants with uncertainties approaching 6% is unacceptable. A systematic evaluation of the pertinent radiation parameters contributes to a reduction in the overall uncertainty in HDR Ir-192 dose delivery. Moreover, the results of the studies described in this thesis contribute significantly to the establishment of standardized numerical values to be used in HDR Ir-192 dosimetry calculations.
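    As a simple illustration of how a half-life uncertainty propagates into delivered dose (the numbers below are illustrative; the accepted Ir-192 half-life is about 73.8 days), the decayed source strength follows the usual exponential law:

```python
def remaining_fraction(t_days, half_life_days):
    """Fraction of the initial source strength left after t_days,
    from A(t) = A0 * 2^(-t / T_half)."""
    return 0.5 ** (t_days / half_life_days)

# Two slightly different half-life values diverge slowly over a
# treatment course, producing a small systematic dose error.
f_a = remaining_fraction(30.0, 73.8)
f_b = remaining_fraction(30.0, 74.2)
relative_error = abs(f_a - f_b) / f_a
```

    Over a 30-day course this particular half-life spread stays well under 1%, consistent with the sub-percent dose errors the thesis attributes to half-life uncertainty.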

  4. Calculations of the β-decay half-lives of neutron-deficient nuclei

    NASA Astrophysics Data System (ADS)

    Tan, Wenjin; Ni, Dongdong; Ren, Zhongzhou

    2017-05-01

    In this work, β+/EC decays of some medium-mass nuclei are investigated within the extended quasiparticle random-phase approximation (QRPA), where neutron-neutron, proton-proton, and neutron-proton (np) pairing correlations are taken into consideration in the specialized Hartree-Fock-Bogoliubov (HFB) transformation. In addition to the pairing interaction, the Brückner G-matrix obtained with the charge-dependent Bonn nucleon-nucleon force is used for the residual particle-particle and particle-hole interactions. Calculations are performed for even-even proton-rich isotopes ranging from Z=24 to Z=34. It is found that the np pairing interaction plays a significant role in β-decay for some nuclei far from stability. Compared with other theoretical calculations, our calculations show good agreement with the available experimental data. Predictions of β-decay half-lives for some very neutron-deficient nuclei are made for reference. Supported by National Natural Science Foundation of China (11535004, 11375086, 11120101005, 11175085 and 11235001), 973 National Major State Basic Research and Development Program of China (2013CB834400) and Science and Technology Development Fund of Macau (020/2014/A1 and 039/2013/A2)

  5. Level set method with automatic selective local statistics for brain tumor segmentation in MR images.

    PubMed

    Thapaliya, Kiran; Pyun, Jae-Young; Park, Chun-Su; Kwon, Goo-Rak

    2013-01-01

    The level set approach is a powerful tool for segmenting images. This paper proposes a method for segmenting brain tumor images from MR images. A new signed pressure function (SPF) that can efficiently stop the contours at weak or blurred edges is introduced. The local statistics of the different objects present in the MR images were calculated. Using local statistics, the tumor objects were identified among different objects. In level set methods, the calculation of the parameters is a challenging task; here, the different parameters were calculated automatically for different types of images. The basic thresholding value was updated and adjusted automatically for different MR images. This thresholding value was used to calculate the different parameters in the proposed algorithm. The proposed algorithm was tested on magnetic resonance images of the brain for tumor segmentation, and its performance was evaluated visually and quantitatively. Numerical experiments on brain tumor images highlighted the efficiency and robustness of this method. Crown Copyright © 2013. Published by Elsevier Ltd. All rights reserved.

  6. Lanthanide and actinide chemistry at high C/O ratios in the solar nebula

    NASA Technical Reports Server (NTRS)

    Lodders, Katharina; Fegley, Bruce, Jr.

    1993-01-01

    Chemical equilibrium calculations were performed to study the condensation chemistry of the REE and actinides under the highly reducing conditions which are necessary for the formation of the enstatite chondrites. Our calculations confirm that the REE and actinides condensed into oldhamite (CaS), the major REE and actinide host phase in enstatite chondrites, at a carbon-oxygen (C/O) ratio not less than 1 in an otherwise solar gas. Five basic types of REE abundance patterns, several of which are analogous to REE abundance patterns observed in the Ca, Al-rich inclusions in carbonaceous chondrites, are predicted to occur in meteoritic oldhamites. All of the reported REE patterns in oldhamites in enstatite chondrites can be interpreted in terms of our condensation calculations. The observed patterns fall into three of the five predicted categories. The reported Th and U enrichments and ratios in meteoritic oldhamites are also consistent with predictions of the condensation calculations. Pure REE sulfides are predicted to condense in the 10{sup -6} to 10{sup -9} bar range and may be found in enstatite chondrites if they formed in this pressure range.

  7. Calculation of Temperature Rise in Calorimetry.

    ERIC Educational Resources Information Center

    Canagaratna, Sebastian G.; Witt, Jerry

    1988-01-01

    Gives a simple but fuller account of the basis for accurately calculating temperature rise in calorimetry. Points out some misconceptions regarding these calculations. Describes two basic methods, the extrapolation to zero time and the equal area method. Discusses the theoretical basis of each and their underlying assumptions. (CW)

  8. A combined study of heat and mass transfer in an infant incubator with an overhead screen.

    PubMed

    Ginalski, Maciej K; Nowak, Andrzej J; Wrobel, Luiz C

    2007-06-01

    The main objective of this study is to investigate the major physical processes taking place inside an infant incubator, before and after modifications have been made to its interior chamber. The modification involves the addition of an overhead screen to decrease radiation heat losses from the infant placed inside the incubator. The present study investigates the effect of these modifications on the convective heat flux from the infant's body to the surrounding environment inside the incubator. A combined analysis of airflow and heat transfer due to conduction, convection, radiation and evaporation has been performed, in order to calculate the temperature and velocity fields inside the incubator before and after the design modification. Due to the geometrical complexity of the model, computer-aided design (CAD) applications were used to generate a computer-based model. All numerical calculations have been performed using the commercial computational fluid dynamics (CFD) package FLUENT, together with in-house routines used for managing purposes and user-defined functions (UDFs) which extend the basic solver capabilities. Numerical calculations have been performed for three different air inlet temperatures: 32, 34 and 36 degrees C. The study shows a decrease of the radiative and convective heat losses when the overhead screen is present. The results obtained were numerically verified as well as compared with results available in the literature from investigations of dry heat losses from infant manikins.

  9. Calculating the n-point correlation function with general and efficient python code

    NASA Astrophysics Data System (ADS)

    Genier, Fred; Bellis, Matthew

    2018-01-01

    There are multiple approaches to understanding the evolution of large-scale structure in our universe and with it the role of baryonic matter, dark matter, and dark energy at different points in history. One approach is to calculate the n-point correlation function estimator for galaxy distributions, sometimes choosing a particular type of galaxy, such as luminous red galaxies. The standard way to calculate these estimators is with pair counts (for the 2-point correlation function) and with triplet counts (for the 3-point correlation function). These are O(n^2) and O(n^3) problems, respectively, and with the number of galaxies that will be characterized in future surveys, having efficient and general code will be of increasing importance. Here we show a proof-of-principle approach to the 2-point correlation function that relies on pre-calculating galaxy locations in coarse “voxels”, thereby reducing the total number of necessary calculations. The code is written in python, making it easily accessible and extensible, and is open-sourced to the community. Basic results and performance tests using SDSS/BOSS data will be shown and we discuss the application of this approach to the 3-point correlation function.
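    The authors' open-sourced code is not reproduced here; independent of it, the voxel idea can be sketched in a few lines of Python (the names and the naive per-neighbor inner loop are illustrative — a production code would vectorize the distance tests):

```python
import numpy as np
from collections import defaultdict

def voxel_pair_count(points, r_max, voxel_size):
    """Count pairs with separation < r_max by comparing each point only
    against points in its own and the 26 neighboring voxels. Requires
    voxel_size >= r_max so that no qualifying pair is missed."""
    points = np.asarray(points, dtype=float)
    grid = defaultdict(list)
    for i, p in enumerate(points):
        grid[tuple(int(c) for c in np.floor(p / voxel_size))].append(i)
    pairs = 0
    for key, idx in grid.items():
        for dx in (-1, 0, 1):
            for dy in (-1, 0, 1):
                for dz in (-1, 0, 1):
                    nb = (key[0] + dx, key[1] + dy, key[2] + dz)
                    # visit each voxel pair once (lexicographic order)
                    if nb < key or nb not in grid:
                        continue
                    for i in idx:
                        for j in grid[nb]:
                            if nb == key and j <= i:
                                continue
                            if np.linalg.norm(points[i] - points[j]) < r_max:
                                pairs += 1
    return pairs
```

    For clustered galaxy catalogs this turns the O(n^2) all-pairs scan into work proportional to the number of occupied neighboring-voxel pairs, which is the speedup the abstract describes.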

  10. First principles statistical mechanics of alloys and magnetism

    NASA Astrophysics Data System (ADS)

    Eisenbach, Markus; Khan, Suffian N.; Li, Ying Wai

    Modern high performance computing resources are enabling the exploration of the statistical physics of phase spaces with increasing size and higher fidelity of the Hamiltonian of the systems. For selected systems, this now allows the combination of Density Functional based first principles calculations with classical Monte Carlo methods for parameter free, predictive thermodynamics of materials. We combine our locally self-consistent real space multiple scattering method for solving the Kohn-Sham equation with Wang-Landau Monte Carlo calculations (WL-LSMS). In the past we have applied this method to the calculation of Curie temperatures in magnetic materials. Here we will present direct calculations of the chemical order-disorder transitions in alloys. We present our calculated transition temperature for the chemical ordering in CuZn and the temperature dependence of the short-range order parameter and specific heat. Finally we will present the extension of the WL-LSMS method to magnetic alloys, thus allowing the investigation of the interplay of magnetism, structure and chemical order in ferrous alloys. This research was supported by the U.S. Department of Energy, Office of Science, Basic Energy Sciences, Materials Science and Engineering Division and it used Oak Ridge Leadership Computing Facility resources at Oak Ridge National Laboratory.

  11. Handbook of Industrial Engineering Equations, Formulas, and Calculations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Badiru, Adedeji B; Omitaomu, Olufemi A

    The first handbook to focus exclusively on industrial engineering calculations with a correlation to applications, Handbook of Industrial Engineering Equations, Formulas, and Calculations contains a general collection of the mathematical equations often used in the practice of industrial engineering. Many books cover individual areas of engineering and some cover all areas, but none covers industrial engineering specifically, nor do they highlight topics such as project management, materials, and systems engineering from an integrated viewpoint. Written by acclaimed researchers and authors, this concise reference marries theory and practice, making it a versatile and flexible resource. Succinctly formatted for functionality, the book presents: Basic Math Calculations; Engineering Math Calculations; Production Engineering Calculations; Engineering Economics Calculations; Ergonomics Calculations; Facility Layout Calculations; Production Sequencing and Scheduling Calculations; Systems Engineering Calculations; Data Engineering Calculations; Project Engineering Calculations; and Simulation and Statistical Equations. It has been said that engineers make things while industrial engineers make things better. To make something better requires an understanding of its basic characteristics and the underlying equations and calculations that facilitate that understanding. To do this, however, you do not have to be a computational expert; you just have to know where to get the computational resources that are needed. This book elucidates the underlying equations that facilitate the understanding required to improve design processes, continuously improving the answer to the age-old question: What is the best way to do a job?

  12. More Experiments and Calculations.

    ERIC Educational Resources Information Center

    Siddons, J. C.

    1984-01-01

    Describes two experiments that illustrate basic ideas but would be difficult to carry out. Also presents activities and experiments on rainbow cups, electrical charges, electrophorus calculation, pulse electrometer, a skidding car, and on the Oersted effect. (JN)

  13. MASPROP- MASS PROPERTIES OF A RIGID STRUCTURE

    NASA Technical Reports Server (NTRS)

    Hull, R. A.

    1994-01-01

    The computer program MASPROP was developed to rapidly calculate the mass properties of complex rigid structural systems. This program's basic premise is that complex systems can be adequately described by a combination of basic elementary structural shapes. Thirteen widely used basic structural shapes are available in this program. They are as follows: Discrete Mass, Cylinder, Truncated Cone, Torus, Beam (arbitrary cross section), Circular Rod (arbitrary cross section), Spherical Segment, Sphere, Hemisphere, Parallelepiped, Swept Trapezoidal Panel, Symmetric Trapezoidal Panels, and a Curved Rectangular Panel. MASPROP provides a designer with a simple technique that requires minimal input to calculate the mass properties of a complex rigid structure and should be useful in any situation where one needs to calculate the center of gravity and moments of inertia of a complex structure. Rigid body analysis is used to calculate mass properties. Mass properties are calculated about component axes that have been rotated to be parallel to the system coordinate axes. Then the system center of gravity is calculated and the mass properties are transferred to axes through the system center of gravity by using the parallel axis theorem. System weight, moments of inertia about the system origin, and the products of inertia about the system center of mass are calculated and printed. From the information about the system center of mass the principal axes of the system and the moments of inertia about them are calculated and printed. The only input required is simple geometric data describing the size and location of each element and the respective material density or weight of each element. This program is written in FORTRAN for execution on a CDC 6000 series computer with a central memory requirement of approximately 62K (octal) of 60 bit words. The development of this program was completed in 1978.
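
    The parallel-axis-theorem step described above can be illustrated with a short sketch; this is not MASPROP itself, and the function names are assumptions made for the example. Each component's moment of inertia is moved from its own center of mass to an axis through the system center of gravity.

```python
# Illustrative sketch (not MASPROP) of the parallel-axis-theorem step:
# a component's moment of inertia about its own center of mass is
# transferred to a parallel axis a distance d away.
def parallel_axis(i_cm, mass, d):
    """I = I_cm + m * d**2 about an axis a distance d from the
    parallel axis through the component's center of mass."""
    return i_cm + mass * d ** 2

def system_inertia(components):
    """Sum component inertias about a common system axis.
    components: iterable of (I_cm, mass, distance_to_system_axis)."""
    return sum(parallel_axis(i, m, d) for i, m, d in components)
```

    For example, a component with I_cm = 2, mass 3, at distance 2 from the system axis contributes 2 + 3·2² = 14 to the system inertia.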

  14. BOXER: Fine-flux Cross Section Condensation, 2D Few Group Diffusion and Transport Burnup Calculations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    2010-02-01

    Neutron transport, calculation of multiplication factor and neutron fluxes in 2-D configurations: cell calculations, 2-D diffusion and transport, and burnup. Preparation of a cross section library for the code BOXER from a basic library in ENDF/B format (ETOBOX).

  15. Calculators and Polynomial Evaluation.

    ERIC Educational Resources Information Center

    Weaver, J. F.

    The intent of this paper is to suggest and illustrate how electronic hand-held calculators, especially non-programmable ones with limited data-storage capacity, can be used to advantage by students in one particular aspect of work with polynomial functions. The basic mathematical background upon which calculator application is built is summarized.…
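
    A standard scheme suited to calculators with limited data storage is Horner's method, which evaluates a polynomial with a single running value; the article's exact procedure is not reproduced here, so this is a generic illustration.

```python
# Horner's scheme: evaluate a polynomial with one running value, the
# kind of keystroke-efficient procedure a limited-storage calculator
# invites (a generic illustration, not the article's exact method).
def horner(coeffs, x):
    """Evaluate a polynomial given coefficients from highest to lowest
    degree: ((c0*x + c1)*x + c2)*x + ..."""
    result = 0
    for c in coeffs:
        result = result * x + c
    return result
```

    For instance, horner([2, -3, 1], 4) evaluates 2x² - 3x + 1 at x = 4 and returns 21, using only one stored intermediate value per step.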

  16. Competency based training in robotic surgery: benchmark scores for virtual reality robotic simulation.

    PubMed

    Raison, Nicholas; Ahmed, Kamran; Fossati, Nicola; Buffi, Nicolò; Mottrie, Alexandre; Dasgupta, Prokar; Van Der Poel, Henk

    2017-05-01

    To develop benchmark scores of competency for use within a competency based virtual reality (VR) robotic training curriculum. This longitudinal, observational study analysed results from nine European Association of Urology hands-on training courses in VR simulation. In all, 223 participants ranging from novice to expert robotic surgeons completed 1565 exercises. Competency was set at 75% of the mean expert score. Benchmark scores for all general performance metrics generated by the simulator were calculated. Assessment exercises were selected by expert consensus and through learning-curve analysis. Three basic skill and two advanced skill exercises were identified. Benchmark scores based on expert performance offered viable targets for novice and intermediate trainees in robotic surgery. Novice participants met the competency standards for most basic skill exercises; however, advanced exercises were significantly more challenging. Intermediate participants performed better across the seven metrics but still did not achieve the benchmark standard in the more difficult exercises. Benchmark scores derived from expert performances offer relevant and challenging scores for trainees to achieve during VR simulation training. Objective feedback allows both participants and trainers to monitor educational progress and ensures that training remains effective. Furthermore, the well-defined goals set through benchmarking offer clear targets for trainees and enable training to move to a more efficient competency based curriculum. © 2016 The Authors. BJU International © 2016 BJU International. Published by John Wiley & Sons Ltd.

  17. Molecular Modeling of Ammonium, Calcium, Sulfur, and Sodium Lignosulphonates in Acid and Basic Aqueous Environments

    NASA Astrophysics Data System (ADS)

    Salazar Valencia, P. J.; Bolívar Marinez, L. E.; Pérez Merchancano, S. T.

    2015-12-01

    Lignosulphonates (LS), also known as lignin sulfonates or sulfite lignin, are lignins in sulfonated form, obtained from the "sulfite liquors," a residue of the wood pulp extraction process. Their main utility lies in their wide range of properties: they can be used as additives, dispersants, binders, and fluxing agents, among others, in fields ranging from food to fertilizer manufacture and even as agents in the preparation of ion exchange membranes. Since they can be manufactured relatively easily and quickly, and their molecular size can be manipulated to obtain fragments of very low molecular weight, they are used as transport agents in the food, cosmetics, and pharmaceutical industries and in drug development, and as molecular elements for the treatment of health problems. In this paper, we study the electronic, structural, and optical characteristics of LS incorporating ammonium, sulfur, calcium, and sodium ions in acidic and basic aqueous media in order to gain a better understanding of their behavior and the very interesting properties they exhibit. The structural properties were calculated with the molecular modeling program HyperChem 5 using the semiempirical method PM3 of the NDO family (neglect of differential overlap). We calculated the electronic and optical properties using the semiempirical method ZINDO/CI.

  18. Energetics of basic karate kata.

    PubMed

    Bussweiler, Jens; Hartmann, Ulrich

    2012-12-01

    Knowledge about energy requirements during exercise seems necessary to develop training concepts in the combat sport Karate. It is a commonly held view that the anaerobic lactic energy metabolism plays a key role, but this assumption could not be confirmed so far. The metabolic cost and fractional energy supply of a basic Karate Kata (Heian Nidan, Shotokan style) with a duration of about 30 s were analyzed. Six male Karateka [mean ± SD (age 29 ± 8 years; height 177 ± 5 cm, body mass 75 ± 9 kg)] with different training experience (advanced athletes, experts, elite athletes) were examined while performing the sport-specific movement sequence once and twice in succession. During Kata performance, oxygen uptake was measured with a portable spirometric device, blood lactate concentrations were examined before and after testing, and fractional energy supply was calculated. The results have shown that on average 52 % of the energy supply for one Heian Nidan came from anaerobic alactic metabolism, 25 % from anaerobic lactic and 23 % from aerobic metabolism. For two sequentially executed Heian Nidan, and thus nearly double the duration, the calculated percentages were 33, 25 and 42 %. Total energy demand for one Kata and two Kata was approximately 61 and 99 kJ, respectively. Despite measured blood lactate concentrations up to 8.1 mmol/l, which might suggest a dominance of lactic energy supply, a lactic fraction of only 17-31 % during these relatively short and intense sequences could be found. A heavy use of lactic energy metabolism had to be rejected.

  19. Sodium sulfate - Deposition and dissolution of silica

    NASA Technical Reports Server (NTRS)

    Jacobson, Nathan S.

    1989-01-01

    The hot-corrosion process for SiO2-protected materials involves deposition of Na2SO4 and dissolution of the protective SiO2 scale. Dew points for Na2SO4 deposition are calculated as a function of pressure, sodium content, and sulfur content. Expected dissolution regimes for SiO2 are calculated as a function of Na2SO4 basicity. Controlled-condition burner-rig tests on quartz verify some of these predicted dissolution regimes. The basicity of Na2SO4 is not always a simple function of P(SO3). Electrochemical measurements of the Na2O activity, a(Na2O), show that carbon creates basic conditions in Na2SO4, which explains the extensive corrosion of SiO2-protected materials containing carbon, such as SiC.

  20. BOREHOLE FLOWMETERS: FIELD APPLICATION AND DATA ANALYSIS

    EPA Science Inventory

    This paper reviews application of borehole flowmeters in granular and fractured rocks. Basic data obtained in the field are the ambient flow log and the pumping-induced flow log. These basic logs may then be used to calculate other quantities of interest. The paper describes the ...

  1. A stable numerical solution method for in-plane loading of nonlinear viscoelastic laminated orthotropic materials

    NASA Technical Reports Server (NTRS)

    Gramoll, K. C.; Dillard, D. A.; Brinson, H. F.

    1989-01-01

    In response to the tremendous growth in the development of advanced materials, such as fiber-reinforced plastic (FRP) composite materials, a new numerical method is developed to analyze and predict the time-dependent properties of these materials. Basic concepts in viscoelasticity, laminated composites, and previous viscoelastic numerical methods are presented. A stable numerical method, called the nonlinear differential equation method (NDEM), is developed to calculate the in-plane stresses and strains over any time period for a general laminate constructed from nonlinear viscoelastic orthotropic plies. The method is implemented in an in-plane stress analysis computer program, called VCAP, to demonstrate its usefulness and to verify its accuracy. A number of actual experimental test results performed on Kevlar/epoxy composite laminates are compared to predictions calculated from the numerical method.

  2. The methodic of calculation for the need of basic construction machines on construction site when developing organizational and technological documentation

    NASA Astrophysics Data System (ADS)

    Zhadanovsky, Boris; Sinenko, Sergey

    2018-03-01

    Economic indicators of construction work, particularly in high-rise construction, are directly related to the choice of an optimal number of machines. A shortage of machinery makes it impossible to complete the construction & installation work on schedule. The rate of construction & installation work and labor productivity during high-rise construction largely depend on the degree to which the construction project is provided with machines (the level of work mechanization). When calculating the need for machines in construction projects, it is necessary to ensure that work is completed on schedule, that the level of complex mechanization is raised, that productivity is increased and manual work reduced, and that the usage and maintenance of the machine fleet are improved. The selection of machines and the determination of their numbers should be carried out using the formulas presented in this work.
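
    A generic illustration of how such a machine count is often estimated is sketched below; the formula and the utilization factor are assumptions made for this example, not the paper's actual methodic. The need is derived from the work volume, the per-shift productivity of one machine, and the number of shifts available before the scheduled completion date.

```python
# Generic sketch (an assumption, not the paper's formulas): number of
# machines = work volume / (per-shift productivity * shifts * utilization),
# rounded up because a fractional machine must be supplied whole.
import math

def machines_needed(work_volume, productivity_per_shift, shifts_available,
                    utilization=0.85):
    """Estimate how many identical machines must be on site to finish
    work_volume within shifts_available shifts."""
    effective = productivity_per_shift * shifts_available * utilization
    return math.ceil(work_volume / effective)
```

    Rounding up reflects the scheduling constraint in the abstract: a shortfall of even a fraction of a machine means the work cannot finish on schedule.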

  3. A Program for Calculating and Plotting Synthetic Common-Source Seismic-Reflection Traces for Multilayered Earth Models.

    ERIC Educational Resources Information Center

    Ramananantoandro, Ramanantsoa

    1988-01-01

    Presented is a description of a BASIC program to be used on an IBM microcomputer for calculating and plotting synthetic seismic-reflection traces for multilayered earth models. Discusses finding raypaths for given source-receiver offsets using the "shooting method" and calculating the corresponding travel times. (Author/CW)
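
    For the single horizontal layer, the simplest case of the multilayer models the program handles, the two-way reflection travel time has a closed form; the sketch below shows only that case, as an assumption for illustration. The multilayer case needs the "shooting method" the abstract mentions, iterating the takeoff angle until the ray emerges at the receiver offset.

```python
# Two-way reflection travel time for a single horizontal layer
# (illustrative special case; the program's multilayer shooting method
# is not reproduced here).
import math

def reflection_time(offset, depth, velocity):
    """t(x) = sqrt(t0**2 + (x/v)**2), where t0 = 2*h/v is the
    zero-offset two-way time for a layer of thickness h."""
    t0 = 2.0 * depth / velocity
    return math.sqrt(t0 ** 2 + (offset / velocity) ** 2)
```

    At zero offset this reduces to t0 = 2h/v; the hyperbolic moveout with offset is what a synthetic common-source trace plot displays.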

  4. A Simple Formula for Quantiles on the TI-82/83 Calculator.

    ERIC Educational Resources Information Center

    Eisner, Milton P.

    1997-01-01

    The concept of percentile is a fundamental part of every course in basic statistics. Many such courses are now taught to students and require them to have TI-82 or TI-83 calculators. The functions defined in these calculators enable students to easily find the percentiles of a discrete data set. (PVD)
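
    One common textbook percentile rule is sketched below for illustration; the article's exact TI-82/83 formula is not reproduced here, and several competing conventions for quantiles exist.

```python
# A common sorted-data percentile convention (an illustration, not
# necessarily the article's TI-82/83 formula): take the
# ceil(n*p/100)-th value, averaging with the next value when
# n*p/100 lands exactly on an integer.
import math

def percentile(data, p):
    """Return the p-th percentile of data under the rule above."""
    xs = sorted(data)
    n = len(xs)
    k = n * p / 100.0
    if k.is_integer() and 0 < k < n:
        k = int(k)
        return (xs[k - 1] + xs[k]) / 2.0
    return xs[max(min(math.ceil(k), n), 1) - 1]
```

    For the data set [15, 20, 35, 40, 50], the 30th percentile falls between positions and returns 20, while the 40th percentile lands on an integer position and averages to 27.5.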

  5. A Comprehensive Software and Database Management System for Glomerular Filtration Rate Estimation by Radionuclide Plasma Sampling and Serum Creatinine Methods.

    PubMed

    Jha, Ashish Kumar

    2015-01-01

    Glomerular filtration rate (GFR) estimation by the plasma sampling method is considered the gold standard. However, this method is not widely used because of its complex technique and cumbersome calculations, coupled with the lack of user-friendly software. The routinely used serum creatinine method (SrCrM) of GFR estimation also requires online calculators, which cannot be used without internet access. We have developed user-friendly software, "GFR estimation software", which gives the options to estimate GFR by the plasma sampling method as well as SrCrM. We used Microsoft Windows® as the operating system, Visual Basic 6.0 as the front end, and Microsoft Access® as the database tool to develop this software. We used Russell's formula for GFR calculation by the plasma sampling method. GFR calculations using serum creatinine were done using the MIRD, Cockcroft-Gault, Schwartz, and Counahan-Barratt methods. The developed software performs the mathematical calculations correctly and is user-friendly. It also enables storage and easy retrieval of the raw data, patient information, and calculated GFR for further processing and comparison. This user-friendly software calculates the GFR by various plasma sampling methods and blood parameters, and it provides a good system for storing the raw and processed data for future analysis.
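
    As an example of the serum creatinine methods the abstract lists, the Cockcroft-Gault estimate is shown below in its textbook form; the software's own implementation is not given in the abstract, so this is a rendering of the published formula only.

```python
# Textbook Cockcroft-Gault creatinine clearance estimate (mL/min);
# the GFR software's exact implementation is not shown in the abstract.
def cockcroft_gault(age_years, weight_kg, scr_mg_dl, female=False):
    """CrCl = ((140 - age) * weight) / (72 * SCr), multiplied by 0.85
    for female patients; SCr is serum creatinine in mg/dL."""
    crcl = ((140 - age_years) * weight_kg) / (72.0 * scr_mg_dl)
    return crcl * 0.85 if female else crcl
```

    For a 40-year-old, 72 kg male with a serum creatinine of 1.0 mg/dL, the estimate is 100 mL/min.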

  6. Coupling of a structural analysis and flow simulation for short-fiber-reinforced polymers: property prediction and transfer of results

    NASA Astrophysics Data System (ADS)

    Kröner, C.; Altenbach, H.; Naumenko, K.

    2009-05-01

    The aim of this paper is to discuss the basic theories of interfaces able to transfer the results of an injection molding analysis of fiber-reinforced polymers, performed using the commercial computer code Moldflow, to the structural analysis program ABAQUS. The elastic constants of the materials, such as Young's modulus, shear modulus, and Poisson's ratio, which depend on both the fiber content and the degree of fiber orientation, were calculated not by the usual method of "orientation averaging," but with the help of linear functions fitted to experimental data. The calculation and transfer of all needed data, such as material properties, geometry, directions of anisotropy, and so on, is performed by the developed interface. The interface is suitable for midplane elements in Moldflow. It calculates and transfers to ABAQUS all data necessary for the use of shell elements. In addition, a method is described for modeling nonlinear orthotropic behavior starting from the generalized Hooke's law. It is also shown how such a model can be implemented in ABAQUS by means of a material subroutine. The results obtained according to this subroutine are compared with those based on an orthotropic, linear, elastic simulation.

  7. A versatile program for the calculation of linear accelerator room shielding.

    PubMed

    Hassan, Zeinab El-Taher; Farag, Nehad M; Elshemey, Wael M

    2018-03-22

    This work aims at designing a computer program to calculate the necessary amount of shielding for a given or proposed linear accelerator room design in radiotherapy. The program (Shield Calculation in Radiotherapy, SCR) has been developed using Microsoft Visual Basic. It applies the treatment room shielding calculations of NCRP report no. 151 to calculate proper shielding thicknesses for a given linear accelerator treatment room design. The program is composed of six main user-friendly interfaces. The first enables the user to upload their choice of treatment room design and to measure the distances required for shielding calculations. The second interface enables the user to calculate the primary barrier thickness in the cases of three-dimensional conventional radiotherapy (3D-CRT), intensity modulated radiotherapy (IMRT) and total body irradiation (TBI). The third interface calculates the required secondary barrier thickness due to both scattered and leakage radiation. The fourth and fifth interfaces provide a means to calculate the photon dose equivalent for low and high energy radiation, respectively, in door and maze areas. The sixth interface enables the user to calculate the skyshine radiation for photons and neutrons. The SCR program has been successfully validated, precisely reproducing all of the calculated examples presented in NCRP report no. 151 in a simple and fast manner. Moreover, it easily performed the same calculations for a test design that was also calculated manually, and produced the same results. The program includes a new and important feature: the ability to calculate the required treatment room thickness in the cases of IMRT and TBI. It is characterised by simplicity, precision, and data saving, printing and retrieval, in addition to providing a means for uploading and testing any proposed treatment room shielding design. The SCR program provides comprehensive, simple, fast and accurate room shielding calculations in radiotherapy.
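
    The NCRP report no. 151 primary-barrier recipe that such a program implements can be outlined as below; the tenth-value-layer (TVL) inputs are placeholders, and a real design must take them from the report's tables for the actual beam energy and barrier material.

```python
# Outline of the NCRP report no. 151 primary-barrier calculation:
# required transmission B = P * d**2 / (W * U * T), number of
# tenth-value layers n = log10(1/B), thickness = TVL1 + (n - 1) * TVLe.
# P: shielding design goal, d: distance to the point of interest,
# W: workload, U: use factor, T: occupancy factor. TVL values here are
# caller-supplied placeholders, not values from the report.
import math

def primary_barrier_thickness(P, d, W, U, T, tvl1, tvle):
    """Barrier thickness in the same units as the supplied TVLs."""
    B = P * d ** 2 / (W * U * T)
    n = math.log10(1.0 / B)
    return tvl1 + (n - 1.0) * tvle
```

    The first tenth-value layer (TVL1) differs from the equilibrium ones (TVLe) because the beam spectrum hardens as it penetrates the barrier, which is why the recipe does not simply multiply n by a single TVL.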

  8. The ISEE-3 ULEWAT: Flux tape description and heavy ion fluxes 1978-1984. [plasma diagnostics

    NASA Technical Reports Server (NTRS)

    Mason, G. M.; Klecker, B.

    1985-01-01

    The ISEE ULEWAT FLUX tapes contain ULEWAT and ISEE pool tape data summarized over relatively long time intervals (1 hr) in order to compact the data set into an easily usable size. (Roughly 3 years of data fit onto one 1600 BPI 9-track magnetic tape.) In making the tapes, corrections were made to the ULEWAT basic data tapes in order to remove rate spikes and account for changes in instrument response, so that to a large extent instrument fluxes can be calculated easily from the FLUX tapes without further consideration of instrument performance.

  9. Computational techniques for design optimization of thermal protection systems for the space shuttle vehicle. Volume 1: Final report

    NASA Technical Reports Server (NTRS)

    1971-01-01

    Computational techniques were developed and assimilated for the design optimization. The resulting computer program was then used to perform initial optimization and sensitivity studies on a typical thermal protection system (TPS) to demonstrate its application to the space shuttle TPS design. The program was developed in Fortran IV for the CDC 6400 but was subsequently converted to the Fortran V language to be used on the Univac 1108. The program allows for improvement and update of the performance prediction techniques. The program logic involves subroutines which handle the following basic functions: (1) a driver which calls for input, output, and communication between program and user and between the subroutines themselves; (2) thermodynamic analysis; (3) thermal stress analysis; (4) acoustic fatigue analysis; and (5) weights/cost analysis. In addition, a system total cost is predicted based on system weight and historical cost data of similar systems. Two basic types of input are provided, both of which are based on trajectory data. These are vehicle attitude (altitude, velocity, and angles of attack and sideslip), for external heat and pressure loads calculation, and heating rates and pressure loads as a function of time.

  10. Health Literacy Impact on National Healthcare Utilization and Expenditure.

    PubMed

    Rasu, Rafia S; Bawa, Walter Agbor; Suminski, Richard; Snella, Kathleen; Warady, Bradley

    2015-08-17

    Health literacy presents an enormous challenge in the delivery of effective healthcare and quality outcomes. We evaluated the impact of low health literacy (LHL) on healthcare utilization and healthcare expenditure. The database analysis used the Medical Expenditure Panel Survey (MEPS) from 2005-2008, which provides nationally representative estimates of healthcare utilization and expenditure. Health literacy scores (HLSs) were calculated based on a validated predictive model and scored according to the National Assessment of Adult Literacy (NAAL). HLSs ranged from 0-500. Health literacy levels (HLLs) were categorized into two groups: below basic or basic (HLS <226) and above basic (HLS ≥226). Healthcare utilization was expressed as physician, nonphysician, or emergency room (ER) visits and healthcare spending. Expenditures were adjusted to 2010 rates using the Consumer Price Index (CPI). A P value of 0.05 or less was the criterion for statistical significance in all analyses. Multivariate regression models assessed the impact of the predicted HLLs on outpatient healthcare utilization and expenditures. All analyses were performed with SAS and STATA® 11.0 statistical software. The study evaluated 22 599 samples representing 503 374 648 weighted individuals nationally from 2005-2008. The cohort had an average age of 49 years and included more females (57%). Caucasians were the predominant racial/ethnic group (83%), and 37% of the cohort were from the South region of the United States of America. The proportion of the cohort with basic or below basic health literacy was 22.4%. Annual predicted values of physician visits, nonphysician visits, and ER visits were 6.6, 4.8, and 0.2, respectively, for basic or below basic health literacy, compared to 4.4, 2.6, and 0.1 for above basic. Predicted office and ER visit expenditures were $1284 and $151, respectively, for basic or below basic and $719 and $100 for above basic (P < .05).
The extrapolated national estimates show that annual prescription costs alone for adults with basic and below basic health literacy could potentially reach about $172 billion. Health literacy is inversely associated with healthcare utilization and expenditure. Individuals with below basic or basic HLLs have greater healthcare utilization and expenditures, spending more on prescriptions, compared to individuals with above basic HLLs. Public health strategies promoting appropriate education among individuals with LHL may help to improve health outcomes and reduce unnecessary healthcare visits and costs. © 2015 by Kerman University of Medical Sciences.

  11. Design and Testing of a Liquid Nitrous Oxide and Ethanol Fueled Rocket Engine

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Youngblood, Stewart

    A small-scale, bi-propellant, liquid fueled rocket engine and supporting test infrastructure were designed and constructed at the Energetic Materials Research and Testing Center (EMRTC). This facility was used to evaluate liquid nitrous oxide and ethanol as potential rocket propellants. Thrust and pressure measurements along with high-speed digital imaging of the rocket exhaust plume were made. This experimental data was used for validation of a computational model developed of the rocket engine tested. The developed computational model was utilized to analyze rocket engine performance across a range of operating pressures, fuel-oxidizer mixture ratios, and outlet nozzle configurations. A comparative study of the modeling of a liquid rocket engine was performed using NASA CEA and Cantera, an open-source equilibrium code capable of being interfaced with MATLAB. One goal of this modeling was to demonstrate the ability of Cantera to accurately model the basic chemical equilibrium, thermodynamics, and transport properties for varied fuel and oxidizer operating conditions. Once validated for basic equilibrium, an expanded MATLAB code, referencing Cantera, was advanced beyond CEA's capabilities to predict rocket engine performance as a function of supplied propellant flow rate and rocket engine nozzle dimensions. Cantera was found to compare favorably with CEA for making equilibrium calculations, supporting its use as an alternative to CEA. The developed rocket engine performed as predicted, demonstrating that the developed MATLAB rocket engine model was successful in predicting real-world rocket engine performance. Finally, nitrous oxide and ethanol were shown to perform well as rocket propellants, with specific impulses experimentally recorded in the range of 250 to 260 seconds.

  12. Hybrid and conventional hydrogen engine vehicles that meet EZEV emissions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Aceves, S.M.; Smith, J.R.

    In this paper, a time-dependent engine model is used for predicting hydrogen engine efficiency and emissions. The model uses basic thermodynamic equations for the compression and expansion processes, along with an empirical correlation for heat transfer, to predict engine indicated efficiency. A friction correlation and a supercharger/turbocharger model are then used to calculate brake thermal efficiency. The model is validated with many experimental points obtained in a recent evaluation of a hydrogen research engine. The validated engine model is then used to calculate fuel economy and emissions for three hydrogen-fueled vehicles: a conventional, a parallel hybrid, and a series hybrid. All vehicles use liquid hydrogen as a fuel. The hybrid vehicles use a flywheel for energy storage. Comparable ultracapacitor or battery energy storage performance would give similar results. This paper analyzes the engine and flywheel sizing requirements for obtaining a desired level of performance. The results indicate that hydrogen lean-burn spark-ignited engines can provide high fuel economy and Equivalent Zero Emission Vehicle (EZEV) levels in the three vehicle configurations being analyzed.

  13. Introduction to molecular topology: basic concepts and application to drug design.

    PubMed

    Gálvez, Jorge; Gálvez-Llompart, María; García-Domenech, Ramón

    2012-09-01

    This review deals with the use of molecular topology (MT) in the selection and design of new drugs. After an introduction to the methods currently used for drug design, the basic concepts of MT are defined, including examples of the calculation of topological indices, which are numerical descriptors of molecular structure. The goal is to make this calculation familiar to students and to allow a straightforward comprehension of the topic. Finally, the achievements obtained in this field are detailed, so that the reader can appreciate the great interest of this approach.
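
    A minimal example of a topological index of the kind the review discusses is the Wiener index: the sum of shortest-path distances over all unordered atom pairs in the hydrogen-suppressed molecular graph. The sketch below is illustrative, not taken from the review.

```python
# Wiener index: sum of shortest-path (bond-count) distances over all
# unordered atom pairs in a hydrogen-suppressed molecular graph,
# computed here by breadth-first search from each atom.
from collections import deque

def wiener_index(adj):
    """adj maps each atom to the list of atoms bonded to it."""
    nodes = list(adj)
    total = 0
    for i, start in enumerate(nodes):
        dist = {start: 0}
        queue = deque([start])
        while queue:                      # breadth-first search
            u = queue.popleft()
            for v in adj[u]:
                if v not in dist:
                    dist[v] = dist[u] + 1
                    queue.append(v)
        total += sum(dist[n] for n in nodes[i + 1:])
    return total

# n-butane as a path graph: C1-C2-C3-C4
butane = {1: [2], 2: [1, 3], 3: [2, 4], 4: [3]}
```

    For n-butane the pairwise distances are 1, 2, 3, 1, 2, 1, so wiener_index(butane) returns 10, the classic textbook value for this molecule.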

  14. A Role for Weak Electrostatic Interactions in Peripheral Membrane Protein Binding

    PubMed Central

    Khan, Hanif M.; He, Tao; Fuglebakk, Edvin; Grauffel, Cédric; Yang, Boqian; Roberts, Mary F.; Gershenson, Anne; Reuter, Nathalie

    2016-01-01

    Bacillus thuringiensis phosphatidylinositol-specific phospholipase C (BtPI-PLC) is a secreted virulence factor that binds specifically to phosphatidylcholine (PC) bilayers containing negatively charged phospholipids. BtPI-PLC carries a negative net charge and its interfacial binding site has no obvious cluster of basic residues. Continuum electrostatic calculations show that, as expected, nonspecific electrostatic interactions between BtPI-PLC and membranes vary as a function of the fraction of anionic lipids present in the bilayers. Yet they are strikingly weak, with a calculated ΔG(el) below 1 kcal/mol, largely due to a single lysine (K44). When K44 is mutated to alanine, the equilibrium dissociation constant for small unilamellar vesicles increases more than 50-fold (∼2.4 kcal/mol), suggesting that interactions between K44 and lipids are not merely electrostatic. Comparisons of molecular-dynamics simulations performed using different lipid compositions reveal that the bilayer composition does not affect either hydrogen bonds or hydrophobic contacts between the protein interfacial binding site and bilayers. However, the occupancies of cation-π interactions between PC choline headgroups and protein tyrosines vary as a function of PC content. The overall contribution of basic residues to binding affinity is also context dependent and cannot be approximated by a rule-of-thumb value, because these residues can contribute to both nonspecific electrostatic and short-range protein-lipid interactions. Additionally, statistics on the distribution of basic amino acids in a data set of membrane-binding domains reveal that weak electrostatics, as observed for BtPI-PLC, might be a less unusual mechanism for peripheral membrane binding than is generally thought. PMID:27028646

  15. Neutrino Processes in Neutron Stars

    NASA Astrophysics Data System (ADS)

    Kolomeitsev, E. E.; Voskresensky, D. N.

    2010-10-01

    The aim of these lectures is to introduce the basic processes responsible for the cooling of neutron stars and to show how to calculate the neutrino production rate in a dense, strongly interacting nuclear medium. A formalism is presented that treats on an equal footing one-nucleon and multiple-nucleon processes and reactions with virtual bosonic modes and condensates. We demonstrate that neutrino emission from the dense hadronic component in neutron stars is subject to strong modifications due to collective effects in the nuclear matter. With the most important in-medium processes incorporated in the cooling code, an overall agreement with the available soft X-ray data can be achieved. With these findings the so-called “standard” and “non-standard” cooling scenarios are replaced by one general “nuclear medium cooling scenario”, which relates slow and rapid neutron star cooling to the star masses (interior densities). The lectures are split into four parts. Part I: After a short introduction to the neutron star cooling problem, we show how to calculate the neutrino reaction rates of the most efficient one-nucleon and two-nucleon processes. No medium effects are taken into account at this stage. The effects of possible nucleon pairing are discussed. We demonstrate that the data on neutron star cooling cannot be described without the inclusion of medium effects. This motivates the assumption that the masses of neutron stars differ and that the neutrino reaction rates should be strongly density dependent. Part II: We introduce the Green’s function diagram technique for systems in and out of equilibrium, together with the optical theorem formalism. The latter allows one to perform calculations of production rates with full Green’s functions, including all off-mass-shell effects. We demonstrate how this formalism works within the quasiparticle approximation. Part III: The basic concepts of the nuclear Fermi liquid approach are introduced. 
We show how strong interaction effects can be included within the Green’s function formalism. The softening of the pion mode with increasing baryon density is explicitly incorporated. We show examples of inconsistencies in calculations that omit medium effects. We then demonstrate calculations of different reaction rates in non-superfluid nuclear matter, taking medium effects into account. Many new reaction channels open up in the medium and should be analyzed. Part IV: We discuss the neutrino production reactions in superfluid nuclear systems. The reaction rates of processes associated with pair breaking and formation are calculated. Special attention is paid to gauge invariance and the exact fulfillment of the Ward identities for the vector current. Finally, we present a comparison of neutron star cooling calculations performed within the nuclear medium cooling scenario with the available data.

  16. Basic Pharmaceutical Sciences Examination as a Predictor of Student Performance during Clinical Training.

    ERIC Educational Resources Information Center

    Fassett, William E.; Campbell, William H.

    1984-01-01

    A comparison of Basic Pharmaceutical Sciences Examination (BPSE) results with student performance evaluations in core clerkships, institutional and community externships, didactic and clinical courses, and related basic science coursework revealed the BPSE does not predict student performance during clinical instruction. (MSE)

  17. Test Review: Reynolds, C. R., Voress, J. V., Kamphaus, R. W. (2015), "Mathematics Fluency and Calculation Tests (MFaCTs) review." PRO-ED

    ERIC Educational Resources Information Center

    Marbach, Joshua

    2017-01-01

    The Mathematics Fluency and Calculation Tests (MFaCTs) are a series of measures designed to assess for arithmetic calculation skills and calculation fluency in children ages 6 through 18. There are five main purposes of the MFaCTs: (1) identifying students who are behind in basic math fact automaticity; (2) evaluating possible delays in arithmetic…

  18. A mathematical method for precisely calculating the radiographic angles of the cup after total hip arthroplasty.

    PubMed

    Zhao, Jing-Xin; Su, Xiu-Yun; Xiao, Ruo-Xiu; Zhao, Zhe; Zhang, Li-Hai; Zhang, Li-Cheng; Tang, Pei-Fu

    2016-11-01

    We established a mathematical method to precisely calculate the radiographic anteversion (RA) and radiographic inclination (RI) angles of the acetabular cup based on anterior-posterior (AP) pelvic radiographs after total hip arthroplasty. Using Mathematica software, a mathematical model of an oblique cone was established to simulate how AP pelvic radiographs are obtained and to address the relationship between the two-dimensional and three-dimensional geometry of the opening circle of the cup. In this model, the vertex was the X-ray beam source, and the generatrix was the ellipse in the radiograph projected from the opening circle of the acetabular cup. Using this model, we established a series of mathematical formulas to reveal the differences between the true RA and RI cup angles and the measurement results obtained using traditional methods on AP pelvic radiographs, and to precisely calculate the RA and RI cup angles based on post-operative AP pelvic radiographs. Statistical analysis indicated that traditional measurement methods should be used with caution when calculating the RA and RI cup angles from AP pelvic radiographs. The entire calculation process can be performed by an orthopedic surgeon with knowledge of basic matrix and vector mathematics. Copyright © 2016 IPEM. Published by Elsevier Ltd. All rights reserved.
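
    For contrast with the paper's oblique-cone formulas (which the abstract does not reproduce), the traditional planar approximation it cautions against can be sketched as follows; the Lewinnek-style arcsin relation and the example axis lengths are assumptions for illustration, not the authors' method:

```python
import math

def traditional_cup_angles(major_axis, minor_axis, long_axis_tilt_deg):
    """Traditional planar estimates from the projected ellipse of the cup
    opening: RA ~ arcsin(minor/major), RI ~ tilt of the long axis relative
    to the inter-teardrop line. These are the simple approximations the
    paper shows to be inexact for AP pelvic films."""
    ra = math.degrees(math.asin(minor_axis / major_axis))
    ri = long_axis_tilt_deg
    return ra, ri

ra, ri = traditional_cup_angles(50.0, 17.1, 45.0)
print(round(ra, 1), ri)  # ~20.0 deg anteversion for a 50 x 17.1 mm ellipse
```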

  19. DiSCaMB: a software library for aspherical atom model X-ray scattering factor calculations with CPUs and GPUs.

    PubMed

    Chodkiewicz, Michał L; Migacz, Szymon; Rudnicki, Witold; Makal, Anna; Kalinowski, Jarosław A; Moriarty, Nigel W; Grosse-Kunstleve, Ralf W; Afonine, Pavel V; Adams, Paul D; Dominiak, Paulina Maria

    2018-02-01

    It has been recently established that the accuracy of structural parameters from X-ray refinement of crystal structures can be improved by using a bank of aspherical pseudoatoms instead of the classical spherical model of atomic form factors. This comes, however, at the cost of increased complexity of the underlying calculations. In order to facilitate the adoption of this more advanced electron density model by the broader community of crystallographers, a new software implementation called DiSCaMB, 'densities in structural chemistry and molecular biology', has been developed. It addresses the challenge of providing high performance on modern computing architectures. With parallelization options for both multi-core processors and graphics processing units (using CUDA), the library features calculation of X-ray scattering factors and their derivatives with respect to structural parameters, gives access to intermediate steps of the scattering factor calculations (thus allowing for experimentation with modifications of the underlying electron density model), and provides tools for basic structural crystallographic operations. Permissively (MIT) licensed, DiSCaMB is an open-source C++ library that can be embedded in both academic and commercial tools for X-ray structure refinement.

  20. Feature extraction of performance variables in elite half-pipe snowboarding using body mounted inertial sensors

    NASA Astrophysics Data System (ADS)

    Harding, J. W.; Small, J. W.; James, D. A.

    2007-12-01

    Recent analysis of elite-level half-pipe snowboard competition has revealed a number of sport-specific key performance variables (KPVs) that correlate well to score. Information on these variables is difficult to acquire and analyse, relying on the collection and labour-intensive manual post-processing of video data. This paper presents the use of inertial sensors as a user-friendly alternative and subsequently implements signal processing routines to ultimately provide automated, sport-specific feedback to coaches and athletes. The author has recently shown that the key performance variables (KPVs) of total air-time (TAT) and average degree of rotation (ADR) achieved during elite half-pipe snowboarding competition show strong correlation with an athlete's subjectively judged score. Utilising Micro-Electro-Mechanical System (MEMS) sensors (tri-axial accelerometers), this paper demonstrates that air-time (AT) achieved during half-pipe snowboarding can be detected and calculated accurately using basic signal processing techniques. Characterisation of the variations in aerial acrobatic manoeuvres and the associated calculation of the exact degree of rotation (DR) achieved is a likely extension of this research. The technique developed used a two-pass method to detect the locations of half-pipe snowboard runs using power density in the frequency domain and subsequently utilises a threshold-based search algorithm in the time domain to calculate air-times associated with individual aerial acrobatic manoeuvres. This technique correctly identified the air-times of 100 percent of aerial acrobatic manoeuvres within each half-pipe snowboarding run (n = 92 aerial acrobatic manoeuvres from 4 subjects) and displayed a very strong correlation with a video-based reference standard for air-time calculation (r = 0.78 +/- 0.08; p value < 0.0001; SEE = 0.08 ×/÷ 1.16; mean bias = -0.03 +/- 0.02 s) (value +/- or ×/÷ 95% CL).
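
    The second pass of the technique, the threshold-based search in the time domain, can be sketched in miniature; the sampling rate, threshold, and minimum-duration values below are illustrative assumptions, not the calibrated values from the study:

```python
def detect_air_times(accel_mag, fs=100.0, threshold=0.3, min_samples=10):
    """Threshold-based air-time detection: during an aerial manoeuvre the
    accelerometer magnitude drops toward 0 g (free fall). Returns the
    duration in seconds of each contiguous sub-threshold segment longer
    than min_samples."""
    air_times, run = [], 0
    for sample in accel_mag:
        if sample < threshold:
            run += 1
        else:
            if run >= min_samples:
                air_times.append(run / fs)
            run = 0
    if run >= min_samples:  # handle a segment ending at the trace boundary
        air_times.append(run / fs)
    return air_times

# synthetic magnitude trace (in g): riding, a 0.6 s aerial, riding
trace = [1.0] * 200 + [0.05] * 60 + [1.0] * 200
print(detect_air_times(trace))  # [0.6]
```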

  1. A hybrid deep neural network and physically based distributed model for river stage prediction

    NASA Astrophysics Data System (ADS)

    Hitokoto, Masayuki; Sakuraba, Masaaki

    2016-04-01

    We developed a real-time river stage prediction model using a hybrid of a deep neural network and a physically based distributed model. As the basic model, a 4-layer feed-forward artificial neural network (ANN) was used. As the network training method, the deep learning technique was applied. To optimize the network weights, the stochastic gradient descent method based on back propagation was used. As a pre-training method, the denoising autoencoder was used. The inputs of the ANN model are the hourly change of water level and hourly rainfall; the output is the water level of the downstream station. In general, desirable inputs of an ANN have strong correlation with the output. In conceptual hydrological models such as the tank model and the storage-function model, river discharge is governed by the catchment storage. Therefore, the change of the catchment storage, i.e., rainfall minus downstream discharge, can be a potent input candidate of the ANN model instead of rainfall. From this point of view, the hybrid deep neural network and physically based distributed model was developed. The prediction procedure of the hybrid model is as follows: first, the downstream discharge is calculated by the distributed model; then the hourly change of catchment storage is estimated from rainfall and the calculated discharge as the input of the ANN model; finally, the ANN model is evaluated. In the training phase, the hourly change of catchment storage can be calculated from the observed rainfall and discharge data. The developed model was applied to one catchment of the OOYODO River, one of the first-grade rivers in Japan. The modeled catchment is 695 square km. For the training data, 5 water level gauging stations and 14 rain-gauge stations in the catchment were used. The training floods, the 24 largest events, were selected from the period 2005-2014. Predictions were made up to 6 hours ahead, and 6 models were developed, one for each prediction lead time. 
To set proper learning parameters and the network architecture of the ANN model, a sensitivity analysis was done via a case study approach. The prediction results were evaluated on the 4 largest flood events by leave-one-out cross validation. The prediction result of the basic 4-layer ANN was better than that of the conventional 3-layer ANN model. However, the result did not reproduce the biggest flood event well, presumably because of the lack of sufficiently high water level flood events in the training data. The hybrid model outperforms both the basic ANN model and the distributed model, and especially improves the performance of the basic ANN model in the biggest flood event.
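
    The storage-based input preparation described above (rainfall minus downstream discharge, expressed as an equivalent depth over the catchment) can be sketched as follows; the unit conventions and the helper name are assumptions for illustration:

```python
def storage_change_series(rainfall_mm, discharge_m3s, area_km2):
    """Hourly change of catchment storage (mm/h) as an ANN input:
    basin-averaged rainfall minus downstream discharge converted to an
    equivalent depth over the catchment area."""
    series = []
    for p_mm, q_m3s in zip(rainfall_mm, discharge_m3s):
        # m^3/s -> m^3/h -> m/h over the catchment -> mm/h
        q_mm = q_m3s * 3600.0 / (area_km2 * 1e6) * 1000.0
        series.append(p_mm - q_mm)
    return series

# 695 km^2 catchment from the abstract; rainfall/discharge values are made up
ds = storage_change_series([5.0, 10.0], [100.0, 200.0], 695.0)
print([round(x, 2) for x in ds])  # [4.48, 8.96]
```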

  2. HYPERDATA - BASIC HYPERSONIC DATA AND EQUATIONS

    NASA Technical Reports Server (NTRS)

    Mackall, D.

    1994-01-01

    In an effort to place payloads into orbit at the lowest possible costs, the use of air-breathing space-planes, which reduces the need to carry the propulsion system oxidizer, has been examined. As this approach would require the space-plane to fly at hypersonic speeds for periods of time much greater than that required by rockets, many factors must be considered when analyzing its benefits. The Basic Hypersonic Data and Equations spreadsheet provides data gained from three analyses of a space-plane's performance. The equations used to perform the analyses are derived from Newton's second law of physics (i.e. force equals mass times acceleration); the derivation is included. The first analysis is a parametric study of some basic factors affecting the ability of a space-plane to reach orbit. This step calculates the fraction of fuel mass to the total mass of the space-plane at takeoff. The user is able to vary the altitude, the heating value of the fuel, the orbital gravity, and orbital velocity. The second analysis calculates the thickness of a spherical fuel tank, assuming all of the mass of the vehicle went into the tank's shell. This provides a first-order analysis of how much material results from a design where the fuel represents a large portion of the total vehicle mass. In this step, the user is allowed to vary the values for gross weight, material density, and fuel density. The third analysis produces a ratio of gallons of fuel per total mass for various aircraft. It shows that the volume of fuel required by the space-plane relative to the total mass is much larger for a liquid hydrogen space-plane than for any other vehicle made. This program is a spreadsheet for use on Macintosh series computers running Microsoft Excel 3.0. The standard distribution medium for this package is a 3.5 inch 800K Macintosh format diskette. Documentation is included in the price of the program. Macintosh is a registered trademark of Apple Computer, Inc. 
Microsoft is a registered trademark of Microsoft Corporation.
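
    The second analysis (spherical tank shell thickness) can be sketched as follows; the thin-shell simplification and the example masses and densities are assumptions, not values from the spreadsheet:

```python
import math

def tank_shell_thickness(structure_mass, fuel_mass, rho_material, rho_fuel):
    """First-order estimate in the spirit of HYPERDATA's second analysis:
    the fuel volume sets the inner radius of a spherical tank, all
    structural mass is assumed to form the shell, and the shell is
    treated as thin (thickness = shell volume / surface area)."""
    r_inner = (3.0 * fuel_mass / (rho_fuel * 4.0 * math.pi)) ** (1.0 / 3.0)
    shell_volume = structure_mass / rho_material
    return shell_volume / (4.0 * math.pi * r_inner ** 2)

# 100 t of liquid hydrogen (~71 kg/m^3) inside an aluminum shell (~2700 kg/m^3)
t = tank_shell_thickness(20000.0, 100000.0, 2700.0, 71.0)
print(round(t * 1000, 1), "mm")  # ~12 mm shell for these assumed inputs
```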

  3. Spin-Polarized Tunneling through Chemical Vapor Deposited Multilayer Molybdenum Disulfide.

    PubMed

    Dankert, André; Pashaei, Parham; Kamalakar, M Venkata; Gaur, Anand P S; Sahoo, Satyaprakash; Rungger, Ivan; Narayan, Awadhesh; Dolui, Kapildeb; Hoque, Md Anamul; Patel, Ram Shanker; de Jong, Michel P; Katiyar, Ram S; Sanvito, Stefano; Dash, Saroj P

    2017-06-27

    The two-dimensional (2D) semiconductor molybdenum disulfide (MoS2) has attracted widespread attention for its extraordinary electrical-, optical-, spin-, and valley-related properties. Here, we report on spin-polarized tunneling through chemical vapor deposited multilayer MoS2 (∼7 nm) at room temperature in a vertically fabricated spin-valve device. A tunnel magnetoresistance (TMR) of 0.5-2% has been observed, corresponding to a spin polarization of 5-10% in the measured temperature range of 300-75 K. First-principles calculations for ideal junctions result in a TMR of up to 8% and a spin polarization of 26%. The detailed measurements at different temperatures and bias voltages, together with density functional theory calculations, provide information about the spin transport mechanisms in vertical multilayer MoS2 spin-valve devices. These findings form a platform for exploring spin functionalities in 2D semiconductors and understanding the basic phenomena that control their performance.
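
    As a consistency check (ours, not the paper's stated analysis), the quoted TMR and spin-polarization ranges agree under the Jullière model for symmetric electrodes:

```python
def julliere_tmr(p1, p2):
    """Julliere model relating TMR to the spin polarizations of the two
    electrodes: TMR = 2*P1*P2 / (1 - P1*P2)."""
    return 2.0 * p1 * p2 / (1.0 - p1 * p2)

print(round(julliere_tmr(0.05, 0.05) * 100, 2))  # ~0.5% TMR at 5% polarization
print(round(julliere_tmr(0.10, 0.10) * 100, 2))  # ~2% TMR at 10% polarization
```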

  4. Dynamic stability experiment of Maglev systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cai, Y.; Mulcahy, T.M.; Chen, S.S.

    1995-04-01

    This report summarizes the research performed on Maglev vehicle dynamic stability at Argonne National Laboratory during the past few years. It also documents magnetic-force data obtained from both measurements and calculations. Because dynamic instability is not acceptable for any commercial Maglev system, it is important to consider this phenomenon in the development of all Maglev systems. This report presents dynamic stability experiments on Maglev systems and compares the results with predictions calculated by a nonlinear dynamic computer code. Instabilities of an electrodynamic system (EDS)-type vehicle model were obtained from both experimental observations and computer simulations for a five-degree-of-freedom Maglev vehicle moving on a guideway consisting of double L-shaped aluminum segments attached to a rotating wheel. The experimental and theoretical analyses developed in this study identify basic stability characteristics and future research needs of Maglev systems.

  5. Dynamic stability of repulsive-force maglev suspension systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cai, Y.; Rote, D.M.; Mulcahy, T.M.

    1996-11-01

    This report summarizes the research performed on maglev vehicle dynamic stability at Argonne National Laboratory during the past few years. It also documents both measured and calculated magnetic-force data. Because dynamic instability is not acceptable for any commercial maglev system, it is important to consider this phenomenon in the development of all maglev systems. This report presents dynamic stability experiments on maglev systems and compares the results with predictions calculated by a nonlinear-dynamics computer code. Instabilities of an electrodynamic-suspension system type vehicle model were obtained by experimental observation and computer simulation of a five-degree-of-freedom maglev vehicle moving on a guideway that consists of a pair of L-shaped aluminum conductors attached to a rotating wheel. The experimental and theoretical analyses developed in this study identify basic stability characteristics and future research needs of maglev systems.

  6. Textile Pressure Mapping Sensor for Emotional Touch Detection in Human-Robot Interaction

    PubMed Central

    Cruz Zurian, Heber; Atefi, Seyed Reza; Seoane Martinez, Fernando; Lukowicz, Paul

    2017-01-01

    In this paper, we developed a fully textile sensing fabric for tactile touch sensing as the robot skin to detect human-robot interactions. The sensor covers a 20-by-20 cm2 area with 400 sensitive points and samples at 50 Hz per point. We defined seven gestures which are inspired by the social and emotional interactions of typical people-to-people or pet scenarios. We conducted two groups of mutually blinded experiments, involving 29 participants in total. The data processing algorithm first reduces the spatial complexity to frame descriptors, and temporal features are calculated through basic statistical representations and wavelet analysis. Various classifiers are evaluated, and the feature calculation algorithms are analyzed in detail to determine the contribution of each stage and segment. The best performing feature-classifier combination can recognize the gestures with 93.3% accuracy from a known group of participants, and 89.1% from strangers. PMID:29120389

  7. Textile Pressure Mapping Sensor for Emotional Touch Detection in Human-Robot Interaction.

    PubMed

    Zhou, Bo; Altamirano, Carlos Andres Velez; Zurian, Heber Cruz; Atefi, Seyed Reza; Billing, Erik; Martinez, Fernando Seoane; Lukowicz, Paul

    2017-11-09

    In this paper, we developed a fully textile sensing fabric for tactile touch sensing as the robot skin to detect human-robot interactions. The sensor covers a 20-by-20 cm2 area with 400 sensitive points and samples at 50 Hz per point. We defined seven gestures which are inspired by the social and emotional interactions of typical people-to-people or pet scenarios. We conducted two groups of mutually blinded experiments, involving 29 participants in total. The data processing algorithm first reduces the spatial complexity to frame descriptors, and temporal features are calculated through basic statistical representations and wavelet analysis. Various classifiers are evaluated, and the feature calculation algorithms are analyzed in detail to determine the contribution of each stage and segment. The best performing feature-classifier combination can recognize the gestures with 93.3% accuracy from a known group of participants, and 89.1% from strangers.
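
    The two-stage processing pipeline (spatial frame descriptors, then temporal statistics over the gesture window) can be sketched in miniature; the descriptor choices below are illustrative assumptions, not the paper's exact feature set:

```python
import statistics

def frame_descriptors(frame):
    """Reduce one pressure frame (rows of cell values) to a few spatial
    descriptors: total pressure, peak pressure, and active-cell count."""
    flat = [v for row in frame for v in row]
    active = sum(1 for v in flat if v > 0.1)  # assumed activation threshold
    return [sum(flat), max(flat), active]

def temporal_features(descriptor_series):
    """Basic statistical summary of one descriptor over time."""
    return [statistics.mean(descriptor_series),
            statistics.pstdev(descriptor_series),
            max(descriptor_series),
            min(descriptor_series)]

# two tiny synthetic 2x2 'frames' standing in for 20x20 samples at 50 Hz
frames = [[[0.0, 0.2], [0.5, 0.0]], [[0.0, 0.4], [0.9, 0.0]]]
series = [frame_descriptors(f)[0] for f in frames]  # total pressure per frame
print(temporal_features(series))
```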

  8. First principles calculations of thermal conductivity with out of equilibrium molecular dynamics simulations

    NASA Astrophysics Data System (ADS)

    Puligheddu, Marcello; Gygi, Francois; Galli, Giulia

    The prediction of the thermal properties of solids and liquids is central to numerous problems in condensed matter physics and materials science, including the study of thermal management of opto-electronic and energy conversion devices. We present a method to compute the thermal conductivity of solids by performing ab initio molecular dynamics under nonequilibrium conditions. Our formulation is based on a generalization of the approach-to-equilibrium technique, using sinusoidal temperature gradients, and it only requires calculations of first-principles trajectories and atomic forces. We discuss results and computational requirements for a representative, simple oxide, MgO, and compare with experiments and data obtained with classical potentials. This work was supported by MICCoM as part of the Computational Materials Science Program funded by the U.S. Department of Energy (DOE), Office of Science, Basic Energy Sciences (BES), Materials Sciences and Engineering Division under Grant DOE/BES 5J-30.

  9. Toast, Anyone? Project Teaches Electricity Basics and Math

    ERIC Educational Resources Information Center

    Quagliana, David F.

    2010-01-01

    This article describes an electrical technology experiment that shows students how to determine the cost of using an electrical appliance. The experiment also provides good math practice and teaches basic electricity terms and concepts, such as volt, ampere, watt, kilowatt, and kilowatt-hour. This experiment could be expanded to calculate the cost…
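
    The core classroom calculation, energy in kilowatt-hours times the utility rate, can be sketched as follows; the appliance wattage, usage time, and electricity rate are assumed example values:

```python
def appliance_cost(watts, hours, rate_per_kwh):
    """Cost of running an electrical appliance: convert power to
    kilowatts, multiply by hours of use to get kilowatt-hours, then
    multiply by the utility rate."""
    kwh = watts / 1000.0 * hours
    return kwh * rate_per_kwh

# a 1000 W toaster used 0.1 h/day for 30 days at an assumed $0.12/kWh
print(round(appliance_cost(1000, 0.1 * 30, 0.12), 2))  # 0.36 dollars per month
```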

  10. Capabilities and the Global Challenges of Girls' School Enrolment and Women's Literacy

    ERIC Educational Resources Information Center

    Cameron, John

    2012-01-01

    The education Millennium Development Goals have been highly influential on the priorities for education and concentrated policy efforts on numbers of girls enrolled in public sector schools offering basic education. This focus has been justified by human capital calculations of the social rates of return to basic schooling. This concern with…

  11. Introduction to Chemistry for Water and Wastewater Treatment Plant Operators. Water and Wastewater Training Program.

    ERIC Educational Resources Information Center

    South Dakota Dept. of Environmental Protection, Pierre.

    Presented are basic concepts of chemistry necessary for operators who manage drinking water treatment plants and wastewater facilities. It includes discussions of chemical terms and concepts, laboratory procedures for basic analyses of interest to operators, and discussions of appropriate chemical calculations. Exercises are included and answer…

  12. A density functional theory study of the correlation between analyte basicity, ZnPc adsorption strength, and sensor response.

    PubMed

    Tran, N L; Bohrer, F I; Trogler, W C; Kummel, A C

    2009-05-28

    Density functional theory (DFT) simulations were used to determine the binding strength of 12 electron-donating analytes to the zinc metal center of a zinc phthalocyanine molecule (ZnPc monomer). The analyte binding strengths were compared to the analytes' enthalpies of complex formation with boron trifluoride (BF3), which is a direct measure of their electron-donating ability or Lewis basicity. With the exception of the most basic analyte investigated, the ZnPc binding energies were found to correlate linearly with analyte basicities. Based on natural population analysis calculations, analyte complexation to the Zn metal of the ZnPc monomer resulted in limited charge transfer from the analyte to the ZnPc molecule, which increased with analyte-ZnPc binding energy. The experimental analyte sensitivities from chemiresistor ZnPc sensor data were proportional to an exponential of the binding energies from DFT calculations, consistent with sensitivity being proportional to analyte coverage and binding strength. The good correlation observed suggests DFT is a reliable method for the prediction of chemiresistor metallophthalocyanine binding strengths and response sensitivities.

  13. Nuclear criticality safety calculational analysis for small-diameter containers

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    LeTellier, M.S.; Smallwood, D.J.; Henkel, J.A.

    This report documents calculations performed to establish a technical basis for the nuclear criticality safety of favorable geometry containers, sometimes referred to as 5-inch containers, in use at the Portsmouth Gaseous Diffusion Plant. A list of containers currently used in the plant is shown in Table 1.0-1. These containers are currently used throughout the plant with no mass limits. The use of containers with geometries or material types other than those addressed in this evaluation must be bounded by this analysis or have an additional analysis performed. The following five basic container geometries were modeled and bound all container geometries in Table 1.0-1: (1) 4.32-inch-diameter by 50-inch-high polyethylene bottle; (2) 5.0-inch-diameter by 24-inch-high polyethylene bottle; (3) 5.25-inch-diameter by 24-inch-high steel can ("F-can"); (4) 5.25-inch-diameter by 15-inch-high steel can ("Z-can"); and (5) 5.0-inch-diameter by 9-inch-high polybottle ("CO-4"). Each container type is evaluated using five basic reflection and interaction models that include single containers and multiple containers in normal and in credible abnormal conditions. The uranium materials evaluated are UO2F2 + H2O and UF4 + oil at 100% and 10% enrichments, and U3O8 + H2O at 100% enrichment. The design basis safe criticality limit for the Portsmouth facility is keff + 2σ < 0.95. The KENO study results may be used as the basis for evaluating general use of these containers in the plant.

  14. Have Basic Mathematical Skills Grown Obsolete in the Computer Age: Assessing Basic Mathematical Skills and Forecasting Performance in a Business Statistics Course

    ERIC Educational Resources Information Center

    Noser, Thomas C.; Tanner, John R.; Shah, Situl

    2008-01-01

    The purpose of this study was to measure the comprehension of basic mathematical skills of students enrolled in statistics classes at a large regional university, and to determine if the scores earned on a basic math skills test are useful in forecasting student performance in these statistics classes, and to determine if students' basic math…

  15. Development of a Whole-Body Haptic Sensor with Multiple Supporting Points and Its Application to a Manipulator

    NASA Astrophysics Data System (ADS)

    Hanyu, Ryosuke; Tsuji, Toshiaki

    This paper proposes a whole-body haptic sensing system that has multiple supporting points between the body frame and the end-effector. The system consists of an end-effector and multiple force sensors. Using this mechanism, the position of a contact force on the surface can be calculated without any sensor array. A haptic sensing system with a single supporting point has previously been developed by the present authors. However, that system has drawbacks such as low stiffness and low strength. Therefore, in this study, a mechanism with multiple supporting points was proposed and its performance was verified. In this paper, the basic concept of the mechanism is first introduced. Next, an experimental evaluation of the proposed method is presented.
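
    A generic moment-balance estimate of the contact position from the forces measured at multiple supporting points can be sketched as follows; this is an assumed simplification for a single normal contact on a planar plate, not the authors' exact formulation:

```python
def contact_position(support_xy, forces):
    """Estimate where a single normal contact force acts on a rigid plate
    supported at known points, from the normal forces at those supports
    (force and moment balance in the plate plane). Returns the contact
    (x, y) and the total normal force."""
    total = sum(forces)
    x = sum(p[0] * f for p, f in zip(support_xy, forces)) / total
    y = sum(p[1] * f for p, f in zip(support_xy, forces)) / total
    return x, y, total

# three supports of a triangular plate (m); a 10 N push nearer support 0
supports = [(0.0, 0.0), (0.2, 0.0), (0.1, 0.2)]
print(contact_position(supports, [6.0, 2.0, 2.0]))
```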

  16. Particle size and interfacial effects on heat transfer characteristics of water and α-SiC nanofluids.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Timofeeva, E.; Smith, D. S.; Yu, W.

    2010-01-01

    The effect of average particle size on the basic macroscopic properties and heat transfer performance of α-SiC/water nanofluids was investigated. The average particle sizes, calculated from the specific surface area of the nanoparticles, were varied from 16 to 90 nm. Nanofluids with larger particles of the same material and volume concentration provide higher thermal conductivity and lower viscosity increases than those with smaller particles because of the smaller solid/liquid interfacial area of larger particles. It was also demonstrated that the viscosity of water-based nanofluids can be significantly decreased by adjusting the pH of the suspension, independently of the thermal conductivity. Heat transfer coefficients were measured and compared to the performance of the base fluids as well as to nanofluids reported in the literature. Criteria for evaluation of the heat transfer performance of nanofluids are discussed, and optimum directions in nanofluid development are suggested.

  17. Experimental and Modeling Characterization of PETN Mobilization Mechanisms During Recrystallization at Ambient Conditions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Burnham, A K; Gee, R; Maiti, A

    2005-11-03

    Experimental measurements suggest that pentaerythritol tetranitrate (PETN) undergoes changes at the molecular level that cause macroscopic changes in the overall PETN powder characteristics over time. These changes have been attributed to the high molecular mobility of PETN, but the underlying mechanism(s) responsible for this redistribution are still uncertain. Two basic approaches have been implemented in the past year to provide insight into the nature of these underlying mechanisms. The first approach is experimental, utilizing both AFM and evaporation measurements, which address both surface mobility and evaporation. These data include AFM measurements performed at LLNL and evaporation rate measurements performed at Texas Tech. These results are compared to earlier vapor pressure measurements performed at SNL, and estimates of recrystallization time frames are given. The second approach utilizes first-principles calculations and simulations that will be compared directly to those experimental quantities measured. We are developing an accurate intermolecular potential for PETN, which via kinetic Monte Carlo (KMC) simulations would mimic real crystallite shapes. Once the basic theory is in place for the growth of single crystallites, we will be in a position to investigate realistic grain coarsening phenomena in multi-crystallite simulations. This will also enable us to study how to control the morphological evolution, e.g., through thermal cycling, or through the action of custom additives and impurities.

  18. Basic coaxial mass driver reference design. [electromagnetic lunar launch

    NASA Technical Reports Server (NTRS)

    Kolm, H. H.

    1977-01-01

    The reference design for a basic coaxial mass driver is developed to illustrate the principles and optimization procedures on the basis of numerical integration by programmable pocket calculators. The four inch caliber system uses a single-coil bucket and a single-phase propulsion track with discrete coils, separately energized by capacitors. An actual driver would use multiple-coil buckets and an oscillatory multi-phase drive system. Even the basic, table-top demonstration system should in principle be able to achieve accelerations in the 1,000 m/sq sec range. Current densities of the order of 25 kA/sq cm, continuously achievable only in superconductors, are carried by an ordinary aluminum bucket coil for a short period in order to demonstrate the calculated acceleration. Ultimately the system can be lengthened and provided with a magnetically levitated, superconducting bucket to study levitation dynamics under quasi-steady-state conditions, and to approach lunar escape velocity in an evacuated tube.

  19. Compilation of Abstracts of Theses Submitted by Candidates for Degrees.

    DTIC Science & Technology

    1984-06-01

    Management System for the TI-59 Programmable Calculator Kersh, T. B. Signal Processor Interface 65 CPT, USA Simulation of the AN/SPY-1A Radar...DESIGN AND IMPLEMENTATION OF A BASIC CROSS-COMPILER AND VIRTUAL MEMORY MANAGEMENT SYSTEM FOR THE TI-59 PROGRAMMABLE CALCULATOR Mark R. Kindl Captain...Academy, 1974 The instruction set of the TI-59 Programmable Calculator bears a close similarity to that of an assembler. Though most of the calculator

  20. PopSc: Computing Toolkit for Basic Statistics of Molecular Population Genetics Simultaneously Implemented in Web-Based Calculator, Python and R

    PubMed Central

    Huang, Ying; Li, Cao; Liu, Linhai; Jia, Xianbo; Lai, Song-Jia

    2016-01-01

    Although various computer tools have been elaborately developed to calculate a series of statistics in molecular population genetics for both small- and large-scale DNA data, no efficient and easy-to-use toolkit has been available that focuses exclusively on the steps of mathematical calculation. Here, we present PopSc, a bioinformatic toolkit for calculating 45 basic statistics in molecular population genetics, which can be categorized into three classes: (i) genetic diversity of DNA sequences, (ii) statistical tests for neutral evolution, and (iii) measures of genetic differentiation among populations. In contrast to the existing computer tools, PopSc was designed to directly accept intermediate metadata, such as allele frequencies, rather than raw DNA sequences or genotyping results. PopSc is first implemented as a web-based calculator with a user-friendly interface, which greatly facilitates the teaching of population genetics in class and also promotes the convenient and straightforward calculation of statistics in research. Additionally, we provide the Python library and R package of PopSc, which can be flexibly integrated into other advanced bioinformatic packages of population genetics analysis. PMID:27792763
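    Two of the statistics in PopSc's first and third classes can be illustrated directly from allele frequencies, the kind of intermediate metadata the toolkit accepts. This sketch is illustrative only and does not use PopSc's actual API: it computes expected heterozygosity and a simple Wright's F_ST for a biallelic locus, assuming equal subpopulation sizes.

    ```python
    def expected_heterozygosity(freqs):
        """Expected heterozygosity H_e = 1 - sum(p_i^2) over allele frequencies p_i."""
        return 1.0 - sum(p * p for p in freqs)

    def fst(subpop_freqs):
        """Wright's F_ST for a biallelic locus, given the frequency of one
        allele in each subpopulation (equal subpopulation sizes assumed)."""
        p_bar = sum(subpop_freqs) / len(subpop_freqs)
        h_t = 2.0 * p_bar * (1.0 - p_bar)  # total expected heterozygosity
        h_s = sum(2.0 * p * (1.0 - p) for p in subpop_freqs) / len(subpop_freqs)
        return (h_t - h_s) / h_t if h_t > 0 else 0.0

    print(expected_heterozygosity([0.5, 0.5]))  # -> 0.5
    print(fst([0.2, 0.8]))                      # -> 0.36 (strong differentiation)
    ```

    Accepting frequencies rather than sequences, as PopSc does, keeps such calculations a few arithmetic steps regardless of the size of the underlying dataset.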

  1. PopSc: Computing Toolkit for Basic Statistics of Molecular Population Genetics Simultaneously Implemented in Web-Based Calculator, Python and R.

    PubMed

    Chen, Shi-Yi; Deng, Feilong; Huang, Ying; Li, Cao; Liu, Linhai; Jia, Xianbo; Lai, Song-Jia

    2016-01-01

    Although various computer tools have been elaborately developed to calculate a series of statistics in molecular population genetics for both small- and large-scale DNA data, no efficient and easy-to-use toolkit has been available that focuses exclusively on the steps of mathematical calculation. Here, we present PopSc, a bioinformatic toolkit for calculating 45 basic statistics in molecular population genetics, which can be categorized into three classes: (i) genetic diversity of DNA sequences, (ii) statistical tests for neutral evolution, and (iii) measures of genetic differentiation among populations. In contrast to the existing computer tools, PopSc was designed to directly accept intermediate metadata, such as allele frequencies, rather than raw DNA sequences or genotyping results. PopSc is first implemented as a web-based calculator with a user-friendly interface, which greatly facilitates the teaching of population genetics in class and also promotes the convenient and straightforward calculation of statistics in research. Additionally, we provide the Python library and R package of PopSc, which can be flexibly integrated into other advanced bioinformatic packages of population genetics analysis.

  2. Program helps quickly calculate deviated well path

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gardner, M.P.

    1993-11-22

    A BASIC computer program quickly calculates the angle and measured depth of a simple directional well given only the true vertical depth and total displacement of the target. Many petroleum engineers and geologists need a quick, easy method to calculate the angle and measured depth necessary to reach a target in a proposed deviated well bore. Too many of the existing programs are large and require much input data. The drilling literature is full of equations and methods to calculate the course of well paths from surveys taken after a well is drilled. Very little information, however, covers how to calculate well bore trajectories for proposed wells from limited data. Furthermore, many of the equations are quite complex and difficult to use. A figure lists a computer program with the equations to calculate the well bore trajectory necessary to reach a given displacement and true vertical depth (TVD) for a simple build plan. It can be run on an IBM compatible computer with MS-DOS version 5 or higher, QBasic, or any BASIC that does not require line numbers. The QBasic 4.5 compiler will also run the program. The equations are based on conventional geometry and trigonometry.
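    The conventional trigonometry the abstract refers to can be sketched for the simplest limiting case: a straight hole from surface to target, which ignores the build-section arc of a real build plan. This is an illustrative sketch, not the program from the article; the input values are hypothetical.

    ```python
    import math

    def deviated_well(tvd, displacement):
        """Hold angle (degrees from vertical) and measured depth for a
        straight hole from surface to target, given true vertical depth
        and total horizontal displacement (same length units).
        A real build plan adds the arc geometry of the build radius."""
        angle = math.degrees(math.atan2(displacement, tvd))
        md = math.hypot(tvd, displacement)
        return angle, md

    # Hypothetical target: 8000 ft TVD, 6000 ft displacement (a 3-4-5 triangle).
    angle, md = deviated_well(8000.0, 6000.0)
    print(round(angle, 1), round(md, 1))  # -> 36.9 10000.0
    ```

    The straight-hole answer is a lower bound on measured depth; adding a kickoff point and build rate lengthens the path slightly and steepens the hold angle.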

  3. Dynamical friction for supersonic motion in a homogeneous gaseous medium

    NASA Astrophysics Data System (ADS)

    Thun, Daniel; Kuiper, Rolf; Schmidt, Franziska; Kley, Wilhelm

    2016-05-01

    Context. The supersonic motion of gravitating objects through a gaseous ambient medium constitutes a classical problem in theoretical astrophysics. Its application covers a broad range of objects and scales from planetesimals, planets, and all kinds of stars up to galaxies and black holes. In particular, the dynamical friction caused by the wake that forms behind the object plays an important role for the dynamics of the system. To calculate the dynamical friction for a particular system, standard formulae based on linear theory are often used. Aims: It is our goal to check the general validity of these formulae and provide suitable expressions for the dynamical friction acting on the moving object, based on the basic physical parameters of the problem: first, the mass, radius, and velocity of the perturber; second, the gas mass density, sound speed, and adiabatic index of the gaseous medium; and finally, the size of the forming wake. Methods: We perform dedicated sequences of high-resolution numerical studies of rigid bodies moving supersonically through a homogeneous ambient medium and calculate the total drag acting on the object, which is the sum of gravitational and hydrodynamical drag. We study cases without gravity with purely hydrodynamical drag, as well as gravitating objects. In various numerical experiments, we determine the drag force acting on the moving body and its dependence on the basic physical parameters of the problem, as given above. From the final equilibrium state of the simulations, for gravitating objects we compute the dynamical friction by direct numerical integration of the gravitational pull acting on the embedded object. Results: The numerical experiments confirm the known scaling laws for the dependence of the dynamical friction on the basic physical parameters as derived in earlier semi-analytical studies. 
As an important new result, we find that the shock's stand-off distance sets the minimum spatial interaction scale of dynamical friction. Below this radius, the gas settles into a hydrostatic state, which, owing to its spherical symmetry, causes no net gravitational pull on the moving body. Finally, we derive an analytic estimate for the stand-off distance that can easily be used when calculating the dynamical friction force.
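    The standard linear-theory scaling the abstract tests can be written as F ∝ ρ (GM)²/v² times a Coulomb logarithm ln(r_max/r_min). A minimal sketch of that scaling, where, following the abstract's result, r_min is taken as the shock's stand-off distance and r_max as the size of the wake (the function and its arguments are illustrative, not the paper's fitting formula):

    ```python
    import math

    G = 6.674e-11  # gravitational constant, SI units

    def dynamical_friction(mass, velocity, rho, r_min, r_max):
        """Supersonic dynamical friction drag (N) in the standard
        linear-theory form F = 4*pi*rho*(G*M)^2 / v^2 * ln(r_max/r_min).
        r_min: minimum interaction scale (per the abstract, the shock's
        stand-off distance); r_max: size of the forming wake."""
        coulomb_log = math.log(r_max / r_min)
        return 4.0 * math.pi * rho * (G * mass)**2 / velocity**2 * coulomb_log
    ```

    The scalings are easy to read off: doubling the perturber mass quadruples the drag, while doubling the velocity reduces it by a factor of four, apart from the slow logarithmic dependence on the wake geometry.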

  4. MO-F-204-00: Preparing for the ABR Diagnostic and Nuclear Medical Physics Exams

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    NONE

    Adequate, efficient preparation for the ABR Diagnostic and Nuclear Medical Physics exams is key to successfully obtaining ABR certification. Each part of the ABR exam presents its own challenges: Part I: Determine the scope of basic medical physics study material, efficiently review this material, and solve related written questions/problems. Part II: Understand imaging principles, modalities, and systems, including image acquisition, processing, and display. Understand the relationship between imaging techniques, image quality, patient dose and safety, and solve related written questions/problems. Part III: Gain crucial, practical, clinical medical physics experience. Effectively communicate and explain the practice, performance, and significance of all aspects of clinical medical physics. All parts of the ABR exam require specific skill sets and preparation: mastery of basic physics and imaging principles; written problem solving often involving rapid calculation; and responding clearly and succinctly to oral questions about the practice, methods, and significance of clinical medical physics. This symposium focuses on the preparation necessary for each part of the ABR exam. Although there is some overlap, the nuclear exam covers a different body of knowledge than the diagnostic exam. A separate speaker will address those unique aspects of the nuclear exam, and how preparing for a second specialty differs from the first. Medical physicists who recently completed each ABR exam portion will share their experiences, insights, and preparation methods to help attendees best prepare for the challenges of each part of the ABR exam. In accordance with ABR exam security policy, no recalls or exam questions will be discussed. 
Learning Objectives: (1) How to prepare for Part 1 of the ABR exam by determining the scope of basic medical physics study material and related problem solving/calculations; (2) how to prepare for Part 2 of the ABR exam by understanding diagnostic and/or nuclear imaging physics, systems, dosimetry, safety, and related problem solving/calculations; (3) how to prepare for Part 3 of the ABR exam by effectively communicating the practice, methods, and significance of clinical diagnostic and/or nuclear medical physics.

  5. MO-F-204-02: Preparing for Part 2 of the ABR Diagnostic Physics Exam

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Szczykutowicz, T.

    Adequate, efficient preparation for the ABR Diagnostic and Nuclear Medical Physics exams is key to successfully obtaining ABR certification. Each part of the ABR exam presents its own challenges: Part I: Determine the scope of basic medical physics study material, efficiently review this material, and solve related written questions/problems. Part II: Understand imaging principles, modalities, and systems, including image acquisition, processing, and display. Understand the relationship between imaging techniques, image quality, patient dose and safety, and solve related written questions/problems. Part III: Gain crucial, practical, clinical medical physics experience. Effectively communicate and explain the practice, performance, and significance of all aspects of clinical medical physics. All parts of the ABR exam require specific skill sets and preparation: mastery of basic physics and imaging principles; written problem solving often involving rapid calculation; and responding clearly and succinctly to oral questions about the practice, methods, and significance of clinical medical physics. This symposium focuses on the preparation necessary for each part of the ABR exam. Although there is some overlap, the nuclear exam covers a different body of knowledge than the diagnostic exam. A separate speaker will address those unique aspects of the nuclear exam, and how preparing for a second specialty differs from the first. Medical physicists who recently completed each ABR exam portion will share their experiences, insights, and preparation methods to help attendees best prepare for the challenges of each part of the ABR exam. In accordance with ABR exam security policy, no recalls or exam questions will be discussed. 
Learning Objectives: (1) How to prepare for Part 1 of the ABR exam by determining the scope of basic medical physics study material and related problem solving/calculations; (2) how to prepare for Part 2 of the ABR exam by understanding diagnostic and/or nuclear imaging physics, systems, dosimetry, safety, and related problem solving/calculations; (3) how to prepare for Part 3 of the ABR exam by effectively communicating the practice, methods, and significance of clinical diagnostic and/or nuclear medical physics.

  6. MO-F-204-03: Preparing for Part 3 of the ABR Diagnostic Physics Exam

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zambelli, J.

    Adequate, efficient preparation for the ABR Diagnostic and Nuclear Medical Physics exams is key to successfully obtaining ABR certification. Each part of the ABR exam presents its own challenges: Part I: Determine the scope of basic medical physics study material, efficiently review this material, and solve related written questions/problems. Part II: Understand imaging principles, modalities, and systems, including image acquisition, processing, and display. Understand the relationship between imaging techniques, image quality, patient dose and safety, and solve related written questions/problems. Part III: Gain crucial, practical, clinical medical physics experience. Effectively communicate and explain the practice, performance, and significance of all aspects of clinical medical physics. All parts of the ABR exam require specific skill sets and preparation: mastery of basic physics and imaging principles; written problem solving often involving rapid calculation; and responding clearly and succinctly to oral questions about the practice, methods, and significance of clinical medical physics. This symposium focuses on the preparation necessary for each part of the ABR exam. Although there is some overlap, the nuclear exam covers a different body of knowledge than the diagnostic exam. A separate speaker will address those unique aspects of the nuclear exam, and how preparing for a second specialty differs from the first. Medical physicists who recently completed each ABR exam portion will share their experiences, insights, and preparation methods to help attendees best prepare for the challenges of each part of the ABR exam. In accordance with ABR exam security policy, no recalls or exam questions will be discussed. 
Learning Objectives: (1) How to prepare for Part 1 of the ABR exam by determining the scope of basic medical physics study material and related problem solving/calculations; (2) how to prepare for Part 2 of the ABR exam by understanding diagnostic and/or nuclear imaging physics, systems, dosimetry, safety, and related problem solving/calculations; (3) how to prepare for Part 3 of the ABR exam by effectively communicating the practice, methods, and significance of clinical diagnostic and/or nuclear medical physics.

  7. MO-F-204-01: Preparing for Part 1 of the ABR Diagnostic Physics Exam

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    McKenney, S.

    Adequate, efficient preparation for the ABR Diagnostic and Nuclear Medical Physics exams is key to successfully obtaining ABR certification. Each part of the ABR exam presents its own challenges: Part I: Determine the scope of basic medical physics study material, efficiently review this material, and solve related written questions/problems. Part II: Understand imaging principles, modalities, and systems, including image acquisition, processing, and display. Understand the relationship between imaging techniques, image quality, patient dose and safety, and solve related written questions/problems. Part III: Gain crucial, practical, clinical medical physics experience. Effectively communicate and explain the practice, performance, and significance of all aspects of clinical medical physics. All parts of the ABR exam require specific skill sets and preparation: mastery of basic physics and imaging principles; written problem solving often involving rapid calculation; and responding clearly and succinctly to oral questions about the practice, methods, and significance of clinical medical physics. This symposium focuses on the preparation necessary for each part of the ABR exam. Although there is some overlap, the nuclear exam covers a different body of knowledge than the diagnostic exam. A separate speaker will address those unique aspects of the nuclear exam, and how preparing for a second specialty differs from the first. Medical physicists who recently completed each ABR exam portion will share their experiences, insights, and preparation methods to help attendees best prepare for the challenges of each part of the ABR exam. In accordance with ABR exam security policy, no recalls or exam questions will be discussed. 
Learning Objectives: (1) How to prepare for Part 1 of the ABR exam by determining the scope of basic medical physics study material and related problem solving/calculations; (2) how to prepare for Part 2 of the ABR exam by understanding diagnostic and/or nuclear imaging physics, systems, dosimetry, safety, and related problem solving/calculations; (3) how to prepare for Part 3 of the ABR exam by effectively communicating the practice, methods, and significance of clinical diagnostic and/or nuclear medical physics.

  8. MO-F-204-04: Preparing for Parts 2 & 3 of the ABR Nuclear Medicine Physics Exam

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    MacDougall, R.

    Adequate, efficient preparation for the ABR Diagnostic and Nuclear Medical Physics exams is key to successfully obtaining ABR certification. Each part of the ABR exam presents its own challenges: Part I: Determine the scope of basic medical physics study material, efficiently review this material, and solve related written questions/problems. Part II: Understand imaging principles, modalities, and systems, including image acquisition, processing, and display. Understand the relationship between imaging techniques, image quality, patient dose and safety, and solve related written questions/problems. Part III: Gain crucial, practical, clinical medical physics experience. Effectively communicate and explain the practice, performance, and significance of all aspects of clinical medical physics. All parts of the ABR exam require specific skill sets and preparation: mastery of basic physics and imaging principles; written problem solving often involving rapid calculation; and responding clearly and succinctly to oral questions about the practice, methods, and significance of clinical medical physics. This symposium focuses on the preparation necessary for each part of the ABR exam. Although there is some overlap, the nuclear exam covers a different body of knowledge than the diagnostic exam. A separate speaker will address those unique aspects of the nuclear exam, and how preparing for a second specialty differs from the first. Medical physicists who recently completed each ABR exam portion will share their experiences, insights, and preparation methods to help attendees best prepare for the challenges of each part of the ABR exam. In accordance with ABR exam security policy, no recalls or exam questions will be discussed. 
Learning Objectives: (1) How to prepare for Part 1 of the ABR exam by determining the scope of basic medical physics study material and related problem solving/calculations; (2) how to prepare for Part 2 of the ABR exam by understanding diagnostic and/or nuclear imaging physics, systems, dosimetry, safety, and related problem solving/calculations; (3) how to prepare for Part 3 of the ABR exam by effectively communicating the practice, methods, and significance of clinical diagnostic and/or nuclear medical physics.

  9. WE-D-213-04: Preparing for Parts 2 & 3 of the ABR Nuclear Medicine Physics Exam

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    MacDougall, R.

    Adequate, efficient preparation for the ABR Diagnostic and Nuclear Medical Physics exams is key to successfully obtaining ABR professional certification. Each part of the ABR exam presents its own challenges: Part I: Determine the scope of basic medical physics study material, efficiently review this material, and solve related written questions/problems. Part II: Understand imaging principles, modalities, and systems, including image acquisition, processing, and display. Understand the relationship between imaging techniques, image quality, patient dose and safety, and solve related written questions/problems. Part III: Gain crucial, practical, clinical medical physics experience. Effectively communicate and explain the practice, performance, and significance of all aspects of clinical medical physics. All three parts of the ABR exam require specific skill sets and preparation: mastery of basic physics and imaging principles; written problem solving often involving rapid calculation; and responding clearly and succinctly to oral questions about the practice, methods, and significance of clinical medical physics. This symposium focuses on the preparation and skill sets necessary for each part of the ABR exam. Although there is some overlap, the nuclear exam covers a different body of knowledge than the diagnostic exam. A separate speaker will address those aspects that are unique to the nuclear exam. Medical physicists who have recently completed each part of the ABR exam will share their experiences, insights, and preparation methods to help attendees best prepare for the challenges of each part of the ABR exam. In accordance with ABR exam security policy, no recalls or exam questions will be discussed. 
Learning Objectives: (1) How to prepare for Part 1 of the ABR exam by determining the scope of basic medical physics study material and related problem solving/calculations; (2) how to prepare for Part 2 of the ABR exam by understanding diagnostic and/or nuclear imaging physics, systems, dosimetry, safety, and related problem solving/calculations; (3) how to prepare for Part 3 of the ABR exam by effectively communicating the practice, methods, and significance of clinical diagnostic and/or nuclear medical physics.

  10. WE-D-213-00: Preparing for the ABR Diagnostic and Nuclear Medicine Physics Exams

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    NONE

    Adequate, efficient preparation for the ABR Diagnostic and Nuclear Medical Physics exams is key to successfully obtaining ABR professional certification. Each part of the ABR exam presents its own challenges: Part I: Determine the scope of basic medical physics study material, efficiently review this material, and solve related written questions/problems. Part II: Understand imaging principles, modalities, and systems, including image acquisition, processing, and display. Understand the relationship between imaging techniques, image quality, patient dose and safety, and solve related written questions/problems. Part III: Gain crucial, practical, clinical medical physics experience. Effectively communicate and explain the practice, performance, and significance of all aspects of clinical medical physics. All three parts of the ABR exam require specific skill sets and preparation: mastery of basic physics and imaging principles; written problem solving often involving rapid calculation; and responding clearly and succinctly to oral questions about the practice, methods, and significance of clinical medical physics. This symposium focuses on the preparation and skill sets necessary for each part of the ABR exam. Although there is some overlap, the nuclear exam covers a different body of knowledge than the diagnostic exam. A separate speaker will address those aspects that are unique to the nuclear exam. Medical physicists who have recently completed each part of the ABR exam will share their experiences, insights, and preparation methods to help attendees best prepare for the challenges of each part of the ABR exam. In accordance with ABR exam security policy, no recalls or exam questions will be discussed. 
Learning Objectives: (1) How to prepare for Part 1 of the ABR exam by determining the scope of basic medical physics study material and related problem solving/calculations; (2) how to prepare for Part 2 of the ABR exam by understanding diagnostic and/or nuclear imaging physics, systems, dosimetry, safety, and related problem solving/calculations; (3) how to prepare for Part 3 of the ABR exam by effectively communicating the practice, methods, and significance of clinical diagnostic and/or nuclear medical physics.

  11. WE-D-213-01: Preparing for Part 1 of the ABR Diagnostic Physics Exam

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Simiele, S.

    Adequate, efficient preparation for the ABR Diagnostic and Nuclear Medical Physics exams is key to successfully obtaining ABR professional certification. Each part of the ABR exam presents its own challenges: Part I: Determine the scope of basic medical physics study material, efficiently review this material, and solve related written questions/problems. Part II: Understand imaging principles, modalities, and systems, including image acquisition, processing, and display. Understand the relationship between imaging techniques, image quality, patient dose and safety, and solve related written questions/problems. Part III: Gain crucial, practical, clinical medical physics experience. Effectively communicate and explain the practice, performance, and significance of all aspects of clinical medical physics. All three parts of the ABR exam require specific skill sets and preparation: mastery of basic physics and imaging principles; written problem solving often involving rapid calculation; and responding clearly and succinctly to oral questions about the practice, methods, and significance of clinical medical physics. This symposium focuses on the preparation and skill sets necessary for each part of the ABR exam. Although there is some overlap, the nuclear exam covers a different body of knowledge than the diagnostic exam. A separate speaker will address those aspects that are unique to the nuclear exam. Medical physicists who have recently completed each part of the ABR exam will share their experiences, insights, and preparation methods to help attendees best prepare for the challenges of each part of the ABR exam. In accordance with ABR exam security policy, no recalls or exam questions will be discussed. 
Learning Objectives: (1) How to prepare for Part 1 of the ABR exam by determining the scope of basic medical physics study material and related problem solving/calculations; (2) how to prepare for Part 2 of the ABR exam by understanding diagnostic and/or nuclear imaging physics, systems, dosimetry, safety, and related problem solving/calculations; (3) how to prepare for Part 3 of the ABR exam by effectively communicating the practice, methods, and significance of clinical diagnostic and/or nuclear medical physics.

  12. WE-D-213-03: Preparing for Part 3 of the ABR Diagnostic Physics Exam

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bevins, N.

    Adequate, efficient preparation for the ABR Diagnostic and Nuclear Medical Physics exams is key to successfully obtaining ABR professional certification. Each part of the ABR exam presents its own challenges: Part I: Determine the scope of basic medical physics study material, efficiently review this material, and solve related written questions/problems. Part II: Understand imaging principles, modalities, and systems, including image acquisition, processing, and display. Understand the relationship between imaging techniques, image quality, patient dose and safety, and solve related written questions/problems. Part III: Gain crucial, practical, clinical medical physics experience. Effectively communicate and explain the practice, performance, and significance of all aspects of clinical medical physics. All three parts of the ABR exam require specific skill sets and preparation: mastery of basic physics and imaging principles; written problem solving often involving rapid calculation; and responding clearly and succinctly to oral questions about the practice, methods, and significance of clinical medical physics. This symposium focuses on the preparation and skill sets necessary for each part of the ABR exam. Although there is some overlap, the nuclear exam covers a different body of knowledge than the diagnostic exam. A separate speaker will address those aspects that are unique to the nuclear exam. Medical physicists who have recently completed each part of the ABR exam will share their experiences, insights, and preparation methods to help attendees best prepare for the challenges of each part of the ABR exam. In accordance with ABR exam security policy, no recalls or exam questions will be discussed. 
Learning Objectives: (1) How to prepare for Part 1 of the ABR exam by determining the scope of basic medical physics study material and related problem solving/calculations; (2) how to prepare for Part 2 of the ABR exam by understanding diagnostic and/or nuclear imaging physics, systems, dosimetry, safety, and related problem solving/calculations; (3) how to prepare for Part 3 of the ABR exam by effectively communicating the practice, methods, and significance of clinical diagnostic and/or nuclear medical physics.

  13. WE-D-213-02: Preparing for Part 2 of the ABR Diagnostic Physics Exam

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zambelli, J.

    Adequate, efficient preparation for the ABR Diagnostic and Nuclear Medical Physics exams is key to successfully obtaining ABR professional certification. Each part of the ABR exam presents its own challenges: Part I: Determine the scope of basic medical physics study material, efficiently review this material, and solve related written questions/problems. Part II: Understand imaging principles, modalities, and systems, including image acquisition, processing, and display. Understand the relationship between imaging techniques, image quality, patient dose and safety, and solve related written questions/problems. Part III: Gain crucial, practical, clinical medical physics experience. Effectively communicate and explain the practice, performance, and significance of all aspects of clinical medical physics. All three parts of the ABR exam require specific skill sets and preparation: mastery of basic physics and imaging principles; written problem solving often involving rapid calculation; and responding clearly and succinctly to oral questions about the practice, methods, and significance of clinical medical physics. This symposium focuses on the preparation and skill sets necessary for each part of the ABR exam. Although there is some overlap, the nuclear exam covers a different body of knowledge than the diagnostic exam. A separate speaker will address those aspects that are unique to the nuclear exam. Medical physicists who have recently completed each part of the ABR exam will share their experiences, insights, and preparation methods to help attendees best prepare for the challenges of each part of the ABR exam. In accordance with ABR exam security policy, no recalls or exam questions will be discussed. 
Learning Objectives: (1) How to prepare for Part 1 of the ABR exam by determining the scope of basic medical physics study material and related problem solving/calculations; (2) how to prepare for Part 2 of the ABR exam by understanding diagnostic and/or nuclear imaging physics, systems, dosimetry, safety, and related problem solving/calculations; (3) how to prepare for Part 3 of the ABR exam by effectively communicating the practice, methods, and significance of clinical diagnostic and/or nuclear medical physics.

  14. The Effect of Instructional Method on Cardiopulmonary Resuscitation Skill Performance: A Comparison Between Instructor-Led Basic Life Support and Computer-Based Basic Life Support With Voice-Activated Manikin.

    PubMed

    Wilson-Sands, Cathy; Brahn, Pamela; Graves, Kristal

    2015-01-01

    Validating participants' ability to correctly perform cardiopulmonary resuscitation (CPR) skills during basic life support courses can be a challenge for nursing professional development specialists. This study compares two methods of basic life support training, instructor-led and computer-based learning with voice-activated manikins, to identify if one method is more effective for performance of CPR skills. The findings suggest that a computer-based learning course with voice-activated manikins is a more effective method of training for improved CPR performance.

  15. Computer method for design of acoustic liners for turbofan engines

    NASA Technical Reports Server (NTRS)

    Minner, G. L.; Rice, E. J.

    1976-01-01

    A design package is presented for the specification of acoustic liners for turbofans. An estimate of the noise generation was made based on modifications of existing noise correlations, for which the inputs are basic fan aerodynamic design variables. The method does not predict multiple pure tones. A target attenuation spectrum was calculated which was the difference between the estimated generation spectrum and a flat annoyance-weighted goal attenuated spectrum. The target spectrum was combined with a knowledge of acoustic liner performance as a function of the liner design variables to specify the acoustic design. The liner design method at present is limited to annular duct configurations. The detailed structure of the liner was specified by combining the required impedance (which is a result of the previous step) with a mathematical model relating impedance to the detailed structure. The design procedure was developed for a liner constructed of perforated sheet placed over honeycomb backing cavities. A sample calculation was carried through in order to demonstrate the design procedure, and experimental results presented show good agreement with the calculated results of the method.
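    The target-spectrum step described above can be sketched numerically. This is a minimal illustration of the idea, not the report's method; the octave-band centers, generation levels, and goal level below are invented values:

```python
# Sketch: target attenuation spectrum as the difference between an
# estimated noise-generation spectrum and a flat annoyance-weighted goal.
# All numbers are illustrative assumptions, not values from the report.

octave_bands_hz = [500, 1000, 2000, 4000, 8000]      # band centers (Hz)
generation_db = [118.0, 122.0, 125.0, 121.0, 115.0]  # estimated source spectrum
goal_db = 110.0                                      # flat weighted goal level

# Required attenuation per band (clipped at zero where the goal is already met)
target_db = [max(0.0, g - goal_db) for g in generation_db]
```

    The liner impedance would then be chosen, band by band, to realize this target spectrum.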

  16. Comprehension of the Electric Polarization as a Function of Low Temperature

    NASA Astrophysics Data System (ADS)

    Liu, Changshi

    2017-01-01

    Polarization response to warming plays an increasingly important role in a number of ferroelectric memory devices. This paper reports on the theoretical explanation of the relationship between polarization and temperature. According to the Fermi-Dirac distribution, the basic property of the electric polarization response to temperature in magnetoelectric multiferroic materials is theoretically analyzed. The polarization in magnetoelectric multiferroic materials can be calculated at low temperature using a phenomenological theory suggested in this paper. Simulation results revealed that the numerically calculated results are in good agreement with experimental results for some inhomogeneous multiferroic materials. Numerical simulations have been performed to investigate the influence of both electric and magnetic fields on the polarization in magnetoelectric multiferroic materials. Furthermore, the polarization behavior of magnetoelectric multiferroic materials can be predicted from low temperature, electric field, and magnetic induction using only one function. The calculations offer insight into the effects of heating and the magnetoelectric field on the electrical properties of multiferroic materials and the potential to use similar methods to analyze the electrical properties of other memory devices.

  17. Sparse matrix multiplications for linear scaling electronic structure calculations in an atom-centered basis set using multiatom blocks.

    PubMed

    Saravanan, Chandra; Shao, Yihan; Baer, Roi; Ross, Philip N; Head-Gordon, Martin

    2003-04-15

    A sparse matrix multiplication scheme with multiatom blocks is reported, a tool that can be very useful for developing linear-scaling methods with atom-centered basis functions. Compared to conventional element-by-element sparse matrix multiplication schemes, efficiency is gained by the use of the highly optimized basic linear algebra subroutines (BLAS). However, some sparsity is lost in the multiatom blocking scheme because these matrix blocks will in general contain negligible elements. As a result, an optimal block size that minimizes the CPU time by balancing these two effects is recovered. In calculations on linear alkanes, polyglycines, estane polymers, and water clusters the optimal block size is found to be between 40 and 100 basis functions, where about 55-75% of the machine peak performance was achieved on an IBM RS6000 workstation. In these calculations, the blocked sparse matrix multiplications can be 10 times faster than a standard element-by-element sparse matrix package. Copyright 2003 Wiley Periodicals, Inc. J Comput Chem 24: 618-622, 2003
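    A minimal sketch of the blocking idea may help. This toy version uses NumPy array products in place of direct BLAS calls, and the block size, drop threshold, and matrix shapes are illustrative assumptions rather than the paper's multiatom-block implementation:

```python
import numpy as np

def blocked_sparse_matmul(A, B, block=2, tol=1e-12):
    """Multiply A @ B, skipping block pairs where either block is negligible.

    Illustrative sketch of blocked sparse multiplication: larger blocks let
    optimized dense kernels (BLAS) do the work, at the cost of carrying some
    zeros inside the retained blocks. Assumes square matrices whose dimension
    is a multiple of `block`.
    """
    n = A.shape[0]
    C = np.zeros_like(A)
    nb = n // block
    for i in range(nb):
        for j in range(nb):
            for k in range(nb):
                Ablk = A[i*block:(i+1)*block, k*block:(k+1)*block]
                Bblk = B[k*block:(k+1)*block, j*block:(j+1)*block]
                # Skip negligible blocks -- this is where sparsity pays off.
                if np.abs(Ablk).max() < tol or np.abs(Bblk).max() < tol:
                    continue
                C[i*block:(i+1)*block, j*block:(j+1)*block] += Ablk @ Bblk
    return C
```

    The optimal block size trades the fraction of zeros carried inside blocks against the per-block efficiency of the dense kernel, which is the balance the paper quantifies at 40-100 basis functions.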

  18. Heartwood and sapwood in eucalyptus trees: non-conventional approach to wood quality.

    PubMed

    Cherelli, Sabrina G; Sartori, Maria Márcia P; Próspero, André G; Ballarin, Adriano W

    2018-01-01

    This study evaluated the quality of heartwood and sapwood from mature trees of three species of Eucalyptus by quantifying their proportions, determining basic and apparent density with the non-destructive gamma-radiation attenuation technique, and calculating the density uniformity index. Six trees of each species (Eucalyptus grandis - 18 years old, Eucalyptus tereticornis - 35 years old and Corymbia citriodora - 28 years old) were used in the experimental program. The heartwood and sapwood were delimited by macroscopic analysis, and the areas and percentages of heartwood and sapwood were calculated using digital images. The uniformity index was calculated following a methodology that numerically quantifies the dispersion of punctual density values of the wood around the mean density along the radius. The percentage of heartwood was higher than that of sapwood in all species studied. The density results showed no statistical difference between heartwood and sapwood. In contrast, in all species studied there were statistical differences between the uniformity indexes for the heartwood and sapwood regions, justifying the inclusion of the density uniformity index as a quality parameter for Eucalyptus wood.

  19. Photovoltaic performance models - A report card

    NASA Technical Reports Server (NTRS)

    Smith, J. H.; Reiter, L. R.

    1985-01-01

    Models for the analysis of photovoltaic (PV) systems' designs, implementation policies, and economic performance have proliferated while keeping pace with rapid changes in basic PV technology and the extensive empirical data compiled for such systems' performance. Attention is presently given to the results of a comparative assessment of ten well documented and widely used models, which range in complexity from first-order approximations of PV system performance to in-depth, circuit-level characterizations. The comparisons were made on the basis of the performance of their subsystem as well as system elements. The models fall into three categories in light of their degree of aggregation into subsystems: (1) simplified models for first-order calculation of system performance, with easily met input requirements but limited capability to address more than a small variety of design considerations; (2) models simulating PV systems in greater detail, encompassing types primarily intended for either concentrator-incorporating or flat-plate collector PV systems; and (3) models not specifically designed for PV system performance modeling, but applicable to aspects of electrical system design. Models ignoring subsystem failure or degradation are noted to exclude operating and maintenance characteristics as well.

  20. MC2-3: Multigroup Cross Section Generation Code for Fast Reactor Analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lee, Changho; Yang, Won Sik

    This paper presents the methods and performance of the MC2-3 code, a multigroup cross-section generation code for fast reactor analysis, developed to improve the resonance self-shielding and spectrum calculation methods of MC2-2 and to simplify the current multistep schemes generating region-dependent broad-group cross sections. Using the basic neutron data from ENDF/B data files, MC2-3 solves the consistent P1 multigroup transport equation to determine the fundamental mode spectra for use in generating multigroup neutron cross sections. A homogeneous medium or a heterogeneous slab or cylindrical unit cell problem is solved at ultrafine (2082) or hyperfine (~400 000) group levels. In the resolved resonance range, pointwise cross sections are reconstructed with Doppler broadening at specified temperatures. The pointwise cross sections are directly used in the hyperfine group calculation, whereas for the ultrafine group calculation, self-shielded cross sections are prepared by numerical integration of the pointwise cross sections based upon the narrow resonance approximation. For both the hyperfine and ultrafine group calculations, unresolved resonances are self-shielded using the analytic resonance integral method. The ultrafine group calculation can also be performed for a two-dimensional whole-core problem to generate region-dependent broad-group cross sections. Verification tests have been performed using the benchmark problems for various fast critical experiments including Los Alamos National Laboratory critical assemblies; Zero-Power Reactor, Zero-Power Physics Reactor, and Bundesamt für Strahlenschutz experiments; Monju start-up core; and Advanced Burner Test Reactor.
Verification and validation results with ENDF/B-VII.0 data indicated that eigenvalues from MC2-3/DIF3D agreed with Monte Carlo N-Particle (MCNP5) or VIM Monte Carlo solutions within 200 pcm, and regionwise one-group fluxes were in good agreement with Monte Carlo solutions.

  1. Definition of hydraulic stability of KVGM-100 hot-water boiler and minimum water flow rate

    NASA Astrophysics Data System (ADS)

    Belov, A. A.; Ozerov, A. N.; Usikov, N. V.; Shkondin, I. A.

    2016-08-01

    In domestic power engineering, quantitative and qualitative-quantitative methods of adjusting the load of heat supply systems are widely used; furthermore, during the greater part of the heating period, the actual flow rate of network water is less than the design values when changing to quantitative adjustment. Hence, the hydraulic circuits of hot-water boilers should ensure water velocities that minimize scale formation and exclude the formation of stagnant zones. This article presents the results of calculations for the hot-water KVGM-100 boiler and the minimum water flow rate for the basic and peak modes under the condition that no surface boiling occurs. The minimum water flow rates at a given underheating below the saturation state and the thermal flows in the furnace chamber were determined. The boiler hydraulic calculation was performed using the "Hydraulic" program, and the permissible and actual water velocities in the tubes of the heating surfaces were analyzed. Based on the thermal calculations of the furnace chamber and the thermal-hydraulic calculations of the heating surfaces, the following conclusions were drawn: the minimum water velocity (set by the surface-boiling condition) increases from 0.64 to 0.79 m/s for upward flow and from 1.14 to 1.38 m/s for downward flow; the minimum water flow rate through the boiler in the basic mode (by the surface-boiling condition) increases from 887 t/h at 20% load to 1074 t/h at 100% load. The minimum flow rate of 1074 t/h at nominal load is achieved at a boiler-outlet pressure of 1.1 MPa; the minimum water flow rate through the boiler in the peak mode, by the surface-boiling condition, increases from 1669 t/h at 20% load to 2021 t/h at 100% load.

  2. Computer program for calculation of oxygen uptake

    NASA Technical Reports Server (NTRS)

    Castle, B. L.; Castle, G.; Greenleaf, J. E.

    1979-01-01

    A description and operational procedures are presented for a computer program, written in Super Basic, that calculates oxygen uptake, carbon dioxide production, and related ventilation parameters. Program features include: (1) the option of entering slope and intercept values of calibration curves for the O2 and CO2 analyzers; (2) calculation of expired water vapor pressure; and (3) the option of entering inspired O2 and CO2 concentrations. The program is easily adaptable for programmable laboratory calculators.
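    The program itself is not listed in the record. As a hedged sketch of the standard open-circuit calculation such programs perform, the textbook Haldane transformation infers inspired volume from the fact that N2 is not exchanged; these are the conventional formulas, not necessarily the program's:

```python
def gas_exchange(ve_stpd, fio2, fico2, feo2, feco2):
    """Open-circuit oxygen uptake via the Haldane transformation.

    ve_stpd : expired ventilation (L/min, STPD)
    fio2, fico2 : inspired O2 and CO2 fractions
    feo2, feco2 : mixed expired O2 and CO2 fractions
    Returns (VO2, VCO2, RER) in L/min. Textbook formulas; illustrative only.
    """
    fin2 = 1.0 - fio2 - fico2      # inspired N2 fraction
    fen2 = 1.0 - feo2 - feco2      # expired N2 fraction
    vi = ve_stpd * fen2 / fin2     # Haldane: N2 volume in = N2 volume out
    vo2 = vi * fio2 - ve_stpd * feo2
    vco2 = ve_stpd * feco2 - vi * fico2
    return vo2, vco2, vco2 / vo2
```

    With typical room-air exercise values (VE = 60 L/min, FiO2 = 0.2093, FeO2 = 0.16, FeCO2 = 0.045) this yields a VO2 of about 3 L/min.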

  3. Intraindividual variability in basic reaction time predicts middle-aged and older pilots' flight simulator performance.

    PubMed

    Kennedy, Quinn; Taylor, Joy; Heraldez, Daniel; Noda, Art; Lazzeroni, Laura C; Yesavage, Jerome

    2013-07-01

    Intraindividual variability (IIV) is negatively associated with cognitive test performance and is positively associated with age and some neurological disorders. We aimed to extend these findings to a real-world task, flight simulator performance. We hypothesized that IIV predicts poorer initial flight performance and increased rate of decline in performance among middle-aged and older pilots. Two-hundred and thirty-six pilots (40-69 years) completed annual assessments comprising a cognitive battery and two 75-min simulated flights in a flight simulator. Basic and complex IIV composite variables were created from measures of basic reaction time and shifting and divided attention tasks. Flight simulator performance was characterized by an overall summary score and scores on communication, emergencies, approach, and traffic avoidance components. Although basic IIV did not predict rate of decline in flight performance, it had a negative association with initial performance for most flight measures. After taking into account processing speed, basic IIV explained an additional 8%-12% of the negative age effect on initial flight performance. IIV plays an important role in real-world tasks and is another aspect of cognition that underlies age-related differences in cognitive performance.

  4. Intraindividual Variability in Basic Reaction Time Predicts Middle-Aged and Older Pilots’ Flight Simulator Performance

    PubMed Central

    2013-01-01

    Objectives. Intraindividual variability (IIV) is negatively associated with cognitive test performance and is positively associated with age and some neurological disorders. We aimed to extend these findings to a real-world task, flight simulator performance. We hypothesized that IIV predicts poorer initial flight performance and increased rate of decline in performance among middle-aged and older pilots. Method. Two-hundred and thirty-six pilots (40–69 years) completed annual assessments comprising a cognitive battery and two 75-min simulated flights in a flight simulator. Basic and complex IIV composite variables were created from measures of basic reaction time and shifting and divided attention tasks. Flight simulator performance was characterized by an overall summary score and scores on communication, emergencies, approach, and traffic avoidance components. Results. Although basic IIV did not predict rate of decline in flight performance, it had a negative association with initial performance for most flight measures. After taking into account processing speed, basic IIV explained an additional 8%–12% of the negative age effect on initial flight performance. Discussion. IIV plays an important role in real-world tasks and is another aspect of cognition that underlies age-related differences in cognitive performance. PMID:23052365

  5. Theory and experiment on charging and discharging a capacitor through a reverse-biased diode

    NASA Astrophysics Data System (ADS)

    Roy, Arijit; Mallick, Abhishek; Adhikari, Aparna; Guin, Priyanka; Chatterjee, Dibyendu

    2018-06-01

    The beauty of a diode lies in its voltage-dependent nonlinear resistance. The voltage on a charging and discharging capacitor through a reverse-biased diode is calculated from basic equations and is found to be in good agreement with experimental measurements. Instead of the exponential dependence of charging and discharging voltages with time for a resistor-capacitor circuit, a linear time dependence is found when the resistor is replaced by a reverse-biased diode. Thus, well controlled positive and negative ramp voltages are obtained from the charging and discharging diode-capacitor circuits. This experiment can readily be performed in an introductory physics and electronics laboratory.
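    The linear ramp follows because a reverse-biased diode passes an approximately constant reverse saturation current Is, so dV/dt = Is/C is constant rather than proportional to V. A minimal sketch (the component values are illustrative assumptions, not the article's):

```python
# Discharging a capacitor through a reverse-biased diode: the diode passes
# an approximately constant reverse saturation current Is, so the capacitor
# voltage falls linearly, V(t) = V0 - (Is/C) * t, instead of exponentially.
# Component values below are illustrative assumptions.

I_S = 10e-9   # reverse saturation current (A)
C = 100e-9    # capacitance (F)
V0 = 5.0      # initial capacitor voltage (V)

def v_discharge(t):
    """Capacitor voltage at time t (s), valid while V(t) > 0."""
    return V0 - (I_S / C) * t

ramp_rate = I_S / C   # constant slope in volts per second
```

    Charging works the same way with the sign reversed, which is how the circuit produces well-controlled positive and negative ramps.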

  6. FoilSim: Basic Aerodynamics Software Created

    NASA Technical Reports Server (NTRS)

    Peterson, Ruth A.

    1999-01-01

    FoilSim is interactive software that simulates the airflow around various shapes of airfoils. The graphical user interface, which looks more like a video game than a learning tool, captures and holds the students' interest. The software is a product of NASA Lewis Research Center's Learning Technologies Project, an educational outreach initiative within the High Performance Computing and Communications Program (HPCCP). The airfoil view panel is a simulated view of a wing being tested in a wind tunnel. As students create new wing shapes by moving slider controls that change parameters, the software calculates their lift. FoilSim also displays plots of pressure or airspeed above and below the airfoil surface.

  7. Know the risk, take the win: how executive functions and probability processing influence advantageous decision making under risk conditions.

    PubMed

    Brand, Matthias; Schiebener, Johannes; Pertl, Marie-Theres; Delazer, Margarete

    2014-01-01

    Recent models on decision making under risk conditions have suggested that numerical abilities are important ingredients of advantageous decision-making performance, but empirical evidence is still limited. The results of our first study show that logical reasoning and basic mental calculation capacities predict ratio processing and that ratio processing predicts decision making under risk. In the second study, logical reasoning together with executive functions predicted probability processing (numeracy and probability knowledge), and probability processing predicted decision making under risk. These findings suggest that increasing an individual's understanding of ratios and probabilities should lead to more advantageous decisions under risk conditions.

  8. Software for Data Analysis with Graphical Models

    NASA Technical Reports Server (NTRS)

    Buntine, Wray L.; Roy, H. Scott

    1994-01-01

    Probabilistic graphical models are being used widely in artificial intelligence and statistics, for instance, in diagnosis and expert systems, as a framework for representing and reasoning with probabilities and independencies. They come with corresponding algorithms for performing statistical inference. This offers a unifying framework for prototyping and/or generating data analysis algorithms from graphical specifications. This paper illustrates the framework with an example and then presents some basic techniques for the task: problem decomposition and the calculation of exact Bayes factors. Other tools already developed, such as automatic differentiation, Gibbs sampling, and use of the EM algorithm, make this a broad basis for the generation of data analysis software.

  9. Extremes in Oxidizing Power, Acidity, and Basicity

    DTIC Science & Technology

    2013-10-01

    and extremely difficult to oxidize, with reversible redox potentials calculated up to 5 V above ferrocene/ferricenium. In liquid sulfur dioxide, the...ol, the undecafluorinated anion is oxidized reversibly at 2.43 V above ferrocene/ferricenium (calculated 2.40 V) but the radical is too unstable for...

  10. Incorporation of Hydrogen Bond Angle Dependency into the Generalized Solvation Free Energy Density Model.

    PubMed

    Ma, Songling; Hwang, Sungbo; Lee, Sehan; Acree, William E; No, Kyoung Tai

    2018-04-23

    To describe the physically realistic solvation free energy surface of a molecule in a solvent, a generalized version of the solvation free energy density (G-SFED) calculation method has been developed. In the G-SFED model, the contribution from the hydrogen bond (HB) between a solute and a solvent to the solvation free energy was calculated as the product of the acidity of the donor and the basicity of the acceptor of an HB pair. The acidity and basicity parameters of a solute were derived using the summation of acidities and basicities of the respective acidic and basic functional groups of the solute, and that of the solvent was experimentally determined. Although the contribution of HBs to the solvation free energy could be evenly distributed to grid points on the surface of a molecule, the G-SFED model was still inadequate to describe the angle dependency of the HB of a solute with a polarizable continuum solvent. To overcome this shortcoming of the G-SFED model, the contribution of HBs was formulated using the geometric parameters of the grid points described in the HB coordinate system of the solute. We propose an HB angle dependency incorporated into the G-SFED model, i.e., the G-SFED-HB model, where the angular-dependent acidity and basicity densities are defined and parametrized with experimental data. The G-SFED-HB model was then applied to calculate the solvation free energies of organic molecules in water, various alcohols and ethers, and the log P values of diverse organic molecules, including peptides and a protein. Both the G-SFED model and the G-SFED-HB model reproduced the experimental solvation free energies with similar accuracy, whereas the distributions of the SFED on the molecular surface calculated by the G-SFED and G-SFED-HB models were quite different, especially for molecules having HB donors or acceptors. 
Because the angle dependency of HBs is included in the G-SFED-HB model, it describes the SFED distribution better than the G-SFED model does.
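    The core HB term described above (contribution given by the product of donor acidity and acceptor basicity, summed over HB pairs) can be sketched as follows; the group names and numeric parameters are invented for illustration and are not the model's fitted values:

```python
# Sketch of the G-SFED-style hydrogen-bond term: the HB contribution to the
# solvation free energy is the product of a solute HB-donor acidity and the
# solvent HB basicity, summed over the solute's donor groups.
# All parameter values below are invented for illustration only.

group_acidity = {"OH": 0.82, "NH": 0.45}   # hypothetical donor acidities
solvent_basicity = 0.38                    # hypothetical solvent basicity

def hb_contribution(donor_groups):
    """Sum acidity * basicity over the solute's HB-donor groups."""
    return sum(group_acidity[g] * solvent_basicity for g in donor_groups)
```

    The G-SFED-HB extension then weights this product by the HB angle at each surface grid point instead of distributing it evenly.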

  11. Carbonyl Activation by Borane Lewis Acid Complexation: Transition States of H2 Splitting at the Activated Carbonyl Carbon Atom in a Lewis Basic Solvent and the Proton-Transfer Dynamics of the Boroalkoxide Intermediate.

    PubMed

    Heshmat, Mojgan; Privalov, Timofei

    2017-07-06

    By using transition-state (TS) calculations, we examined how Lewis acid (LA) complexation activates carbonyl compounds in the context of hydrogenation of carbonyl compounds by H2 in Lewis basic (ethereal) solvents containing borane LAs of the type (C6F5)3B. According to our calculations, LA complexation does not activate a ketone sufficiently for the direct addition of H2 to the O=C unsaturated bond; but calculations indicate a possibly facile heterolytic cleavage of H2 at the activated, and thus sufficiently Lewis acidic, carbonyl carbon atom with the assistance of the Lewis basic solvent (i.e., 1,4-dioxane or THF). For the solvent-assisted H2 splitting at the carbonyl carbon atom of (C6F5)3B adducts with different ketones, a number of TSs are computed and the obtained results are related to insights from experiment. By using Born-Oppenheimer molecular dynamics with DFT for the electronic structure calculations, the evolution of the (C6F5)3B-alkoxide ionic intermediate and the proton transfer to the alkoxide oxygen atom were investigated. The results indicate a plausible hydrogenation mechanism with a LA, that is, (C6F5)3B, as a catalyst, namely: 1) the step of H2 cleavage that involves a Lewis basic solvent molecule plus the carbonyl carbon atom of thermodynamically stable and experimentally identifiable (C6F5)3B-ketone adducts in which (C6F5)3B is the "Lewis acid promoter"; 2) the transfer of the solvent-bound proton to the oxygen atom of the (C6F5)3B-alkoxide intermediate giving the (C6F5)3B-alcohol adduct; and 3) the SN2-style displacement of the alcohol by a ketone or a Lewis basic solvent molecule. © 2017 Wiley-VCH Verlag GmbH & Co. KGaA, Weinheim.

  12. Guanidinophosphazenes: design, synthesis, and basicity in THF and in the gas phase.

    PubMed

    Kolomeitsev, Alexander A; Koppel, Ilmar A; Rodima, Toomas; Barten, Jan; Lork, Enno; Röschenthaler, Gerd-Volker; Kaljurand, Ivari; Kütt, Agnes; Koppel, Ivar; Mäemets, Vahur; Leito, Ivo

    2005-12-21

    A principle for creating a new generation of nonionic superbases is presented. It is based on attachment of tetraalkylguanidino, 1,3-dimethylimidazolidine-2-imino, or bis(tetraalkylguanidino)carbimino groups to the phosphorus atom of the iminophosphorane group using tetramethylguanidine or easily available 1,3-dimethylimidazolidine-2-imine. Seven new nonionic superbasic phosphazene bases, tetramethylguanidino-substituted at the P atom, have been synthesized. Their base strengths are established in tetrahydrofuran (THF) solution by means of spectrophotometric titration and compared with those of eight reference superbases designed specially for this study, P2- and P4-iminophosphoranes. The gas-phase basicities of several guanidino- and N',N',N'',N''-tetramethylguanidino (tmg)-substituted phosphazenes and their cyclic analogues are calculated, and the crystal structures of (tmg)3P=N-t-Bu and (tmg)3P=N-t-Bu x HBF4 are determined. The enormous basicity-increasing effect of this principle is experimentally verified for the tetramethylguanidino groups in the THF medium: the basicity increase when moving from (dma)3P=N-t-Bu (pKα = 18.9) to (tmg)3P=N-t-Bu (pKα = 29.1) is 10 orders of magnitude. A significantly larger basicity increase (up to 20 powers of 10) is expected (based on the high-level density functional theory calculations) to accompany the similar gas-phase transfer between the (dma)3P=NH and (tmg)3P=NH bases. Far stronger basicities still are expected when, in the latter two compounds, all three dimethylamino (or tetramethylguanidino) fragments are replaced by methylated triguanide fragments, (tmg)2C=N-. The gas-phase basicity (around 300-310 kcal/mol) of the resulting base, [(tmg)2C=N-]3P=NH, having only one phosphorus atom, is predicted to exceed the basicity of (dma)3P=NH by more than 40 powers of 10 and to surpass also the basicity of the widely used commercial [(dma)3P=N]3P=N-t-Bu (t-BuP4) superbase.

  13. Possibility of Engineering Education That Makes Use of Algebraic Calculators by Various Scenes

    NASA Astrophysics Data System (ADS)

    Umeno, Yoshio

    Algebraic calculators are graphing calculators with a built-in computer algebra system. In technical colleges and universities, many mathematics problems can be solved simply by pressing a few keys on these calculators. Their additional features also allow extensive use in engineering education, for example in basic education, programming education, English education, and as creative-thinking tools for advanced students. This paper summarizes algebraic calculators and then considers how to utilize them in engineering education.

  14. Microcomputer Calculation of Theoretical Pre-Exponential Factors for Bimolecular Reactions.

    ERIC Educational Resources Information Center

    Venugopalan, Mundiyath

    1991-01-01

    Described is the application of microcomputers to predict reaction rates based on theoretical atomic and molecular properties taught in undergraduate physical chemistry. Listed is the BASIC program which computes the partition functions for any specific bimolecular reactants. These functions are then used to calculate the pre-exponential factor of…
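    The article's BASIC listing is not reproduced in the record. As a hedged illustration of the kind of computation it describes, transition-state theory gives the bimolecular pre-exponential factor from partition functions, A = (kB T / h) * Q‡ / (Q_A Q_B); the partition-function values in this toy sketch are made up:

```python
# Toy sketch of a transition-state-theory pre-exponential factor,
# A = (kB*T/h) * Q_ts / (Q_A * Q_B), with per-unit-volume partition
# functions. Any partition-function inputs here are made up for
# illustration; they are not the article's computed values.

KB = 1.380649e-23   # Boltzmann constant, J/K
H = 6.62607015e-34  # Planck constant, J*s

def pre_exponential(T, q_ts, q_a, q_b):
    """Bimolecular TST pre-exponential factor (units follow the Q's)."""
    return (KB * T / H) * q_ts / (q_a * q_b)
```

    At 300 K the universal frequency factor kB*T/h alone is about 6.25e12 per second, which sets the scale of A before the partition-function ratio is applied.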

  15. BASIC INVESTIGATIONS IN PHOTOPOTENTIOMETRY.

    DTIC Science & Technology

    favorably with potentials calculated from the Nernst equation. The potentials are produced by a mechanism resembling a concentration cell with...transference. The effects of temperature and concentration are well defined by the Nernst equation. The observed potential at any time during the irradiation...is approximated by a potential calculated from the Nernst equation. (Author)
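    For a concentration cell of the kind described, the Nernst equation reduces to E = (RT/nF) ln(c1/c2). A minimal sketch (the concentrations and temperature below are illustrative, not the report's data):

```python
import math

# Nernst equation for a concentration cell: E = (R*T / (n*F)) * ln(c1/c2).
# Concentration and temperature values used in any example call are
# illustrative assumptions, not data from the report.

R = 8.314462618   # gas constant, J/(mol*K)
F = 96485.33212   # Faraday constant, C/mol

def nernst_concentration_cell(T, n, c_high, c_low):
    """Cell potential (V) from the concentration ratio across the cell."""
    return (R * T / (n * F)) * math.log(c_high / c_low)
```

    At 298.15 K with n = 1, each factor-of-ten concentration ratio contributes about 59 mV, the familiar Nernst slope.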

  16. Not Just for Computation: Basic Calculators Can Advance the Process Standards

    ERIC Educational Resources Information Center

    Moss, Laura J.; Grover, Barbara W.

    2007-01-01

    Simple nongraphing calculators can be powerful tools to enhance students' conceptual understanding of mathematics concepts. Students have opportunities to develop (1) a broad repertoire of problem-solving strategies by observing multiple solution strategies; (2) respect for other students' abilities and ways of thinking about mathematics; (3) the…

  17. Formation and nitrile hydrogenation performance of Ru nanoparticles on a K-doped Al2O3 surface.

    PubMed

    Muratsugu, Satoshi; Kityakarn, Sutasinee; Wang, Fei; Ishiguro, Nozomu; Kamachi, Takashi; Yoshizawa, Kazunari; Sekizawa, Oki; Uruga, Tomoya; Tada, Mizuki

    2015-10-14

    Decarbonylation-promoted Ru nanoparticle formation from Ru3(CO)12 on a basic K-doped Al2O3 surface was investigated by in situ FT-IR and in situ XAFS. Supported Ru3(CO)12 clusters on K-doped Al2O3 were converted stepwise to Ru nanoparticles, which catalyzed the selective hydrogenation of nitriles to the corresponding primary amines via initial decarbonylation, the nucleation of the Ru cluster core, and the growth of metallic Ru nanoparticles on the surface. As a result, small Ru nanoparticles, with an average diameter of less than 2 nm, were formed on the support and acted as efficient catalysts for nitrile hydrogenation at 343 K under hydrogen at atmospheric pressure. The structure and catalytic performance of Ru catalysts depended strongly on the type of oxide support, and the K-doped Al2O3 support acted as a good oxide for the selective nitrile hydrogenation without basic additives like ammonia. The activation of nitriles on the modelled Ru catalyst was also investigated by DFT calculations, and the adsorption structure of a nitrene-like intermediate, which was favourable for high primary amine selectivity, was the most stable structure on Ru compared with other intermediate structures.

  18. Allan Variance Calculation for Nonuniformly Spaced Input Data

    DTIC Science & Technology

    2015-01-01

    τ (tau). First, the set of gyro values is partitioned into bins of duration τ. For example, if the sampling duration τ is 2 sec and there are 4,000...Variance Calculation For each value of τ, the conventional AV calculation partitions the gyro data sets into bins with approximately τ / Δt...value of Δt. Therefore, a new way must be found to partition the gyro data sets into bins. The basic concept behind the modified AV calculation is
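    The conventional binning step described in the snippet can be sketched for uniformly sampled data; this is the standard non-overlapping Allan variance, not the report's modified calculation for nonuniform spacing:

```python
import numpy as np

# Conventional (non-overlapping) Allan variance for uniformly sampled data:
# partition the samples into bins of duration tau, average each bin, and
# take half the mean squared difference of successive bin averages.

def allan_variance(y, dt, tau):
    """Allan variance of samples y (uniform spacing dt) at averaging time tau."""
    m = int(tau / dt)                  # samples per bin
    n_bins = len(y) // m
    bins = np.asarray(y[:n_bins * m]).reshape(n_bins, m).mean(axis=1)
    return 0.5 * np.mean(np.diff(bins) ** 2)
```

    For white noise the Allan variance falls as 1/τ, which is the classic diagnostic slope on a log-log Allan deviation plot.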

  19. Beginning to Teach Chemistry: How Personal and Academic Characteristics of Pre-Service Science Teachers Compare with Their Understandings of Basic Chemical Ideas

    ERIC Educational Resources Information Center

    Kind, Vanessa; Kind, Per Morten

    2011-01-01

    Around 150 pre-service science teachers (PSTs) participated in a study comparing academic and personal characteristics with their misconceptions about basic chemical ideas taught to 11-16-year-olds, such as particle theory, change of state, conservation of mass, chemical bonding, mole calculations, and combustion reactions. Data, collected by…

  20. Do Different Types of School Mathematics Development Depend on Different Constellations of Numerical versus General Cognitive Abilities?

    ERIC Educational Resources Information Center

    Fuchs, Lynn S.; Geary, David C.; Compton, Donald L.; Fuchs, Douglas; Hamlett, Carol L.; Seethaler, Pamela M.; Bryant, Joan D.; Schatschneider, Christopher

    2010-01-01

    The purpose of this study was to examine the interplay between basic numerical cognition and domain-general abilities (such as working memory) in explaining school mathematics learning. First graders (N = 280; mean age = 5.77 years) were assessed on 2 types of basic numerical cognition, 8 domain-general abilities, procedural calculations, and word…

  1. A graphics-card implementation of Monte-Carlo simulations for cosmic-ray transport

    NASA Astrophysics Data System (ADS)

    Tautz, R. C.

    2016-05-01

    A graphics card implementation of a test-particle simulation code is presented that is based on the CUDA extension of the C/C++ programming language. The original CPU version has been developed for the calculation of cosmic-ray diffusion coefficients in artificial Kolmogorov-type turbulence. In the new implementation, the magnetic turbulence generation, which is the most time-consuming part, is separated from the particle transport and is performed on a graphics card. In this article, the modification of the basic approach of integrating test particle trajectories to employ the SIMD (single instruction, multiple data) model is presented and verified. The efficiency of the new code is tested and several language-specific accelerating factors are discussed. For the example of isotropic magnetostatic turbulence, sample results are shown and a comparison to the results of the CPU implementation is performed.

  2. GPU Acceleration of the Locally Selfconsistent Multiple Scattering Code for First Principles Calculation of the Ground State and Statistical Physics of Materials

    NASA Astrophysics Data System (ADS)

    Eisenbach, Markus

    The Locally Self-consistent Multiple Scattering (LSMS) code solves the first principles Density Functional theory Kohn-Sham equation for a wide range of materials with a special focus on metals, alloys and metallic nano-structures. It has traditionally exhibited near perfect scalability on massively parallel high performance computer architectures. We present our efforts to exploit GPUs to accelerate the LSMS code to enable first principles calculations of O(100,000) atoms and statistical physics sampling of finite temperature properties. Using the Cray XK7 system Titan at the Oak Ridge Leadership Computing Facility we achieve a sustained performance of 14.5PFlop/s and a speedup of 8.6 compared to the CPU only code. This work has been sponsored by the U.S. Department of Energy, Office of Science, Basic Energy Sciences, Material Sciences and Engineering Division and by the Office of Advanced Scientific Computing. This work used resources of the Oak Ridge Leadership Computing Facility, which is supported by the Office of Science of the U.S. Department of Energy under Contract No. DE-AC05-00OR22725.

  3. LiGRO: a graphical user interface for protein-ligand molecular dynamics.

    PubMed

    Kagami, Luciano Porto; das Neves, Gustavo Machado; da Silva, Alan Wilter Sousa; Caceres, Rafael Andrade; Kawano, Daniel Fábio; Eifler-Lima, Vera Lucia

    2017-10-04

    To speed up the drug-discovery process, molecular dynamics (MD) calculations performed in GROMACS can be coupled to docking simulations for the post-screening analyses of large compound libraries. This requires generating the topology of the ligands in different software, some basic knowledge of Linux command lines, and a certain familiarity in handling the output files. LiGRO-the python-based graphical interface introduced here-was designed to overcome these protein-ligand parameterization challenges by allowing the graphical (non command line-based) control of GROMACS (MD and analysis), ACPYPE (ligand topology builder) and PLIP (protein-binder interactions monitor)-programs that can be used together to fully perform and analyze the outputs of complex MD simulations (including energy minimization and NVT/NPT equilibration). By allowing the calculation of linear interaction energies in a simple and quick fashion, LiGRO can be used in the drug-discovery pipeline to select compounds with a better protein-binding interaction profile. The design of LiGRO allows researchers to freely download and modify the software, with the source code being available under the terms of a GPLv3 license from http://www.ufrgs.br/lasomfarmacia/ligro/ .

  4. H2O-CH4 and H2S-CH4 complexes: a direct comparison through molecular beam experiments and ab initio calculations.

    PubMed

    Cappelletti, David; Bartocci, Alessio; Frati, Federica; Roncaratti, Luiz F; Belpassi, Leonardo; Tarantelli, Francesco; Lakshmi, Prabha Aiswarya; Arunan, Elangannan; Pirani, Fernando

    2015-11-11

    New molecular beam scattering experiments have been performed to measure the total (elastic plus inelastic) cross sections as a function of the velocity in collisions between water and hydrogen sulfide projectile molecules and the methane target. Measured data have been exploited to characterize the range and strength of the intermolecular interaction in such systems, which are of relevance as they drive the gas phase molecular dynamics and the clathrate formation. Complementary information has been obtained by rotational spectra, recorded for the hydrogen sulfide-methane complex, with a pulsed nozzle Fourier transform microwave spectrometer. Extensive ab initio calculations have been performed to rationalize all the experimental findings. The combination of experimental and theoretical information has established the ground for the understanding of the nature of the interaction and allows for its basic components to be modelled, including charge transfer, in these weakly bound systems. The intermolecular potential for H2S-CH4 is significantly less anisotropic than for H2O-CH4, although both of them have potential minima that can be characterized as 'hydrogen bonded'.

  5. Quantifying Dimer and Trimer Formation by Tri- n -butyl Phosphates in n -Dodecane: Molecular Dynamics Simulations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Vo, Quynh N.; Dang, Liem X.; Nilsson, Mikael

    2016-07-21

    Tri-n-butyl phosphate (TBP), a representative of neutral organophosphorus ligands, is an important extractant used in the solvent extraction process for the recovery of uranium and plutonium from spent nuclear fuel. Microscopic pictures of TBP isomerism and its behavior in n-dodecane diluent were investigated using MD simulations with previously optimized force field parameters for TBP and n-dodecane. Potential of Mean Force (PMF) calculations on a single TBP molecule show seven probable TBP isomers. Radial Distribution Functions (RDFs) of TBP suggest the existence of TBP trimers at high TBP concentrations in addition to dimers. 2D PMF calculations were performed to determine the angle and distance criteria for TBP trimers. The dimerization and trimerization constants of TBP in n-dodecane were obtained and match our own experimental values obtained using the FTIR technique. The new insights into the conformational behavior of the TBP molecule as a monomer and as part of an aggregate could greatly aid the understanding of the complexation between TBP and metal ions in solvent extraction systems. The U.S. Department of Energy, Office of Science, Office of Basic Energy Sciences, Division of Chemical Sciences, Geosciences and Biosciences funded the work performed by LXD.

  6. Mechanism of oxygen electroreduction on gold surfaces in basic media.

    PubMed

    Kim, Jongwon; Gewirth, Andrew A

    2006-02-16

    The mechanism of the electroreduction of oxygen on Au surfaces in basic media is examined using surface-enhanced Raman scattering (SERS) measurements and density functional theory (DFT) calculations. The spectroscopy reveals superoxide species as a reduction intermediate throughout the oxygen electroreduction, while no peroxide is detected. The spectroscopy also shows the presence of superoxide after the addition of hydrogen peroxide. The calculations show no effect of OH addition to the Au(100) surface with regard to O-O length. These results suggest that the four-electron reduction of O(2) on Au(100) in base arises from a disproportionation mechanism which is enhanced on Au(100) relative to the other two low Miller index faces of Au.

  7. Electronics Technology. Performance Objectives. Basic Course.

    ERIC Educational Resources Information Center

    Campbell, Guy

    Several intermediate performance objectives and corresponding criterion measures are listed for each of 20 terminal objectives for a basic electronics technology course. The materials were developed for a two-semester course (2 hours daily) designed to include instruction in basic electricity and electronic fundamentals, and to develop skills and…

  8. XOP: a multiplatform graphical user interface for synchrotron radiation spectral and optics calculations

    NASA Astrophysics Data System (ADS)

    Sanchez del Rio, Manuel; Dejus, Roger J.

    1997-11-01

    XOP (X-ray OPtics utilities) is a graphical user interface (GUI) created to execute several computer programs that calculate the basic information needed by a synchrotron beamline scientist (designer or experimentalist). Typical examples of such calculations are: insertion device (undulator or wiggler) spectral and angular distributions, mirror and multilayer reflectivities, and crystal diffraction profiles. All programs are provided to the user under a unified GUI, which greatly simplifies their execution. The XOP optics applications (especially mirror calculations) take their basic input (optical constants, compound and mixture tables) from a flexible file-oriented database, which allows the user to select data from a large number of choices and also to customize their own data sets. XOP includes many mathematical and visualization capabilities. It also permits the combination of reflectivities from several mirrors and filters, and their effect, onto a source spectrum. This feature is very useful when calculating thermal load on a series of optical elements. The XOP interface is written in the IDL (Interactive Data Language). An embedded version of XOP, which freely runs under most Unix platforms (HP, Sun, Dec, Linux, etc) and under Windows95 and NT, is available upon request.

  9. Influence of Basicity on High-Chromium Vanadium-Titanium Magnetite Sinter Properties, Productivity, and Mineralogy

    NASA Astrophysics Data System (ADS)

    Zhou, Mi; Yang, Songtao; Jiang, Tao; Xue, Xiangxin

    2015-05-01

    The effect of basicity on high-chromium vanadium-titanium magnetite (V-Ti-Cr) sintering was studied via sintering pot tests. The sinter rate, yield, and productivity were calculated before determining sinter strength (TI) and the reduction degradation index (RDI). Furthermore, the effect of basicity on V-Ti-Cr sinter mineralogy was clarified using metallographic microscopy, x-ray diffraction, and scanning electron microscopy-energy-dispersive x-ray spectroscopy. The results indicate that increasing basicity quickly increases the sintering rate from 25.4 mm min-1 to 28.9 mm min-1, the yield from 75.3% to 87.2%, TI from 55.4% to 64.8%, and productivity from 1.83 t (m2 h)-1 to 1.94 t (m2 h)-1, before a slight drop at the highest basicity. The V-Ti-Cr sinter shows a complex mineral composition, with main mineral phases such as magnetite, hematite, silicates (dicalcium silicate, Ca-Fe olivine, glass), silico-ferrite of calcium and aluminum (SFCA/SFCAI), and perovskite. Perovskite is notable because it lowers the V-Ti sinter strength and RDI. Good intergrowth between magnetite and SFCA/SFCAI, together with decreases in perovskite and secondary skeletal hematite, is the key to improving TI and RDI. Finally, a comprehensive index was calculated, and the optimal V-Ti-Cr sinter basicity for industrial application was determined to be 2.55.

  10. The Effect of TiO2 on the Liquidus Zone and Apparent Viscosity of SiO2-CaO-8wt.%MgO-14wt.%Al2O3 System

    NASA Astrophysics Data System (ADS)

    Yan, Zhiming; Lv, Xuewei; Zhang, Jie; Xu, Jian

    TiO2 has been shown to act as a viscosity-decreasing agent in blast furnace slag under an inert atmosphere, both experimentally and by structure calculation. However, the validity of this conclusion over a much larger zone of the CaO-SiO2-Al2O3-MgO phase diagram has not been verified. In the present work, the viscosity of the slag was measured as a function of TiO2 content and basicity. It was found that the viscosity and the viscous activation energy decrease with increasing TiO2 content and basicity within a reasonable range, indicating that TiO2 acts as a viscosity-decreasing agent by depolymerizing the silicate network structure when its content is less than 50 wt.%. The fluidity of the slag can be improved when the TiO2 content is less than 50 wt.% and the basicity ranges from 0.5 to 1.1. The free-running temperature increases as the TiO2 content rises from 10 wt.% to 30 wt.%. The calculated results do not agree well with the experimental values at a high basicity of 1.3 with TiO2 contents from 20 wt.% to 30 wt.%, or at a lower basicity of 0.5 with TiO2 contents above 50 wt.%.

  11. Number needed to treat (NNT) in clinical literature: an appraisal.

    PubMed

    Mendes, Diogo; Alves, Carlos; Batel-Marques, Francisco

    2017-06-01

    The number needed to treat (NNT) is an absolute effect measure that has been used to assess beneficial and harmful effects of medical interventions. Several methods can be used to calculate NNTs, and they should be applied depending on the different study characteristics, such as the design and type of variable used to measure outcomes. Whether or not the most recommended methods have been applied to calculate NNTs in studies published in the medical literature is yet to be determined. The aim of this study is to assess whether the methods used to calculate NNTs in studies published in medical journals are in line with basic methodological recommendations. The top 25 high-impact factor journals in the "General and/or Internal Medicine" category were screened to identify studies assessing pharmacological interventions and reporting NNTs. Studies were categorized according to their design and the type of variables. NNTs were assessed for completeness (baseline risk, time horizon, and confidence intervals [CIs]). The methods used for calculating NNTs in selected studies were compared to basic methodological recommendations published in the literature. Data were analyzed using descriptive statistics. The search returned 138 citations, of which 51 were selected. Most were meta-analyses (n = 23, 45.1%), followed by clinical trials (n = 17, 33.3%), cohort (n = 9, 17.6%), and case-control studies (n = 2, 3.9%). Binary variables were more common (n = 41, 80.4%) than time-to-event (n = 10, 19.6%) outcomes. Twenty-six studies (51.0%) reported only NNT to benefit (NNTB), 14 (27.5%) reported both NNTB and NNT to harm (NNTH), and 11 (21.6%) reported only NNTH. Baseline risk (n = 37, 72.5%), time horizon (n = 38, 74.5%), and CI (n = 32, 62.7%) for NNTs were not always reported. Basic methodological recommendations to calculate NNTs were not followed in 15 studies (29.4%). 
The proportion of studies applying non-recommended methods was particularly high for meta-analyses (n = 13, 56.5%). A considerable proportion of studies, particularly meta-analyses, applied methods that are not in line with basic methodological recommendations. Despite their usefulness in assisting clinical decisions, NNTs are uninterpretable if incompletely reported, and they may be misleading if calculating methods are inadequate to study designs and variables under evaluation. Further research is needed to confirm the present findings.
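    The basic arithmetic behind an NNT for binary outcomes is standard: NNT = 1/ARR, the reciprocal of the absolute risk reduction between control and experimental event rates. A minimal Python sketch (the event rates are hypothetical, not drawn from any study in the review):

```python
import math

def nnt(cer, eer):
    """Number needed to treat from control (cer) and experimental (eer) event rates.

    NNT = 1 / ARR, where ARR = cer - eer is the absolute risk reduction.
    A negative ARR means the intervention is harmful (NNT to harm).
    """
    arr = cer - eer
    if arr == 0:
        raise ValueError("no risk difference: NNT is undefined")
    return 1.0 / arr

# hypothetical trial: 20% events in the control arm, 15% under treatment
patients = math.ceil(nnt(0.20, 0.15))
print(patients)  # → 20 patients must be treated to prevent one event
```

    As the review stresses, an NNT like this is interpretable only when reported with its baseline risk, time horizon, and confidence interval.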

  12. Health Literacy Basics

    MedlinePlus

    ... skills. For example, calculating cholesterol and blood sugar levels, measuring medications, and understanding nutrition labels all require math skills. Choosing between health plans or comparing prescription ...

  13. Auto Mechanics. Performance Objectives. Basic Course.

    ERIC Educational Resources Information Center

    Carter, Thomas G., Sr.

    Several intermediate performance objectives and corresponding criterion measures are listed for each of 14 terminal objectives for a basic automotive mechanics course. The materials were developed for a two-semester course (2 hours daily) designed to provide training in the basic fundamentals in diagnosis and repair including cooling system and…

  14. Industrial Electronics. Performance Objectives. Basic Course.

    ERIC Educational Resources Information Center

    Tiffany, Earl

    Several intermediate performance objectives and corresponding criterion measures are listed for each of 30 terminal objectives for a two-semester (2 hours daily) high school course in basic industrial electronics. The objectives cover instruction in basic electricity including AC-DC theory, magnetism, electrical safety, care and use of hand tools,…

  15. Everything You Always Wanted to Know About the Mathematics of Sex and Family Planning...But Were Afraid to Calculate

    ERIC Educational Resources Information Center

    Meyer, Rochelle Wilson

    1978-01-01

    The author uses mathematical models that involve only algebra and a few basic ideas in discrete probability to describe the frequency of conception in large human societies. A number of calculations which can be done by students as exercises are given. (MN)

  16. Analyzing Problem's Difficulty Based on Neural Networks and Knowledge Map

    ERIC Educational Resources Information Center

    Kuo, Rita; Lien, Wei-Peng; Chang, Maiga; Heh, Jia-Sheng

    2004-01-01

    This paper proposes a methodology to calculate both the difficulty of the basic problems and the difficulty of solving a problem. The method to calculate the difficulty of problem is according to the process of constructing a problem, including Concept Selection, Unknown Designation, and Proposition Construction. Some necessary measures observed…

  17. Mathematical calculation skills required for drug administration in undergraduate nursing students to ensure patient safety: A descriptive study: Drug calculation skills in nursing students.

    PubMed

    Bagnasco, Annamaria; Galaverna, Lucia; Aleo, Giuseppe; Grugnetti, Anna Maria; Rosa, Francesca; Sasso, Loredana

    2016-01-01

    In the literature we found many studies that confirmed our concerns about nursing students' poor maths skills, which directly impact their ability to correctly calculate drug dosages, with very serious consequences for patient safety. The aim of our study was to explore where students had most difficulty and to identify appropriate educational interventions to bridge their mathematical knowledge gaps. This was a quali-quantitative descriptive study that included a sample of 726 undergraduate nursing students. We found that the undergraduate nursing students mainly had difficulty with basic maths principles. Specific learning interventions are needed to improve both their basic maths skills and their dosage calculation skills. For this purpose, we identified safeMedicate and eDose (Authentic World Ltd.), but these tools are currently available only in English. In the near future we hope to set up a partnership to work together on the Italian version of these tools. Copyright © 2015 Elsevier Ltd. All rights reserved.

  18. Generalizable items and modular structure for computerised physician staffing calculation on intensive care units

    PubMed Central

    Weiss, Manfred; Marx, Gernot; Iber, Thomas

    2017-01-01

    Intensive care medicine remains one of the most cost-driving areas within hospitals, with high personnel costs. Given limited budgets and reimbursement, realistic estimates of need are essential to justify personnel staffing. Unfortunately, all existing staffing models are top-down calculations with highly variable results. We present a workload-oriented model integrating quality of care, efficiency of processes, and legal, educational, controlling, local, organisational and economic aspects. In our model, the physician's workload relates solely to the intensive care unit and depends on three kinds of tasks: patient-oriented tasks, divided into basic tasks (performed for every patient) and additional tasks (necessary only in patients with specific diagnostic and therapeutic requirements arising from their particular illness), and non-patient-oriented tasks. All three kinds of tasks have to be taken into account when calculating the required number of physicians. The calculation tool further allows the minimum personnel staffing to be determined and the calculated personnel demand to be distributed by type of employee according to working hours per year, shift work or standby duty. This model was first introduced and described by the German Board of Anesthesiologists and the German Society of Anesthesiology and Intensive Care Medicine in 2008 and has since been implemented in Germany and updated in 2012. The modular, flexible nature of the Excel-based calculation tool should allow adaptation to the respective legal and organisational demands of different countries. After 8 years of experience with this calculation, we report the generalizable key aspects which may help physicians all around the world to justify realistic workload-oriented personnel staffing needs. PMID:28828300

  19. Generalizable items and modular structure for computerised physician staffing calculation on intensive care units.

    PubMed

    Weiss, Manfred; Marx, Gernot; Iber, Thomas

    2017-08-04

    Intensive care medicine remains one of the most cost-driving areas within hospitals, with high personnel costs. Given limited budgets and reimbursement, realistic estimates of need are essential to justify personnel staffing. Unfortunately, all existing staffing models are top-down calculations with highly variable results. We present a workload-oriented model integrating quality of care, efficiency of processes, and legal, educational, controlling, local, organisational and economic aspects. In our model, the physician's workload relates solely to the intensive care unit and depends on three kinds of tasks: patient-oriented tasks, divided into basic tasks (performed for every patient) and additional tasks (necessary only in patients with specific diagnostic and therapeutic requirements arising from their particular illness), and non-patient-oriented tasks. All three kinds of tasks have to be taken into account when calculating the required number of physicians. The calculation tool further allows the minimum personnel staffing to be determined and the calculated personnel demand to be distributed by type of employee according to working hours per year, shift work or standby duty. This model was first introduced and described by the German Board of Anesthesiologists and the German Society of Anesthesiology and Intensive Care Medicine in 2008 and has since been implemented in Germany and updated in 2012. The modular, flexible nature of the Excel-based calculation tool should allow adaptation to the respective legal and organizational demands of different countries. After 8 years of experience with this calculation, we report the generalizable key aspects which may help physicians all around the world to justify realistic workload-oriented personnel staffing needs.
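    As a hedged sketch of the workload-oriented idea in the abstract (this is not the authors' Excel tool; the function name, task-minute values, and the net-minutes-per-shift figure are all hypothetical):

```python
def required_physicians(basic_min_per_patient, patients,
                        additional_min, non_patient_min,
                        net_min_per_physician_shift):
    """Bottom-up staffing need for one ICU day.

    Total workload = basic tasks (performed for every patient)
                   + additional, patient-specific tasks
                   + non-patient-oriented tasks,
    divided by the productive minutes one physician contributes per shift.
    """
    workload = basic_min_per_patient * patients + additional_min + non_patient_min
    return workload / net_min_per_physician_shift

# hypothetical 12-bed ICU day: 90 min of basic care per patient, 300 min of
# additional patient-specific tasks, 240 min of non-patient-oriented work,
# 420 productive minutes per physician shift
need = required_physicians(90, 12, 300, 240, 420)
print(round(need, 1))  # → 3.9 physicians for this shift
```

    The real tool additionally distributes this demand across employee types, working hours per year, shift work, and standby duty.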

  20. Earth's external magnetic fields at low orbital altitudes

    NASA Technical Reports Server (NTRS)

    Klumpar, D. M.

    1990-01-01

    Under our Jun. 1987 proposal, Magnetic Signatures of Near-Earth Distributed Currents, we proposed to render operational a modeling procedure that had been previously developed to compute the magnetic effects of distributed currents flowing in the magnetosphere-ionosphere system. After adaptation of the software to our computing environment we would apply the model to low altitude satellite orbits and would utilize the MAGSAT data suite to guide the analysis. During the first year, basic computer codes to run model systems of Birkeland and ionospheric currents and several graphical output routines were made operational on a VAX 780 in our research facility. Software performance was evaluated using an input matchstick ionospheric current array, field aligned currents were calculated and magnetic perturbations along hypothetical satellite orbits were calculated. The basic operation of the model was verified. Software routines to analyze and display MAGSAT satellite data in terms of deviations with respect to the earth's internal field were also made operational during the first year effort. The complete set of MAGSAT data to be used for evaluation of the models was received at the end of the first year. A detailed annual report in May 1989 described these first year activities completely. That first annual report is included by reference in this final report. This document summarizes our additional activities during the second year of effort and describes the modeling software, its operation, and includes as an attachment the deliverable computer software specified under the contract.

  1. The effect of errors in the assignment of the transmission functions on the accuracy of the thermal sounding of the atmosphere

    NASA Technical Reports Server (NTRS)

    Timofeyev, Y. M.

    1979-01-01

    In order to test the error of calculation in assumed values of the transmission function for Soviet and American radiometers sounding the atmosphere thermally from orbiting satellites, the assumptions of the transmission calculation is varied with respect to atmospheric CO2 content, transmission frequency, and atmospheric absorption. The error arising from variations of the assumptions from the standard basic model is calculated.

  2. An empirically derived figure of merit for the quality of overall task performance

    NASA Technical Reports Server (NTRS)

    Lemay, Moira

    1989-01-01

    The need to develop an operationally relevant figure of merit for the quality of performance of a complex system such as an aircraft cockpit stems from a hypothesized dissociation between measures of performance and those of workload. Performance can be measured in terms of time, errors, or a combination of these. In most tasks performed by expert operators, errors are relatively rare and often corrected in time to avoid consequences. Moreover, perfect performance is seldom necessary to accomplish a particular task. Furthermore, how well an expert performs a complex task consisting of a series of discrete cognitive tasks superimposed on a continuous task, such as flying an aircraft, does not depend on how well each discrete task is performed, but on their smooth sequencing. This makes the amount of time spent on each subtask of paramount importance in measuring overall performance, since smooth sequencing requires a minimum amount of time spent on each task. Quality consists in getting tasks done within a crucial time interval while maintaining acceptable continuous task performance. Thus, a figure of merit for overall quality of performance should be primarily a measure of time to perform discrete subtasks combined with a measure of basic vehicle control. The proposed figure of merit therefore requires doing a task analysis on a series of performances, or runs, of a particular task, listing each discrete task and its associated time, and calculating the mean and standard deviation of these times, along with the mean and standard deviation of tracking error for the whole task. A set of simulator data on 30 runs of a landing task was obtained and a figure of merit will be calculated for each run. The figure of merit will be compared for voice and data link, so that the impact of this technology on total crew performance (not just communication performance) can be assessed. The effect of data link communication on other cockpit tasks will also be considered.
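    The combination the abstract proposes (subtask-time mean and spread plus tracking-error mean and spread) can be sketched in Python; the additive form, the weights, and all sample numbers below are illustrative assumptions, not the author's actual formula or data:

```python
import statistics as st

def figure_of_merit(subtask_times, tracking_errors, w_time=1.0, w_track=1.0):
    """Hypothetical figure of merit in the spirit of the text: lower is better.

    Combines the mean discrete-subtask time (plus its standard deviation)
    with the mean continuous-task tracking error (plus its standard
    deviation). The weights and the additive form are illustrative only.
    """
    t_mu, t_sd = st.mean(subtask_times), st.stdev(subtask_times)
    e_mu, e_sd = st.mean(tracking_errors), st.stdev(tracking_errors)
    return w_time * (t_mu + t_sd) + w_track * (e_mu + e_sd)

# two made-up runs: run_a is faster and more consistent than run_b
run_a = figure_of_merit([2.1, 2.4, 1.9, 2.2], [0.12, 0.10, 0.15, 0.11])
run_b = figure_of_merit([3.0, 4.2, 2.5, 3.8], [0.20, 0.25, 0.18, 0.22])
```

    Under this scoring the faster, more consistent run receives the lower (better) figure of merit.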

  3. Basic and Advanced Numerical Performances Relate to Mathematical Expertise but Are Fully Mediated by Visuospatial Skills

    ERIC Educational Resources Information Center

    Sella, Francesco; Sader, Elie; Lolliot, Simon; Cohen Kadosh, Roi

    2016-01-01

    Recent studies have highlighted the potential role of basic numerical processing in the acquisition of numerical and mathematical competences. However, it is debated whether high-level numerical skills and mathematics depends specifically on basic numerical representations. In this study mathematicians and nonmathematicians performed a basic…

  4. Two-parameter partially correlated ground-state electron density of some light spherical atoms from Hartree-Fock theory with nonintegral nuclear charge

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cordero, Nicolas A.; March, Norman H.; Alonso, Julio A.

    2007-05-15

    Partially correlated ground-state electron densities for some spherical light atoms are calculated, for which nonrelativistic ionization potentials are essential input data. The nuclear cusp condition of Kato is satisfied precisely. The basic theoretical starting point, however, is Hartree-Fock (HF) theory for the N electrons under consideration, but with a nonintegral nuclear charge Z{sup '} slightly different from the atomic number Z (=N). This HF density is scaled with a parameter {lambda}, close to unity, to preserve normalization. Finally, some tests are performed on the densities for the atoms Ne and Ar, as well as for Be and Mg.

  5. Surface plasmon resonances in liquid metal nanoparticles

    NASA Astrophysics Data System (ADS)

    Ershov, A. E.; Gerasimov, V. S.; Gavrilyuk, A. P.; Karpov, S. V.

    2017-06-01

    We have shown significant suppression of resonant properties of metallic nanoparticles at the surface plasmon frequency during the phase transition "solid-liquid" in the basic materials of nanoplasmonics (Ag, Au). Using experimental values of the optical constants of liquid and solid metals, we have calculated nanoparticle plasmonic absorption spectra. The effect was demonstrated for single particles, dimers and trimers, as well as for the large multiparticle colloidal aggregates. Experimental verification was performed for single Au nanoparticles heated to the melting temperature and above up to full suppression of the surface plasmon resonance. It is emphasized that this effect may underlie the nonlinear optical response of composite materials containing plasmonic nanoparticles and their aggregates.

  6. A Gas Lubricant Combined Support-sealing Node

    NASA Astrophysics Data System (ADS)

    Falaleev, S. V.; Nadjari, H.; Vinogradov, A. S.

    2018-01-01

    The purpose of the research presented in this article is to develop a gas-dynamic device capable of performing the functions of a support seal, an unloading device for axial thrust bearings, and a damper of axial vibrations of the rotor. Several kinds of seals applied in the supports of aircraft engines are known; a face gas-dynamic seal is one of the most effective and standard technological solutions for compressors. A face gas-dynamic seal with spiral grooves is considered as the basic element of the developed device. The article also presents the fundamental mathematical calculation for such devices and the experimental research outcomes, on the basis of which such devices can be produced and adapted for use.

  7. The method of the gas-dynamic centrifugal compressor stage characteristics recalculation for variable rotor rotational speeds and the rotation angle of inlet guide vanes blades if the kinematic and dynamic similitude conditions are not met

    NASA Astrophysics Data System (ADS)

    Vanyashov, A. D.; Karabanova, V. V.

    2017-08-01

    A mathematical description of the method for obtaining gas-dynamic characteristics of a centrifugal compressor stage is proposed, taking into account the control action by varying the rotor speed and the angle of rotation of the guide vanes relative to the "basic" characteristic, if the kinematic and dynamic similitude conditions are not met. The formulas of the correction terms for the non-dimensional coefficients of specific work, consumption and efficiency are obtained. A comparative analysis of the calculated gas-dynamic characteristics of a high-pressure centrifugal stage with experimental data is performed.

  8. Measuring the RC time constant with Arduino

    NASA Astrophysics Data System (ADS)

    Pereira, N. S. A.

    2016-11-01

    In this work we use the Arduino UNO R3 open-source hardware platform to assemble an experimental apparatus for measuring the time constant of an RC circuit. With adequate programming, the Arduino serves as a signal generator, a data-acquisition system, and a basic signal-visualisation tool. Theoretical calculations are compared with direct observations from an analogue oscilloscope. Data processing and curve fitting are performed on a spreadsheet. The results obtained for the six RC test circuits are within the expected interval of values defined by the tolerance of the components. The hardware and software prove adequate for the proposed measurements and are therefore adaptable to a laboratory teaching and learning context.
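    The spreadsheet curve fit described above can be sketched in a few lines: for a discharging capacitor, V(t) = V0 * exp(-t/RC), so a log-linear least-squares fit of the samples recovers the time constant from the slope. The component values and sample times below are illustrative assumptions, not the values used in the paper.

    ```python
    import math

    # Nominal components (assumed for illustration): R = 10 kOhm, C = 100 nF.
    R, C = 10e3, 100e-9
    tau_true = R * C          # expected time constant: 1 ms
    V0 = 5.0                  # Arduino supply voltage

    # Simulated samples of the discharge curve (stand-in for Arduino ADC readings).
    ts = [i * 1e-4 for i in range(1, 20)]
    vs = [V0 * math.exp(-t / tau_true) for t in ts]

    # Log-linear least squares: ln V = ln V0 - t/tau, so the slope is -1/tau.
    n = len(ts)
    ys = [math.log(v) for v in vs]
    t_mean = sum(ts) / n
    y_mean = sum(ys) / n
    slope = sum((t - t_mean) * (y - y_mean) for t, y in zip(ts, ys)) / \
            sum((t - t_mean) ** 2 for t in ts)
    tau_fit = -1.0 / slope
    print(f"fitted tau = {tau_fit * 1e3:.3f} ms")  # ~1.000 ms for noise-free data
    ```

    With real ADC data the fit would be applied to noisy samples, and the recovered tau would be compared against the tolerance band of the components, as the paper does.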

  9. [Cost recovery for the treatment of retinal and vitreal diseases by pars plana vitrectomy under the German DRG system].

    PubMed

    Framme, C; Franz, D; Mrosek, S; Helbig, H

    2007-10-01

    Since 2004, inpatient health care in Germany has been paid according to calculated DRGs. Only a few university hospitals participated in the detailed cost calculations of clinical treatment. The aim of this study was to check the cost recovery at a university eye hospital for the surgical treatment of retinal and vitreal diseases by pars plana vitrectomy (ppV), which is included in DRGs C03Z and C17Z. The performance data for both DRGs were collected for the years 2005 and 2006 using the E1 sheets according to section 21 KHEntG. The mean duration of all procedures was obtained from internal controlling data. Costs for single operations were calculated from the fixed and variable costs of the operating theatre and the ward, including costs for personnel and material. Of the 4,721 inpatient procedures in the two-year period, 1,307 were ppVs. Each ppV had fixed surgical costs of 130.60 EUR; personnel costs varied between 575 EUR (C03Z, including cataract surgery; mean operation duration 85 min) and 510 EUR (C17Z, no cataract surgery; mean operation duration 73 min) at an 80/20 ratio of general to local anaesthesia. For a pure ppV, material costs were 255 EUR. Additional adjuncts such as an encircling band, perfluorocarbon, ICG, tPA, gas and silicone oil, or cataract surgery led to extra costs of between 51 EUR and 250 EUR per adjunct and were used in 56% (C03Z) and 74.5% (C17Z) of all procedures. Costs for hospitalisation were about 1765 EUR at a mean length of stay of 6.5 days. Thus, the overall costs of a pure basic ppV amounted to 2975 EUR (C03Z) and 2661 EUR (C17Z). Given the current relative DRG weights of 1.08 and 0.957 and the current base rate of 2787.19 EUR in Bavaria, cost recovery is achieved only for basic ppVs, not for complex ppVs with higher material and personnel costs. Additionally, the costs of multiple surgeries, which occurred in 5.9% of cases, are not compensated by the DRG system. 
The reimbursement for inpatient ppVs in a university environment does not cover complex procedures that require more costly material and more personnel time. To achieve adequate cost recovery for these procedures, a split of both DRGs (C03Z and C17Z) into basic ppVs and complex ppVs is required. We recommend this proposal to the InEK.
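    The abstract's conclusion can be checked with a line of arithmetic: DRG reimbursement is the relative cost weight multiplied by the state base rate. Using only the figures quoted above, reimbursement exceeds the calculated cost of a basic ppV by a narrow margin in both DRGs:

    ```python
    # DRG reimbursement = relative cost weight x base rate (figures from the abstract).
    base_rate = 2787.19  # EUR, Bavarian base rate quoted in the study
    cases = {
        # DRG: (relative weight, calculated cost of a basic ppV in EUR)
        "C03Z": (1.08, 2975.0),
        "C17Z": (0.957, 2661.0),
    }
    for drg, (weight, cost) in cases.items():
        revenue = weight * base_rate
        print(f"{drg}: revenue {revenue:.2f} EUR vs basic-ppV cost {cost:.2f} EUR, "
              f"margin {revenue - cost:+.2f} EUR")
    ```

    Both margins are positive but small (roughly 35 EUR for C03Z and 6 EUR for C17Z), so any complex ppV with adjunct costs of 51 to 250 EUR per adjunct runs at a loss, which is exactly the case for the DRG split.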

  10. On the Basicity of 8-Phenylsulfanyl Quipazine Derivatives: New Potential Serotonergic Agents.

    PubMed

    Pieńko, T; Taciak, P P; Mazurek, A P

    2015-07-09

    The protonation state of serotonergic ligands plays a crucial role in their pharmacological activity. In this study, the basicity of 8-phenylsulfanyl quipazine derivatives, new potential serotonergic agents, was investigated. The most favorable protonation sites were determined in the gas and aqueous phases. In water, a solvation effect promoting protonation of the N3 atom overcomes the positive-charge delocalization that favors N1 protonation. The most stable conformations of the neutral and protonated molecules in gas and water were found, and it was demonstrated that a diprotonation reaction may occur; the most favorable diprotonated structure is the molecule with both the N1 and N3 atoms protonated. The pKa and pKa2 in water of a set of monosubstituted 8-phenylsulfanyl quipazine derivatives were calculated using B3LYP/6-31G(d) and the SMD continuum solvation model. Enthalpic and entropic contributions to the pKa and pKa2 in gas and water were separated to rationalize the substituent effect on their values. The relationship of the proton affinity and the solvation enthalpy in water with reactivity descriptors such as the Fukui function, the molecular electrostatic potential (MEP), and the global softness was investigated. The order of the pKa values is controlled mostly by entropy, and the diprotonation reaction, despite having an unfavorable enthalpy in water, is driven entropically. Final-state effects in the diprotonated species were analyzed with the triadic formula. The calculated basicities of the 8-phenylsulfanyl quipazines indicate that they should be monoprotonated on the N3 atom in the CNS environment; diprotonation may occur in very acidic body fluids such as gastric juice.
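    Thermodynamic-cycle pKa calculations of the kind described above reduce, in the last step, to converting an aqueous deprotonation free energy into a pKa via pKa = dG_aq / (RT ln 10). The free-energy value in the sketch below is invented purely to illustrate the conversion; it is not a result from the paper.

    ```python
    import math

    # BH+ (aq) -> B (aq) + H+ (aq):  pKa = dG_aq / (R * T * ln 10)
    R = 1.987204e-3   # gas constant, kcal/(mol K)
    T = 298.15        # K

    def pka_from_dg(dg_kcal):
        """Convert an aqueous deprotonation free energy (kcal/mol) to a pKa."""
        return dg_kcal / (R * T * math.log(10))

    # Illustrative value only: dG_aq = 10 kcal/mol corresponds to pKa ~ 7.33,
    # i.e. roughly 1.36 kcal/mol per pKa unit at 298 K.
    print(f"pKa = {pka_from_dg(10.0):.2f}")
    ```

    The 1.36 kcal/mol-per-unit conversion factor also makes clear why small substituent-induced enthalpy and entropy changes can reorder pKa values, as the abstract reports.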

  11. LaCoO3 (LCO) - Dramatic changes in Magnetic Moment in fields to 500T

    NASA Astrophysics Data System (ADS)

    Lee, Y.; Harmon, B. N.

    LCO has attracted great attention over the years (>2000 publications) because of its unusual magnetic properties, although it is non-magnetic in its low-temperature ground state. A recent experiment [1] in pulsed fields up to 500T showed a moment of ~1.3μB above 140T; above ~270T the magnetization rises again, reaching ~3.8μB by 500T. We have performed first-principles DFT calculations for LCO in high fields. Our earlier calculations [2] explained the importance of a small rhombohedral distortion in the ground state, which leads to suppression of the 1.3μB moment for fields below ~140T. By allowing fairly large atomic displacements in high fields, moments of ~4μB are predicted. This work was supported by the U.S. Department of Energy (DOE), Office of Science, Basic Energy Sciences, Materials Science and Engineering Division under Contract No. DE-AC02-07CH11358.

  12. Mathematical model of whole-process calculation for bottom-blowing copper smelting

    NASA Astrophysics Data System (ADS)

    Li, Ming-zhou; Zhou, Jie-min; Tong, Chang-ren; Zhang, Wen-hai; Li, He-song

    2017-11-01

    The distribution of materials among smelting products is key to cost accounting and contaminant control, yet it is difficult to determine quickly and accurately by sampling and analysis alone. Mathematical models for the material and heat balance of bottom-blowing smelting, converting, anode-furnace refining, and electrolytic refining were established based on the principles of material (element) conservation, energy conservation, and control-index constraints in copper bottom-blowing smelting. A simulation of the entire bottom-blowing copper smelting process was built on the self-developed MetCal software platform, and a whole-process simulation for an enterprise in China was then conducted. The results indicate that the quantity and composition of unknown materials, as well as heat-balance information, can be calculated quickly with the model. Comparison with production data revealed that the model can broadly reproduce the distribution of materials in bottom-blowing copper smelting. This finding provides theoretical guidance for mastering the performance of the entire process.
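    The element-conservation principle the model is built on can be illustrated with a deliberately tiny example: for each element, the mass entering in the feed must equal the sum of the masses leaving in the products. All grades and tonnages below are invented for illustration, and the gas stream is ignored, which is a gross simplification of a real smelting balance.

    ```python
    # Two-product copper balance (illustrative numbers, gas phase ignored):
    #   matte + slag = feed                      (total mass)
    #   matte*g_matte + slag*g_slag = feed*g_feed  (Cu mass)
    feed_t, feed_cu = 100.0, 0.25      # t of feed, Cu mass fraction (assumed)
    matte_cu, slag_cu = 0.70, 0.02     # Cu grades of matte and slag (assumed)

    # Solving the two linear equations for the slag tonnage:
    slag = feed_t * (matte_cu - feed_cu) / (matte_cu - slag_cu)
    matte = feed_t - slag
    print(f"matte {matte:.1f} t, slag {slag:.1f} t")  # matte 33.8 t, slag 66.2 t
    ```

    The paper's whole-process model chains many such balances (element by element, unit by unit) together with heat balances and control-index constraints, but each individual balance has this linear structure.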

  13. Practical Calculation of Second-order Supersonic Flow past Nonlifting Bodies of Revolution

    NASA Technical Reports Server (NTRS)

    Van Dyke, Milton D

    1952-01-01

    Calculation of second-order supersonic flow past bodies of revolution at zero angle of attack is described in detail, and reduced to routine computation. Use of an approximate tangency condition is shown to increase the accuracy for bodies with corners. Tables of basic functions and standard computing forms are presented. The procedure is summarized so that one can apply it without necessarily understanding the details of the theory. A sample calculation is given, and several examples are compared with solutions calculated by the method of characteristics.

  14. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Roy, Santanu; Dang, Liem X.

    In this paper, we present the first computer simulation of methanol exchange dynamics between the first and second solvation shells around different cations and anions. After water, methanol is the most frequently used solvent for ions. Methanol has different structural and dynamical properties than water, so its ion solvation process is different. To this end, we performed molecular dynamics simulations using polarizable potential models to describe methanol-methanol and ion-methanol interactions. In particular, we computed methanol exchange rates by employing the transition state theory, the Impey-Madden-McDonald method, the reactive flux approach, and the Grote-Hynes theory. We observed that methanol exchange occurs at a nanosecond time scale for Na+ and at a picosecond time scale for other ions. We also observed a trend in which, for like charges, the exchange rate is slower for smaller ions because they are more strongly bound to methanol. This work was supported by the US Department of Energy, Office of Science, Office of Basic Energy Sciences, Division of Chemical Sciences, Geosciences, and Biosciences. The calculations were carried out using computer resources provided by the Office of Basic Energy Sciences.

  15. Medication calculation and administration workshop and hurdle assessment increases student awareness towards the importance of safe practices to decrease medication errors in the future.

    PubMed

    Wallace, Darlene; Woolley, Torres; Martin, David; Rasalam, Roy; Bellei, Maria

    2016-01-01

    Medication errors are the second most frequently reported hospital incident in Australia and are a global concern. A "Medication Calculation and Administration" workshop followed by a "hurdle" assessment (a compulsory task mandating a minimum level of performance as a condition of passing the course) was introduced into Year 2 of the James Cook University medical curriculum to decrease dosage calculation and administration errors among graduates. This study evaluates the effectiveness of this educational activity as a long-term strategy for teaching medical students essential skills in calculating and administering medications. This longitudinal study used a pre- and post-test design to determine whether medical students retained their calculation and administration skills over a period of 4 years. The ability to apply basic mathematical skills to medication dose calculation and principles of safe administration (Part 1), and the ability to access reference materials to check indications and contraindications and to write the medication order with correct abbreviations (Part 2), were compared between the Year 2 and Year 6 assessments. Scores for Parts 1 and 2 and total scores were nearly identical from Year 2 to Year 6 (P = 0.663, 0.408, and 0.472, respectively), indicating minimal loss of knowledge over this period. Most Year 6 students (86%) were able to recall at least 5 of the "6 Rights of Medication Administration", while 84% reported accessing reference material and 91% reported checking their medical calculations. The "Medication Calculation and Administration" workshop with a combined formative and summative assessment, a "hurdle", promotes long-term retention of essential clinical skills for medical students. These skills, and an awareness of the problem, are strategies to help medical graduates prevent future medication-related adverse events.
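    A typical dose-volume calculation of the kind such workshops assess follows the standard formula: volume to draw up = prescribed dose / stock strength x stock volume. The drug-free numbers below are invented for illustration and do not come from the study.

    ```python
    def volume_to_administer(prescribed_mg, stock_mg, stock_ml):
        """Volume (mL) to draw up: prescribed dose / stock strength x stock volume."""
        return prescribed_mg / stock_mg * stock_ml

    # Illustrative order: 150 mg prescribed, ampoule contains 200 mg in 4 mL.
    vol = volume_to_administer(150, 200, 4)
    print(f"draw up {vol:.1f} mL")  # 3.0 mL
    ```

    The arithmetic itself is elementary; the study's point is that pairing it with safe-administration checks (the "6 Rights", reference look-ups) is what students retain long-term.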

  16. Basic theory for polarized, astrophysical maser radiation in a magnetic field

    NASA Technical Reports Server (NTRS)

    Watson, William D.

    1994-01-01

    Fundamental alterations in the theory and resulting behavior of polarized, astrophysical maser radiation in the presence of a magnetic field have been asserted based on a calculation of instabilities in the radiative transfer. I reconsider the radiative transfer and find that the relevant instabilities do not occur. Calculational errors in the previous investigation are identified. In addition, such instabilities would have appeared -- but did not -- in the numerous numerical solutions to the same radiative transfer equations that have been presented in the literature. As a result, all modifications that have been presented in a recent series of papers (Elitzur 1991, 1993) to the theory for polarized maser radiation in the presence of a magnetic field are invalid. The basic theory is thus clarified.

  17. Experimental investigation of two-phase heat transfer in a porous matrix.

    NASA Technical Reports Server (NTRS)

    Von Reth, R.; Frost, W.

    1972-01-01

    One-dimensional two-phase transpiration cooling through a porous metal is studied experimentally, and the experimental data are compared with a previous one-dimensional analysis. Good agreement with the calculated temperature distribution is obtained as long as the basic assumptions of the analytical model are satisfied; deviations from these assumptions are caused by nonhomogeneous and oscillating flow conditions. A preliminary derivation of nondimensional parameters characterizing the stable and unstable flow conditions is given. Superheated liquid droplets observed sputtering from the heated surface indicate incomplete evaporation at heat fluxes well in excess of the latent energy transport, and a parameter is developed to account for these nonequilibrium thermodynamic effects. Measured and calculated pressure drops show contradicting trends, which are attributed to capillary forces.

  18. Post-Test Analysis of 11% Break at PSB-VVER Experimental Facility using Cathare 2 Code

    NASA Astrophysics Data System (ADS)

    Sabotinov, Luben; Chevrier, Patrick

    The best-estimate French thermal-hydraulic computer code CATHARE 2 Version 2.5_1 was used for post-test analysis of the experiment "11% upper plenum break", conducted at the large-scale PSB-VVER test facility in Russia. The PSB rig is a 1:300 scale model of a VVER-1000 NPP. A computer model was developed for CATHARE 2 V2.5_1, taking into account all important components of the PSB facility: the reactor model (lower plenum, core, bypass, upper plenum, downcomer), four separate loops, the pressurizer, the horizontal multitube steam generators, and the break section. The secondary side is represented by a recirculation model. A large number of sensitivity calculations were performed regarding break modeling, reactor pressure vessel modeling, counter-current flow modeling, hydraulic losses, and heat losses. The comparison between calculated and experimental results shows good prediction of the basic thermal-hydraulic phenomena and parameters such as pressures, temperatures, void fractions, and loop seal clearance. Both the experimental and calculated results are very sensitive with respect to the fuel cladding temperature, which shows periodic behavior. With the applied CATHARE 1D modeling, the global thermal-hydraulic parameters and the core heat-up were reasonably predicted.

  19. Optics Program Modified for Multithreaded Parallel Computing

    NASA Technical Reports Server (NTRS)

    Lou, John; Bedding, Dave; Basinger, Scott

    2006-01-01

    A powerful high-performance computer program for simulating and analyzing adaptive and controlled optical systems has been developed by modifying the serial version of the Modeling and Analysis for Controlled Optical Systems (MACOS) program to impart capabilities for multithreaded parallel processing on computing systems ranging from supercomputers down to Symmetric Multiprocessing (SMP) personal computers. The modifications included the incorporation of OpenMP, a portable and widely supported application programming interface that can be used to explicitly add multithreaded parallelism to a program under a shared-memory programming model. OpenMP was applied to parallelize ray-tracing calculations, one of the major computing components of MACOS. Multithreading is also used in the diffraction propagation of light in MACOS, based on pthreads [POSIX Threads, where "POSIX" signifies the portable operating system interface for UNIX]. In tests of the parallelized version of MACOS, the speedup in ray-tracing calculations was found to be linear, i.e., proportional to the number of processors, while the speedup in diffraction calculations ranged from 50 to 60 percent, depending on the type and number of processors. The parallelized version of MACOS is portable, and, to the user, its interface is essentially the same as that of the original serial version.

  20. Aerothermodynamics of expert ballistic vehicle at hypersonic speeds

    NASA Astrophysics Data System (ADS)

    Kharitonov, A. M.; Adamov, N. P.; Chirkashenko, V. F.; Mazhul, I. I.; Shpak, S. I.; Shiplyuk, A. N.; Vasenyov, L. G.; Zvegintsev, V. I.; Muylaert, J. M.

    2012-01-01

    The European EXPErimental Re-entry Test bed (EXPERT) vehicle is intended for studying various basic phenomena, such as boundary-layer transition on blunted bodies, real-gas effects during shock-wave/boundary-layer interaction, and the effect of surface catalycity. Another task is to develop methods for recalculating wind-tunnel results to flight conditions. The EXPERT program involves large-scale preflight research, in particular calculations with advanced numerical methods, experimental studies of the models in various wind tunnels, and comparative analysis of the data obtained for possible extrapolation to in-flight conditions. The experimental studies are performed in various aerodynamic centers of Europe and Russia under contracts with ESA-ESTEC; in particular, extensive experiments are performed at the Von Karman Institute for Fluid Dynamics (VKI, Belgium) and at the DLR aerospace center in Germany. At ITAM SB RAS, experimental studies of the EXPERT model characteristics were performed under ISTC Projects 2109, 3151, and 3550 in the T-313 supersonic wind tunnel and the AT-303 hypersonic wind tunnel.

  1. Performance analysis for IEEE 802.11 distributed coordination function in radio-over-fiber-based distributed antenna systems.

    PubMed

    Fan, Yuting; Li, Jianqiang; Xu, Kun; Chen, Hao; Lu, Xun; Dai, Yitang; Yin, Feifei; Ji, Yuefeng; Lin, Jintong

    2013-09-09

    In this paper, we analyze the performance of the IEEE 802.11 distributed coordination function in simulcast radio-over-fiber-based distributed antenna systems (RoF-DASs), in which multiple remote antenna units (RAUs) are connected to one wireless local-area network (WLAN) access point (AP) by fiber links of different lengths. We present an analytical model to evaluate the throughput of such systems in the presence of both the inter-RAU hidden-node problem and the fiber-length-difference effect. In the model, the unequal delays induced by the different fiber lengths are accounted for both in the backoff stage and in the calculation of Ts and Tc, the periods of time during which the channel is sensed busy because of a successful transmission or a collision, respectively. The throughput performance of the WLAN RoF-DAS in both basic-access and request-to-send/clear-to-send (RTS/CTS) modes is evaluated with the derived model.
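    The role Ts and Tc play can be seen in the classic single-cell Bianchi saturation-throughput expression, which models of this family extend. The sketch below implements only that textbook baseline: it does not model the paper's contributions (fiber-propagation delay in the backoff and in Ts/Tc, or inter-RAU hidden nodes), and all timing parameters and the transmission probability are illustrative assumptions.

    ```python
    # Textbook Bianchi saturation throughput for 802.11 DCF basic access.
    # 802.11b-style timings and a lumped header/ACK airtime, all assumed values.
    slot, sifs, difs = 20e-6, 10e-6, 50e-6   # seconds
    payload_bits = 8184                       # payload size E[P] in bits
    rate = 11e6                               # PHY rate, bit/s
    header_t = 0.5e-3                         # PHY+MAC header + ACK airtime, lumped

    def throughput(n, tau):
        """Saturation throughput (bit/s) for n stations, each transmitting
        in a generic slot with probability tau."""
        p_tr = 1 - (1 - tau) ** n                    # at least one station transmits
        p_s = n * tau * (1 - tau) ** (n - 1) / p_tr  # ...and it succeeds
        ts = header_t + payload_bits / rate + sifs + difs  # busy time, success (Ts)
        tc = header_t + payload_bits / rate + difs         # busy time, collision (Tc)
        e_slot = (1 - p_tr) * slot + p_tr * p_s * ts + p_tr * (1 - p_s) * tc
        return p_tr * p_s * payload_bits / e_slot

    print(f"{throughput(10, 0.03) / 1e6:.2f} Mbit/s")
    ```

    In the RoF-DAS setting, the extra round-trip fiber delay effectively stretches Ts and Tc (and perturbs the backoff), which is why the paper recomputes both rather than reusing the single-cell values.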

  2. Optimality of the basic colour categories for classification

    PubMed Central

    Griffin, Lewis D

    2005-01-01

    Categorization of colour has been widely studied as a window into human language and cognition, and quite separately has been used pragmatically in image-database retrieval systems. This suggests the hypothesis that the best category system for pragmatic purposes coincides with human categories (i.e. the basic colours). We have tested this hypothesis by assessing the performance of different category systems in a machine-vision task. The task was the identification of the odd-one-out from triples of images obtained using a web-based image-search service. In each triple, two of the images had been retrieved using the same search term, the other a different term. The terms were simple concrete nouns. The results were as follows: (i) the odd-one-out task can be performed better than chance using colour alone; (ii) basic colour categorization performs better than random systems of categories; (iii) a category system that performs better than the basic colours could not be found; and (iv) it is not just the general layout of the basic colours that is important, but also the detail. We conclude that (i) the results support the plausibility of an explanation for the basic colours as a result of a pressure-to-optimality and (ii) the basic colours are good categories for machine vision image-retrieval systems. PMID:16849219

  3. BDEN: A timesaving computer program for calculating soil bulk density and water content.

    Treesearch

    Lynn G. Starr; Michael J. Geist

    1983-01-01

    This paper presents an interactive computer program, written in the BASIC language, that calculates soil bulk density and moisture percentage by weight and by volume. Coarse-fragment weights are required. The program will also summarize the resulting data, giving the mean, standard deviation, and 95-percent confidence interval for one or more groupings of data.
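    A core-sample calculation of the kind this program automates can be sketched from the quantities the description lists (field-moist weight, oven-dry weight, coarse-fragment weight, core volume). The variable names, sample numbers, and the coarse-fragment correction using an assumed rock density of 2.65 g/cm^3 are illustrative assumptions, not the program's actual algorithm.

    ```python
    def soil_bulk_density(wet_g, dry_g, coarse_g, core_cm3, rock_density=2.65):
        """Fine-earth bulk density (g/cm^3) and moisture by weight and volume (%)."""
        fine_dry = dry_g - coarse_g                     # oven-dry fine-earth mass
        fine_vol = core_cm3 - coarse_g / rock_density   # volume left for fines
        bulk_density = fine_dry / fine_vol
        water_g = wet_g - dry_g
        moisture_wt = 100 * water_g / fine_dry          # % by weight
        moisture_vol = 100 * water_g / fine_vol         # % by volume (1 g water ~ 1 cm^3)
        return bulk_density, moisture_wt, moisture_vol

    # Invented sample: 180 g moist, 150 g oven-dry, 10 g coarse fragments, 120 cm^3 core.
    bd, mw, mv = soil_bulk_density(180.0, 150.0, 10.0, 120.0)
    print(f"bulk density {bd:.2f} g/cm3, moisture {mw:.1f}% (wt), {mv:.1f}% (vol)")
    ```

    Summarizing such results over groups (mean, standard deviation, 95% confidence interval) is then routine descriptive statistics, which is what the program's summary step provides.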

  4. Commentary: Decaying Numerical Skills. "I Can't Divide by 60 in My Head!"

    ERIC Educational Resources Information Center

    Parslow, Graham R.

    2010-01-01

    As an undergraduate in the 1960s, the author mostly used a slide rule for calculations and a Marchant-brand motor-operated mechanical calculator for statistics. This was after an elementary education replete with learning multiplication tables and taking speed and accuracy tests in arithmetic. Times have changed and assuming even basic calculation…

  5. LEGO-Method--New Strategy for Chemistry Calculation

    ERIC Educational Resources Information Center

    Molnar, Jozsef; Molnar-Hamvas, Livia

    2011-01-01

    The presented strategy of chemistry calculation is based on mole-concept, but it uses only one fundamental relationship of the amounts of substance as a basic panel. The name of LEGO-method comes from the famous toy of LEGO[R] because solving equations by grouping formulas is similar to that. The relations of mole and the molar amounts, as small…

  6. 30 CFR 250.528 - What must I include in my casing pressure request?

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... calculated MAWOPs; (h) All casing/riser pre-bleed down pressures; (i) Shut-in tubing pressure; (j) Flowing tubing pressure; (k) Date and the calculated daily production rate during last well test (oil, gas, basic...); (m) Well type (dry tree, hybrid, or subsea); (n) Date of diagnostic test; (o) Well schematic; (p...

  7. 30 CFR 250.527 - What must I include in my casing pressure request?

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... calculated MAWOPs; (h) All casing/riser pre-bleed down pressures; (i) Shut-in tubing pressure; (j) Flowing tubing pressure; (k) Date and the calculated daily production rate during last well test (oil, gas, basic...); (m) Well type (dry tree, hybrid, or subsea); (n) Date of diagnostic test; (o) Well schematic; (p...

  8. 30 CFR 250.528 - What must I include in my casing pressure request?

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... calculated MAWOPs; (h) All casing/riser pre-bleed down pressures; (i) Shut-in tubing pressure; (j) Flowing tubing pressure; (k) Date and the calculated daily production rate during last well test (oil, gas, basic...); (m) Well type (dry tree, hybrid, or subsea); (n) Date of diagnostic test; (o) Well schematic; (p...

  9. The Effect of Home Related Science Activities on Students' Performance in Basic Science

    ERIC Educational Resources Information Center

    Obomanu, B. J.; Akporehwe, J. N.

    2012-01-01

    Our study investigated the effect of utilizing home-related science activities on students' performance in some basic science concepts. The concepts considered were heat energy, ecology, and mixtures. The sample consisted of two hundred and forty (240) basic junior secondary two (BJSS11) students drawn from a population of five thousand and…

  10. 40 CFR 51.352 - Basic I/M performance standard.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ...% emission test failure rate among pre-1981 model year vehicles. (10) Waiver rate. A 0% waiver rate. (11... 20% emission test failure rate among pre-1981 model year vehicles. (11) Waiver rate. A 0% waiver rate... Requirements § 51.352 Basic I/M performance standard. (a) Basic I/M programs shall be designed and implemented...

  11. 40 CFR 51.352 - Basic I/M performance standard.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ...% emission test failure rate among pre-1981 model year vehicles. (10) Waiver rate. A 0% waiver rate. (11... 20% emission test failure rate among pre-1981 model year vehicles. (11) Waiver rate. A 0% waiver rate... Requirements § 51.352 Basic I/M performance standard. (a) Basic I/M programs shall be designed and implemented...

  12. 40 CFR 51.352 - Basic I/M performance standard.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ...% emission test failure rate among pre-1981 model year vehicles. (10) Waiver rate. A 0% waiver rate. (11... 20% emission test failure rate among pre-1981 model year vehicles. (11) Waiver rate. A 0% waiver rate... Requirements § 51.352 Basic I/M performance standard. (a) Basic I/M programs shall be designed and implemented...

  13. 40 CFR 51.352 - Basic I/M performance standard.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ...% emission test failure rate among pre-1981 model year vehicles. (10) Waiver rate. A 0% waiver rate. (11... 20% emission test failure rate among pre-1981 model year vehicles. (11) Waiver rate. A 0% waiver rate... Requirements § 51.352 Basic I/M performance standard. (a) Basic I/M programs shall be designed and implemented...

  14. 40 CFR 51.352 - Basic I/M performance standard.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ...% emission test failure rate among pre-1981 model year vehicles. (10) Waiver rate. A 0% waiver rate. (11... 20% emission test failure rate among pre-1981 model year vehicles. (11) Waiver rate. A 0% waiver rate... Requirements § 51.352 Basic I/M performance standard. (a) Basic I/M programs shall be designed and implemented...

  15. Student Listening Gains in the Basic Communication Course: A Comparison of Self-Report and Performance-Based Competence Measures

    ERIC Educational Resources Information Center

    Johnson, Danette Ifert; Long, Kathleen M.

    2007-01-01

    Direct listening instruction is a frequent component of basic communication courses. Research has found changes in self-perceived listening competence during a basic communication course and only a minimal relationship between self-perceived and performance-based measures of listening and other communication behaviors. Results of the present study…

  16. A simple and fast physics-based analytical method to calculate therapeutic and stray doses from external beam, megavoltage x-ray therapy

    PubMed Central

    Wilson, Lydia J; Newhauser, Wayne D

    2015-01-01

    State-of-the-art radiotherapy treatment planning systems provide reliable estimates of the therapeutic radiation but are known to underestimate or neglect the stray radiation exposures. Most commonly, stray radiation exposures are reconstructed using empirical formulas or lookup tables. The purpose of this study was to develop the basic physics of a model capable of calculating the total absorbed dose both inside and outside of the therapeutic radiation beam for external beam photon therapy. The model was developed using measurements of total absorbed dose in a water-box phantom from a 6 MV medical linear accelerator to calculate dose profiles in both the in-plane and cross-plane direction for a variety of square field sizes and depths in water. The water-box phantom facilitated development of the basic physical aspects of the model. RMS discrepancies between measured and calculated total absorbed dose values in water were less than 9.3% for all fields studied. Computation times for 10 million dose points within a homogeneous phantom were approximately 4 minutes. These results suggest that the basic physics of the model are sufficiently simple, fast, and accurate to serve as a foundation for a variety of clinical and research applications, some of which may require that the model be extended or simplified based on the needs of the user. A potentially important advantage of a physics-based approach is that the model is more readily adaptable to a wide variety of treatment units and treatment techniques than with empirical models. PMID:26040833

  17. A simple and fast physics-based analytical method to calculate therapeutic and stray doses from external beam, megavoltage x-ray therapy.

    PubMed

    Jagetic, Lydia J; Newhauser, Wayne D

    2015-06-21

    State-of-the-art radiotherapy treatment planning systems provide reliable estimates of the therapeutic radiation but are known to underestimate or neglect the stray radiation exposures. Most commonly, stray radiation exposures are reconstructed using empirical formulas or lookup tables. The purpose of this study was to develop the basic physics of a model capable of calculating the total absorbed dose both inside and outside of the therapeutic radiation beam for external beam photon therapy. The model was developed using measurements of total absorbed dose in a water-box phantom from a 6 MV medical linear accelerator to calculate dose profiles in both the in-plane and cross-plane direction for a variety of square field sizes and depths in water. The water-box phantom facilitated development of the basic physical aspects of the model. RMS discrepancies between measured and calculated total absorbed dose values in water were less than 9.3% for all fields studied. Computation times for 10 million dose points within a homogeneous phantom were approximately 4 min. These results suggest that the basic physics of the model are sufficiently simple, fast, and accurate to serve as a foundation for a variety of clinical and research applications, some of which may require that the model be extended or simplified based on the needs of the user. A potentially important advantage of a physics-based approach is that the model is more readily adaptable to a wide variety of treatment units and treatment techniques than with empirical models.

  18. Interpretation of the results of statistical measurements. [search for basic probability model

    NASA Technical Reports Server (NTRS)

    Olshevskiy, V. V.

    1973-01-01

    For random processes, the calculated probability characteristic and the measured statistical estimate are used in a quality functional that defines the difference between the two functions. Based on the assumption that the statistical measurement procedure is organized so that the parameters of a selected model are optimized, it is shown that the interpretation of experimental research amounts to a search for a basic probability model.

  19. Congenital heart surgery: surgical performance according to the Aristotle complexity score.

    PubMed

    Arenz, Claudia; Asfour, Boulos; Hraska, Viktor; Photiadis, Joachim; Haun, Christoph; Schindler, Ehrenfried; Sinzobahamvya, Nicodème

    2011-04-01

    Aristotle score methodology defines surgical performance as 'complexity score times hospital survival'. We analysed how this performance evolved over time and in correlation with case volume. Aristotle basic and comprehensive complexity scores and corresponding basic and comprehensive surgical performances were determined for primary (main) procedures carried out from 2006 to 2009. Surgical case volume performance described as unit performance was estimated as 'surgical performance times the number of primary procedures'. Basic and comprehensive complexity scores for the whole cohort of procedures (n=1828) were 7.74±2.66 and 9.89±3.91, respectively. With an early survival of 97.5% (1783/1828), mean basic and comprehensive surgical performances reached 7.54±2.54 and 9.64±3.81, respectively. Basic surgical performance varied little over the years: 7.46±2.48 in 2006, 7.43±2.58 in 2007, 7.50±2.76 in 2008 and 7.79±2.54 in 2009. Comprehensive surgical performance decreased from 9.56±3.91 (2006) to 9.22±3.94 (2007), and then to 9.13±3.77 (2008), thereafter increasing up to 10.62±3.67 (2009). No significant change of performance was observed for low comprehensive complexity levels 1-3. Variation concerned level 4 (p=0.048) which involved the majority of procedures (746, or 41% of cases) and level 6 (p<0.0001) which included a few cases (20, or 1%), whereas for level 5, statistical significance was almost attained: p=0.079. With a mean annual number of procedures of 457, mean basic and comprehensive unit performance was estimated at 3447±362 and 4405±577, respectively. Basic unit performance increased year to year from 3036 (2006, 100%) to 3254 (2007, 107.2%), then 3720 (2008, 122.5%), up to 3793 (2009, 124.9%). Comprehensive unit performance also increased: from 3891 (2006, 100%) to 4038 (2007, 103.8%), 4528 (2008, 116.4%) and 5172 (2009, 132.9%). 
Aristotle scoring of surgical performance allows quality assessment of surgical management of congenital heart disease over time. The newly defined unit performance appears to well reflect the trend of activity and efficiency of a congenital heart surgery department. Copyright © 2010 European Association for Cardio-Thoracic Surgery. Published by Elsevier B.V. All rights reserved.
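    The score arithmetic described above is simple enough to check directly. A minimal sketch, using the cohort figures quoted in the abstract:

```python
def surgical_performance(complexity, survival):
    """Aristotle surgical performance = complexity score * hospital survival."""
    return complexity * survival

def unit_performance(performance, n_procedures):
    """Unit performance = surgical performance * number of primary procedures."""
    return performance * n_procedures

# Reproduces the cohort figures quoted above: a basic complexity of 7.74 and
# early survival of 1783/1828 (97.5%) give a basic performance near 7.54.
basic_perf = surgical_performance(7.74, 1783 / 1828)
annual_unit = unit_performance(basic_perf, 457)  # same order as the ~3447 reported
```

    Multiplying the mean per-procedure performance by the mean annual case volume (457) recovers the order of the reported mean basic unit performance.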

  20. Effect of n-octanol in the mobile phase on lipophilicity determination by reversed-phase high-performance liquid chromatography on a modified silica column.

    PubMed

    Benhaim, Deborah; Grushka, Eli

    2008-10-31

    In this study, we show that the addition of n-octanol to the mobile phase improves the chromatographic determination of lipophilicity parameters of xenobiotics (neutral solutes, acidic, neutral and basic drugs) on a Phenomenex Gemini C18 column. The Gemini C18 column is a new generation hybrid silica-based column with an extended pH range capability. The wide pH range (2-12) afforded the examination of basic drugs and acidic drugs in their neutral form. Extrapolated retention factor values, [Formula: see text] , obtained on the above column with the n-octanol-modified mobile phase were very well correlated (1:1 correlation) with literature values of logP (logarithm of the partition coefficient in n-octanol/water) of neutral compounds and neutral drugs (69). In addition, we found good linear correlations between measured [Formula: see text] values and calculated values of the logarithm of the distribution coefficient at pH 7.0 (logD(7.0)) for ionized acidic and basic drugs (r(2)=0.95). The Gemini C18 phase was characterized using the linear solvation energy relationship (LSER) model of Abraham. The LSER system constants for the column were compared to the LSER constants of n-octanol/water extraction system using the Tanaka radar plots. The comparison shows that the two methods are nearly equivalent.

  1. Standards of nutrition for athletes in Germany.

    PubMed

    Diel, F; Khanferyan, R A

    2013-01-01

    The Deutscher Olympischer Sportbund (DOSB) recently founded an advisory board for German elite athlete nutrition, the 'Arbeitsgruppe (AG) Ernährungsberatung an den Olympiastützpunkten'. The 'Performance codex and quality criteria for the food supply in facilities of German elite sports' have been established since 1997. The biochemical equivalent (ATP) of the energy demand is calculated using the DLW (Doubly Labeled Water) method on the basis of RMR (Resting Metabolic Rate) and BMR (Basal Metabolic Rate) at sport-type-specific exercises and performances. Certain nutraceutical ingredients for dietary supplements can be recommended. However, quality criteria for nutrition, cooking and food supply are defined on the basis of Health Food and the individual physiological/social-psychological status of the athlete. In particular, food supplements and instant food have to be avoided for young athletes. The German advisory board for elite athlete nutrition publishes 'colour lists' for highly recommended (green), acceptable (yellow), and less recommended (red) foodstuffs.

  2. Development of improved amorphous materials for laser systems

    NASA Technical Reports Server (NTRS)

    Neilson, G. F.; Weinberg, M. C.

    1974-01-01

    Crystallization calculations were performed in order to determine the possibility of forming a particular type of laser glass with the avoidance of devitrification in an outer space laboratory. It was demonstrated that under the homogeneous nucleating conditions obtainable in a zero gravity laboratory this laser glass may be easily quenched to a virtually crystal-free product. Experimental evidence is provided that use of this material as a host in a neodymium glass laser would result in more than a 10 percent increase in efficiency when compared to laser glass rods of a similar composition currently commercially available. Differential thermal analysis, thermal gradient oven, X-ray diffraction, and liquidus determination experiments were carried out to determine the basics of the crystallization behavior of the glass, and small-angle X-ray scattering and splat-cooling experiments were performed in order to provide additional evidence for the feasibility of producing this laser glass material, crystal free, in an outer space environment.

  3. Network-based simulation of aircraft at gates in airport terminals

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cheng, Y.

    1998-03-01

    Simulation is becoming an essential tool for planning, design, and management of airport facilities. A simulation of aircraft at gates can be applied to various periodically performed applications relating to the dynamic behavior of aircraft at gates in airport terminals, for analyses, evaluations, and decision support. Conventionally, such simulations are implemented using an event-driven method. For a more efficient simulation, this paper proposes a network-based method. The basic idea is to transform all the sequence constraint relations of aircraft at gates into a network. The simulation is done by calculating the longest path to all the nodes in the network. The effectiveness of the proposed method has been examined by experiments, and its superiority over the event-driven method is revealed through comprehensive comparisons of overall simulation performance.
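    The longest-path computation at the heart of such a method can be sketched for a directed acyclic constraint network. The node and weight conventions below are illustrative; the paper's actual network encoding is not reproduced here:

```python
from collections import defaultdict, deque

def longest_paths(n_nodes, edges):
    """Longest path from the sources to every node of a DAG (Kahn's order).

    edges are (u, v, w): the event at node v occurs at least w time units
    after the event at node u. dist[v] is then the simulated event time.
    """
    adj = defaultdict(list)
    indeg = [0] * n_nodes
    for u, v, w in edges:
        adj[u].append((v, w))
        indeg[v] += 1
    dist = [0] * n_nodes
    queue = deque(i for i in range(n_nodes) if indeg[i] == 0)
    while queue:
        u = queue.popleft()
        for v, w in adj[u]:
            dist[v] = max(dist[v], dist[u] + w)  # longest-path relaxation
            indeg[v] -= 1
            if indeg[v] == 0:
                queue.append(v)
    return dist

# Event times follow from the longest (critical) path to each node.
times = longest_paths(4, [(0, 1, 2), (0, 2, 4), (1, 3, 3), (2, 3, 1)])
```

    Because the constraint network is acyclic, a single topological sweep computes all event times, which is what makes the approach faster than repeated event-driven scheduling.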

  4. Mathematical Modeling – The Impact of Cooling Water Temperature Upsurge on Combined Cycle Power Plant Performance and Operation

    NASA Astrophysics Data System (ADS)

    Indra Siswantara, Ahmad; Pujowidodo, Hariyotejo; Darius, Asyari; Ramdlan Gunadi, Gun Gun

    2018-03-01

    This paper presents a mathematical modeling analysis of the cooling system in a combined cycle power plant. The objective of this study is to quantify the impact of a cooling water temperature upsurge on plant performance and operation, using the Engineering Equation Solver (EES™) tool. The installed capacity of the plant is 505.95 MWe for block #1 and 720.8 MWe for block #2, with sea water used as the cooling medium at the two condensers. The basic principle of the analysis is a heat balance calculation for the steam turbine and condenser, with attention to the vacuum condition and heat rate values. The results, presented graphically, show that an upsurge of the cooling water temperature increases the plant heat rate and the condenser pressure, which decreases plant efficiency and raises the possibility of a steam turbine trip as the back pressure from the condenser rises.
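    The condenser heat balance underlying this kind of analysis can be sketched directly. All numbers and the fixed "approach" (terminal temperature difference) below are illustrative assumptions, not data from the plant in the study:

```python
CP_WATER = 4.186e3   # J/(kg*K), specific heat of cooling water

def condenser_balance(m_steam, h_latent, m_cw, t_cw_in, approach=3.0):
    """Condenser heat balance: Q = m_steam * h_latent = m_cw * cp * dT.

    Returns (duty, cooling-water outlet temperature, condensing temperature);
    the condensing temperature is taken as the outlet temperature plus a
    fixed terminal temperature difference ("approach").
    """
    q = m_steam * h_latent                      # condenser duty, W
    t_cw_out = t_cw_in + q / (m_cw * CP_WATER)  # cooling-water outlet, degC
    return q, t_cw_out, t_cw_out + approach

# A warmer sea-water intake shifts the condensing temperature (and thus the
# condenser back pressure and plant heat rate) upward.
q, t_out, t_cond = condenser_balance(m_steam=150.0, h_latent=2.3e6,
                                     m_cw=12000.0, t_cw_in=28.0)
```

    Since the condenser pressure is the saturation pressure at the condensing temperature, any rise in cooling water inlet temperature propagates directly into turbine back pressure, which is the mechanism the abstract describes.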

  5. A rapid method for assessing the environmental performance of commercial farms in the Pampas of Argentina.

    PubMed

    Viglizzo, E F; Frank, F; Bernardos, J; Buschiazzo, D E; Cabo, S

    2006-06-01

    The generation of reliable updated information is critical to support the harmonization of socio-economic and environmental issues in a context of sustainable development. The agro-environmental assessment and management of agricultural systems often relies on indicators that are necessary to make sound decisions. This work aims to provide an approach to (a) assess the environmental performance of commercial farms in the Pampas of Argentina, and (b) propose a methodological framework to calculate environmental indicators that can rapidly be applied to practical farming. 120 commercial farms scattered across the Pampas were analyzed in this study during 2002 and 2003. Eleven basic indicators were identified and calculation methods described. Such indicators were fossil energy (FE) use, FE use efficiency, nitrogen (N) balance, phosphorus (P) balance, N contamination risk, P contamination risk, pesticide contamination risk, soil erosion risk, habitat intervention, changes in soil carbon stock, and balance of greenhouse gases. A model named Agro-Eco-Index was developed on a Microsoft-Excel support to incorporate on-farm collected data and facilitate the calculation of indicators by users. Different procedures were applied to validate the model and present the results to the users. Regression models (based on linear and non-linear models) were used to validate the comparative performance of the study farms across the Pampas. An environmental dashboard was provided to represent in a graphical way the behavior of farms. The method provides a tool to discriminate environmentally friendly farms from those that do not pay enough attention to environmental issues. Our procedure might be useful for implementing an ecological certification system to reward a good environmental behavior in society (e.g., through tax benefits) and generate a commercial advantage (e.g., through the allocation of green labels) for committed farmers.
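    Several of the indicators listed above take an inputs-minus-outputs form. As one illustration (the category names and values below are hypothetical, not the Agro-Eco-Index's own), a nitrogen balance might be sketched as:

```python
def nitrogen_balance(n_inputs_kg_ha, n_outputs_kg_ha):
    """N balance indicator: total N inputs minus N exported in products (kg/ha)."""
    return sum(n_inputs_kg_ha.values()) - sum(n_outputs_kg_ha.values())

# Hypothetical values for one farm-year; a positive surplus flags potential
# N contamination risk, a strongly negative one flags soil nutrient mining.
balance = nitrogen_balance(
    {"fertilizer": 60.0, "biological_fixation": 40.0, "deposition": 5.0},
    {"grain_harvest": 80.0},
)
```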

  6. Affect school and script analysis versus basic body awareness therapy in the treatment of psychological symptoms in patients with diabetes and high HbA1c concentrations: two study protocols for two randomized controlled trials.

    PubMed

    Melin, Eva O; Svensson, Ralph; Gustavsson, Sven-Åke; Winberg, Agneta; Denward-Olah, Ewa; Landin-Olsson, Mona; Thulesius, Hans O

    2016-04-27

    Depression is linked with alexithymia, anxiety, high HbA1c concentrations, disturbances of cortisol secretion, increased prevalence of diabetes complications and all-cause mortality. The psycho-educational method 'affect school with script analysis' and the mind-body therapy 'basic body awareness treatment' will be trialled in patients with diabetes, high HbA1c concentrations and psychological symptoms. The primary outcome measure is change in symptoms of depression. Secondary outcome measures are changes in HbA1c concentrations, midnight salivary cortisol concentration, symptoms of alexithymia, anxiety, self-image measures, use of antidepressants, incidence of diabetes complications and mortality. Two studies will be performed. Study I is an open-labeled parallel-group study with a two-arm randomized controlled trial design. Patients are randomized to either affect school with script analysis or to basic body awareness treatment. According to power calculations, 64 persons are required in each intervention arm at the last follow-up session. Patients with type 1 or type 2 diabetes were recruited from one hospital diabetes outpatient clinic in 2009. The trial will be completed in 2016. Study II is a multicentre open-labeled parallel-group three-arm randomized controlled trial. Patients will be randomized to affect school with script analysis, to basic body awareness treatment, or to treatment as usual. Power calculations show that 70 persons are required in each arm at the last follow-up session. Patients with type 2 diabetes will be recruited from primary care. This study will start in 2016 and finish in 2023. For both studies, the inclusion criteria are: HbA1c concentration ≥62.5 mmol/mol; depression, alexithymia, anxiety or a negative self-image; age 18-59 years; and diabetes duration ≥1 year. The exclusion criteria are pregnancy, severe comorbidities, cognitive deficiencies or inadequate Swedish. 
Depression, anxiety, alexithymia and self-image are assessed using self-report instruments. HbA1c concentration, midnight salivary cortisol concentration, blood pressure, serum lipid concentrations and anthropometrics are measured. Data are collected from computerized medical records and the Swedish national diabetes and causes of death registers. Whether the "affect school with script analysis" will reduce psychological symptoms, increase emotional awareness and improve diabetes-related factors will be tested, and the results will be compared with "basic body awareness treatment" and treatment as usual. ClinicalTrials.gov: NCT01714986.
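    The per-arm sample sizes quoted in the protocols come from standard power calculations. A generic normal-approximation sketch (the effect size is an assumption here, not stated in the abstract):

```python
from math import ceil
from statistics import NormalDist

def n_per_arm(effect_size, alpha=0.05, power=0.80):
    """Normal-approximation sample size per arm for a two-sample comparison.

    effect_size is Cohen's d; n = 2 * ((z_{1-alpha/2} + z_{power}) / d)^2.
    A generic sketch -- the protocols' exact assumptions are not stated.
    """
    z = NormalDist().inv_cdf
    return ceil(2 * ((z(1 - alpha / 2) + z(power)) / effect_size) ** 2)

# A medium effect size (d = 0.5) gives 63 per arm, close to the 64 quoted;
# the protocols' own calculations presumably used slightly different inputs.
n = n_per_arm(0.5)
```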

  7. Comparison of Gas Displacement based on Thermometry in the Pulse Tube with Rayleigh Scattering

    NASA Astrophysics Data System (ADS)

    Hagiwara, Yasumasa; Nara, Kenichi; Ito, Seitoku; Saito, Takamoto

    A pulse tube refrigerator has high reliability because of its simple structure. Development activity on pulse tube refrigerators has recently increased, but a quantitative understanding of the refrigeration mechanism has not yet been fully obtained, and various explanations have been proposed. The concept of a virtual gas piston in particular helps us to understand the function of a phase shifter, such as a buffer tank and an orifice, because the virtual gas piston corresponds to the piston of a Stirling refrigerator. However, it is difficult to directly measure the averaged gas displacement that corresponds to the virtual gas piston, because uniform gas flow like a gas piston does not always exist: for example, a jet flow from the orifice and circulating flows in the pulse tube are predicted theoretically. In spite of these phenomena, the averaged gas displacement is very important in practical use because the performance can be simply predicted from the displacement. In this report, we calculate the averaged gas displacement and the mass flow through an orifice. The mass flow is calculated from the pressure change in the buffer tank. The averaged gas displacement is calculated from temperature profiles in the pulse tube and the mass flow. It is necessary to measure the temperature in the pulse tube as widely as possible in order to calculate the averaged gas displacement. We apply a method using Rayleigh scattering for thermometry in the pulse tube. With this method, it is possible to perform two-dimensional measurement without disturbing the gas flow. Using this method, the averaged gas displacements and the temperature profiles of the basic and orifice types of pulse tube refrigerator were compared.
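    The buffer-tank step of the calculation follows from the ideal gas law: with m = P*V/(R*T) at constant volume and temperature, dm/dt = (V/(R*T)) * dP/dt. A minimal sketch, assuming helium as the working gas:

```python
R_HELIUM = 2077.0   # J/(kg*K), specific gas constant of helium (assumed working gas)

def orifice_mass_flow(dp_dt, volume, temperature):
    """Mass flow through the orifice inferred from buffer-tank pressure change.

    From the ideal gas law m = P*V/(R*T) at constant V and T:
    dm/dt = (V / (R * T)) * dP/dt; positive values mean flow into the buffer.
    """
    return volume * dp_dt / (R_HELIUM * temperature)
```

    The averaged gas displacement then follows by combining this mass flow with the measured temperature (hence density) profile along the pulse tube, as described in the abstract.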

  8. S007--Preliminary Evaluation of the Pattern Cutting and the Ligating Loop Virtual Laparoscopic Trainers

    PubMed Central

    Chellali, A.; Ahn, W.; Sankaranarayanan, G.; Flinn, J. T.; Schwaitzberg, S. D.; Jones, D.B.; De, Suvranu; Cao, C.G.L.

    2014-01-01

    Introduction The Fundamentals of Laparoscopic Surgery (FLS) trainer is currently the standard for training and evaluating basic laparoscopic skills. However, its manual scoring system is time-consuming and subjective. The Virtual Basic Laparoscopic Skill Trainer (VBLaST©) is the virtual version of the FLS trainer which allows automatic and real time assessment of skill performance, as well as force feedback. In this study, the VBLaST© pattern cutting (VBLaST-PC©) and ligating loop (VBLaST-LL©) tasks were evaluated as part of a validation study. We hypothesized that performance would be similar on the FLS and VBLaST© trainers, and that subjects with more experience would perform better than those with less experience on both trainers. Methods Fifty-five subjects with varying surgical experience were recruited at the Learning Center during the 2013 SAGES annual meeting and were divided into two groups: experts (PGY 5, surgical fellows and surgical attendings) and novices (PGY 1–4). They were asked to perform the pattern cutting or the ligating loop task on the FLS and the VBLaST© trainers. Their performance scores for each trainer were calculated and compared. Results There were no significant differences between the FLS and VBLaST© scores for either the pattern cutting or the ligating loop task. Experts’ scores were significantly higher than the scores for novices on both trainers. Conclusion This study showed that the subjects’ performance on the VBLaST© trainer was similar to the FLS performance for both tasks. Both the VBLaST-PC© and the VBLaST-LL© tasks permitted discrimination between the novice and expert groups. Though concurrent and discriminant validity has been established, further studies to establish convergent and predictive validity are needed. Once validated as a training system for laparoscopic skills, the system is expected to overcome the current limitations of the FLS trainer. PMID:25159626

  9. Promoter classifier: software package for promoter database analysis.

    PubMed

    Gershenzon, Naum I; Ioshikhes, Ilya P

    2005-01-01

    Promoter Classifier is a package of seven stand-alone Windows-based C++ programs allowing the following basic manipulations with a set of promoter sequences: (i) calculation of positional distributions of nucleotides averaged over all promoters of the dataset; (ii) calculation of the averaged occurrence frequencies of the transcription factor binding sites and their combinations; (iii) division of the dataset into subsets of sequences containing or lacking certain promoter elements or combinations; (iv) extraction of the promoter subsets containing or lacking CpG islands around the transcription start site; and (v) calculation of spatial distributions of the promoter DNA stacking energy and bending stiffness. All programs have a user-friendly interface and provide the results in a convenient graphical form. The Promoter Classifier package is an effective tool for various basic manipulations with eukaryotic promoter sequences that usually are necessary for analysis of large promoter datasets. The program Promoter Divider is described in more detail as a representative component of the package.
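    Manipulation (i), the positional distribution of nucleotides averaged over the dataset, can be sketched as follows. This is an illustration of the computation on equal-length sequences, not the Promoter Classifier code itself:

```python
from collections import Counter

def positional_nucleotide_distribution(promoters):
    """Fraction of A/C/G/T at each position over equal-length promoter sequences."""
    dist = []
    for pos in range(len(promoters[0])):
        counts = Counter(seq[pos] for seq in promoters)
        total = sum(counts.values())
        dist.append({base: counts.get(base, 0) / total for base in "ACGT"})
    return dist

profile = positional_nucleotide_distribution(["ACGT", "AAGT", "ACGA"])
# profile[0] is all 'A'; position 1 splits 2/3 'C', 1/3 'A'.
```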

  10. The influence of hydrogen bonding on partition coefficients

    NASA Astrophysics Data System (ADS)

    Borges, Nádia Melo; Kenny, Peter W.; Montanari, Carlos A.; Prokopczyk, Igor M.; Ribeiro, Jean F. R.; Rocha, Josmar R.; Sartori, Geraldo Rodrigues

    2017-02-01

    This Perspective explores how consideration of hydrogen bonding can be used to both predict and better understand partition coefficients. It is shown how polarity of both compounds and substructures can be estimated from measured alkane/water partition coefficients. When polarity is defined in this manner, hydrogen bond donors are typically less polar than hydrogen bond acceptors. Analysis of alkane/water partition coefficients in conjunction with molecular electrostatic potential calculations suggests that aromatic chloro substituents may be less lipophilic than is generally believed and that some of the effect of chloro-substitution stems from making the aromatic π-cloud less available to hydrogen bond donors. Relationships between polarity and calculated hydrogen bond basicity are derived for aromatic nitrogen and carbonyl oxygen. Aligned hydrogen bond acceptors appear to present special challenges for prediction of alkane/water partition coefficients and this may reflect `frustration' of solvation resulting from overlapping hydration spheres. It is also shown how calculated hydrogen bond basicity can be used to model the effect of aromatic aza-substitution on octanol/water partition coefficients.

  11. Basic and Advanced Numerical Performances Relate to Mathematical Expertise but Are Fully Mediated by Visuospatial Skills

    PubMed Central

    2016-01-01

    Recent studies have highlighted the potential role of basic numerical processing in the acquisition of numerical and mathematical competences. However, it is debated whether high-level numerical skills and mathematics depend specifically on basic numerical representations. In this study mathematicians and nonmathematicians performed a basic number line task, which required mapping positive and negative numbers on a physical horizontal line, and has been shown to correlate with more advanced numerical abilities and mathematical achievement. We found that mathematicians were more accurate than nonmathematicians when mapping positive, but not negative numbers, which are considered numerical primitives and cultural artifacts, respectively. Moreover, performance on positive number mapping could predict whether one is a mathematician or not, and was mediated by more advanced mathematical skills. This finding might suggest a link between basic and advanced mathematical skills. However, when we included visuospatial skills, as measured by the block design subtest, the mediation analysis revealed that the relation between the performance in the number line task and the group membership was explained by non-numerical visuospatial skills. These results demonstrate that the relation between basic, even specific, numerical skills and advanced mathematical achievement can be artifactual and explained by visuospatial processing. PMID:26913930

  12. On the impact of ICRU report 90 recommendations on kQ factors for high-energy photon beams.

    PubMed

    Mainegra-Hing, Ernesto; Muir, Bryan R

    2018-06-03

    To assess the impact of the ICRU report 90 recommendations on the beam-quality conversion factor, kQ, used for clinical reference dosimetry of megavoltage linac photon beams. The absorbed dose to water and the absorbed dose to the air in ionization chambers representative of those typically used for linac photon reference dosimetry are calculated at the reference depth in a water phantom using Monte Carlo simulations. Depth-dose calculations in water are also performed to investigate changes in beam quality specifiers. The calculations are performed in a cobalt-60 beam and MV photon beams with nominal energy between 6 MV and 25 MV using the EGSnrc simulation toolkit. Inputs to the calculations use stopping-power data for graphite and water from the original ICRU-37 report and the new proposed values from the recently published ICRU-90 report. Calculated kQ factors are compared using the two different recommendations for key dosimetry data and measured kQ factors. Less than about 0.1% effects from ICRU-90 recommendations on the beam quality specifiers, the photon component of the percentage depth-dose at 10 cm, %dd(10)x, and the tissue-phantom ratio at 20 cm and 10 cm, TPR20,10, are observed. Although using different recommendations for key dosimetric data impacts water-to-air stopping-power ratios and ion chamber perturbation corrections by up to 0.54% and 0.40%, respectively, we observe little difference (≤0.14%) in calculated kQ factors. This is contradictory to the predictions in ICRU-90 that suggest differences up to 0.5% in high-energy photon beams. A slightly better agreement with experimental values is obtained when using ICRU-90 recommendations. Users of the addendum to the TG-51 protocol for reference dosimetry of high-energy photon beams, which recommends Monte Carlo calculated kQ factors, can rest assured that the recommendations of ICRU report 90 on basic data have little impact on this central dosimetric parameter. 
© Her Majesty the Queen in Right of Canada 2018. Reproduced with the permission of the Minister of Science.

  13. Effect of Graphene with Nanopores on Metal Clusters

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhou, Hu; Chen, Xianlang; Wang, Lei

    Porous graphene, which is a novel type of defective graphene, shows excellent potential as a support material for metal clusters. In this work, the stability and electronic structures of metal clusters (Pd, Ir, Rh) supported on pristine graphene and graphene with different sizes of nanopore were investigated by first-principle density functional theory (DFT) calculations. Thereafter, CO adsorption and oxidation reaction on the Pd-graphene system were chosen to evaluate its catalytic performance. Graphene with nanopore can strongly stabilize the metal clusters and cause a substantial downshift of the d-band center of the metal clusters, thus decreasing CO adsorption. All binding energies, d-band centers, and adsorption energies show a linear change with the size of the nanopore: a bigger size of nanopore corresponds to a stronger metal-cluster bond to the graphene, lower downshift of the d-band center, and weaker CO adsorption. By using a suitable size nanopore, supported Pd clusters on the graphene will have similar CO and O2 adsorption ability, thus leading to superior CO tolerance. The DFT calculated reaction energy barriers show that graphene with nanopore is a superior catalyst for CO oxidation reaction. These properties can play an important role in instructing graphene-supported metal catalyst preparation to prevent the diffusion or agglomeration of metal clusters and enhance catalytic performance. This work was supported by National Basic Research Program of China (973 Program) (2013CB733501), the National Natural Science Foundation of China (NSFC-21176221, 21136001, 21101137, 21306169, and 91334013). D. Mei acknowledges the support from the US Department of Energy, Office of Science, Office of Basic Energy Sciences, Division of Chemical Sciences, Geosciences & Biosciences. Pacific Northwest National Laboratory (PNNL) is a multiprogram national laboratory operated for DOE by Battelle. 
Computing time was granted by the grand challenge of computational catalysis of the William R. Wiley Environmental Molecular Sciences Laboratory (EMSL) and by the National Energy Research Scientific Computing Center (NERSC).

  14. Evaluating variability with atomistic simulations: the effect of potential and calculation methodology on the modeling of lattice and elastic constants

    NASA Astrophysics Data System (ADS)

    Hale, Lucas M.; Trautt, Zachary T.; Becker, Chandler A.

    2018-07-01

    Atomistic simulations using classical interatomic potentials are powerful investigative tools linking atomic structures to dynamic properties and behaviors. It is well known that different interatomic potentials produce different results, thus making it necessary to characterize potentials based on how they predict basic properties. Doing so makes it possible to compare existing interatomic models in order to select those best suited for specific use cases, and to identify any limitations of the models that may lead to unrealistic responses. While the methods for obtaining many of these properties are often thought of as simple calculations, there are many underlying aspects that can lead to variability in the reported property values. For instance, multiple methods may exist for computing the same property and values may be sensitive to certain simulation parameters. Here, we introduce a new high-throughput computational framework that encodes various simulation methodologies as Python calculation scripts. Three distinct methods for evaluating the lattice and elastic constants of bulk crystal structures are implemented and used to evaluate the properties across 120 interatomic potentials, 18 crystal prototypes, and all possible combinations of unique lattice site and elemental model pairings. Analysis of the results reveals which potentials and crystal prototypes are sensitive to the calculation methods and parameters, and it assists with the verification of potentials, methods, and molecular dynamics software. The results, calculation scripts, and computational infrastructure are self-contained and openly available to support researchers in performing meaningful simulations.
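    As an example of one such methodology, the equilibrium lattice constant can be located by scanning the energy versus lattice parameter and interpolating the minimum. The three-point parabola below is a deliberately minimal stand-in for the framework's calculation scripts, which scan many points and refine iteratively:

```python
def parabola_minimum(x, y):
    """Vertex abscissa of the parabola through three (x, y) samples.

    A minimal stand-in for an energy-vs-lattice-parameter fit: sample the
    cohesive energy at three lattice constants near the minimum and
    interpolate. Uses the Lagrange form of the interpolating quadratic.
    """
    (x0, x1, x2), (y0, y1, y2) = x, y
    d0 = y0 / ((x0 - x1) * (x0 - x2))
    d1 = y1 / ((x1 - x0) * (x1 - x2))
    d2 = y2 / ((x2 - x0) * (x2 - x1))
    a = d0 + d1 + d2                          # quadratic coefficient
    b = -(d0 * (x1 + x2) + d1 * (x0 + x2) + d2 * (x0 + x1))
    return -b / (2 * a)

# E(a) = (a - 4.05)^2 sampled at 3.9, 4.0, 4.1 recovers a0 = 4.05.
a0 = parabola_minimum((3.9, 4.0, 4.1), (0.0225, 0.0025, 0.0025))
```

    Sensitivity to the scan range and to the choice of fit (parabola versus an equation-of-state form) is exactly the kind of methodological variability the paper sets out to quantify.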

  15. Quantum Computational Calculations of the Ionization Energies of Acidic and Basic Amino Acids: Aspartate, Glutamate, Arginine, Lysine, and Histidine

    NASA Astrophysics Data System (ADS)

    de Guzman, C. P.; Andrianarijaona, M.; Lee, Y. S.; Andrianarijaona, V.

    An extensive knowledge of the ionization energies of amino acids can provide vital information on protein sequencing, structure, and function. Acidic and basic amino acids are unique because they have three ionizable groups: the C-terminus, the N-terminus, and the side chain. The effects of multiple ionizable groups can be seen in how Aspartate's ionizable side chain heavily influences its preferred conformation (J Phys Chem A. 2011 April 7; 115(13): 2900-2912). Theoretical and experimental data on the ionization energies of many of these molecules are sparse. Considering each atom of the amino acid as a potential departing site for the electron gives insight into how the three ionizable groups affect the ionization process of the molecule and the dynamic coupling between the vibrational modes. In the following study, we optimized the structure of each acidic and basic amino acid and then exported the three-dimensional coordinates of the amino acids. We used ORCA to calculate single point energies for a region near the optimized coordinates and systematically went through the x, y, and z coordinates of each atom in the neutral and ionized forms of the amino acid. With the calculations, we were able to graph potential energy curves to better understand the quantum dynamic properties of the amino acids. The authors thank Pacific Union College Student Association for providing funds.

  16. Theoretical performance analysis for CMOS based high resolution detectors.

    PubMed

    Jain, Amit; Bednarek, Daniel R; Rudin, Stephen

    2013-03-06

    High resolution imaging capabilities are essential for accurately guiding successful endovascular interventional procedures. Present x-ray imaging detectors are not always adequate due to their inherent limitations. The newly-developed high-resolution micro-angiographic fluoroscope (MAF-CCD) detector has demonstrated excellent clinical image quality; however, further improvement in performance and physical design may be possible using CMOS sensors. We have thus calculated the theoretical performance of two proposed CMOS detectors which may be used as a successor to the MAF. The proposed detectors have a 300 μm thick HL-type CsI phosphor, a 50 μm-pixel CMOS sensor with and without a variable gain light image intensifier (LII), and are designated MAF-CMOS-LII and MAF-CMOS, respectively. For the performance evaluation, linear cascade modeling was used. The detector imaging chains were divided into individual stages characterized by one of the basic processes (quantum gain, binomial selection, stochastic and deterministic blurring, additive noise). Ranges of readout noise and exposure were used to calculate the detectors' MTF and DQE. The MAF-CMOS showed slightly better MTF than the MAF-CMOS-LII, but the MAF-CMOS-LII showed far better DQE, especially for lower exposures. The proposed detectors can have improved MTF and DQE compared with the present high resolution MAF detector. The performance of the MAF-CMOS is excellent for the angiography exposure range; however it is limited at fluoroscopic levels due to additive instrumentation noise. The MAF-CMOS-LII, having the advantage of the variable LII gain, can overcome the noise limitation and hence may perform exceptionally for the full range of required exposures; however, it is more complex and hence more expensive.
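    In linear cascade modeling, the zero-frequency DQE of a chain of gain stages follows from propagating the signal mean and variance stage by stage. The sketch below omits the blurring stages and the spatial-frequency dependence of the authors' full model:

```python
def cascade_dqe(n0, stages, sigma_add=0.0):
    """Zero-frequency DQE of a serial cascade of quantum gain stages.

    Each stage is (mean gain g, gain variance var_g); mean and variance
    propagate as q_out = g * q_in and var_out = g^2 * var_in + var_g * q_in.
    A simplified sketch of linear cascade modeling, not the authors' full
    spatial-frequency model.
    """
    q, var = float(n0), float(n0)   # Poisson input: variance equals mean
    for g, var_g in stages:
        var = g * g * var + var_g * q
        q = g * q
    var += sigma_add ** 2           # additive (readout) instrumentation noise
    return (q * q / var) / n0       # DQE = SNR_out^2 / SNR_in^2

# For a single Poisson gain stage (var_g = g), this reduces to 1 / (1 + 1/g).
dqe = cascade_dqe(100, [(10.0, 10.0)])
```

    The role of the additive term also illustrates the abstract's conclusion: at fluoroscopic exposures (small n0) the readout noise dominates and DQE drops, which is what the variable LII gain is meant to overcome.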

  17. Dynamic Eigenvalue Problem of Concrete Slab Road Surface

    NASA Astrophysics Data System (ADS)

    Pawlak, Urszula; Szczecina, Michał

    2017-10-01

    The paper presents an analysis of the dynamic eigenvalue problem of a concrete slab road surface. A sample concrete slab was modelled in Autodesk Robot Structural Analysis software and calculated with the Finite Element Method. The slab rested on a one-parameter elastic subsoil, for which the modulus of elasticity was calculated separately. The eigenfrequencies and eigenvectors (as maximal vertical nodal displacements) are presented. On the basis of the calculation results, some basic recommendations for designers of concrete road surfaces are offered.

  18. Software-Based Visual Loan Calculator For Banking Industry

    NASA Astrophysics Data System (ADS)

    Isizoh, A. N.; Anazia, A. E.; Okide, S. O.; Onyeyili, T. I.; Okwaraoka, C. A. P.

    2012-03-01

    Loan calculation software for the banking industry is very necessary in modern-day banking systems, which use many design techniques for security reasons. This paper thus presents the software-based design and implementation of a visual loan calculator for the banking industry using Visual Basic .Net (VB.Net). The fundamental approach is to develop a Graphical User Interface (GUI) using VB.Net operating tools, and then to develop a working program which calculates the interest on any loan obtained. The VB.Net program was written and implemented, and the software proved satisfactory.
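    The core interest computation of such a loan calculator can be sketched in a few lines. The paper's implementation is in VB.Net; this Python version is only an illustration of the standard amortized-payment formula, not the authors' code.

```python
def monthly_payment(principal, annual_rate, months):
    """Standard amortized loan payment: P*r / (1 - (1+r)^-n),
    with r the monthly rate and n the number of payments."""
    r = annual_rate / 12.0
    if r == 0:
        return principal / months
    return principal * r / (1.0 - (1.0 + r) ** -months)

def total_interest(principal, annual_rate, months):
    """Total interest paid over the life of the loan."""
    return monthly_payment(principal, annual_rate, months) * months - principal
```

    For example, a 1000-unit loan at 12% annual interest over 12 months costs roughly 88.85 per month, about 66 units of total interest.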

  19. Basic practical skills teaching and learning in undergraduate medical education - a review on methodological evidence.

    PubMed

    Vogel, Daniela; Harendza, Sigrid

    2016-01-01

    Practical skills are an essential part of physicians' daily routine. Nevertheless, medical graduates' performance of basic skills is often below the expected level. This review aims to identify and summarize teaching approaches of basic practical skills in undergraduate medical education which provide evidence with respect to effective students' learning of these skills. Basic practical skills were defined as basic physical examination skills, routine skills which get better with practice, and skills which are also performed by nurses. We searched PubMed with different terms describing these basic practical skills. In total, 3467 identified publications were screened and 205 articles were eventually reviewed for eligibility. 43 studies that included at least one basic practical skill, a comparison of two groups of undergraduate medical students, and effects on students' performance were analyzed. Seven basic practical skills and 15 different teaching methods could be identified. The most consistent results with respect to effective teaching and acquisition of basic practical skills were found for structured skills training, feedback, and self-directed learning. Simulation was effective with specific teaching methods, and in several studies no differences in teaching effects were detected between expert and peer instructors. Multimedia instruction, when used in the right setting, also showed beneficial effects for basic practical skills learning. A combination of voluntary or obligatory self-study with multimedia applications like video clips, together with a structured program that includes the possibility for individual exercise with personal feedback by peers or teachers, might provide a good learning opportunity for basic practical skills.

  20. Basic practical skills teaching and learning in undergraduate medical education – a review on methodological evidence

    PubMed Central

    Vogel, Daniela; Harendza, Sigrid

    2016-01-01

    Objective: Practical skills are an essential part of physicians’ daily routine. Nevertheless, medical graduates’ performance of basic skills is often below the expected level. This review aims to identify and summarize teaching approaches of basic practical skills in undergraduate medical education which provide evidence with respect to effective students’ learning of these skills. Methods: Basic practical skills were defined as basic physical examination skills, routine skills which get better with practice, and skills which are also performed by nurses. We searched PubMed with different terms describing these basic practical skills. In total, 3467 identified publications were screened and 205 articles were eventually reviewed for eligibility. Results: 43 studies that included at least one basic practical skill, a comparison of two groups of undergraduate medical students, and effects on students’ performance were analyzed. Seven basic practical skills and 15 different teaching methods could be identified. The most consistent results with respect to effective teaching and acquisition of basic practical skills were found for structured skills training, feedback, and self-directed learning. Simulation was effective with specific teaching methods, and in several studies no differences in teaching effects were detected between expert and peer instructors. Multimedia instruction, when used in the right setting, also showed beneficial effects for basic practical skills learning. Conclusion: A combination of voluntary or obligatory self-study with multimedia applications like video clips, together with a structured program that includes the possibility for individual exercise with personal feedback by peers or teachers, might provide a good learning opportunity for basic practical skills. PMID:27579364

  1. Wobbly strings: calculating the capture rate of a webcam using the rolling shutter effect in a guitar

    NASA Astrophysics Data System (ADS)

    Cunnah, David

    2014-07-01

    In this paper I propose a method of calculating the time between line captures in a standard complementary metal-oxide-semiconductor (CMOS) webcam using the rolling shutter effect when filming a guitar. The exercise links the concepts of wavelength and frequency, while outlining the basic operation of a CMOS camera through vertical line capture.
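    The underlying calculation can be sketched as follows; the string frequency and row count used in the example are hypothetical values for illustration, not measurements from the paper.

```python
def line_capture_time(string_freq_hz, rows_per_period):
    """If one full wobble period of the vibrating string spans
    `rows_per_period` image rows, successive rows are captured
    1/(f * rows_per_period) seconds apart: the rolling-shutter
    line capture time."""
    return 1.0 / (string_freq_hz * rows_per_period)

# e.g. a low E string at ~82.4 Hz whose wobble repeats every 120 rows
t_line = line_capture_time(82.4, 120)   # roughly 0.1 ms per row
```

    Because the string's frequency is fixed by its pitch, counting how many rows one spatial period of the wobble occupies in a single frame is enough to recover the camera's line capture rate.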

  2. Wobbly Strings: Calculating the Capture Rate of a Webcam Using the Rolling Shutter Effect in a Guitar

    ERIC Educational Resources Information Center

    Cunnah, David

    2014-01-01

    In this paper I propose a method of calculating the time between line captures in a standard complementary metal-oxide-semiconductor (CMOS) webcam using the rolling shutter effect when filming a guitar. The exercise links the concepts of wavelength and frequency, while outlining the basic operation of a CMOS camera through vertical line capture.

  3. Graphing Calculators in the Secondary Mathematics Classroom. Monograph #21.

    ERIC Educational Resources Information Center

    Eckert, Paul; And Others

    The objective of this presentation is to focus on the use of a hand-held graphics calculator. The specific machine referred to in this monograph is the Casio fx-7000G, chosen because of its low cost, its large viewing screen, its versatility, and its simple operation. Sections include: (1) "Basic Operations with the Casio fx-7000G"; (2) "Graphical…

  4. Testing of the analytical anisotropic algorithm for photon dose calculation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Esch, Ann van; Tillikainen, Laura; Pyykkonen, Jukka

    2006-11-15

    The analytical anisotropic algorithm (AAA) was implemented in the Eclipse (Varian Medical Systems) treatment planning system to replace the single pencil beam (SPB) algorithm for the calculation of dose distributions for photon beams. AAA was developed to improve the dose calculation accuracy, especially in heterogeneous media. The total dose deposition is calculated as the superposition of the dose deposited by two photon sources (primary and secondary) and by an electron contamination source. The photon dose is calculated as a three-dimensional convolution of Monte-Carlo precalculated scatter kernels, scaled according to the electron density matrix. For the configuration of AAA, an optimizationmore » algorithm determines the parameters characterizing the multiple source model by optimizing the agreement between the calculated and measured depth dose curves and profiles for the basic beam data. We have combined the acceptance tests obtained in three different departments for 6, 15, and 18 MV photon beams. The accuracy of AAA was tested for different field sizes (symmetric and asymmetric) for open fields, wedged fields, and static and dynamic multileaf collimation fields. Depth dose behavior at different source-to-phantom distances was investigated. Measurements were performed on homogeneous, water equivalent phantoms, on simple phantoms containing cork inhomogeneities, and on the thorax of an anthropomorphic phantom. Comparisons were made among measurements, AAA, and SPB calculations. The optimization procedure for the configuration of the algorithm was successful in reproducing the basic beam data with an overall accuracy of 3%, 1 mm in the build-up region, and 1%, 1 mm elsewhere. Testing of the algorithm in more clinical setups showed comparable results for depth dose curves, profiles, and monitor units of symmetric open and wedged beams below d{sub max}. 
The electron contamination model was found to be suboptimal to model the dose around d{sub max}, especially for physical wedges at smaller source to phantom distances. For the asymmetric field verification, absolute dose difference of up to 4% were observed for the most extreme asymmetries. Compared to the SPB, the penumbra modeling is considerably improved (1%, 1 mm). At the interface between solid water and cork, profiles show a better agreement with AAA. Depth dose curves in the cork are substantially better with AAA than with SPB. Improvements are more pronounced for 18 MV than for 6 MV. Point dose measurements in the thoracic phantom are mostly within 5%. In general, we can conclude that, compared to SPB, AAA improves the accuracy of dose calculations. Particular progress was made with respect to the penumbra and low dose regions. In heterogeneous materials, improvements are substantial and more pronounced for high (18 MV) than for low (6 MV) energies.« less

  5. SteamTablesGrid: An ActiveX control for thermodynamic properties of pure water

    NASA Astrophysics Data System (ADS)

    Verma, Mahendra P.

    2011-04-01

    An ActiveX control, steam tables grid (StmTblGrd), is developed to speed up the calculation of the thermodynamic properties of pure water. First, it creates a grid (matrix) for a specified range of temperature (e.g. 400-600 K with 40 segments) and pressure (e.g. 100,000-20,000,000 Pa with 40 segments). Using the ActiveX component SteamTables, the values of selected properties of water are calculated for each element (nodal point) of the 41×41 matrix. The created grid can be saved in a file for reuse. Linear interpolation within an individual phase, vapor or liquid, is used to calculate the properties at a given temperature and pressure. A demonstration program illustrating the functionality of StmTblGrd is written in Visual Basic 6.0. Similarly, a methodology is presented to explain the use of StmTblGrd in MS-Excel 2007. In an Excel worksheet, the enthalpy of 1000 random temperature-pressure datasets is calculated using StmTblGrd and SteamTables. The uncertainty in the enthalpy calculated with StmTblGrd is within ±0.03%. The calculations were performed on a personal computer with a "Pentium(R) 4 CPU 3.2 GHz, RAM 1.0 GB" processor and Windows XP. The total execution time for the calculation with StmTblGrd was 0.3 s, while it was 60.0 s with SteamTables. Thus, the ActiveX control approach is reliable, accurate, and efficient for the numerical simulation of complex systems that demand the thermodynamic properties of water at many values of temperature and pressure, such as steam flow in a geothermal pipeline network.
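    The precompute-then-interpolate strategy can be sketched generically as below. This is a bilinear version for a single-phase region, with a placeholder property function standing in for the SteamTables component; the grid size and ranges mirror the abstract's example but the property values are not real steam-table data.

```python
import numpy as np

def build_grid(prop, t_range, p_range, n=41):
    """Precompute a property on an n-by-n (T, P) grid, as StmTblGrd does
    once up front so later lookups avoid the expensive calculation."""
    T = np.linspace(t_range[0], t_range[1], n)
    P = np.linspace(p_range[0], p_range[1], n)
    V = np.array([[prop(t, p) for p in P] for t in T])
    return T, P, V

def interpolate(T, P, V, t, p):
    """Bilinear interpolation between the four surrounding grid nodes
    (valid only within a single phase region)."""
    i = int(np.clip(np.searchsorted(T, t) - 1, 0, len(T) - 2))
    j = int(np.clip(np.searchsorted(P, p) - 1, 0, len(P) - 2))
    ft = (t - T[i]) / (T[i + 1] - T[i])
    fp = (p - P[j]) / (P[j + 1] - P[j])
    return (V[i, j] * (1 - ft) * (1 - fp) + V[i + 1, j] * ft * (1 - fp)
            + V[i, j + 1] * (1 - ft) * fp + V[i + 1, j + 1] * ft * fp)
```

    The grid is built once (the slow step) and every subsequent lookup is a handful of arithmetic operations, which is the source of the speedup the abstract reports.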

  6. If Only Newton Had a Rocket.

    ERIC Educational Resources Information Center

    Hammock, Frank M.

    1988-01-01

    Shows how model rocketry can be included in physics curricula. Describes rocket construction, a rocket guide sheet, calculations and launch teams. Discusses the relationships of basic mechanics with rockets. (CW)

  7. New vistas in refractive laser beam shaping with an analytic design approach

    NASA Astrophysics Data System (ADS)

    Duerr, Fabian; Thienpont, Hugo

    2014-05-01

    Many commercial, medical and scientific applications of the laser have been developed since its invention. Some of these applications require a specific beam irradiance distribution to ensure optimal performance. Often, it is possible to apply geometrical methods to design laser beam shapers. This common design approach is based on the ray mapping between the input plane and the output beam. Geometric ray mapping designs with two plano-aspheric lenses have been thoroughly studied in the past. Even though analytic expressions for various ray mapping functions do exist, the surface profiles of the lenses are still calculated numerically. In this work, we present an alternative novel design approach that allows direct calculation of the rotational symmetric lens profiles described by analytic functions. Starting from the example of a basic beam expander, a set of functional differential equations is derived from Fermat's principle. This formalism allows calculating the exact lens profiles described by Taylor series coefficients up to very high orders. To demonstrate the versatility of this new approach, two further cases are solved: a Gaussian to flat-top irradiance beam shaping system, and a beam shaping system that generates a more complex dark-hollow Gaussian (donut-like) irradiance profile with zero intensity in the on-axis region. The presented ray tracing results confirm the high accuracy of all calculated solutions and indicate the potential of this design approach for refractive beam shaping applications.

  8. Calculation of effective plutonium cross sections and check against the oscillation experiment CESAR-II

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Schaal, H.; Bernnat, W.

    1987-10-01

    For calculations of high-temperature gas-cooled reactors with low-enrichment fuel, it is important to know the plutonium cross sections accurately. Therefore, a calculational method was developed, by which the plutonium cross-section data of the ENDF/B-IV library can be examined. This method uses zero- and one-dimensional neutron transport calculations to collapse the basic data into one-group cross sections, which then can be compared with experimental values obtained from integral tests. For comparison the data from the critical experiment CESAR-II of the Centre d'Etudes Nucleaires, Cadarache, France, were utilized.
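    The group-collapse step described above reduces multigroup cross-section data to one-group values by flux weighting; a minimal sketch, with made-up two-group numbers rather than ENDF/B-IV data:

```python
def collapse_one_group(sigma_g, flux_g):
    """Flux-weighted collapse of multigroup cross sections to a single
    one-group value: sum(sigma_g * phi_g) / sum(phi_g)."""
    return sum(s * f for s, f in zip(sigma_g, flux_g)) / sum(flux_g)
```

    The flux weights here would come from the zero- or one-dimensional transport calculations the abstract mentions; the collapsed value is what gets compared against the integral experiment.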

  9. Impact Ignition and Combustion Behavior of Amorphous Metal-Based Reactive Composites

    NASA Astrophysics Data System (ADS)

    Mason, Benjamin; Groven, Lori; Son, Steven

    2013-06-01

    Recently published molecular dynamics simulations have shown that metal-based reactive powder composites consisting of at least one amorphous component could lead to improved reaction performance, because amorphous materials have zero heat of fusion in addition to high energy densities; potential uses include structural energetic materials and enhanced blast materials. In order to investigate the feasibility of these systems, thermochemical equilibrium calculations were performed on various amorphous metal/metalloid-based reactive systems, with an emphasis on commercially available or easily manufactured amorphous metals such as Zr- and Ti-based amorphous alloys in combination with carbon, boron, and aluminum. Based on the calculations and material availability, material combinations were chosen. Initial materials were either mixed via a Resodyn mixer or mechanically activated using high-energy ball milling, and the microstructure of the milled material was characterized using x-ray diffraction, optical microscopy, and scanning electron microscopy. The mechanical impact response and combustion behavior of select reactive systems were characterized using the Asay shear impact experiment, in which impact ignition thresholds, ignition delays, combustion velocities, and temperatures were quantified and reported. Funding from the Defense Threat Reduction Agency (DTRA), Grant Number HDTRA1-10-1-0119, Counter-WMD basic research program, Dr. Suhithi M. Peiris, program director, is gratefully acknowledged.

  10. Precise identification and manipulation of adsorption geometry of donor-π-acceptor dye on nanocrystalline TiO₂ films for improved photovoltaics.

    PubMed

    Zhang, Fan; Ma, Wei; Jiao, Yang; Wang, Jingchuan; Shan, Xinyan; Li, Hui; Lu, Xinghua; Meng, Sheng

    2014-12-24

    Adsorption geometry of dye molecules on nanocrystalline TiO2 plays a central role in dye-sensitized solar cells, enabling effective sunlight absorption, fast electron injection, optimized interface band offsets, and stable photovoltaic performance. However, precise determination of dye binding geometry and proportion has been challenging due to complexity and sensitivity at interfaces. Here employing combined vibrational spectrometry and density functional calculations, we identify typical adsorption configurations of widely adopted cyanoacrylic donor-π bridge-acceptor dyes on nanocrystalline TiO2. Binding mode switching from bidentate bridging to hydrogen-bonded monodentate configuration with Ti-N bonding has been observed when dye-sensitizing solution becomes more basic. Raman and infrared spectroscopy measurements confirm this configuration switch and determine quantitatively the proportion of competing binding geometries, with vibration peaks assigned using density functional theory calculations. We further found that the proportion of dye-binding configurations can be manipulated by adjusting pH value of dye-sensitizing solutions. Controlling molecular adsorption density and configurations led to enhanced energy conversion efficiency from 2.4% to 6.1% for the fabricated dye-sensitized solar cells, providing a simple method to improve photovoltaic performance by suppressing unfavorable binding configurations in solar cell applications.

  11. Investigation of a vibration-damping unit for reduction in low-frequency vibrations of electric motors

    NASA Technical Reports Server (NTRS)

    Grigoryey, N. V.; Fedorovich, M. A.

    1973-01-01

    The vibroacoustical characteristics of different types of electric motors are discussed. It is shown that the basic source of low frequency vibrations is rotor unbalance. A flexible damping support, with an antivibrator, is used to obtain the vibroacoustical effect of reduction in the basic harmonic of the electric motor. A model of the electric motor and the damping apparatus is presented. Mathematical models are developed to show the relationships of the parameters. The basic purpose in using a calculation model is the simultaneous replacement of the exciting force created by the rotor unbalance and its inertial rigidity characteristics by a limiting kinematic disturbance.

  12. Retention of basic life support knowledge, self-efficacy and chest compression performance in Thai undergraduate nursing students.

    PubMed

    Partiprajak, Suphamas; Thongpo, Pichaya

    2016-01-01

    This study explored the retention of basic life support knowledge, self-efficacy, and chest compression performance among Thai nursing students at a university in Thailand. A one-group, pre-test and post-test time series design was used. Participants were 30 nursing students undertaking basic life support training as care providers. Repeated-measures analysis of variance was used to test the retention of knowledge and self-efficacy between the pre-test, immediate post-test, and re-test after 3 months. A Wilcoxon signed-rank test was used to compare chest compression performance between the two measurement times. Basic life support knowledge was measured using the Basic Life Support Standard Test for Cognitive Knowledge. Self-efficacy was measured using the Basic Life Support Self-Efficacy Questionnaire. Chest compression performance was evaluated using a data printout from a Resusci Anne and Laerdal skillmeter within two cycles. The training had an immediate significant effect on knowledge, self-efficacy, and chest compression skill; however, knowledge and self-efficacy declined significantly 3 months after training. Chest compression performance 3 months after training was retained relative to the first post-test, but the difference was not significant. Therefore, a retraining program to maintain knowledge and self-efficacy for a longer period should be established 3 months after training. Copyright © 2015 Elsevier Ltd. All rights reserved.

  13. Calculation and interpolation of the characteristics of the hydrodynamic journal bearings in the domain of possible movements of the rotor journals

    NASA Astrophysics Data System (ADS)

    Kumenko, A. I.; Kostyukov, V. N.; Kuz'minykh, N. Yu.

    2016-10-01

    To visualize the physical processes that occur in the journal bearings of the shafting of power generating turbosets, a technique for preliminary calculation of a set of characteristics of the journal bearings in the domain of possible movements (DPM) of the rotor journals is proposed. The technique is based on interpolation of the oil film characteristics and is designed for use in the real-time diagnostic system COMPACS®. According to this technique, for each journal bearing, the domain of possible movement of the shaft journal is computed, then triangulation of the area is performed, and the corresponding mesh is constructed. At each node of the mesh, all characteristics of the journal bearing required by the diagnostic system are calculated. Via shaft-position sensors, the system measures, in online mode, the instantaneous location of the shaft journal in the bearing and determines the averaged static position of the journals (the pivoting vector). Afterwards, continuous interpolation in the triangulation domain is performed, which allows the real-time calculation of the static and dynamic forces that act on the rotor journal, the flow rate and the temperature of the lubricant, and friction power losses. Use of the proposed method on a running turboset enables diagnosing the technical condition of the shafting support system and promptly identifying the defects that determine the vibrational state and the overall reliability of the turboset. The authors report a number of examples of constructing the DPM and computing the basic static characteristics for elliptical journal bearings typical of large-scale power turbosets. To illustrate the interpolation method, the traditional approach to calculation of bearing properties is applied. This approach is based on a two-dimensional isothermal Reynolds equation that accounts for the mobility of the boundary of oil film continuity.
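    The continuous-interpolation step over a triangulated mesh can be sketched with standard barycentric coordinates. This is a minimal 2-D illustration of the general technique, not the COMPACS® implementation; the triangle and nodal values are made up.

```python
def barycentric(p, a, b, c):
    """Barycentric coordinates of point p in triangle (a, b, c)."""
    (px, py), (ax, ay), (bx, by), (cx, cy) = p, a, b, c
    det = (by - cy) * (ax - cx) + (cx - bx) * (ay - cy)
    w1 = ((by - cy) * (px - cx) + (cx - bx) * (py - cy)) / det
    w2 = ((cy - ay) * (px - cx) + (ax - cx) * (py - ay)) / det
    return w1, w2, 1.0 - w1 - w2

def interpolate_in_triangle(p, verts, values):
    """Linearly interpolate nodal values (e.g. a precomputed bearing
    characteristic) at point p inside one mesh triangle."""
    w = barycentric(p, *verts)
    return sum(wi * vi for wi, vi in zip(w, values))
```

    Locating the measured journal position in the mesh and blending the three surrounding nodal values this way gives real-time estimates without re-solving the Reynolds equation online.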

  14. Computer Program For Linear Algebra

    NASA Technical Reports Server (NTRS)

    Krogh, F. T.; Hanson, R. J.

    1987-01-01

    Collection of routines provided for basic vector operations. The Basic Linear Algebra Subprograms (BLAS) library is a collection of FORTRAN-callable routines that employ standard techniques to perform the basic operations of numerical linear algebra.
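    The flavor of the basic vector operations BLAS provides can be illustrated in Python (the actual library is FORTRAN-callable; shown here are the classic level-1 "axpy" and "dot" operations):

```python
def axpy(a, x, y):
    """BLAS 'axpy': return a*x + y elementwise."""
    return [a * xi + yi for xi, yi in zip(x, y)]

def dot(x, y):
    """BLAS 'dot': inner product of two vectors."""
    return sum(xi * yi for xi, yi in zip(x, y))
```

    Higher-level numerical linear algebra is built by composing such primitives, which is why a tuned BLAS underlies most scientific computing stacks.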

  15. Undergraduate basic science preparation for dental school.

    PubMed

    Humphrey, Sue P; Mathews, Robert E; Kaplan, Alan L; Beeman, Cynthia S

    2002-11-01

    In the Institute of Medicine's report Dental Education at the Crossroads, it was suggested that dental schools across the country move toward integrated basic science education for dental and medical students in their curricula. To do so, dental school admission requirements and recommendations must be closely reviewed to ensure that students are adequately prepared for this coursework. The purpose of our study was twofold: 1) to identify student dentists' perceptions of their predental preparation as it relates to course content, and 2) to track student dentists' undergraduate basic science course preparation and relate that to DAT performance, basic science course performance in dental school, and Part I and Part II National Board performance. In the first part of the research, a total of ninety student dentists (forty-five from each class) from the entering classes of 1996 and 1997 were asked to respond to a survey. The survey instrument was distributed to each class of students after each completed the largest basic science class given in their second-year curriculum. The survey investigated the area of undergraduate major, a checklist of courses completed in their undergraduate preparation, the relevance of the undergraduate classes to the block basic science courses, and the strength of requiring or recommending the listed undergraduate courses as a part of admission to dental school. Results of the survey, using frequency analysis, indicate that students felt that the following classes should be required, not recommended, for admission to dental school: Microbiology 70 percent, Biochemistry 54.4 percent, Immunology 57.78 percent, Anatomy 50 percent, Physiology 58.89 percent, and Cell Biology 50 percent.
The second part of the research involved anonymously tracking the undergraduate basic science preparation of the same students against DAT scores, the grade received in a representative large basic science course, and Part I and Part II National Board performance. Using t-test analyses, results indicate that having completed multiple undergraduate basic science courses (as reported by AADSAS BCP hours) did not significantly (p < .05) enhance student performance on any of these parameters. Based on these results, we conclude that student dentists with undergraduate preparation in science and nonscience majors can successfully negotiate the dental school curriculum, even though the students themselves would increase admission requirements to include more basic science courses than commonly required. Basically, the students' recommendations for required undergraduate basic science courses would replicate the standard basic science coursework found in most dental schools: anatomy, histology, biochemistry, microbiology, physiology, and immunology, plus the universal foundation course of biology.

  16. Nuclear Reactor Physics

    NASA Astrophysics Data System (ADS)

    Stacey, Weston M.

    2001-02-01

    An authoritative textbook and up-to-date professional's guide to basic and advanced principles and practices Nuclear reactors now account for a significant portion of the electrical power generated worldwide. At the same time, the past few decades have seen an ever-increasing number of industrial, medical, military, and research applications for nuclear reactors. Nuclear reactor physics is the core discipline of nuclear engineering, and as the first comprehensive textbook and reference on basic and advanced nuclear reactor physics to appear in a quarter century, this book fills a large gap in the professional literature. Nuclear Reactor Physics is a textbook for students new to the subject, for others who need a basic understanding of how nuclear reactors work, as well as for those who are, or wish to become, specialists in nuclear reactor physics and reactor physics computations. It is also a valuable resource for engineers responsible for the operation of nuclear reactors. Dr. Weston Stacey begins with clear presentations of the basic physical principles, nuclear data, and computational methodology needed to understand both the static and dynamic behaviors of nuclear reactors. This is followed by in-depth discussions of advanced concepts, including extensive treatment of neutron transport computational methods. As an aid to comprehension and quick mastery of computational skills, he provides numerous examples illustrating step-by-step procedures for performing the calculations described and chapter-end problems. Nuclear Reactor Physics is a useful textbook and working reference. It is an excellent self-teaching guide for research scientists, engineers, and technicians involved in industrial, research, and military applications of nuclear reactors, as well as government regulators who wish to increase their understanding of nuclear reactors.

  17. The influence of various test plans on mission reliability. [for Shuttle Spacelab payloads

    NASA Technical Reports Server (NTRS)

    Stahle, C. V.; Gongloff, H. R.; Young, J. P.; Keegan, W. B.

    1977-01-01

    Methods have been developed for the evaluation of cost effective vibroacoustic test plans for Shuttle Spacelab payloads. The shock and vibration environments of components have been statistically represented, and statistical decision theory has been used to evaluate the cost effectiveness of five basic test plans with structural test options for two of the plans. Component, subassembly, and payload testing have been performed for each plan along with calculations of optimum test levels and expected costs. The tests have been ranked according to both minimizing expected project costs and vibroacoustic reliability. It was found that optimum costs may vary up to $6 million with the lowest plan eliminating component testing and maintaining flight vibration reliability via subassembly tests at high acoustic levels.

  18. Searching for high-K isomers in the proton-rich A ˜ 80 mass region

    NASA Astrophysics Data System (ADS)

    Bai, Zhi-Jun; Jiao, Chang-Feng; Gao, Yuan; Xu, Fu-Rong

    2016-09-01

    Configuration-constrained potential-energy-surface calculations have been performed to investigate K isomerism in the proton-rich A ˜ 80 mass region. An abundance of high-K states is predicted. These high-K states arise from two- and four-quasi-particle excitations, with Kπ = 8+ and Kπ = 16+, respectively. Their excitation energies are comparatively low, making them good candidates for long-lived isomers. Since most nuclei under study are prolate in their ground states, the oblate shapes of the predicted high-K states may indicate a combination of K isomerism and shape isomerism. Supported by the National Key Basic Research Program of China (2013CB834402) and the National Natural Science Foundation of China (11235001, 11320101004 and 11575007).

  19. Control systems for platform landings cushioned by air bags

    NASA Astrophysics Data System (ADS)

    Ross, Edward W.

    1987-07-01

    This report presents an exploratory mathematical study of control systems for airdrop platform landings cushioned by airbags. The basic theory of airbags is reviewed and solutions to special cases are noted. A computer program is presented, which calculates the time-dependence of the principal variables during a landing under the action of various control systems. Two existing control systems of open-loop type are compared with a conceptual feedback (closed-loop) system for a fairly typical set of landing conditions. The feedback controller is shown to have performance much superior to the other systems. The feedback system undergoes an interesting oscillation not present in the other systems, the source of which is investigated. Recommendations for future work are included.

  20. A Computationally Efficient Method for Polyphonic Pitch Estimation

    NASA Astrophysics Data System (ADS)

    Zhou, Ruohua; Reiss, Joshua D.; Mattavelli, Marco; Zoia, Giorgio

    2009-12-01

    This paper presents a computationally efficient method for polyphonic pitch estimation. The method employs the Fast Resonator Time-Frequency Image (RTFI) as the basic time-frequency analysis tool. The approach is composed of two main stages. First, a preliminary pitch estimate is obtained by means of a simple peak-picking procedure in the pitch energy spectrum. This spectrum is calculated from the original RTFI energy spectrum according to harmonic grouping principles. Then the incorrect estimates are removed according to spectral irregularity and knowledge of the harmonic structures of notes played on commonly used musical instruments. The new approach is compared with a variety of other frame-based polyphonic pitch estimation methods, and the results demonstrate the high performance and computational efficiency of the approach.
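    The initial peak-picking stage can be sketched generically as below; the spectrum and threshold are illustrative toy values, not an RTFI-derived pitch energy spectrum.

```python
def pick_peaks(spectrum, threshold):
    """Return indices of local maxima above a threshold in an
    energy spectrum: the simple peak-picking step that produces
    the preliminary pitch candidates."""
    peaks = []
    for i in range(1, len(spectrum) - 1):
        if spectrum[i] > threshold and spectrum[i - 1] < spectrum[i] >= spectrum[i + 1]:
            peaks.append(i)
    return peaks
```

    The second stage of the method would then prune these candidates using spectral irregularity and known harmonic structures, which the sketch above does not attempt.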

  1. A manual control theory analysis of vertical situation displays for STOL aircraft

    NASA Technical Reports Server (NTRS)

    Baron, S.; Levison, W. H.

    1973-01-01

    Pilot-vehicle-display systems theory is applied to the analysis of proposed vertical situation displays for manual control in approach-to-landing of a STOL aircraft. The effects of display variables on pilot workload and on total closed-loop system performance were calculated using an optimal-control model for the human operator. The steep approach of an augmentor wing jet STOL aircraft was analyzed. Both random turbulence and mean-wind shears were considered. Linearized perturbation equations were used to describe longitudinal and lateral dynamics of the aircraft. The basic display configuration was one that abstracted the essential status information (including glide-slope and localizer errors) of an EADI display. Proposed flight director displays for both longitudinal and lateral control were also investigated.

  2. Comparison of measured and calculated forces on the RE-1000 free-piston Stirling engine displacer

    NASA Technical Reports Server (NTRS)

    Schreiber, Jeffrey G.

    1987-01-01

    The NASA Lewis Research Center has tested a 1 kW free-piston Stirling engine at the NASA Lewis test facilities. The tests performed over the past several years on the RE-1000 single cylinder engine are known as the sensitivity tests. This report presents an analysis of some of the data published in the sensitivity test report. A basic investigation into the measured forces acting on the unconstrained displacer of the engine is presented. These measured forces are then correlated with the values predicted by the NASA Lewis Stirling engine computer simulation. The results of the investigation are presented in the form of phasor diagrams. Possible future work resulting from this investigation is outlined.

  3. Possibility designing half-wave and full-wave molecular rectifiers by using single benzene molecule

    NASA Astrophysics Data System (ADS)

    Abbas, Mohammed A.; Hanoon, Falah H.; Al-Badry, Lafy F.

    2018-02-01

    This work focused on the possibility of designing half-wave and full-wave molecular rectifiers using one and two benzene rings, respectively. The benzene rings were threaded by a magnetic flux that changes over time. The quantum interference effect was taken as the basic idea behind the rectification action, and both the para and meta configurations were investigated. All calculations were performed using a steady-state theoretical model based on the time-dependent Hamiltonian. The electrical conductance and the electric current are considered as the DC output signals of the half-wave and full-wave molecular rectifiers. The findings of this work open up the exciting potential of using these molecular rectifiers in molecular electronics.

  4. Basic gait analysis based on continuous wave radar.

    PubMed

    Zhang, Jun

    2012-09-01

    A gait analysis method based on continuous wave (CW) radar is proposed in this paper. Time-frequency analysis is used to analyze the radar micro-Doppler echo from walking humans, and the relationships between the time-frequency spectrogram and human biological gait are discussed. Methods for extracting gait parameters from the spectrogram are studied in depth, and experiments on more than twenty subjects have been performed to acquire radar gait data. The gait parameters are calculated and compared. Gait differences between men and women are presented based on the experimental data and extracted features. Gait analysis based on CW radar will provide a new method for clinical diagnosis and therapy. Copyright © 2012 Elsevier B.V. All rights reserved.
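    The processing chain can be sketched on simulated data: a CW radar return whose Doppler shift oscillates as a limb swings, a spectrogram of the echo, and an estimate of the gait repetition rate from the Doppler track. The signal model and all parameter values below are illustrative assumptions, not the paper's experimental setup:

```python
import numpy as np
from scipy.signal import spectrogram

fs = 1000.0                                  # baseband sampling rate, Hz (assumed)
t = np.arange(0, 8, 1 / fs)
f_torso, f_swing, f_gait = 80.0, 40.0, 2.0   # Doppler parameters (assumed)

# CW radar return: steady torso Doppler plus a sinusoidal limb-swing term,
# so the instantaneous Doppler is f_torso + f_swing*sin(2*pi*f_gait*t)
phase = 2 * np.pi * (f_torso * t
                     - f_swing / (2 * np.pi * f_gait) * np.cos(2 * np.pi * f_gait * t))
sig = np.exp(1j * phase)

f, frame_t, S = spectrogram(sig, fs=fs, nperseg=128, noverlap=64,
                            return_onesided=False)

# The Doppler centroid per frame traces the limb swing; its repetition
# rate is the gait (swing-cycle) frequency
centroid = (f[:, None] * S).sum(axis=0) / S.sum(axis=0)
track = centroid - centroid.mean()
spec = np.abs(np.fft.rfft(track))
freqs = np.fft.rfftfreq(len(track), d=frame_t[1] - frame_t[0])
gait_freq = freqs[1:][np.argmax(spec[1:])]   # skip the DC bin
```

    Here the recovered gait_freq comes out close to the simulated 2 Hz swing rate; with real radar data the same centroid track would additionally carry torso velocity and stride features.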

  5. Reliability of the quench protection system for the LHC superconducting elements

    NASA Astrophysics Data System (ADS)

    Vergara Fernández, A.; Rodríguez-Mateos, F.

    2004-06-01

    The Quench Protection System (QPS) is the sole system in the Large Hadron Collider machine monitoring the signals from the superconducting elements (bus bars, current leads, magnets) which form the cold part of the electrical circuits. The basic functions to be accomplished by the QPS during the machine operation will be briefly presented. With more than 4000 internal trigger channels (quench detectors and others), the final QPS design is the result of an optimised balance between on-demand availability and false quench reliability. The built-in redundancy for the different equipment will be presented, focusing on the calculated, expected number of missed quenches and false quenches. Maintenance strategies in order to improve the performance over the years of operation will be addressed.

  6. Development of KRISS standard reference photometer (SRP) for ambient ozone measurement

    NASA Astrophysics Data System (ADS)

    Lee, S.; Lee, J.

    2014-12-01

    Surface ozone has adverse impacts on human health and ecosystems. Accurate measurement of ambient ozone concentration is essential for developing effective mitigation strategies and understanding atmospheric chemistry. The Korea Research Institute of Standards and Science (KRISS) has developed new ozone standard reference photometers (SRPs) for the calibration of ambient ozone instruments. The basic principle of the KRISS ozone SRPs is to determine the absorption of ultraviolet radiation at a specific wavelength, 253.7 nm, by ozone in the atmosphere. Ozone concentration is calculated by converting UV transmittance through the Beer-Lambert law. This study introduces the newly developed ozone SRPs and characterizes their performance through uncertainty analysis and comparison with the BIPM (International Bureau of Weights and Measures) SRP.
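    The Beer-Lambert conversion at the heart of the instrument can be sketched as follows. The absorption cross-section and cell length are illustrative assumptions here (the cross-section is a commonly cited literature value for 253.7 nm, not a KRISS-specified figure):

```python
import math

def ozone_number_density(transmittance, path_cm, sigma_cm2=1.1476e-17):
    """Beer-Lambert: I/I0 = exp(-sigma*n*L)  =>  n = -ln(I/I0) / (sigma*L).
    sigma_cm2 is the ozone absorption cross-section at 253.7 nm
    (a commonly cited literature value, used here as an assumption)."""
    return -math.log(transmittance) / (sigma_cm2 * path_cm)

def ozone_mole_fraction_ppb(transmittance, path_cm,
                            pressure_pa=101325.0, temp_k=296.0):
    """Convert the absorber number density to a mole fraction in ppb
    using the ideal-gas number density of air at the cell conditions."""
    k_b = 1.380649e-23                               # Boltzmann constant, J/K
    n_air_cm3 = pressure_pa / (k_b * temp_k) * 1e-6  # molecules per cm^3
    return ozone_number_density(transmittance, path_cm) / n_air_cm3 * 1e9
```

    For example, a transmittance of 0.999 over an assumed 90 cm cell corresponds to roughly 39 ppb of ozone at 296 K and 1 atm.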

  7. Sensitivity analysis of physiological factors in space habitat design

    NASA Technical Reports Server (NTRS)

    Billingham, J.

    1982-01-01

    The costs incurred by design conservatism in space habitat design are discussed from a structural standpoint, and areas of physiological research into less than earth-normal conditions that offer the greatest potential decrease in habitat construction and operating costs are studied. The established range of human tolerance limits is defined for those physiological conditions which directly affect habitat structural design. These entire ranges or portions thereof are set as habitat design constraints as a function of habitat population and degree of ecological closure. Calculations are performed to determine the structural weight and cost associated with each discrete population size and its selected environmental conditions, on the basis of habitable volume equivalence for four basic habitat configurations: sphere, cylinder with hemispherical ends, torus, and crystal palace.
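    The volume-equivalence comparison underlying the study can be sketched for three of the four configurations (the "crystal palace" geometry is omitted, and the aspect ratios are illustrative assumptions, not the study's values). Shell surface area serves here as a rough proxy for structural weight:

```python
import math

def shell_areas(volume, capsule_aspect=2.0, torus_ratio=3.0):
    """Surface areas of three basic habitat shells at equal enclosed volume.
    capsule_aspect = cylinder length / radius; torus_ratio = major / minor
    radius. Both ratios are illustrative assumptions."""
    # Sphere: V = 4/3 pi r^3, A = 4 pi r^2
    r_s = (3 * volume / (4 * math.pi)) ** (1 / 3)
    a_sphere = 4 * math.pi * r_s ** 2
    # Cylinder with hemispherical ends: V = pi r^2 L + 4/3 pi r^3, L = k r
    k = capsule_aspect
    r_c = (volume / (math.pi * (k + 4 / 3))) ** (1 / 3)
    a_capsule = 2 * math.pi * k * r_c ** 2 + 4 * math.pi * r_c ** 2
    # Torus: V = 2 pi^2 R r^2, A = 4 pi^2 R r, with R = torus_ratio * r
    r_t = (volume / (2 * math.pi ** 2 * torus_ratio)) ** (1 / 3)
    a_torus = 4 * math.pi ** 2 * torus_ratio * r_t ** 2
    return a_sphere, a_capsule, a_torus
```

    At equal volume the sphere encloses the least area, the capsule more, and the torus the most, which is one reason the configuration choice matters for structural weight and cost.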

  8. A New Reynolds Stress Algebraic Equation Model

    NASA Technical Reports Server (NTRS)

    Shih, Tsan-Hsing; Zhu, Jiang; Lumley, John L.

    1994-01-01

    A general turbulent constitutive relation is directly applied to propose a new Reynolds stress algebraic equation model. In the development of this model, the constraints based on rapid distortion theory and realizability (i.e. the positivity of the normal Reynolds stresses and the Schwarz' inequality between turbulent velocity correlations) are imposed. Model coefficients are calibrated using well-studied basic flows such as homogeneous shear flow and the surface flow in the inertial sublayer. The performance of this model is then tested in complex turbulent flows including the separated flow over a backward-facing step and the flow in a confined jet. The calculation results are encouraging and point to the success of the present model in modeling turbulent flows with complex geometries.
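    The realizability constraints mentioned above can be checked directly on a given Reynolds stress tensor. This is a minimal sketch of the constraint check only, not the authors' model code:

```python
import numpy as np

def is_realizable(R, tol=1e-12):
    """Check realizability of a Reynolds stress tensor R[i][j] = <u_i u_j>:
    non-negative normal stresses, the Schwarz inequality for each shear
    component, and positive semidefiniteness of the full tensor."""
    R = np.asarray(R, dtype=float)
    for i in range(3):
        if R[i, i] < -tol:            # normal stresses must be non-negative
            return False
    for i in range(3):
        for j in range(i + 1, 3):     # Schwarz: <u_i u_j>^2 <= <u_i^2><u_j^2>
            if R[i, j] ** 2 > R[i, i] * R[j, j] + tol:
                return False
    return np.linalg.eigvalsh(R).min() >= -tol
```

    An isotropic tensor passes, while a tensor whose shear correlation exceeds the Schwarz bound is rejected; the constraints in the model keep predicted stresses inside this admissible set.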

  9. Performance of First-Tour WAC Enlisted Women: Data Base for the Performance Orientation of Women's Basic Training. Final Report.

    ERIC Educational Resources Information Center

    Boyd, H. Alton; And Others

    The introduction of performance-oriented instructional procedures into Women's Basic Training (BT) at Fort McClellan and the revision of Army Training Program 21-121 to incorporate the philosophy and principles of performance-oriented training are described in the document. Results from a questionnaire regarding duties, activities, and attitudes…

  10. Emitter location errors in electronic recognition system

    NASA Astrophysics Data System (ADS)

    Matuszewski, Jan; Dikta, Anna

    2017-04-01

    The paper describes some of the problems associated with emitter location calculations, the most important part of the series of tasks in electronic recognition systems. The basic tasks include: detection of electromagnetic signal emissions, tracking (determining the direction of emitter sources), signal analysis in order to classify different emitter types, and identification of emission sources of the same type. The paper presents a brief description of the main methods of emitter localization and the basic mathematical formulae for calculating emitter location. Location errors were estimated for three different methods and for different scenarios of emitter and direction finding (DF) sensor deployment in the electromagnetic environment. The emitter locations were computed using a dedicated computer program. On the basis of extensive numerical calculations, the precision of emitter location in recognition systems was evaluated for different configurations of the bearing devices and the emitter. The calculations, based on simulated data for the different location methods, are presented in figures and the respective tables. The obtained results demonstrate that the precision of the calculated emitter location depends on: the number of DF sensors, the distances between the emitter and the DF sensors, their mutual locations in the reconnaissance area, and the bearing errors. Location precision varies with the number of obtained bearings: the higher the number of bearings, the better the accuracy of the calculated emitter location, in spite of relatively high bearing errors for each DF sensor.
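    A least-squares bearings-only fix illustrates the basic location calculation. The bearing convention (measured from the x-axis) and the direct line-intersection formulation are simplifying assumptions, not the paper's exact methods:

```python
import numpy as np

def locate_emitter(sensors, bearings_rad):
    """Least-squares triangulation from bearing lines.
    sensors: (N, 2) DF sensor positions; bearings measured from the x-axis
    (an assumed convention). Each bearing defines the line
    sin(b)*x - cos(b)*y = sin(b)*xs - cos(b)*ys, and the stacked system
    is solved in the least-squares sense."""
    sensors = np.asarray(sensors, dtype=float)
    b = np.asarray(bearings_rad, dtype=float)
    A = np.column_stack([np.sin(b), -np.cos(b)])
    rhs = np.sin(b) * sensors[:, 0] - np.cos(b) * sensors[:, 1]
    pos, *_ = np.linalg.lstsq(A, rhs, rcond=None)
    return pos
```

    With error-free bearings the fix is exact; adding angular noise and varying the number of sensors reproduces the qualitative trends reported above (more bearings and shorter emitter-to-sensor distances give better accuracy).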

  11. Plastic Surgery Undergraduate Training: How a Single Local Event Can Inspire and Educate Medical Students.

    PubMed

    Khatib, Manaf; Soukup, Benjamin; Boughton, Oliver; Amin, Kavit; Davis, Christopher R; Evans, David M

    2015-08-01

    Plastic surgery teaching has a limited role in the undergraduate curriculum. We held a 1-day national course in plastic surgery for undergraduates. Our aim was to introduce delegates to plastic surgery and teach basic plastic surgical skills. We assessed change in perceptions of plastic surgery and change in confidence in basic plastic surgical skills. The day consisted of consultant-led lectures followed by workshops in aesthetic suturing, local flap design, and tendon repair. A questionnaire divided into 3 sections, namely, (1) career plans, (2) perceptions of plastic surgery, and (3) surgical skills and knowledge, was completed by 39 delegates before and after the course. Results were presented as mean scores, with the standard error of the mean used to indicate data spread. Data were analyzed using the Mann-Whitney U test for nonparametric data. Career plans: Interest in pursuing a plastic surgery career significantly increased over the course of the day by 12.5% (P < 0.0005). Perceptions: Statistically significant changes were observed in many categories of plastic surgery perception, including the perceived role of plastic surgeons in improving patient quality of life, which increased by 18.31% (P = 0.063). Before the course, 10% of delegates perceived plastic surgery to be a superficial discipline and 20% perceived that plastic surgeons did not save lives. After completing the course, no delegates held those views. Surgical skills: Confidence to perform subcuticular and deep dermal sutures improved by 53% (P < 0.0001) and 57% (P < 0.0001), respectively. Delegates' subjective understanding of the basic geometry of local flaps improved by 94% (P < 0.0001). Interestingly, before the course, 2.5% of delegates drew an accurate modified Kessler suture compared with 87% on completion of the course.
A 1-day intensive undergraduate plastic surgery course can significantly increase delegates' desire to pursue a career in plastic surgery, dispel common misconceptions about this field, and increase their confidence in performing the taught skills. The results of this course demonstrate that a 1-day course is an effective means of teaching basic plastic surgery skills to undergraduates and highlights the potential role for local plastic surgery departments in advancing plastic surgery education.

  12. Chemical Mixture Risk Assessment Additivity-Based Approaches

    EPA Science Inventory

    Powerpoint presentation includes additivity-based chemical mixture risk assessment methods. Basic concepts, theory and example calculations are included. Several slides discuss the use of "common adverse outcomes" in analyzing phthalate mixtures.
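    The core dose-additivity calculation is the hazard index, the sum of exposure-to-reference-dose ratios across the mixture components. A minimal sketch (a standard additivity formula, not taken from the presentation's slides):

```python
def hazard_index(exposures, reference_doses):
    """Dose-additive hazard index: HI = sum(E_i / RfD_i) over components.
    HI > 1 flags potential concern under the additivity assumption."""
    if len(exposures) != len(reference_doses):
        raise ValueError("one reference dose per exposure is required")
    return sum(e / rfd for e, rfd in zip(exposures, reference_doses))
```

    Two components, each at half of its reference dose, sum to HI = 1.0, the conventional screening threshold.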

  13. A Simple Case Study of a Grid Performance System

    NASA Technical Reports Server (NTRS)

    Aydt, Ruth; Gunter, Dan; Quesnel, Darcy; Smith, Warren; Taylor, Valerie; Biegel, Bryan (Technical Monitor)

    2001-01-01

    This document presents a simple case study of a Grid performance system based on the Grid Monitoring Architecture (GMA) being developed by the Grid Forum Performance Working Group. It describes how the various system components would interact for a very basic monitoring scenario, and is intended to introduce people to the terminology and concepts presented in greater detail in other Working Group documents. We believe that by focusing on the simple case first, working group members can familiarize themselves with terminology and concepts, and productively join in the ongoing discussions of the group. In addition, prototype implementations of this basic scenario can be built to explore the feasibility of the proposed architecture and to expose possible shortcomings. Once the simple case is understood and agreed upon, complexities can be added incrementally as warranted by cases not addressed in the most basic implementation described here. Following the basic performance monitoring scenario discussion, unresolved issues are introduced for future discussion.

  14. An Empirical Determination of Tasks Essential to Successful Performance as a Chemical Applicator. Determination of a Common Core of Basic Skills in Agribusiness and Natural Resources.

    ERIC Educational Resources Information Center

    Miller, Daniel R.; And Others

    To improve vocational educational programs in agriculture, occupational information on a common core of basic skills within the occupational area of the chemical applicator is presented in the revised task inventory survey. The purpose of the occupational survey was to identify a common core of basic skills which are performed and are essential…

  15. Basic and advanced numerical performances relate to mathematical expertise but are fully mediated by visuospatial skills.

    PubMed

    Sella, Francesco; Sader, Elie; Lolliot, Simon; Cohen Kadosh, Roi

    2016-09-01

    Recent studies have highlighted the potential role of basic numerical processing in the acquisition of numerical and mathematical competences. However, it is debated whether high-level numerical skills and mathematics depend specifically on basic numerical representations. In this study, mathematicians and nonmathematicians performed a basic number line task, which required mapping positive and negative numbers on a physical horizontal line, and which has been shown to correlate with more advanced numerical abilities and mathematical achievement. We found that mathematicians were more accurate than nonmathematicians when mapping positive, but not negative, numbers, which are considered numerical primitives and cultural artifacts, respectively. Moreover, performance on positive number mapping could predict whether one is a mathematician or not, and was mediated by more advanced mathematical skills. This finding might suggest a link between basic and advanced mathematical skills. However, when we included visuospatial skills, as measured by the block design subtest, the mediation analysis revealed that the relation between performance in the number line task and group membership was explained by non-numerical visuospatial skills. These results demonstrate that the relation between basic, even specific, numerical skills and advanced mathematical achievement can be artifactual and explained by visuospatial processing. (PsycINFO Database Record (c) 2016 APA, all rights reserved).

  16. Mental Calculation Methods Used by 11-Year-Olds in Different Attainment Bands: A Reanalysis of Data from the 1987 APU Survey in the UK.

    ERIC Educational Resources Information Center

    Foxman, Derek; Beishuizen, Meindert

    2002-01-01

    Reanalyzes data obtained in 1987 on mental calculation strategies used by 11-year-olds in England, Wales, and Northern Ireland. Classifies mental strategies developed in the past decade in international research. Compares frequency and effectiveness of the strategies used by pupils of different levels of attainment. Discusses basic arithmetic…

  17. The relevance of basic sciences in undergraduate medical education.

    PubMed

    Lynch, C; Grant, T; McLoughlin, P; Last, J

    2016-02-01

    Evolving and changing undergraduate medical curricula raise concerns that there will no longer be a place for basic sciences. National and international trends show that 5-year programmes with a pre-requisite for school chemistry are growing more prevalent. National reports in Ireland show a decline in the availability of school chemistry and physics. This observational cohort study considers whether the basic sciences of physics, chemistry and biology should be a prerequisite to entering medical school, should be part of the core medical curriculum, or whether they have a place in the practice of medicine. Comparisons of means, correlation and linear regression analysis assessed the degree of association between predictors (school and university basic sciences) and outcomes (year and degree GPA) for entrants to a 6-year Irish medical programme between 2006 and 2009 (n = 352). We found no statistically significant difference in medical programme performance between students with/without prior basic science knowledge. The Irish school exit exam and its components were mainly weak predictors of performance (-0.043 ≤ r ≤ 0.396). Success in year one of medicine, which includes a basic science curriculum, was indicative of later success (0.194 ≤ r² ≤ 0.534). University basic sciences were found to be more predictive than school sciences of undergraduate medical performance in our institution. The increasing emphasis on basic sciences in medical practice and the declining availability of school sciences should prompt medical schools in Ireland to consider how removing basic sciences from the curriculum might impact on future applicants.

  18. An Inexpensive Predictor of Student Performance on Licensure Examinations.

    ERIC Educational Resources Information Center

    Hyde, R. M.; And Others

    1987-01-01

    The construction of a comprehensive final examination over the basic medical sciences is described. Performance on the exam was a better predictor of NBME-I scores than GPA in basic science or MCAT scores and a better predictor of NBME-II scores than preclinical course performance and MCAT scores. (Author/RH)

  19. Target acquisition modeling over the exact optical path: extending the EOSTAR TDA with the TOD sensor performance model

    NASA Astrophysics Data System (ADS)

    Dijk, J.; Bijl, P.; Oppeneer, M.; ten Hove, R. J. M.; van Iersel, M.

    2017-10-01

    The Electro-Optical Signal Transmission and Ranging (EOSTAR) model is an image-based Tactical Decision Aid (TDA) for thermal imaging systems (MWIR/LWIR) developed for a sea environment with an extensive atmosphere model. The Triangle Orientation Discrimination (TOD) Target Acquisition model calculates the sensor and signal processing effects on a set of input triangle test pattern images, judges their orientation using humans or a Human Visual System (HVS) model, and derives the system image quality and operational field performance from the correctness of the responses. Combining the TOD model and EOSTAR essentially makes it possible to model Target Acquisition (TA) performance over the exact path from scene to observer. In this method, ship-representative TOD test patterns are placed at the position of the real target, the combined effects of the environment (atmosphere, background, etc.), sensor, and signal processing on the image are calculated using EOSTAR, and finally the results are judged by humans. The thresholds are converted into Detection-Recognition-Identification (DRI) ranges of the real target. Experiments show that combining the TOD and EOSTAR models is indeed feasible. The resulting images look natural and provide insight into the possibilities of combining the two models. The TOD observation task can be performed well by humans, and the measured TOD is consistent with analytical TOD predictions for the same camera as modeled in the ECOMOS project.

  20. Real-time particle tracking for studying intracellular trafficking of pharmaceutical nanocarriers.

    PubMed

    Huang, Feiran; Watson, Erin; Dempsey, Christopher; Suh, Junghae

    2013-01-01

    Real-time particle tracking is a technique that combines fluorescence microscopy with object tracking and computing and can be used to extract quantitative transport parameters for small particles inside cells. Since the success of a nanocarrier can often be determined by how effectively it delivers cargo to the target organelle, understanding the complex intracellular transport of pharmaceutical nanocarriers is critical. Real-time particle tracking provides insight into the dynamics of the intracellular behavior of nanoparticles, which may lead to significant improvements in the design and development of novel delivery systems. Unfortunately, this technique is not often fully understood, limiting its implementation by researchers in the field of nanomedicine. In this chapter, one of the most complicated aspects of particle tracking, the mean square displacement (MSD) calculation, is explained in a simple manner designed for the novice particle tracker. Pseudo code for performing the MSD calculation in MATLAB is also provided. This chapter contains clear and comprehensive instructions for a series of basic procedures in the technique of particle tracking. Instructions for performing confocal microscopy of nanoparticle samples are provided, and two methods of determining particle trajectories that do not require commercial particle-tracking software are provided. Trajectory analysis and determination of the tracking resolution are also explained. By providing comprehensive instructions needed to perform particle-tracking experiments, this chapter will enable researchers to gain new insight into the intracellular dynamics of nanocarriers, potentially leading to the development of more effective and intelligent therapeutic delivery vectors.
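    The MSD calculation described above can be sketched in a few lines (Python here rather than the chapter's MATLAB pseudocode; the time-averaging over all start times is the standard formulation):

```python
import numpy as np

def mean_square_displacement(traj, dt):
    """Time-averaged MSD for a single trajectory.
    traj: (N, d) array of positions sampled every dt seconds.
    Returns lag times tau and MSD(tau) averaged over all start times."""
    traj = np.asarray(traj, dtype=float)
    n = len(traj)
    msd = np.empty(n - 1)
    for k in range(1, n):
        disp = traj[k:] - traj[:-k]                  # all displacements at lag k
        msd[k - 1] = (disp ** 2).sum(axis=1).mean()  # <|r(t+tau) - r(t)|^2>
    return np.arange(1, n) * dt, msd
```

    For normal diffusion the MSD grows linearly with lag time, and a diffusion coefficient follows from a straight-line fit to the first few lags (MSD ≈ 2dDτ in d dimensions); directed or confined transport shows up as super- or sub-linear growth.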

  1. Dosimetry of 192Ir sources used for endovascular brachytherapy

    NASA Astrophysics Data System (ADS)

    Reynaert, N.; Van Eijkeren, M.; Taeymans, Y.; Thierens, H.

    2001-02-01

    An in-phantom calibration technique for 192Ir sources used for endovascular brachytherapy is presented. Three different source lengths were investigated. The calibration was performed in a solid phantom using a Farmer-type ionization chamber at source to detector distances ranging from 1 cm to 5 cm. The dosimetry protocol for medium-energy x-rays, extended with a volume-averaging correction factor, was used to convert the chamber reading to dose to water. The air kerma strength of the sources was determined as well. EGS4 Monte Carlo calculations were performed to determine the depth dose distribution at distances ranging from 0.6 mm to 10 cm from the source centre. In this way we were able to convert the absolute dose rate at 1 cm distance to the reference point chosen at 2 mm distance. The Monte Carlo results were confirmed by radiochromic film measurements performed with a double-exposure technique. The dwell times to deliver a dose of 14 Gy at the reference point were determined and compared with results given by the source supplier (CORDIS), who determined the dwell times from a Sievert integration technique based on the source activity. The results from both methods agreed to within 2% for the 12 sources that were evaluated. A Visual Basic routine that superimposes dose distributions, based on the Monte Carlo calculations and the in-phantom calibration, onto intravascular ultrasound images is presented. This routine can be used as an online treatment planning program.

  2. Is basic science disappearing from medicine? The decline of biomedical research in the medical literature.

    PubMed

    Steinberg, Benjamin E; Goldenberg, Neil M; Fairn, Gregory D; Kuebler, Wolfgang M; Slutsky, Arthur S; Lee, Warren L

    2016-02-01

    Explosive growth in our understanding of genomics and molecular biology has fueled calls for the pursuit of personalized medicine, the notion of harnessing biologic variability to provide patient-specific care. This vision will necessitate a deep understanding of the underlying pathophysiology in each patient. Medical journals play a pivotal role in the education of trainees and clinicians, yet we suspected that the amount of basic science in the top medical journals has been in decline. We conducted an automated search strategy in PubMed to identify basic science articles and calculated the proportion of articles dealing with basic science in the highest impact journals for 8 different medical specialties from 1994 to 2013. We observed a steep decline (40-60%) in such articles over time in almost all of the journals examined. This rapid decline in basic science from medical journals is likely to affect practitioners' understanding of and interest in the basic mechanisms of disease and therapy. In this Life Sciences Forum, we discuss why this decline may be occurring and what it means for the future of science and medicine. © FASEB.

  3. Optical basicity and polarizability for copper-zinc doped sol-gel glasses

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kaur, G., E-mail: gkapds@gmail.com; Pandey, O. P.; Amjotkaur, E-mail: amjotkaur93@gmail.com

    2016-05-06

    CaO-SiO{sub 2}-B{sub 2}O{sub 3}-P{sub 2}O{sub 5} glasses have been studied by varying the ratios of copper oxide and zinc oxide. Glasses were prepared using the sol-gel technique. Optical basicity and oxide ion polarizability were calculated and discussed in relation to non-bridging oxygen ions (NBOs). Optical basicity is the average electron-donating capability of an oxide atom. All glasses showed little difference in optical basicity and polarizability values, but the CZ8 glass (20CaO-60SiO{sub 2}-5B{sub 2}O{sub 3}-5P{sub 2}O{sub 5}-2CuO-8ZnO) showed the highest optical basicity and polarizability, with values of 0.5177 and 0.9798, respectively. This indicates the highest electron-donating tendency and the highest number of NBOs for the CZ8 glass. These were lowest for the CZ2 glass, with 8CuO and 2ZnO. In terms of optical basicity and polarizability, the glasses follow the series CZ2 < CZ4 < CZ6 < CZ8. Increasing the concentration of ZnO and decreasing the concentration of CuO lead to higher optical basicity and oxide ion polarizability.

  4. Performance-Based Logistics Contracts: A Basic Overview

    DTIC Science & Technology

    2005-11-01

    Performance-Based Logistics (PBL) contracts provide services or support where the provider is held to customer-oriented performance requirements. The Navy began using PBL contracts in 1999, and since then, contract managers have reported improved availability and reduced customer wait... (CRM D0012881.A2/Final, November 2005)

  5. An Empirical Determination of Tasks Essential to Successful Performance as a Bulk Fertilizer Plant Worker. Determination of a Common Core of Basic Skills in Agribusiness and Natural Resources.

    ERIC Educational Resources Information Center

    Miller, Daniel R.; And Others

    To improve vocational educational programs in agriculture, occupational information on a common core of basic skills within the occupational area of the bulk fertilizer plant worker is presented in the revised task inventory survey. The purpose of the occupational survey was to identify a common core of basic skills which are performed and are…

  6. An Empirical Determination of Tasks Essential to Successful Performance as an Animal Health Assistant. Determination of a Common Core of Basic Skills in Agribusiness and Natural Resources.

    ERIC Educational Resources Information Center

    Cooke, Fred C.; And Others

    To improve vocational educational programs in agriculture, occupational information on a common core of basic skills within the occupational area of the animal health assistant is presented in the revised task inventory survey. The purpose of the occupational survey was to identify a common core of basic skills which are performed and are…

  7. An Empirical Determination of Tasks Essential to Successful Performance as a Swine Farmer. Determination of a Common Core of Basic Skills in Agribusiness and Natural Resources.

    ERIC Educational Resources Information Center

    Byrd, J. Rick; And Others

    To improve vocational educational programs in agriculture, occupational information on a common core of basic skills within the occupational area of the swine farmer is presented in the revised task inventory survey. The purpose of the occupational survey was to identify a common core of basic skills which are performed and are essential for…

  8. An Empirical Determination of Tasks Essential to Successful Performance as a Tree Service Worker. Determination of a Common Core of Basic Skills in Agribusiness and Natural Resources.

    ERIC Educational Resources Information Center

    Waddy, Paul H.; And Others

    To improve vocational educational programs in agriculture, occupational information on a common core of basic skills within the occupational area of the tree service worker is presented in the revised task inventory survey. The purpose of the occupational survey was to identify a common core of basic skills which are performed and are essential…

  9. 40 CFR 98.198 - Definitions.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... GREENHOUSE GAS REPORTING Lime Manufacturing § 98.198 Definitions. All terms used in this subpart have the... to Subpart S of Part 98—Basic Parameters for the Calculation of Emission Factors for Lime Production...

  10. Bare Walls? Get Better Coverage from these Bulletin Boards.

    ERIC Educational Resources Information Center

    Bates, Sara; Wisler, Diane

    1983-01-01

    Five versatile basic backgrounds for bulletin boards are described. Many different options are suggested when using a sun, calculator, monkey, roller skate, or elephant as the basis for the bulletin board. (JMK)

  11. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sharma, V.K.; Patel, A.S.; Sharma, A.

    This paper presents the design of the magnetic coil for a relativistic magnetron (RM) for the LIA (Linear Induction Accelerator)-400 system. Vacuum improves the efficiency of the RM for HPM generation. The magnetic field in an RM is a very critical parameter and should be nearly constant in the active region. Typical coils are helical in nature, with multiple turns of varying radius. Magnetic field calculations for such coils using the basic equations for Helmholtz coils or a solenoid of mean radius can only give an estimate. Field computation software such as CST requires a small mesh size and a distant boundary, and therefore consumes large amounts of memory and computation time. Here, the helical coils are simplified such that the basic law of magnetic field calculation, i.e. the Biot-Savart law, can be applied with less complexity. Pairs of spiral coils have been analyzed for magnetic field and Lorentz force. The approach is validated experimentally. (author)
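    The simplification rests on the Biot-Savart result for a circular loop. A sketch of the on-axis field of a Helmholtz-style pair follows; the coil dimensions and current in the test values are illustrative, not the LIA-400 design values:

```python
import math

MU0 = 4e-7 * math.pi  # vacuum permeability, T*m/A

def loop_axis_field(current, radius, z):
    """On-axis field of a single circular loop (Biot-Savart result):
    B = mu0 * I * R^2 / (2 * (R^2 + z^2)^(3/2))."""
    return MU0 * current * radius ** 2 / (2 * (radius ** 2 + z ** 2) ** 1.5)

def helmholtz_field(current, radius, z, turns=1):
    """Pair of coaxial loops at z = +/- R/2 carrying equal co-directed
    current; z is measured from the midpoint of the pair."""
    return turns * (loop_axis_field(current, radius, z - radius / 2)
                    + loop_axis_field(current, radius, z + radius / 2))
```

    At the midpoint this reproduces the textbook value B = (4/5)^(3/2) μ0 N I / R, and the field stays nearly constant over a region around the center, which is the near-uniformity the magnetron design requires.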

  12. Basic biostatistics for post-graduate students

    PubMed Central

    Dakhale, Ganesh N.; Hiware, Sachin K.; Shinde, Abhijit T.; Mahatme, Mohini S.

    2012-01-01

    Statistical methods are important to draw valid conclusions from the obtained data. This article provides background information on fundamental methods and techniques in biostatistics for postgraduate students. The main focus is on types of data, measures of central tendency and variation, and basic tests useful for analysing different types of observations. Parameters such as the normal distribution, sample size calculation, level of significance, the null hypothesis, indices of variability, and different tests are explained in detail with suitable examples. Using these guidelines, we are confident that postgraduate students will be able to classify the distribution of data and apply the proper test. Information is also given on various free software programs and websites useful for statistical calculations. Thus, postgraduate students will benefit whether they opt for academia or industry. PMID:23087501
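
Sample size calculation, one of the topics the article covers, can be sketched with the standard normal-approximation formula for comparing two means; this is a generic illustration, not code from the article.

```python
import math
from statistics import NormalDist

def sample_size_two_means(sigma, delta, alpha=0.05, power=0.80):
    """Per-group sample size for a two-sample comparison of means
    (normal approximation): n = 2 * (z_{1-a/2} + z_{power})^2 * sigma^2 / delta^2.
    sigma: common standard deviation; delta: smallest difference worth detecting."""
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)
    z_beta = NormalDist().inv_cdf(power)
    n = 2 * ((z_alpha + z_beta) ** 2) * sigma**2 / delta**2
    return math.ceil(n)

# Detect a 0.5-SD difference with 80% power at the 5% significance level
n_per_group = sample_size_two_means(sigma=1.0, delta=0.5)  # -> 63
```

Halving the detectable difference quadruples the required sample size, which the formula makes explicit.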

  13. The Web as an educational tool for/in learning/teaching bioinformatics statistics.

    PubMed

    Oliver, J; Pisano, M E; Alonso, T; Roca, P

    2005-12-01

    Statistics provides essential tools in bioinformatics for interpreting the results of a database search and for managing the enormous amounts of information produced by genomics, proteomics and metabolomics. The goal of this project was the development of a software tool, as simple as possible, to demonstrate the use of statistics in bioinformatics. Computer Simulation Methods (CSMs) developed using Microsoft Excel were chosen for their broad range of applications, immediate and easy formula calculation, immediate testing, easy graphical representation, and general use and acceptance by the scientific community. The result of these endeavours is a set of utilities accessible at the following URL: http://gmein.uib.es/bioinformatica/statistics. When tested on students with previous coursework under traditional statistical teaching methods, the overall consensus was that Web-based instruction had numerous advantages, but that traditional methods with manual calculations were also needed for theory and practice. Once the basic statistical formulas were mastered, Excel spreadsheets and graphics proved very useful for trying many parameters rapidly without tedious calculations. CSMs will be of great importance for the training of students and professionals in the field of bioinformatics, and for upcoming applications in self-directed and continuing education.

  14. Structural, elastic, electronic, optical and thermoelectric properties of the Zintl-phase Ae3AlAs3 (Ae = Sr, Ba)

    NASA Astrophysics Data System (ADS)

    Benahmed, A.; Bouhemadou, A.; Alqarni, B.; Guechi, N.; Al-Douri, Y.; Khenata, R.; Bin-Omran, S.

    2018-05-01

    First-principles calculations were performed to investigate the structural, elastic, electronic, optical and thermoelectric properties of the Zintl-phase Ae3AlAs3 (Ae = Sr, Ba) using two complementary approaches based on density functional theory. The pseudopotential plane-wave method was used to explore the structural and elastic properties, whereas the full-potential linearised augmented plane wave approach was used to study the structural, electronic, optical and thermoelectric properties. The calculated structural parameters are in good agreement with the corresponding measured ones. The single-crystal and polycrystalline elastic constants and related properties were examined in detail. The electronic properties, including energy band dispersions, densities of states and charge-carrier effective masses, were computed using the Tran-Blaha modified Becke-Johnson functional for the exchange-correlation potential. Both studied compounds are found to be direct band gap semiconductors. The frequency dependence of the linear optical functions was predicted for a wide photon energy range, up to 15 eV. The charge-carrier concentration and temperature dependences of the basic thermoelectric parameters were explored using the semi-classical Boltzmann transport model. Our calculations reveal that the studied compounds are characterised by a high thermopower for both carrier types, with p-type conduction being the more favourable.

  15. Pareto Joint Inversion of Love and Quasi-Rayleigh Waves - A Synthetic Study

    NASA Astrophysics Data System (ADS)

    Bogacz, Adrian; Dalton, David; Danek, Tomasz; Miernik, Katarzyna; Slawinski, Michael A.

    2017-04-01

    This contribution presents a specific application of Pareto joint inversion to a geophysical problem. The Pareto criterion combined with Particle Swarm Optimization was used to solve geophysical inverse problems for Love and quasi-Rayleigh waves. The basic theory of the forward-problem calculation for the chosen surface waves is described. To avoid computational problems, some simplifications were made; this allowed faster and more straightforward calculation without loss of generality of the solution. Under the restrictions of the solution scheme, the considered model must have exactly two layers: an elastic isotropic surface layer and an elastic isotropic half-space of infinite thickness. The aim of the inversion is to obtain the elastic parameters and model geometry from dispersion data. Different cases were considered in the calculations, such as different numbers of modes for the two wave types and different frequencies. The solutions use the OpenMP standard for parallel computing, which helps reduce computation times. The results of the experimental computations are presented and discussed. This research was performed in the context of The Geomechanics Project supported by Husky Energy. This research was also partially supported by the Natural Sciences and Engineering Research Council of Canada, grant 238416-2013, and by the Polish National Science Center under contract No. DEC-2013/11/B/ST10/0472.
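
Particle Swarm Optimization, the search engine used in the inversion, can be sketched in a few lines. The example below is a generic PSO minimizing a toy two-parameter misfit (the "layer thickness" and "shear speed" targets are hypothetical stand-ins), not the authors' implementation.

```python
import random

def pso_minimize(objective, bounds, n_particles=30, n_iters=200, seed=1):
    """Minimal particle swarm optimizer. bounds: list of (lo, hi) per dimension."""
    rng = random.Random(seed)
    dim = len(bounds)
    pos = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]                    # personal best positions
    pbest_val = [objective(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]   # global best
    w, c1, c2 = 0.7, 1.5, 1.5                      # inertia, cognitive, social weights
    for _ in range(n_iters):
        for i in range(n_particles):
            for d in range(dim):
                vel[i][d] = (w * vel[i][d]
                             + c1 * rng.random() * (pbest[i][d] - pos[i][d])
                             + c2 * rng.random() * (gbest[d] - pos[i][d]))
                pos[i][d] = min(max(pos[i][d] + vel[i][d], bounds[d][0]), bounds[d][1])
            val = objective(pos[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val

# Toy misfit: recover a hypothetical layer thickness 2.0 km and shear speed 3.5 km/s
misfit = lambda m: (m[0] - 2.0) ** 2 + (m[1] - 3.5) ** 2
best, best_val = pso_minimize(misfit, bounds=[(0.1, 10.0), (1.0, 6.0)])
```

In a Pareto joint inversion, the single `misfit` would be replaced by a weighted or dominance-based combination of the Love-wave and quasi-Rayleigh-wave misfits.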

  16. Computer programs for calculating potential flow in propulsion system inlets

    NASA Technical Reports Server (NTRS)

    Stockman, N. O.; Button, S. L.

    1973-01-01

    In the course of designing inlets, particularly for VTOL and STOL propulsion systems, a calculation procedure utilizing three computer programs evolved. The chief program is the Douglas axisymmetric potential flow program, called EOD, which calculates the incompressible potential flow about arbitrary axisymmetric bodies. The other two programs, original with Lewis, are called SCIRCL and COMBYN. Program SCIRCL generates input for EOD from various specified analytic shapes for the inlet components. Program COMBYN takes the basic solutions output by EOD, combines them into solutions of interest, and applies a compressibility correction.

  17. Enhanced Basicity of Push-Pull Nitrogen Bases in the Gas Phase.

    PubMed

    Raczyńska, Ewa D; Gal, Jean-François; Maria, Pierre-Charles

    2016-11-23

    Nitrogen bases containing one or more pushing amino-group(s) directly linked to a pulling cyano, imino, or phosphoimino group, as well as those in which the pushing and pulling moieties are separated by a conjugated spacer (C═X)n, where X is CH or N, display an exceptionally strong basicity. The n-π conjugation between the pushing and pulling groups in such systems lowers the basicity of the pushing amino-group(s) and increases the basicity of the pulling cyano, imino, or phosphoimino group. In the gas phase, most of the so-called push-pull nitrogen bases exhibit a very high basicity. This paper presents an analysis of this exceptional gas-phase basicity, mostly in terms of experimental data, in relation to the structure and conjugation of various subfamilies of push-pull nitrogen bases: nitriles, azoles, azines, amidines, guanidines, vinamidines, biguanides, and phosphazenes. The strong basicity of biomolecules containing a push-pull nitrogen substructure, such as bioamines, amino acids and peptides with push-pull side chains, nucleobases, and their nucleosides and nucleotides, is also analyzed. Progress and perspectives in the experimental determination of GBs and PAs of highly basic compounds, termed "superbases", are presented and benchmarked against theoretical calculations on existing or hypothetical molecules.

  18. Increasing the efficiency of designing hemming processes by using an element-based metamodel approach

    NASA Astrophysics Data System (ADS)

    Kaiser, C.; Roll, K.; Volk, W.

    2017-09-01

    In the automotive industry, the manufacturing of automotive outer panels requires hemming processes in which two sheet metal parts are joined by bending the flange of the outer part over the inner part. Because of decreasing development times and the steadily growing number of vehicle derivatives, efficient digital product and process validation is necessary. Commonly used simulations based on the finite element method demand significant modelling effort, which is a disadvantage especially in the early product development phase. To increase the efficiency of designing hemming processes, this paper presents a hemming-specific metamodel approach. The approach includes a part analysis in which the outline of the automotive outer panel is first split into individual segments. By parametrizing each segment and assigning basic geometric shapes, the outline of the part is approximated. Based on this, hemming parameters such as flange length, roll-in, wrinkling and plastic strains are calculated for each of the basic geometric shapes by performing a metamodel-based segmental product validation. The metamodel is based on an element-like formulation that includes a reference dataset of various basic geometric shapes. A given automotive outer panel can then be analysed and optimized based on the hemming-specific database. Implementing this approach in a planning system enables efficient optimization of hemming process design. Furthermore, valuable time and cost benefits can be realized in a vehicle's development process.

  19. Preliminary basic performance analysis of the Cedar multiprocessor memory system

    NASA Technical Reports Server (NTRS)

    Gallivan, K.; Jalby, W.; Turner, S.; Veidenbaum, A.; Wijshoff, H.

    1991-01-01

    Some preliminary basic results on the performance of the Cedar multiprocessor memory system are presented. Empirical results are presented and used to calibrate a memory system simulator which is then used to discuss the scalability of the system.

  20. Fetal exposure to low frequency electric and magnetic fields

    NASA Astrophysics Data System (ADS)

    Cech, R.; Leitgeb, N.; Pediaditis, M.

    2007-02-01

    To investigate the interaction of low frequency electric and magnetic fields with pregnant women and in particular with the fetus, an anatomical voxel model of an 89 kg woman at week 30 of pregnancy was developed. Intracorporal electric current density distributions due to exposure to homogeneous 50 Hz electric and magnetic fields were calculated and results were compared with basic restrictions recommended by ICNIRP guidelines. It could be shown that the basic restriction is met within the central nervous system (CNS) of the mother at exposure to reference level of either electric or magnetic fields. However, within the fetus the basic restriction is considerably exceeded. Revision of reference levels might be necessary.

  1. Rapid Deterioration of Basic Life Support Skills in Dentists With Basic Life Support Healthcare Provider.

    PubMed

    Nogami, Kentaro; Taniguchi, Shogo; Ichiyama, Tomoko

    2016-01-01

    The aim of this study was to investigate the correlation between basic life support skills in dentists who had completed the American Heart Association's Basic Life Support (BLS) Healthcare Provider qualification and the time since course completion. Thirty-six dentists who had completed the 2005 BLS Healthcare Provider course participated in the study. We asked participants to perform 2 cycles of cardiopulmonary resuscitation on a mannequin and evaluated their basic life support skills. Dentists who had previously completed the BLS Healthcare Provider course displayed prolonged reaction times, and the quality of their basic life support skills deteriorated rapidly. There were no correlations between basic life support skills and time since course completion. Our results suggest that basic life support skills deteriorate rapidly in dentists who have completed the BLS Healthcare Provider course. Newer guidelines stressing chest compressions over ventilation may help improve performance over time, allowing better cardiopulmonary resuscitation in dental office emergencies. Moreover, it may be effective to provide a more specialized version of the life support course to train dentists, stressing issues that are more likely to occur in the dental office.

  2. Scoring the correlation of genes by their shared properties using OScal, an improved overlap quantification model.

    PubMed

    Liu, Hui; Liu, Wei; Lin, Ying; Liu, Teng; Ma, Zhaowu; Li, Mo; Zhang, Hong-Mei; Kenneth Wang, Qing; Guo, An-Yuan

    2015-05-27

    Scoring the correlation between two genes by their shared properties is a common and basic task in biological studies. A promising way to score this correlation is to quantify the overlap between the two sets of homogeneous properties of the two genes. However, the proper model has not been settled; here we focused on the quantification of overlap and, after theoretically comparing 7 existing models, proposed a more effective one. We defined three characteristic parameters (d, R, r) of an overlap, which highlight essential differences among the 7 models and group them into two classes. The pros and cons of the two groups of models were then fully examined via their solution spaces in the (d, R, r) coordinate system. Finally, we proposed a new model called OScal (Overlap Score calculator), a modification of the Poisson distribution (one of the 7 models) that avoids its disadvantages. Tested on assessing gene relatedness with different data, OScal performs better than existing models. In addition, OScal is a basic mathematical model with very low computational cost and few restrictive conditions, so it can be used in a wide range of research areas to measure the overlap or similarity of two entities.
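
OScal's formula is not reproduced in the abstract, but the underlying idea of quantifying set overlap can be illustrated with two standard models; the Jaccard index and the hypergeometric tail probability below are generic examples, not OScal itself, and the GO-term sets are hypothetical.

```python
from math import comb

def jaccard(a, b):
    """Simple overlap score: |A ∩ B| / |A ∪ B|."""
    a, b = set(a), set(b)
    return len(a & b) / len(a | b) if a | b else 0.0

def hypergeom_overlap_pvalue(n_universe, n_a, n_b, n_overlap):
    """P(overlap >= n_overlap) when property sets of sizes n_a and n_b are
    drawn at random from a universe of n_universe properties (hypergeometric tail)."""
    p = 0.0
    for k in range(n_overlap, min(n_a, n_b) + 1):
        p += comb(n_a, k) * comb(n_universe - n_a, n_b - k) / comb(n_universe, n_b)
    return p

# Hypothetical annotation sets for two genes
props_a = {"GO:0001", "GO:0002", "GO:0003", "GO:0004"}
props_b = {"GO:0003", "GO:0004", "GO:0005"}
score = jaccard(props_a, props_b)  # 2 shared / 5 total = 0.4
```

A low hypergeometric p-value indicates that the observed overlap is unlikely under random assignment, which is the usual statistical reading of a high overlap score.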

  3. Realistic absorption coefficient of each individual film in a multilayer architecture

    NASA Astrophysics Data System (ADS)

    Cesaria, M.; Caricato, A. P.; Martino, M.

    2015-02-01

    A spectrophotometric strategy, termed the multilayer method (ML-method), is presented and discussed for realistically calculating the absorption coefficient of each individual layer embedded in a multilayer architecture without reverse engineering, numerical refinement, or assumptions about layer homogeneity and thickness. The strategy extends, in a non-straightforward way, a consolidated route already published by the authors and here termed the basic method, which accurately characterizes an absorbing film covering a transparent substrate. The ML-method inherently accounts for non-measurable contributions of the interfaces (including multiple reflections), describes the specific film structure as determined by the multilayer architecture and the deposition approach and parameters used, exploits simple mathematics, and has a wide range of applicability (high-to-weak absorption regions, thick-to-ultrathin films). Reliability tests were performed on films and multilayers based on a well-known material (indium tin oxide) by deliberately changing the film structural quality through doping, thickness tuning and the underlying supporting film. The results are consistent with information obtained by standard (optical and structural) analysis, with the basic method, and with band gap values reported in the literature. The example applications discussed demonstrate the ability of the ML-method to overcome the drawbacks that commonly limit an accurate description of multilayer architectures.

  4. Relationships of Mathematics Anxiety, Mathematics Self-Efficacy and Mathematics Performance of Adult Basic Education Students

    ERIC Educational Resources Information Center

    Watts, Beverly Kinsey

    2011-01-01

    Competent mathematical skills are needed in the workplace as well as in the college setting. Adults in Adult Basic Education classes and programs generally perform below high school level competency, but very few studies have been performed investigating the predictors of mathematical success for adults. The current study contributes to the…

  5. Effects of obligatory training and prior training experience on attitudes towards performing basic life support: a questionnaire survey.

    PubMed

    Matsubara, Hiroki; Enami, Miki; Hirose, Keiko; Kamikura, Takahisa; Nishi, Taiki; Takei, Yutaka; Inaba, Hideo

    2015-04-01

    To determine the effect of Japanese obligatory basic life support training for new driver's license applicants on their willingness to carry out basic life support. We distributed a questionnaire to 9,807 participants of basic life support courses in authorized driving schools from May 2007 to April 2008 after the release of the 2006 Japanese guidelines. The questionnaire explored the participants' willingness to perform basic life support in four hypothetical scenarios: cardiopulmonary resuscitation on one's own initiative; compression-only cardiopulmonary resuscitation following telephone cardiopulmonary resuscitation; early emergency call; and use of an automated external defibrillator. The questionnaire was given at the beginning of the basic life support course in the first 6-month term and at the end in the second 6-month term. The 9,011 fully completed answer sheets were analyzed. The training significantly increased the proportion of respondents willing to use an automated external defibrillator and to perform cardiopulmonary resuscitation on their own initiative in those with and without prior basic life support training experience. It significantly increased the proportion of respondents willing to carry out favorable actions in all four scenarios. In multiple logistic regression analysis, basic life support training and prior training experiences within 3 years were associated with the attitude. The analysis of reasons for unwillingness suggested that the training reduced the lack of confidence in their skill but did not attenuate the lack of confidence in detection of arrest or clinical judgment to initiate a basic life support action. Obligatory basic life support training should be carried out periodically and modified to ensure that participants gain confidence in judging and detecting cardiac arrest.

  6. Monte Carlo Techniques for Nuclear Systems - Theory Lectures

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Brown, Forrest B.

    These are lecture notes for a Monte Carlo class given at the University of New Mexico. The following topics are covered: course information; nuclear engineering review & MC; random numbers and sampling; computational geometry; collision physics; tallies and statistics; eigenvalue calculations I, II, and III; variance reduction; parallel Monte Carlo; parameter studies; fission matrix and higher eigenmodes; Doppler broadening; Monte Carlo depletion; HTGR modeling; coupled MC and T/H calculations; and fission energy deposition. Solving particle transport problems with the Monte Carlo method is simple - just simulate the particle behavior. The devil is in the details, however. These lectures provide a balanced approach to the theory and practice of Monte Carlo simulation codes. The first lectures provide an overview of Monte Carlo simulation methods, covering the transport equation, random sampling, computational geometry, collision physics, and statistics. The next lectures focus on the state of the art in Monte Carlo criticality simulations, covering the theory of eigenvalue calculations, convergence analysis, dominance ratio calculations, bias in Keff and tallies, bias in uncertainties, a case study of a realistic calculation, and Wielandt acceleration techniques. The remaining lectures cover advanced topics, including HTGR modeling and stochastic geometry, temperature dependence, fission energy deposition, depletion calculations, parallel calculations, and parameter studies. This portion of the class focuses on using MCNP to perform criticality calculations for reactor physics and criticality safety applications. It is an intermediate-level class, intended for those with at least some familiarity with MCNP. Class examples provide hands-on experience at running the code, plotting both geometry and results, and understanding the code output.
The class includes lectures and hands-on computer use for a variety of Monte Carlo calculations. Beginning MCNP users are encouraged to review LA-UR-09-00380, "Criticality Calculations with MCNP: A Primer (3rd Edition)" (available at http://mcnp.lanl.gov under "Reference Collection") prior to the class. No Monte Carlo class can be complete without having students write their own simple Monte Carlo routines for basic random sampling, use of the random number generator, and simplified particle transport simulation.
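
In the spirit of the closing remark, that students should write their own simple Monte Carlo routines, here is a minimal example of basic random sampling applied to simplified particle transport: estimating uncollided transmission through a slab by sampling exponential path lengths. The cross-section and thickness values are arbitrary; this is a classroom-style sketch, not an excerpt from the lectures.

```python
import math
import random

def transmission_mc(sigma_t, thickness, n_particles=200_000, seed=42):
    """Analog Monte Carlo estimate of uncollided transmission through a slab:
    sample the distance to first collision d = -ln(xi)/sigma_t for each particle
    and count those with d > thickness. Exact answer: exp(-sigma_t * thickness)."""
    rng = random.Random(seed)
    transmitted = sum(
        1 for _ in range(n_particles)
        # 1 - random() lies in (0, 1], so the log is always defined
        if -math.log(1.0 - rng.random()) / sigma_t > thickness
    )
    return transmitted / n_particles

estimate = transmission_mc(sigma_t=1.0, thickness=2.0)  # 2 mean free paths
exact = math.exp(-2.0)                                  # ~0.1353
```

The statistical error shrinks as 1/sqrt(N), which is the basic convergence behavior the statistics lectures formalize.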

  7. Large Scale Many-Body Perturbation Theory calculations: methodological developments, data collections, validation

    NASA Astrophysics Data System (ADS)

    Govoni, Marco; Galli, Giulia

    Green's function based many-body perturbation theory (MBPT) methods are well established approaches to compute quasiparticle energies and electronic lifetimes. However, their application to large systems - for instance to heterogeneous systems, nanostructured, disordered, and defective materials - has been hindered by high computational costs. We will discuss recent MBPT methodological developments leading to an efficient formulation of electron-electron and electron-phonon interactions, and that can be applied to systems with thousands of electrons. Results using a formulation that does not require the explicit calculation of virtual states, nor the storage and inversion of large dielectric matrices will be presented. We will discuss data collections obtained using the WEST code, the advantages of the algorithms used in WEST over standard techniques, and the parallel performance. Work done in collaboration with I. Hamada, R. McAvoy, P. Scherpelz, and H. Zheng. This work was supported by MICCoM, as part of the Computational Materials Sciences Program funded by the U.S. Department of Energy, Office of Science, Basic Energy Sciences, Materials Sciences and Engineering Division and by ANL.

  8. Prediction of dynamic and aerodynamic characteristics of the centrifugal fan with forward curved blades

    NASA Astrophysics Data System (ADS)

    Polanský, Jiří; Kalmár, László; Gášpár, Roman

    2013-12-01

    The main aim of this paper is to determine the aerodynamic characteristics of a centrifugal fan with forward-curved blades based on numerical modelling. Three variants of the geometry were investigated. The first, basic variant "A" contains 12 blades. The geometry of the second variant, "B", contains 12 blades and 12 semi-blades of optimal length [1]. The third, control variant "C" contains 24 blades without semi-blades. Numerical calculations were performed with ANSYS CFD. Another aim of this paper is to compare the results of the numerical simulation with those of an approximate numerical procedure. The applied approximate procedure [2] determines the characteristics of turbulent flow in the bladed space of a centrifugal fan impeller; it is an extension of the hydrodynamic cascade theory for incompressible and inviscid fluid flow. The paper also partially compares the results of the numerical simulation with results from the experimental investigation. Acoustic phenomena observed during the experiment manifested during the numerical simulation as deterioration of calculation stability, oscillation of the residuals, and thus oscillation of the flow field. Pressure pulsations are evaluated using frequency analysis for each variant and working condition.
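
The frequency analysis used to evaluate pressure pulsations can be sketched with a plain discrete Fourier transform. The blade-passing frequency, sampling rate, and signal in this stdlib-only example are hypothetical, not values from the study.

```python
import cmath
import math

def dominant_frequency(signal, sample_rate):
    """Return the frequency (Hz) with the largest DFT magnitude, ignoring the
    DC component. Plain O(N^2) DFT, standard library only."""
    n = len(signal)
    best_k, best_mag = 1, 0.0
    for k in range(1, n // 2):
        coeff = sum(signal[t] * cmath.exp(-2j * math.pi * k * t / n)
                    for t in range(n))
        if abs(coeff) > best_mag:
            best_k, best_mag = k, abs(coeff)
    return best_k * sample_rate / n

# Synthetic pressure trace: a 12-blade impeller at 10 rev/s gives a
# blade-passing tone at 120 Hz, plus a weaker 10 Hz shaft-rate component
fs = 512
sig = [math.sin(2 * math.pi * 120 * t / fs) + 0.3 * math.sin(2 * math.pi * 10 * t / fs)
       for t in range(fs)]
bpf = dominant_frequency(sig, fs)  # -> 120.0
```

On real CFD monitor-point data one would use an FFT over windowed segments, but the peak-picking idea is the same.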

  9. 3-D parallel program for numerical calculation of gas dynamics problems with heat conductivity on distributed memory computational systems (CS)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sofronov, I.D.; Voronin, B.L.; Butnev, O.I.

    1997-12-31

    The aim of the work performed is to develop a 3D parallel program for numerical calculation of gas dynamics problems with heat conductivity on distributed memory computational systems (CS), satisfying the condition that the numerical results be independent of the number of processors involved. Two basically different approaches to structuring massively parallel computations have been developed. The first approach uses a 3D data matrix decomposition reconstructed at each temporal cycle and is a development of parallelization algorithms for multiprocessor CS with shared memory. The second approach is based on a 3D data matrix decomposition that is not reconstructed during a temporal cycle. The program was developed on the 8-processor CS MP-3 made at VNIIEF and was adapted to the massively parallel CS Meiko-2 at LLNL by the joint efforts of the VNIIEF and LLNL staffs. A large number of numerical experiments have been carried out with different numbers of processors, up to 256, and the efficiency of parallelization has been evaluated as a function of processor number and parameters.

  10. A Z-number-based decision making procedure with ranking fuzzy numbers method

    NASA Astrophysics Data System (ADS)

    Mohamad, Daud; Shaharani, Saidatull Akma; Kamis, Nor Hanimah

    2014-12-01

    The theory of fuzzy sets has been in the limelight of various applications in decision-making problems due to its usefulness in portraying human perception and subjectivity. Generally, the evaluation in the decision-making process is represented in the form of linguistic terms, and the calculation is performed using fuzzy numbers. In 2011, Zadeh extended this concept by presenting the idea of the Z-number, a 2-tuple of fuzzy numbers that describes the restriction and the reliability of the evaluation. The element of reliability in the evaluation is essential, as it affects the final result. Since this concept is still relatively new, methods that incorporate reliability for solving decision-making problems remain scarce. In this paper, a decision-making procedure based on Z-numbers is proposed. Due to the limitations of their basic properties, Z-numbers are first transformed into fuzzy numbers for simpler calculation. A method of ranking fuzzy numbers is then used to prioritize the alternatives. A risk analysis problem is presented to illustrate the effectiveness of the proposed procedure.
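
As a hedged illustration of the last two steps, transforming to fuzzy numbers and then ranking them, the sketch below ranks triangular fuzzy numbers by their centroid, one common defuzzification-based ranking method. The alternatives and scores are hypothetical, and the paper's actual transformation and ranking method may differ.

```python
def centroid(tfn):
    """Centroid (x-coordinate) of a triangular fuzzy number (a, b, c)."""
    a, b, c = tfn
    return (a + b + c) / 3.0

def rank_alternatives(scores):
    """Rank alternatives (name -> triangular fuzzy score) by centroid, best first."""
    return sorted(scores, key=lambda name: centroid(scores[name]), reverse=True)

# Hypothetical fuzzy evaluations, e.g. after converting Z-numbers to fuzzy numbers
scores = {
    "A1": (0.4, 0.6, 0.8),   # centroid 0.60
    "A2": (0.5, 0.7, 0.9),   # centroid 0.70
    "A3": (0.2, 0.5, 0.7),   # centroid ~0.47
}
ranking = rank_alternatives(scores)  # -> ['A2', 'A1', 'A3']
```

The centroid collapses each fuzzy score to a single representative value, after which ordinary sorting prioritizes the alternatives.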

  11. Green synthesis, structure and fluorescence spectra of new azacyanine dyes

    NASA Astrophysics Data System (ADS)

    Enchev, Venelin; Gadjev, Nikolai; Angelov, Ivan; Minkovska, Stela; Kurutos, Atanas; Markova, Nadezhda; Deligeorgiev, Todor

    2017-11-01

    A series of symmetric and unsymmetric monomethine azacyanine dyes (monomethine azacyanines and merocyanine sulfobetaines) was synthesized in moderate to high yields via a novel method using microwave irradiation. The compounds are derived from a condensation reaction between 2-thiomethylbenzothiazolium salts and 2-imino-3-methylbenzothiazolines proceeding under microwave irradiation. The synthetic approach involves the use of a green solvent and the absence of a basic reagent. TD-DFT calculations were performed to simulate the absorption and fluorescence spectra of the synthesized dyes. Absorption maxima, λmax, of the studied dyes were found in the range 364-394 nm. Molar absorptivities were evaluated at between 40300 and 59200 mol-1 dm3 cm-1. Fluorescence maxima, λfl, were registered around 418-448 nm upon excitation at 350 nm. Slight displacements of the theoretically estimated absorption maxima relative to the experimental ones are observed. The differences are most probably due to the fact that the DFT calculations were carried out without taking the solvent effect into account. In addition, the merocyanine sulfobetaines also fluoresce in the blue optical range (420-480 nm) upon excitation in the red range (630-650 nm).

  12. Thiolysis and alcoholysis of phosphate tri- and monoesters with alkyl and aryl leaving groups. An ab initio study in the gas phase.

    PubMed

    Arantes, Guilherme Menegon; Chaimovich, Hernan

    2005-06-30

    Phosphate esters are important compounds in living systems. Their biological reactions with alcohol and thiol nucleophiles are catalyzed by a large superfamily of phosphatase enzymes. However, very little is known about the intrinsic reactivity of these nucleophiles with phosphorus centers. We have performed ab initio calculations on the thiolysis and alcoholysis at phosphorus of trimethyl phosphate, dimethyl phenyl phosphate, methyl phosphate, and phenyl phosphate. Results in the gas phase are a reference for the study of the intrinsic reactivity of these compounds. Thiolysis of triesters was much slower and less favorable than the corresponding alcoholysis. Triesters reacted through an associative mechanism. Monoesters can react by both associative and dissociative mechanisms. The basicity of the attacking and leaving groups and the possibility of proton transfers can modulate the reaction mechanisms. Intermediates formed along associative reactions did not follow empirically proposed rules for ligand positioning. Our calculations also allow re-interpretation of some experimental results, and new experiments are proposed to trace reactions that are normally not observed, both in the gas phase and in solution.

  13. Impact of rock mass temperature on potential power and electricity generation in the ORC installation

    NASA Astrophysics Data System (ADS)

    Kaczmarczyk, Michał

    2017-11-01

    The basic sources of information for determining the temperature distribution in the rock mass, and thus the potential for converting the thermal energy contained in geothermal water to electricity, are: temperature measurements under stable geothermic conditions, temperature measurements under unstable conditions, and measurements of maximum temperatures at the bottom of the well. Incorrect temperature estimation can lead to errors in the calculation of thermodynamic parameters and, consequently, in the economic viability of the project. The analysis was performed for a geothermal water temperature range of 86-100°C, for the dry working fluid R245fa. The calculated data indicate an increase in power output as the geothermal water temperature increases: at 86°C the potential power is 817.48 kW, increasing to 912.20 kW at 88°C and to 1493.34 kW at 100°C. These results are not surprising, but they show the scale of error in assessing the potential that can result from improper interpretation of the rock mass and geothermal water temperatures.
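
A rough sketch of how such potential-power figures arise: gross ORC power can be estimated as the heat extracted from the geothermal water times an assumed cycle efficiency. The flow rate, outlet temperature, and efficiency below are assumptions for illustration, not the study's values.

```python
def orc_gross_power(m_dot, t_in, t_out, eta_cycle, cp=4.19):
    """Rough ORC power estimate in kW: heat extracted from the geothermal water
    times an assumed cycle thermal efficiency.
    m_dot: water flow rate, kg/s; t_in/t_out: water temperatures, deg C;
    cp: specific heat of water, kJ/(kg*K)."""
    q_geo = m_dot * cp * (t_in - t_out)  # heat input to the cycle, kW
    return q_geo * eta_cycle

# Hypothetical case: 50 kg/s of water cooled from 86 C to 60 C, 8% cycle efficiency
p_86 = orc_gross_power(50.0, 86.0, 60.0, 0.08)     # ~436 kW
p_100 = orc_gross_power(50.0, 100.0, 60.0, 0.08)   # higher inlet temperature -> more power
```

The example reproduces the qualitative trend in the study, power rising with geothermal water temperature, though the study's figures come from a full thermodynamic cycle calculation for R245fa, not this linear estimate.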

  14. Regression analysis for solving diagnosis problem of children's health

    NASA Astrophysics Data System (ADS)

    Cherkashina, Yu A.; Gerget, O. M.

    2016-04-01

    This paper presents results of research devoted to the application of statistical techniques, namely regression analysis, to assess the health status of children in the neonatal period based on medical data (hemostatic parameters, blood test parameters, gestational age, vascular endothelial growth factor) measured at 3-5 days of life. A detailed description of the studied medical data is given, and a binary logistic regression procedure is discussed. The basic results of the research are presented: a classification table of predicted versus observed values is shown, and the overall percentage of correct recognition is determined. The regression equation coefficients are calculated, and the general regression equation is written from them. Based on the results of the logistic regression, ROC analysis was performed; the sensitivity and specificity of the model were calculated and ROC curves were constructed. These mathematical techniques enable diagnosis of children's health with a high quality of recognition. The results make a significant contribution to the development of evidence-based medicine and have high practical importance in the professional activity of the author.
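
A minimal sketch of the workflow described, fitting a binary logistic regression and summarizing the classification table by sensitivity and specificity, is shown below on synthetic data; the single "marker" feature and all numbers are hypothetical, not the paper's clinical data.

```python
import math
import random

def train_logistic(xs, ys, lr=0.1, epochs=2000):
    """Binary logistic regression fitted by batch gradient descent.
    xs: list of feature vectors; ys: 0/1 labels."""
    dim, n = len(xs[0]), len(xs)
    w, b = [0.0] * dim, 0.0
    for _ in range(epochs):
        gw, gb = [0.0] * dim, 0.0
        for x, y in zip(xs, ys):
            p = 1.0 / (1.0 + math.exp(-(sum(wi * xi for wi, xi in zip(w, x)) + b)))
            err = p - y
            for d in range(dim):
                gw[d] += err * x[d]
            gb += err
        w = [wi - lr * gi / n for wi, gi in zip(w, gw)]
        b -= lr * gb / n
    return w, b

def sensitivity_specificity(w, b, xs, ys, threshold=0.5):
    """Summarize the classification table: true-positive and true-negative rates."""
    tp = fn = tn = fp = 0
    for x, y in zip(xs, ys):
        p = 1.0 / (1.0 + math.exp(-(sum(wi * xi for wi, xi in zip(w, x)) + b)))
        pred = 1 if p >= threshold else 0
        if y == 1:
            tp, fn = tp + (pred == 1), fn + (pred == 0)
        else:
            tn, fp = tn + (pred == 0), fp + (pred == 1)
    return tp / (tp + fn), tn / (tn + fp)

# Synthetic data: one hypothetical marker, higher values indicating risk
rng = random.Random(0)
xs = ([[rng.gauss(1.0, 0.5)] for _ in range(100)]
      + [[rng.gauss(-1.0, 0.5)] for _ in range(100)])
ys = [1] * 100 + [0] * 100
w, b = train_logistic(xs, ys)
sens, spec = sensitivity_specificity(w, b, xs, ys)
```

Sweeping `threshold` from 0 to 1 and plotting sensitivity against (1 - specificity) yields the ROC curve described in the abstract.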

  15. Specific features of defect and mass transport in concentrated fcc alloys

    DOE PAGES

    Osetsky, Yuri N.; Béland, Laurent K.; Stoller, Roger E.

    2016-06-15

    We report that diffusion and mass transport are basic properties that control materials performance, such as phase stability, solute decomposition and radiation tolerance. While understanding diffusion in dilute alloys is a mature field, concentrated alloys are much less studied. Here, atomic-scale diffusion and mass transport via vacancies and interstitial atoms are compared in fcc Ni, Fe and the equiatomic Ni-Fe alloy. High-temperature properties were determined using conventional molecular dynamics on the microsecond timescale, whereas the kinetic activation-relaxation technique (k-ART) was applied at low temperatures. The k-ART was also used to calculate transition states in the alloy and defect transport coefficients. The calculations reveal several specific features. For example, vacancy and interstitial defects migrate via different alloy components, diffusion is more sluggish in the alloy and, notably, mass transport in the concentrated alloy cannot be predicted on the basis of diffusion in its pure metal counterparts. Lastly, the percolation threshold for defect diffusion in the alloy is discussed and it is suggested that this phenomenon depends on the properties and diffusion mechanisms of specific defects.
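
    For readers unfamiliar with the temperature dependence involved, defect diffusivity is commonly summarized in Arrhenius form, D = D0 * exp(-Em / (kB * T)); the sketch below uses placeholder values for D0 and the migration barrier Em, not the paper's results:

```python
import math

# Illustrative Arrhenius form for a vacancy diffusion coefficient.
# D0 and EM are assumed placeholder values, not the paper's results.
KB = 8.617e-5          # Boltzmann constant, eV/K
D0 = 1.0e-6            # pre-exponential factor, m^2/s (assumed)
EM = 1.1               # migration barrier, eV (assumed)

def diffusivity(temp_k):
    """D = D0 * exp(-Em / (kB * T))."""
    return D0 * math.exp(-EM / (KB * temp_k))

# Low vs high temperature: diffusion speeds up by orders of magnitude,
# which is why different methods (k-ART vs MD) suit different regimes.
d_low, d_high = diffusivity(600.0), diffusivity(1200.0)
print(d_low, d_high, d_high / d_low)
```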

  16. Micrometeoroid and Orbital Debris (MMOD) Shield Ballistic Limit Analysis Program

    NASA Technical Reports Server (NTRS)

    Ryan, Shannon

    2013-01-01

    This software implements penetration limit equations for common micrometeoroid and orbital debris (MMOD) shield configurations, windows, and thermal protection systems. Allowable MMOD risk is formulated in terms of the probability of no penetration (PNP) of the spacecraft pressure hull. For calculating the risk, spacecraft geometry models, mission profiles, debris environment models, and penetration limit equations for the installed shielding configurations are required. Risk assessment software such as NASA's BUMPERII is used to calculate mission PNP; however, such codes are unsuitable for use in shield design and preliminary analysis studies. This software defines a single equation for the design and performance evaluation of common MMOD shielding configurations, windows, and thermal protection systems, along with a description of their validity ranges and guidelines for their application. Recommendations are based on preliminary reviews of fundamental assumptions and of accuracy in predicting experimental impact test results. The software is programmed in Visual Basic for Applications for installation as a simple add-in for Microsoft Excel. The user is directed to a graphical user interface (GUI) that collects user inputs and provides solutions directly in Microsoft Excel workbooks.
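
    A ballistic limit equation of this kind maps impact conditions to a critical projectile diameter. The hedged sketch below uses a purely hypothetical power-law form with placeholder constants to show the shape of such an interface; it is NOT the validated set of equations implemented in the software:

```python
# Generic shape of a penetration (ballistic) limit curve: critical
# projectile diameter vs impact velocity. The functional form and the
# constants k, a, b are purely illustrative placeholders, NOT the
# validated equations implemented in the actual tool.
def critical_diameter_cm(velocity_km_s, wall_cm, density_g_cm3,
                         k=0.8, a=0.5, b=2.0 / 3.0):
    # Thicker walls raise the limit; denser, faster projectiles lower it.
    return k * wall_cm**a / (density_g_cm3**0.5 * velocity_km_s**b)

def penetrates(diam_cm, velocity_km_s, wall_cm, density_g_cm3):
    """True if the projectile exceeds the shield's ballistic limit."""
    return diam_cm > critical_diameter_cm(velocity_km_s, wall_cm,
                                          density_g_cm3)

# Hypothetical 0.5 cm aluminum-density particle at 7 km/s vs a thin wall.
print(penetrates(0.5, 7.0, 0.2, 2.7))
```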

  17. Rule-based simulation models

    NASA Technical Reports Server (NTRS)

    Nieten, Joseph L.; Seraphine, Kathleen M.

    1991-01-01

    Procedural modeling systems, rule based modeling systems, and a method for converting a procedural model to a rule based model are described. Simulation models are used to represent real time engineering systems. A real time system can be represented by a set of equations or functions connected so that they perform in the same manner as the actual system. Most modeling system languages are based on FORTRAN or some other procedural language; therefore, they must be enhanced with a reaction capability. Rule based systems are reactive by definition. Once the engineering system has been decomposed into a set of calculations using only basic algebraic unary operations, a knowledge network of calculations and functions can be constructed. The knowledge network required by a rule based system can be generated by a knowledge acquisition tool or a source level compiler. The compiler would take an existing model source file, a syntax template, and a symbol table and generate the knowledge network. Thus, existing procedural models can be translated and executed by a rule based system. Neural models can provide the high capacity data manipulation required by the most complex real time models.
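
    The decomposition into basic algebraic operations and a knowledge network of calculations can be illustrated with a tiny reactive rule network; the rule names and formulas below are hypothetical:

```python
# Minimal sketch of a "knowledge network": each rule computes one value
# from basic algebraic operations on its inputs, and derived values are
# recomputed reactively when an input changes. Names are hypothetical.
rules = {
    "thrust": (lambda mdot, ve: mdot * ve, ("mdot", "ve")),
    "accel":  (lambda thrust, mass: thrust / mass, ("thrust", "mass")),
}
inputs = {"mdot": 2.0, "ve": 3000.0, "mass": 1500.0}
cache = {}

def evaluate(name):
    """Demand-driven evaluation: fire only the rules a value depends on."""
    if name in inputs:
        return inputs[name]
    if name not in cache:
        func, deps = rules[name]
        cache[name] = func(*(evaluate(d) for d in deps))
    return cache[name]

def set_input(name, value):
    inputs[name] = value
    cache.clear()          # reactive: downstream rules re-fire on demand

print(evaluate("accel"))   # 2.0 * 3000 / 1500 = 4.0
set_input("mass", 3000.0)
print(evaluate("accel"))   # now 2.0
```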

  18. Procedure for calculating estimated ultimate recoveries of Bakken and Three Forks Formations horizontal wells in the Williston Basin

    USGS Publications Warehouse

    Cook, Troy A.

    2013-01-01

    Estimated ultimate recoveries (EURs) are a key component in determining productivity of wells in continuous-type oil and gas reservoirs. EURs form the foundation of a well-performance-based assessment methodology initially developed by the U.S. Geological Survey (USGS; Schmoker, 1999). This methodology was formally reviewed by the American Association of Petroleum Geologists Committee on Resource Evaluation (Curtis and others, 2001). The EUR estimation methodology described in this paper was used in the 2013 USGS assessment of continuous oil resources in the Bakken and Three Forks Formations and incorporates uncertainties that would not normally be included in a basic decline-curve calculation. These uncertainties relate to (1) the mean time before failure of the entire well-production system (excluding economics), (2) the uncertainty of when (and if) a stable hyperbolic-decline profile is revealed in the production data, (3) the particular formation involved, (4) relations between initial production rates and a stable hyperbolic-decline profile, and (5) the final behavior of the decline extrapolation as production becomes more dependent on matrix storage.
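
    The basic decline-curve calculation that the USGS procedure extends is the Arps hyperbolic decline, q(t) = qi / (1 + b*Di*t)**(1/b). A minimal sketch with assumed parameters, cross-checking the closed-form cumulative production against numerical integration:

```python
# Basic Arps hyperbolic decline-curve sketch; the USGS procedure layers
# additional uncertainties on top of a calculation like this one.
qi = 500.0      # initial rate, bbl/day (assumed)
b = 0.8         # hyperbolic exponent (assumed)
di = 0.004      # initial decline rate, 1/day (assumed)

def rate(t_days):
    """Arps hyperbolic rate q(t) = qi / (1 + b*Di*t)**(1/b)."""
    return qi / (1.0 + b * di * t_days) ** (1.0 / b)

def cumulative(t_days):
    """Closed-form cumulative production obtained by integrating q(t)."""
    return (qi * (1.0 - (1.0 + b * di * t_days) ** ((b - 1.0) / b))
            / (di * (1.0 - b)))

# Cross-check the closed form against trapezoidal integration over 10 yr.
t_end, n = 3650.0, 36500
h = t_end / n
numeric = h * (rate(0.0) / 2 + sum(rate(k * h) for k in range(1, n))
               + rate(t_end) / 2)
print(cumulative(t_end), numeric)
```

    Extrapolating such a curve to an economic or physical limit gives the EUR; the assessment's extra uncertainty terms (well failure, when the stable hyperbolic profile emerges, late-time behavior) widen the range around this deterministic value.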

  19. Computational Science: Ensuring America’s Competitiveness

    DTIC Science & Technology

    2005-06-01

    Supercharging U. S. Innovation & Competitiveness, Washington, D.C., July 2004. Davies, C. T. H., et al., "High-Precision Lattice QCD Confronts Experiment"...together to form a class of particles called hadrons (that include protons and neutrons). For 30 years, researchers in lattice QCD have been trying to use...the basic QCD equations to calculate the properties of hadrons, especially their masses, using numerical lattice gauge theory calculations in order to

  20. Concerning the sound insulation of building elements made up of light concretes. [acoustic absorption efficiency calculations

    NASA Technical Reports Server (NTRS)

    Giurgiu, I. I.

    1974-01-01

    The sound insulating capacity of building elements made up of light concretes is considered. By differentially analyzing the behavior of light concrete building elements under incident acoustic energy, and on the basis of experimental measurements, correction coefficients are introduced into the basic formulas for calculating the sound insulating capacity for the 100-3,200 Hz frequency band.

  1. Techniques for the computation in demographic projections of health manpower.

    PubMed

    Horbach, L

    1979-01-01

    Some basic principles and algorithms are presented which can be used for projective calculations of medical staff on the basis of demographic data. The effects of modifications of the input data, such as health policy measures affecting training capacity, can be demonstrated by repeated calculations under varied assumptions. Such models give a variety of results and may highlight the probable future balance between health manpower supply and requirements.
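
    A projective calculation of this kind reduces to a simple cohort recurrence: next year's stock equals this year's stock times a retention rate, plus new graduates. A sketch with assumed values, comparing two training-capacity policies:

```python
# Toy cohort projection of physician numbers: each year a fraction
# remains active and a training cohort enters. All values are
# illustrative assumptions, not the paper's data.
def project(stock, retention, graduates, years):
    series = [stock]
    for _ in range(years):
        stock = stock * retention + graduates   # annual recurrence
        series.append(stock)
    return series

base = project(10000, 0.97, 400, 20)
more_training = project(10000, 0.97, 500, 20)   # policy: +100 places/yr
print(base[-1], more_training[-1])
```

    The long-run limit of the recurrence is graduates / (1 - retention), which is how such models expose the eventual balance between supply and requirements.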

  2. Activation Product Inverse Calculations with NDI

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gray, Mark Girard

    NDI-based forward calculations of activation product concentrations can be used systematically to infer structural element concentrations from measured activation product concentrations with an iterative algorithm. The algorithm converges exactly for the basic production-depletion chain with explicit activation product production, and approximately, in the least-squares sense, for the full production-depletion chain with explicit activation product production and no sub production-depletion chain. The algorithm is suitable for automation.
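
    The iterative idea can be illustrated with a one-variable fixed-point sketch: update the guessed element concentration by the ratio of measured to computed activation-product concentration. The forward model below is a hypothetical stand-in for an NDI calculation; because it is linear in the concentration, the iteration converges in one step, mirroring the exact convergence noted above:

```python
import math

# Fixed-point sketch of the inverse calculation: adjust the guessed
# structural-element concentration until the forward-computed activation
# product matches the measurement. The forward model is a hypothetical
# stand-in (production proportional to concentration), not an NDI run.
def forward(conc):
    return 2.5 * conc * (1.0 - math.exp(-0.3))   # assumed production law

measured = 1.8    # measured activation product concentration (assumed)
guess = 1.0       # initial structural-element concentration
for _ in range(50):
    guess *= measured / forward(guess)           # ratio update

print(guess, forward(guess))
```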

  3. The Euler’s Graphical User Interface Spreadsheet Calculator for Solving Ordinary Differential Equations by Visual Basic for Application Programming

    NASA Astrophysics Data System (ADS)

    Gaik Tay, Kim; Cheong, Tau Han; Foong Lee, Ming; Kek, Sie Long; Abdul-Kahar, Rosmila

    2017-08-01

    In previous work on the Euler's spreadsheet calculator for solving an ordinary differential equation, Visual Basic for Applications (VBA) programming was used; however, no graphical user interface was developed to capture user input. This weakness may confuse users, since input and output are displayed in the same worksheet. Besides, the existing Euler's spreadsheet calculator is not interactive, as there is no prompt message if a parameter is entered incorrectly. On top of that, there are no instructions to guide users in entering the derivative function. Hence, in this paper we address these limitations by developing a user-friendly, interactive graphical user interface. This improvement aims to capture user input with instructions and interactive error prompts implemented in VBA. The Euler's graphical user interface spreadsheet calculator does not act as a black box: users can click on any cell in the worksheet to see the formula used to implement the numerical scheme. In this way, it can enhance self-learning and life-long learning in implementing the numerical scheme in a spreadsheet and, later, in any programming language.
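
    The numerical scheme the calculator implements is the classical Euler method, y_{n+1} = y_n + h*f(x_n, y_n). A minimal Python sketch of the same scheme (the calculator itself is written in VBA):

```python
import math

# Plain Euler scheme y_{n+1} = y_n + h*f(x_n, y_n), the same numerical
# method the spreadsheet calculator implements cell by cell.
def euler(f, x0, y0, h, steps):
    xs, ys = [x0], [y0]
    for _ in range(steps):
        y0 = y0 + h * f(x0, y0)
        x0 = x0 + h
        xs.append(x0)
        ys.append(y0)
    return xs, ys

# Example: y' = y with y(0) = 1, whose exact solution is e**x.
xs, ys = euler(lambda x, y: y, 0.0, 1.0, 0.001, 1000)
print(ys[-1], math.e)   # Euler estimate vs exact value at x = 1
```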

  4. Tool for Sizing Analysis of the Advanced Life Support System

    NASA Technical Reports Server (NTRS)

    Yeh, Hue-Hsie Jannivine; Brown, Cheryl B.; Jeng, Frank J.

    2005-01-01

    Advanced Life Support Sizing Analysis Tool (ALSSAT) is a computer model for sizing and analyzing designs of environmental-control and life support systems (ECLSS) for spacecraft and surface habitats involved in the exploration of Mars and the Moon. It performs conceptual designs of advanced life support (ALS) subsystems that utilize physicochemical and biological processes to recycle air and water, and process wastes, in order to reduce the need for resource resupply. By assuming steady-state operations, ALSSAT is a means of investigating combinations of such subsystem technologies and thereby assisting in determining the most cost-effective technology combination available. In fact, ALSSAT can perform sizing analysis of ALS subsystems that are dynamic or steady-state in nature. Using the Microsoft Excel spreadsheet software with the Visual Basic programming language, ALSSAT has been developed to perform multiple-case trade studies based on the calculated ECLSS mass, volume, power, and Equivalent System Mass, as well as parametric studies by varying the input parameters. ALSSAT's modular format is specifically designed for ease of future maintenance and upgrades.
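
    The Equivalent System Mass metric mentioned above converts a subsystem's volume, power, cooling, and crew-time demands into mass equivalents so that candidate technologies can be compared on a single scale; the equivalency factors in this sketch are assumed placeholders, not ALSSAT's values:

```python
# Equivalent System Mass (ESM) sketch: non-mass resource demands are
# converted to mass equivalents via equivalency factors. The factor
# values below are assumed placeholders, not ALSSAT's internal values.
def esm_kg(mass_kg, volume_m3, power_kw, cooling_kw, crew_time_hr,
           veq=66.7, peq=237.0, ceq=60.0, cteq=1.0):
    return (mass_kg + volume_m3 * veq + power_kw * peq
            + cooling_kw * ceq + crew_time_hr * cteq)

# Hypothetical trade between two candidate technologies.
option_a = esm_kg(500.0, 2.0, 1.5, 1.5, 10.0)   # lighter but power-hungry
option_b = esm_kg(650.0, 1.0, 1.0, 1.0, 5.0)    # heavier but frugal
print(option_a, option_b, "A" if option_a < option_b else "B")
```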

  5. A study of performance parameters on drag and heat flux reduction efficiency of combinational novel cavity and opposing jet concept in hypersonic flows

    NASA Astrophysics Data System (ADS)

    Sun, Xi-wan; Guo, Zhen-yun; Huang, Wei; Li, Shi-bin; Yan, Li

    2017-02-01

    The drag reduction and thermal protection systems applied to hypersonic re-entry vehicles have attracted increasing attention, and several novel concepts have been proposed by researchers. In the current study, the influences of performance parameters on the drag and heat reduction efficiency of a combinational novel cavity and opposing jet concept have been investigated numerically. The Reynolds-averaged Navier-Stokes (RANS) equations coupled with the SST k-ω turbulence model have been employed to calculate the surrounding flowfields, and after a grid independence analysis the first-order spatially accurate upwind scheme appears to be more suitable for three-dimensional flowfields. Different cases of performance parameters, namely jet operating conditions, freestream angle of attack and physical dimensions, are simulated after verification of the numerical method, and the effects on shock stand-off distance, drag force coefficient, and surface pressure and heat flux distributions have been analyzed. This is a basic study for future multi-objective optimization of the combinational novel cavity and opposing jet concept in hypersonic flows.

  6. Advanced electric propulsion system concept for electric vehicles

    NASA Technical Reports Server (NTRS)

    Raynard, A. E.; Forbes, F. E.

    1979-01-01

    Seventeen propulsion system concepts for electric vehicles were compared to determine the differences in components and battery pack to achieve the basic performance level. Design tradeoffs were made for selected configurations to find the optimum component characteristics required to meet all performance goals. The anticipated performance when using nickel-zinc batteries rather than the standard lead-acid batteries was also evaluated. The two systems selected for the final conceptual design studies included a system with a flywheel energy storage unit and a basic system that did not have a flywheel. The flywheel system meets the range requirement with either lead-acid or nickel-zinc batteries and also the acceleration of zero to 89 km/hr in 15 s. The basic system can also meet the required performance with a fully charged battery, but, when the battery approaches 20 to 30 percent depth of discharge, maximum acceleration capability gradually degrades. The flywheel system has an estimated life-cycle cost of $0.041/km using lead-acid batteries. The basic system has a life-cycle cost of $0.06/km. The basic system, using batteries meeting ISOA goals, would have a life-cycle cost of $0.043/km.

  7. [Investigation of the accurate measurement of the basic imaging properties for the digital radiographic system based on flat panel detector].

    PubMed

    Katayama, R; Sakai, S; Sakaguchi, T; Maeda, T; Takada, K; Hayabuchi, N; Morishita, J

    2008-07-20

    PURPOSE/AIM OF THE EXHIBIT: The purpose of this exhibit is: 1. To explain "resampling", an image data processing, performed by the digital radiographic system based on flat panel detector (FPD). 2. To show the influence of "resampling" on the basic imaging properties. 3. To present accurate measurement methods of the basic imaging properties of the FPD system. 1. The relationship between the matrix sizes of the output image and the image data acquired on FPD that automatically changes depending on a selected image size (FOV). 2. The explanation of the image data processing of "resampling". 3. The evaluation results of the basic imaging properties of the FPD system using two types of DICOM image to which "resampling" was performed: characteristic curves, presampled MTFs, noise power spectra, detective quantum efficiencies. CONCLUSION/SUMMARY: The major points of the exhibit are as follows: 1. The influence of "resampling" should not be disregarded in the evaluation of the basic imaging properties of the flat panel detector system. 2. It is necessary for the basic imaging properties to be measured by using DICOM image to which no "resampling" is performed.

  8. Projection of Teachers' Salaries for Contract Negotiations.

    ERIC Educational Resources Information Center

    Ott, Jack P.

    1982-01-01

    Lists and explains a computer program written in BASIC which calculates teacher salaries using a salary index. Modification of this payroll program is suggested as a student project in schools which teach computer programming. (JJD)
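
    The core of such a payroll program is a salary-index lookup: each cell of the schedule is a multiplier applied to a base salary. A minimal sketch with hypothetical index values (the original program is in BASIC):

```python
# Salary-index calculation of the kind the BASIC program performs.
# The base salary and index values are hypothetical illustrations.
BASE = 30000.0
INDEX = {
    ("BA", 1): 1.00, ("BA", 2): 1.05,   # (degree lane, step): multiplier
    ("MA", 1): 1.10, ("MA", 2): 1.16,
}

def salary(lane, step):
    return BASE * INDEX[(lane, step)]

# Total payroll for a hypothetical three-teacher roster.
payroll = sum(salary(lane, step) for lane, step in
              [("BA", 1), ("BA", 2), ("MA", 2)])
print(salary("MA", 1), payroll)
```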

  9. Computer Series, 39.

    ERIC Educational Resources Information Center

    Moore, John W., Ed.

    1983-01-01

    Describes use of Warnier-Orr program design method for preparing general chemistry tutorial on ideal gas calculations. This program (BASIC-PLUS) is available from the author. Also describes a multipurpose computerized class record system at the University of Toledo. (JN)

  10. OpenDrift v1.0: a generic framework for trajectory modelling

    NASA Astrophysics Data System (ADS)

    Dagestad, Knut-Frode; Röhrs, Johannes; Breivik, Øyvind; Ådlandsvik, Bjørn

    2018-04-01

    OpenDrift is an open-source Python-based framework for Lagrangian particle modelling under development at the Norwegian Meteorological Institute with contributions from the wider scientific community. The framework is highly generic and modular, and is designed to be used for any type of drift calculations in the ocean or atmosphere. A specific module within the OpenDrift framework corresponds to a Lagrangian particle model in the traditional sense. A number of modules have already been developed, including an oil drift module, a stochastic search-and-rescue module, a pelagic egg module, and a basic module for atmospheric drift. The framework allows for the ingestion of an unspecified number of forcing fields (scalar and vectorial) from various sources, including Eulerian ocean, atmosphere and wave models, but also measurements or a priori values for the same variables. A basic backtracking mechanism is inherent, using sign reversal of the total displacement vector and negative time stepping. OpenDrift is fast and simple to set up and use on Linux, Mac and Windows environments, and can be used with minimal or no Python experience. It is designed for flexibility, and researchers may easily adapt or write modules for their specific purpose. OpenDrift is also designed for performance, and simulations with millions of particles may be performed on a laptop. Further, OpenDrift is designed for robustness and is in daily operational use for emergency preparedness modelling (oil drift, search and rescue, and drifting ships) at the Norwegian Meteorological Institute.
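
    The backtracking mechanism described above, sign reversal of the displacement with negative time stepping, can be shown with a toy advection loop in a steady, uniform current (all values arbitrary); OpenDrift itself handles far more general forcing fields:

```python
# Sketch of backtracking by negative time stepping: advect a particle
# forward in a steady, uniform current, then step backward with dt < 0
# to recover the release point. Values are arbitrary illustrations.
def advect(lon, lat, u, v, dt, steps):
    for _ in range(steps):
        lon += u * dt
        lat += v * dt
    return lon, lat

u, v = 0.3, -0.1              # current components (arbitrary units)
start = (5.0, 60.0)
end = advect(*start, u, v, dt=1.0, steps=100)    # forward run
back = advect(*end, u, v, dt=-1.0, steps=100)    # sign-reversed stepping
print(start, end, back)
```

    For a time-constant field the round trip is exact; with time-varying forcing, a backtracking run must also read the forcing fields in reverse.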

  11. Interactions of the "piano-stool" [ruthenium(II)(η(6) -arene)(quinolone)Cl](+) complexes with water; DFT computational study.

    PubMed

    Zábojníková, Tereza; Cajzl, Radim; Kljun, Jakob; Chval, Zdeněk; Turel, Iztok; Burda, Jaroslav V

    2016-07-15

    Full optimizations of stationary points along the reaction coordinate for the hydration of several quinolone Ru(II) half-sandwich complexes were performed in a water environment using the B3PW91/6-31+G(d)/PCM/UAKS method. The role of diffuse functions (especially on oxygen) was found to be crucial for correct geometries along the reaction coordinate. Single-point (SP) calculations were performed at the B3LYP/6-311++G(2df,2pd)/DPCM/scaled-UAKS level. In the first part, two possible reaction mechanisms, associative and dissociative, were compared. It was found that the dissociative mechanism of the hydration process is kinetically slightly preferred. Another important conclusion concerns the reaction channels: substitution of the chloride ligand (abbreviated in the text as the dechlorination reaction) represents energetically and kinetically the most feasible pathway. In the second part, the same hydration reaction was explored to compare the reactivity of the Ru(II) complexes with several derivatives of nalidixic acid: cinoxacin, ofloxacin, and (thio)nalidixic acid. The hydration process is about four orders of magnitude faster in basic solution than in a neutral/acidic environment, with cinoxacin and nalidixic acid as the most reactive complexes in the former and latter environments, respectively. The explored hydration reaction is in all cases endergonic; nevertheless, the endergonicity is substantially lower (by ∼6 kcal/mol) in a basic environment. © 2016 Wiley Periodicals, Inc.
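
    The reported four-orders-of-magnitude rate difference can be related to a barrier difference through the usual transition-state expression k proportional to exp(-ΔG‡/RT); at room temperature this corresponds to roughly 5.5 kcal/mol, consistent with the ~6 kcal/mol difference in endergonicity quoted above:

```python
import math

# Relating a rate ratio to a free-energy barrier difference via
# k proportional to exp(-dG/(R*T)) at room temperature.
R = 1.987e-3          # gas constant, kcal/(mol*K)
T = 298.15            # temperature, K

def rate_ratio(delta_barrier_kcal):
    """Speed-up from lowering the barrier by delta_barrier_kcal."""
    return math.exp(delta_barrier_kcal / (R * T))

# Barrier change implied by a 10**4 speed-up: ddg = R*T*ln(10**4).
ddg = R * T * math.log(1.0e4)
print(round(ddg, 2), rate_ratio(5.5))
```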

  12. Monitoring binding affinity between drug and α1-acid glycoprotein in real time by Venturi easy ambient sonic-spray ionization mass spectrometry.

    PubMed

    Liu, Ning; Lu, Xin; Yang, YuHan; Yao, Chen Xi; Ning, BaoMing; He, Dacheng; He, Lan; Ouyang, Jin

    2015-10-01

    A new approach for monitoring the binding affinity between drugs and α1-acid glycoprotein (AGP) in real time was developed, based on a drug-protein reaction followed by Venturi easy ambient sonic-spray ionization mass spectrometry determination of the free drug concentrations. A known basic drug, propranolol, was used to validate the newly built method. Binding constant values calculated by Venturi easy ambient sonic-spray ionization mass spectrometry were in good accordance with those from a traditional ultrafiltration method combined with high-performance liquid chromatography. Six types of basic drugs were then used as samples for real-time analysis. Upon injection of α1-acid glycoprotein into the drug mixture, the ion chromatograms were extracted to show the changes in the free drug concentrations in real time. By observing the drop-out of the six drugs during the whole binding reaction, the binding affinities of the different drugs were distinguished. A volume-shift validation experiment and an injection-delay correction experiment were also performed to eliminate extraneous factors and verify the reliability of the experiment. The features of Venturi easy ambient sonic-spray ionization mass spectrometry (V-EASI-MS) and the experimental results therefore indicate that this technique is likely to become a powerful tool for monitoring drug-AGP binding affinity in real time. Copyright © 2015 Elsevier B.V. All rights reserved.

  13. Productive and Participatory: Basic Education for High-Performing and Actively Engaged Workers

    ERIC Educational Resources Information Center

    Jurmo, Paul

    2010-01-01

    The adult basic education field in the United States has experienced an ebb and flow of interest and investment in "worker education" over the past three decades. Although the rhetoric around workplace basic skills tends to focus on such outcomes as productivity and competitiveness, some proponents of worker basic education see it as a…

  14. Can households earning minimum wage in Nova Scotia afford a nutritious diet?

    PubMed

    Williams, Patricia L; Johnson, Christine P; Kratzmann, Meredith L V; Johnson, C Shanthi Jacob; Anderson, Barbara J; Chenhall, Cathy

    2006-01-01

    To assess the affordability of a nutritious diet for households earning minimum wage in Nova Scotia, food costing data were collected in 43 randomly selected grocery stores throughout NS in 2002 using the National Nutritious Food Basket (NNFB). To estimate affordability, average monthly costs for essential expenses were subtracted from overall income to see if enough money remained for the cost of the NNFB. This was calculated for three types of household: 1) two parents and two children; 2) a lone parent and two children; and 3) a single male. Calculations were also made for the proposed 2006 minimum wage increase, with expenses adjusted using the Consumer Price Index (CPI). The monthly cost of the NNFB priced in 2002 for the three types of household was $572.90, $351.68, and $198.73, respectively. Put into the context of basic living, these data showed that Nova Scotians relying on minimum wage could not afford to purchase a nutritious diet and meet their basic needs, placing their health at risk. These basic expenses do not include other routine costs, such as personal hygiene products, household and laundry cleaners, prescriptions, and costs associated with physical activity, education or savings for unexpected expenses. People working at minimum wage in Nova Scotia have not had adequate income to meet basic needs, including a nutritious diet. The 2006 increase in minimum wage to $7.15/hr is inadequate to ensure that Nova Scotians working at minimum wage are able to meet these basic needs. Wage increases and supplements, along with supports for expenses such as childcare and transportation, are indicated to address this public health problem.
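
    The affordability calculation itself is simple arithmetic: income minus essential non-food expenses must leave at least the basket cost. The basket costs below are the 2002 figures quoted in the abstract; the income and expense numbers are assumed placeholders for illustration:

```python
# Affordability test: remaining income after essential non-food expenses
# must cover the food basket. Basket costs are the 2002 NNFB figures
# from the abstract; income/expense inputs are assumed placeholders.
NNFB_COST = {
    "two_parents_two_children": 572.90,
    "lone_parent_two_children": 351.68,
    "single_male": 198.73,
}

def can_afford(monthly_income, essential_expenses, household):
    remaining = monthly_income - essential_expenses
    return remaining >= NNFB_COST[household]

# Hypothetical minimum-wage household budget:
print(can_afford(1600.0, 1300.0, "two_parents_two_children"))
```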

  15. Performance evaluation of Al-Zahra academic medical center based on Iran balanced scorecard model.

    PubMed

    Raeisi, Ahmad Reza; Yarmohammadian, Mohammad Hossein; Bakhsh, Roghayeh Mohammadi; Gangi, Hamid

    2012-01-01

    Growth and development in any country's national health system, without an efficient evaluation system, lacks the basic concepts and tools necessary for fulfilling the system's goals. The balanced scorecard (BSC) is a technique widely used to measure the performance of an organization. The basic core of the BSC is guided by the organization's vision and strategies, which are the bases for the formation of four perspectives of BSC. The goal of this research is the performance evaluation of Al-Zahra Academic Medical Center in Isfahan University of Medical Sciences, based on Iran BSC model. This is a combination (quantitative-qualitative) research which was done at Al-Zahra Academic Medical Center in Isfahan University of Medical Sciences in 2011. The research populations were hospital managers at different levels. Sampling method was purposive sampling in which the key informed personnel participated in determining the performance indicators of hospital as the BSC team members in focused discussion groups. After determining the conceptual elements in focused discussion groups, the performance objectives (targets) and indicators of hospital were determined and sorted in perspectives by the group discussion participants. Following that, the performance indicators were calculated by the experts according to the predetermined objectives; then, the score of each indicator and the mean score of each perspective were calculated. Research findings included development of the organizational mission, vision, values, objectives, and strategies. The strategies agreed upon by the participants in the focus discussion group included five strategies, which were customer satisfaction, continuous quality improvement, development of human resources, supporting innovation, expansion of services and improving the productivity. Research participants also agreed upon four perspectives for the Al-Zahra hospital BSC. 
In the patients and community perspective (customer), two objectives and three indicators were agreed upon, with a mean score of 75.9%. In the internal process perspective, 4 objectives and 14 indicators were agreed upon, with a mean score of 79.37%. In the learning and growth perspective, four objectives and eight indicators were agreed upon, with a mean score of 81.11%. Finally, in the financial perspective, two objectives and five indicators were agreed upon, with a mean score of 67.15%. One way to create demand for hospital services is performance evaluation by paying close attention to all BSC perspectives, especially the non-financial perspectives such as customers and internal processes perspectives. In this study, the BSC showed the differences in performance level of the organization in different perspectives, which would assist the hospital managers improve their performance indicators. The learning and growth perspective obtained the highest score, and the financial perspective obtained the least score. Since the learning and growth perspective acts as a base for all other perspectives and they depend on it, hospitals must continuously improve the service processes and the quality of services by educating staff and updating their policies and procedures. This can increase customer satisfaction and productivity and finally improve the BSC in financial perspective.

  16. 10 CFR Appendix B to Part 73 - General Criteria for Security Personnel

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... or pass an equivalent performance examination designed to measure basic job-related mathematical... equivalent performance examination designed to measure basic mathematical, language, and reasoning skills... administered by a licensed physician. The examination shall be designed to measure the individual's physical...

  17. 40 CFR 60.141 - Definitions.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... PERFORMANCE FOR NEW STATIONARY SOURCES Standards of Performance for Primary Emissions from Basic Oxygen... A of this part. (a) Basic oxygen process furnace (BOPF) means any furnace with a refractory lining... additions into a vessel and introducing a high volume of oxygen-rich gas. Open hearth, blast, and...

  18. 40 CFR 60.141 - Definitions.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... PERFORMANCE FOR NEW STATIONARY SOURCES Standards of Performance for Primary Emissions from Basic Oxygen... A of this part. (a) Basic oxygen process furnace (BOPF) means any furnace with a refractory lining... additions into a vessel and introducing a high volume of oxygen-rich gas. Open hearth, blast, and...

  19. 40 CFR 60.141 - Definitions.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... PERFORMANCE FOR NEW STATIONARY SOURCES Standards of Performance for Primary Emissions from Basic Oxygen... A of this part. (a) Basic oxygen process furnace (BOPF) means any furnace with a refractory lining... additions into a vessel and introducing a high volume of oxygen-rich gas. Open hearth, blast, and...

  20. 40 CFR 60.141 - Definitions.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... PERFORMANCE FOR NEW STATIONARY SOURCES Standards of Performance for Primary Emissions from Basic Oxygen... A of this part. (a) Basic oxygen process furnace (BOPF) means any furnace with a refractory lining... additions into a vessel and introducing a high volume of oxygen-rich gas. Open hearth, blast, and...

  1. 40 CFR 60.141 - Definitions.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... PERFORMANCE FOR NEW STATIONARY SOURCES Standards of Performance for Primary Emissions from Basic Oxygen... A of this part. (a) Basic oxygen process furnace (BOPF) means any furnace with a refractory lining... additions into a vessel and introducing a high volume of oxygen-rich gas. Open hearth, blast, and...

  2. Sheetmetal. Performance Objectives. Basic Course.

    ERIC Educational Resources Information Center

    Murwin, Roland

    Several intermediate performance objectives and corresponding criterion measures are listed for each of six terminal objectives for a basic high school sheetmetal work course. The titles of the terminal objectives are Orientation, Shop Machinery and Material, Soldering, Measurements and Layouts, Assigned Shop Projects, and Radial and Triangulation…

  3. 47 CFR 76.611 - Cable television basic signal leakage performance criteria.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... 47 Telecommunication 4 2011-10-01 2011-10-01 false Cable television basic signal leakage performance criteria. 76.611 Section 76.611 Telecommunication FEDERAL COMMUNICATIONS COMMISSION (CONTINUED) BROADCAST RADIO SERVICES MULTICHANNEL VIDEO AND CABLE TELEVISION SERVICE Technical Standards § 76.611 Cable...

  4. A study of the academic performance of medical students in the comprehensive examination of the basic sciences according to the indices of emotional intelligence and educational status.

    PubMed

    Moslehi, Mohsen; Samouei, Rahele; Tayebani, Tayebeh; Kolahduz, Sima

    2015-01-01

    Considering the increasing importance of emotional intelligence (EI) in different aspects of life, such as academic achievement, the present survey aims to predict the academic performance of medical students in the comprehensive examination of the basic sciences according to indices of emotional intelligence and educational status. The survey is a descriptive, analytical, cross-sectional study performed on medical students of the Isfahan, Tehran, and Mashhad Universities of Medical Sciences. The universities were sampled randomly, after which the students were selected, taking into consideration the limitation in their numbers. Based on the inclusion criteria, all medical students of the 2005 entering class who had attended the comprehensive basic sciences examination in 2008 entered the study. The data collection tools included an Emotional Intelligence Questionnaire (standardized in Isfahan), the average score of each of the first to fifth semesters, the total average of the five semesters, and the grade on the comprehensive basic sciences examination. The data were analyzed through stepwise regression by SPSS software version 15. The results indicated that the independence indicator of the emotional intelligence test and the average scores of the first and third academic semesters were significant predictors of the students' academic performance in the comprehensive basic sciences examination. According to the results, the average scores of students, especially in the earlier semesters, as well as the independence and self-esteem indicators, can influence their success in the comprehensive basic sciences examination.

  5. A method for estimating fall adult sex ratios from production and survival data

    USGS Publications Warehouse

    Wight, H.M.; Heath, R.G.; Geis, A.D.

    1965-01-01

    This paper presents a method of utilizing data relating to the production and survival of a bird population to estimate a basic fall adult sex ratio. This basic adult sex ratio is an average value derived from average production and survival rates. It is an estimate of the average sex ratio about which the fall adult ratios will fluctuate according to annual variations in production and survival. The basic fall adult sex ratio has been calculated as an asymptotic value which is the limit of an infinite series wherein average population characteristics are used as constants. Graphs are provided that allow the determination of basic sex ratios from production and survival data of a population. Where the respective asymptote has been determined, it may be possible to estimate various production and survival rates by use of variations of the formula for estimating the asymptote.
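
    The asymptotic "basic" ratio can be illustrated with a toy two-sex bookkeeping model in which each fall's adults are last year's surviving adults plus the young that survived to the fall. The survival and production rates below are hypothetical, chosen only to show the convergence the paper describes; they are not taken from the data.

```python
# Toy two-sex model (hypothetical rates, for illustration only): each fall the
# adult population is last year's survivors plus the surviving young of the year.
def basic_fall_sex_ratio(s_male, s_female, young_per_female,
                         sex_ratio_at_hatch=0.5, years=200):
    """Iterate the yearly bookkeeping until the fall adult sex ratio converges."""
    males, females = 100.0, 100.0            # arbitrary starting population
    for _ in range(years):
        young = females * young_per_female   # young surviving to fall, per female
        males = males * s_male + young * sex_ratio_at_hatch
        females = females * s_female + young * (1.0 - sex_ratio_at_hatch)
    return males / females                   # fall adult males per female

print(basic_fall_sex_ratio(0.6, 0.4, 2.0))   # converges toward 1.25
print(basic_fall_sex_ratio(0.5, 0.5, 2.0))   # equal survival: stays at 1.0
```

    With male survival 0.6, female survival 0.4, and two surviving young per female, the iteration settles on 1.25 males per female regardless of the starting population, which is the asymptotic behaviour the paper exploits.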

  6. Concept and analytical basis for revistas - A fast, flexible computer/graphic system for generating periodic satellite coverage patterns

    NASA Technical Reports Server (NTRS)

    King, J. C.

    1976-01-01

    The generation of satellite coverage patterns is facilitated by three basic strategies: use of a simplified physical model, permitting rapid closed-form calculation; separation of earth rotation and nodal precession from initial geometric analyses; and use of symmetries to construct traces of indefinite length by repetitive transposition of basic one-quadrant elements. The complete coverage patterns generated consist of a basic nadir trace plus a number of associated off-nadir traces, one for each sensor swath edge to be delineated. Each trace is generated by transposing one or two of the basic quadrant elements into a circle on a nonrotating earth model sphere, after which the circle is expanded into the actual 'helical' pattern by adding rotational displacements to the longitude coordinates. The procedure adapts to the important periodic coverage cases by direct insertion of the characteristic integers N and R (days and orbital revolutions, respectively, per coverage period).

  7. Selective excitation of tropical atmospheric waves in wave-CISK: The effect of vertical wind shear

    NASA Technical Reports Server (NTRS)

    Zhang, Minghua; Geller, Marvin A.

    1994-01-01

    The growth of waves and the generation of potential energy in wave-CISK require unstable waves to tilt with height oppositely to their direction of propagation. This makes the structures and instability properties of these waves very sensitive to the presence of vertical shear in the basic flow. Equatorial Kelvin and Rossby-gravity waves have opposite phase tilt with height to what they have in the stratosphere, and their growth is selectively favored by basic flows with westward vertical shear and eastward vertical shear, respectively. Similar calculations are also made for gravity waves and Rossby waves. It is shown that eastward vertical shear of the basic flow promotes CISK for westward propagating Rossby-gravity, Rossby, and gravity waves and suppresses CISK for eastward propagating Kelvin and gravity waves, while westward shear of the basic flow has the reverse effects.

  8. Architectural Optimization of Digital Libraries

    NASA Technical Reports Server (NTRS)

    Biser, Aileen O.

    1998-01-01

    This work investigates performance and scaling issues relevant to large-scale distributed digital libraries. Presently, performance and scaling studies focus on specific implementations of production or prototype digital libraries. Although these studies yield useful information that aids designers and other researchers with insights into performance and scaling issues, the broader issues relevant to very large-scale distributed libraries are not addressed. Specifically, no current studies look at the extreme or worst-case possibilities in digital library implementations. A survey of digital library research issues is presented. Scaling and performance issues are mentioned frequently in the digital library literature but are generally not the focus of much current research. In this thesis, a model for a Generic Distributed Digital Library (GDDL) and nine cases of typical user activities are defined. This model is used to facilitate a basic analysis of scaling issues: specifically, the Internet traffic generated for different configurations of the study parameters is calculated, and the bandwidth needed for a future large-scale distributed digital library implementation is estimated. This analysis demonstrates the potential impact a future distributed digital library implementation would have on the Internet traffic load and raises questions concerning the architecture decisions being made for future distributed digital library designs.

  9. 26 CFR 1.41-5A - Basic research for taxable years beginning before January 1, 1987.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... was for basic research performed in the United States). (2) Research in the social sciences or humanities. Basic research does not include research in the social sciences or humanities, within the meaning...

  10. 26 CFR 1.41-5A - Basic research for taxable years beginning before January 1, 1987.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... was for basic research performed in the United States). (2) Research in the social sciences or humanities. Basic research does not include research in the social sciences or humanities, within the meaning...

  11. 26 CFR 1.41-5A - Basic research for taxable years beginning before January 1, 1987.

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    ... was for basic research performed in the United States). (2) Research in the social sciences or humanities. Basic research does not include research in the social sciences or humanities, within the meaning...

  12. 26 CFR 1.41-5A - Basic research for taxable years beginning before January 1, 1987.

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    ... was for basic research performed in the United States). (2) Research in the social sciences or humanities. Basic research does not include research in the social sciences or humanities, within the meaning...

  13. 26 CFR 1.41-5A - Basic research for taxable years beginning before January 1, 1987.

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ... was for basic research performed in the United States). (2) Research in the social sciences or humanities. Basic research does not include research in the social sciences or humanities, within the meaning...

  14. Spectral binning for energy production calculations and multijunction solar cell design

    DOE PAGES

    Garcia, Iván; McMahon, William E.; Habte, Aron; ...

    2017-09-14

    Currently, most solar cells are designed for and evaluated under standard spectra intended to represent typical spectral conditions. However, no single spectrum can capture the spectral variability needed for annual energy production (AEP) calculations, and this shortcoming becomes more significant for series-connected multijunction cells as the number of junctions increases. For this reason, AEP calculations are often performed on very detailed yearlong sets of data, but these pose 2 inherent challenges: (1) These data sets comprise thousands of data points, which appear as a scattered cloud of data when plotted against typical parameters and are hence cumbersome to classify and compare, and (2) large sets of spectra bring with them a corresponding increase in computation or measurement time. Here, we show how a large spectral set can be reduced to just a few 'proxy' spectra, which still retain the spectral variability information needed for AEP design and evaluation. The basic 'spectral binning' methods should be extensible to a variety of multijunction device architectures. In this study, as a demonstration, the AEP of a 4-junction device is computed for both a full set of spectra and a reduced proxy set, and the results show excellent agreement for as few as 3 proxy spectra. This enables much faster (and thereby more detailed) calculations and indoor measurements and provides a manageable way to parameterize a spectral set, essentially creating a 'spectral fingerprint,' which should facilitate the understanding and comparison of different sites.
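
    The "spectral binning" idea can be sketched as grouping a year of spectra by a scalar descriptor and replacing each group with its mean spectrum, weighted by how often that group occurs. The descriptor and the quantile bin edges below are illustrative choices, not the authors' procedure.

```python
import numpy as np

def bin_spectra(spectra, descriptor, n_bins=3):
    """Reduce a large spectral set to a few weighted 'proxy' spectra.

    spectra    : (n_samples, n_wavelengths) array of measured spectra
    descriptor : (n_samples,) scalar summary of each spectrum, e.g. its
                 mean irradiance -- an illustrative, hypothetical choice
    Returns (proxies, weights): the mean spectrum of each bin and the
    fraction of samples that bin represents.
    """
    # Quantile edges give roughly equally populated bins.
    edges = np.quantile(descriptor, np.linspace(0.0, 1.0, n_bins + 1))
    which = np.clip(np.searchsorted(edges, descriptor, side="right") - 1,
                    0, n_bins - 1)
    proxies = np.array([spectra[which == b].mean(axis=0) for b in range(n_bins)])
    weights = np.array([(which == b).mean() for b in range(n_bins)])
    return proxies, weights
```

    AEP is then approximated by a short weighted sum of the device output under each proxy, instead of thousands of per-spectrum evaluations.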

  15. Spectral binning for energy production calculations and multijunction solar cell design

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Garcia, Iván; McMahon, William E.; Habte, Aron

    Currently, most solar cells are designed for and evaluated under standard spectra intended to represent typical spectral conditions. However, no single spectrum can capture the spectral variability needed for annual energy production (AEP) calculations, and this shortcoming becomes more significant for series-connected multijunction cells as the number of junctions increases. For this reason, AEP calculations are often performed on very detailed yearlong sets of data, but these pose 2 inherent challenges: (1) These data sets comprise thousands of data points, which appear as a scattered cloud of data when plotted against typical parameters and are hence cumbersome to classify and compare, and (2) large sets of spectra bring with them a corresponding increase in computation or measurement time. Here, we show how a large spectral set can be reduced to just a few 'proxy' spectra, which still retain the spectral variability information needed for AEP design and evaluation. The basic 'spectral binning' methods should be extensible to a variety of multijunction device architectures. In this study, as a demonstration, the AEP of a 4-junction device is computed for both a full set of spectra and a reduced proxy set, and the results show excellent agreement for as few as 3 proxy spectra. This enables much faster (and thereby more detailed) calculations and indoor measurements and provides a manageable way to parameterize a spectral set, essentially creating a 'spectral fingerprint,' which should facilitate the understanding and comparison of different sites.

  16. Acquisition and retention of basic life support skills in an untrained population using a personal resuscitation manikin and video self-instruction (VSI).

    PubMed

    Nielsen, Anne Møller; Henriksen, Mikael J V; Isbye, Dan Lou; Lippert, Freddy K; Rasmussen, Lars Simon

    2010-09-01

    Video-based self-instruction (VSI) with a 24-min DVD and a personal resuscitation manikin removes some of the barriers associated with traditional basic life support (BLS) courses. No accurate assessment of the actual improvement in skills after attending a VSI course has been made, and in this study we assess the skill improvement in laypersons undergoing VSI. The BLS skills of 68 untrained laypersons (high school students, their teachers and persons excluded from mainstream society) were assessed using the Laerdal ResusciAnne and PC Skill Reporting System 2.0 in a 3-min test. A total score (12-48 points) was calculated and 12 different variables were recorded. The participants attended a 24-min VSI course (MiniAnne, Laerdal) and took home the DVD and manikin for optional subsequent self-training. We repeated the test 3 1/2-4 months later. There was a significant increase in the total score (p<0.0001) from 26.5 to 34 points. The participants performed significantly better in checking responsiveness, opening the airway, checking for respiration and using the correct compression/ventilation ratio (all p-values<0.001). The compression depth improved from 38 mm to 49.5 mm and the total number of compressions increased from 67 to 141. The ventilation volume and the total number of ventilations increased, and total "hands-off" time decreased from 120.5 s to 85 s. Untrained laypersons attending a 24-min DVD-based BLS course show significantly improved BLS performance 3 1/2-4 months later compared to pre-test skill performance. In particular, the total number of compressions improved and the hands-off time decreased. Copyright 2010 Elsevier Ireland Ltd. All rights reserved.

  17. Instability of a solidifying binary mixture

    NASA Technical Reports Server (NTRS)

    Antar, B. N.

    1982-01-01

    An analysis is performed of the stability of a solidifying binary mixture subject to surface tension variation on the free liquid surface. The basic state solution is obtained numerically as a nonstationary function of time. Because of the time dependence of the basic state, the stability analysis is of the global type, which utilizes a variational technique. Because the basic state is a complex function of both space and time, the stability analysis is performed by numerical means.

  18. The development of response surface pathway design to reduce animal numbers in toxicity studies

    PubMed Central

    2014-01-01

    Background This study describes the development of Response Surface Pathway (RSP) design, assesses its performance and effectiveness in estimating LD50, and compares RSP with Up and Down Procedures (UDPs) and Random Walk (RW) design. Methods A basic 4-level RSP design was used on 36 male ICR mice given intraperitoneal doses of Yessotoxin. Simulations were performed to optimise the design. A k-adjustment factor was introduced to ensure coverage of the dose window and calculate the dose steps. Instead of using equal numbers of mice on all levels, the number of mice was increased at each design level. Additionally, the binomial outcome variable was changed to multinomial. The performance of the RSP designs and a comparison of UDPs and RW were assessed by simulations. The optimised 4-level RSP design was used on 24 female NMRI mice given Azaspiracid-1 intraperitoneally. Results The in vivo experiment with basic 4-level RSP design estimated the LD50 of Yessotoxin to be 463 μg/kgBW (95% CI: 383–535). By inclusion of the k-adjustment factor with equal or increasing numbers of mice on increasing dose levels, the estimate changed to 481 μg/kgBW (95% CI: 362–566) and 447 μg/kgBW (95% CI: 378–504 μg/kgBW), respectively. The optimised 4-level RSP estimated the LD50 to be 473 μg/kgBW (95% CI: 442–517). A similar increase in power was demonstrated using the optimised RSP design on real Azaspiracid-1 data. The simulations showed that the inclusion of the k-adjustment factor, reduction in sample size by increasing the number of mice on higher design levels and incorporation of a multinomial outcome gave estimates of the LD50 that were as good as those with the basic RSP design. Furthermore, optimised RSP design performed on just three levels reduced the number of animals from 36 to 15 without loss of information, when compared with the 4-level designs. Simulated comparison of the RSP design with UDPs and RW design demonstrated the superiority of RSP. 
Conclusion Optimised RSP design reduces the number of animals needed. The design converges rapidly on the area of interest and is at least as efficient as both the UDPs and RW design. PMID:24661560
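
    One way to read the k-adjustment idea is as a common ratio chosen so that a geometric ladder of doses spans the whole dose window. The function below is an illustrative reconstruction of that reading, not the authors' exact definition.

```python
def dose_ladder(low, high, n_levels):
    """Geometric dose steps spanning [low, high].

    Illustrative reconstruction of a k-adjustment factor: the common
    ratio k is chosen so that n_levels doses cover the dose window exactly.
    """
    if n_levels < 2:
        raise ValueError("need at least two levels to span a window")
    k = (high / low) ** (1.0 / (n_levels - 1))    # adjustment factor
    return k, [low * k ** i for i in range(n_levels)]

# A hypothetical 4-level design over a 100-800 ug/kg BW window:
k, doses = dose_ladder(100.0, 800.0, 4)
print(round(k, 3), [round(d, 1) for d in doses])   # 2.0 [100.0, 200.0, 400.0, 800.0]
```

    Widening the window or reducing the number of levels raises k, which is how a 3-level design can still cover the same window as a 4-level one.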

  19. Note: a 4 ns hardware photon correlator based on a general-purpose field-programmable gate array development board implemented in a compact setup for fluorescence correlation spectroscopy.

    PubMed

    Kalinin, Stanislav; Kühnemuth, Ralf; Vardanyan, Hayk; Seidel, Claus A M

    2012-09-01

    We present a fast hardware photon correlator implemented in a field-programmable gate array (FPGA) combined with a compact confocal fluorescence setup. The correlator has two independent units with a time resolution of 4 ns while utilizing less than 15% of a low-end FPGA. The device directly accepts transistor-transistor logic (TTL) signals from two photon counting detectors and calculates two auto- or cross-correlation curves in real time. Test measurements demonstrate that the performance of our correlator is comparable with the current generation of commercial devices. The sensitivity of the optical setup is identical or even superior to current commercial devices. The FPGA design and the optical setup both allow for a straightforward extension to multi-color applications. This inexpensive and compact solution with a very good performance can serve as a versatile platform for uses in education, applied sciences, and basic research.

  20. Basic math in monkeys and college students.

    PubMed

    Cantlon, Jessica F; Brannon, Elizabeth M

    2007-12-01

    Adult humans possess a sophisticated repertoire of mathematical faculties. Many of these capacities are rooted in symbolic language and are therefore unlikely to be shared with nonhuman animals. However, a subset of these skills is shared with other animals, and this set is considered a cognitive vestige of our common evolutionary history. Current evidence indicates that humans and nonhuman animals share a core set of abilities for representing and comparing approximate numerosities nonverbally; however, it remains unclear whether nonhuman animals can perform approximate mental arithmetic. Here we show that monkeys can mentally add the numerical values of two sets of objects and choose a visual array that roughly corresponds to the arithmetic sum of these two sets. Furthermore, monkeys' performance during these calculations adheres to the same pattern as humans tested on the same nonverbal addition task. Our data demonstrate that nonverbal arithmetic is not unique to humans but is instead part of an evolutionarily primitive system for mathematical thinking shared by monkeys.
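
    The ratio-dependent performance pattern shared by monkeys and humans is commonly modelled with scalar variability: each numerosity is represented as a Gaussian whose spread grows in proportion to its magnitude. The simulation below is a generic model of that kind, offered as illustration, not the paper's analysis.

```python
import random

def noisy_estimate(n, weber=0.2, rng=random):
    """Approximate-number representation: a Gaussian whose spread grows
    in proportion to the magnitude (scalar variability)."""
    return rng.gauss(n, weber * n)

def choose_sum(a, b, correct, distractor, weber=0.2, rng=random):
    """Add two noisy estimates, then pick the array closer to that sum."""
    s = noisy_estimate(a, weber, rng) + noisy_estimate(b, weber, rng)
    return abs(s - correct) < abs(s - distractor)

def accuracy(a, b, distractor, trials=20000, seed=1):
    rng = random.Random(seed)
    correct = a + b
    hits = sum(choose_sum(a, b, correct, distractor, rng=rng)
               for _ in range(trials))
    return hits / trials

# Performance degrades as the distractor's ratio to the true sum nears 1,
# the signature ratio effect seen in both monkeys and humans.
print(accuracy(2, 2, 8))   # easy ratio (8 vs 4): near ceiling
print(accuracy(2, 2, 5))   # hard ratio (5 vs 4): above chance, but lower
```

    The Weber fraction of 0.2 is an arbitrary example value; the qualitative ratio effect holds for any positive fraction.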

  1. Measurements of Shear Lift Force on a Bubble in Channel Flow in Microgravity

    NASA Technical Reports Server (NTRS)

    Nahra, Henry K.; Motil, Brian J.; Skor, Mark

    2003-01-01

    Under microgravity conditions, the shear lift force acting on bubbles, droplets or solid particles in multiphase flows becomes important because under normal gravity, this hydrodynamic force is masked by buoyancy. This force plays an important role in the detachment of bubbles in settings where a bubble suspension is needed in microgravity. In this work, measurements of the shear lift force acting on a bubble in channel flow are performed. The shear lift force is deduced from the bubble kinematics using scaling and then compared with predictions from models in the literature that address different asymptotic and numerical solutions. Basic trajectory calculations are then performed, and the results are compared with experimental data on the position of the bubble in the channel. A direct comparison of the lateral velocity of the bubbles is also made with lateral velocity predictions from investigators whose work addressed the shear lift on a sphere in different two-dimensional shear flows, including Poiseuille flow.

  2. Note: A 4 ns hardware photon correlator based on a general-purpose field-programmable gate array development board implemented in a compact setup for fluorescence correlation spectroscopy

    NASA Astrophysics Data System (ADS)

    Kalinin, Stanislav; Kühnemuth, Ralf; Vardanyan, Hayk; Seidel, Claus A. M.

    2012-09-01

    We present a fast hardware photon correlator implemented in a field-programmable gate array (FPGA) combined with a compact confocal fluorescence setup. The correlator has two independent units with a time resolution of 4 ns while utilizing less than 15% of a low-end FPGA. The device directly accepts transistor-transistor logic (TTL) signals from two photon counting detectors and calculates two auto- or cross-correlation curves in real time. Test measurements demonstrate that the performance of our correlator is comparable with the current generation of commercial devices. The sensitivity of the optical setup is identical or even superior to current commercial devices. The FPGA design and the optical setup both allow for a straightforward extension to multi-color applications. This inexpensive and compact solution with a very good performance can serve as a versatile platform for uses in education, applied sciences, and basic research.

  3. Digital 3D Microstructure Analysis of Concrete using X-Ray Micro Computed Tomography SkyScan 1173: A Preliminary Study

    NASA Astrophysics Data System (ADS)

    Latief, F. D. E.; Mohammad, I. H.; Rarasati, A. D.

    2017-11-01

    Digital imaging of a concrete sample using high-resolution tomographic imaging by means of X-Ray Micro Computed Tomography (μ-CT) has been conducted to assess the characteristics of the sample's structure. The standard procedure of image acquisition, reconstruction, and image processing for a particular scanning device, i.e., the Bruker SkyScan 1173 High Energy Micro-CT, is elaborated. Qualitative and quantitative analyses were briefly performed on the sample to illustrate the basic capabilities of the system and the bundled software package. Total VOI volume, object volume, percent of object volume, total VOI surface, object surface, object surface/volume ratio, object surface density, structure thickness, structure separation, and total porosity were calculated and analysed. This paper serves as a brief description of how the device can produce the preferred image quality, as well as of the ability of the bundled software packages to help in performing qualitative and quantitative analysis.

  4. Computational modeling of latent-heat-storage in PCM modified interior plaster

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fořt, Jan; Maděra, Jiří; Trník, Anton

    2016-06-08

    Latent heat storage systems represent a promising way to decrease building energy consumption, in keeping with the sustainable development principles of the building industry. The presented paper focuses on evaluating the effect of PCM incorporation on the thermal performance of cement-lime plasters. For basic characterization of the developed materials, matrix density, bulk density, and total open porosity are measured. Thermal conductivity is assessed by a transient impulse method. DSC analysis is used to identify the phase change temperature during the heating and cooling process. Using DSC data, the temperature-dependent specific heat capacity is calculated. On the basis of the experiments performed, the supposed improvement of the energy efficiency of a characteristic building envelope system where the designed plasters are likely to be used is evaluated by computational analysis. The obtained experimental and computational results show the potential of PCM-modified plasters for improving the thermal stability of buildings and moderating the interior climate.

  5. Power efficient, clock gated multiplexer based full adder cell using 28 nm technology

    NASA Astrophysics Data System (ADS)

    Gupta, Ashutosh; Murgai, Shruti; Gulati, Anmol; Kumar, Pradeep

    2016-03-01

    Clock gating is a leading technique for power saving. The full adder is one of the basic circuits found in most VLSI designs. In this paper, a clock-gated multiplexer-based full adder cell is implemented in 28 nm technology. We have designed a full adder cell using a multiplexer with a gated clock, without degrading the performance of the cell. A negative latch circuit generates the gated clock, which controls the multiplexer-based full adder cell. The circuit has been synthesized on a Kintex FPGA through Xilinx ISE Design Suite 14.7 using 28 nm technology in Verilog HDL and simulated on ModelSim 10.3c. The design is verified using SystemVerilog on QuestaSim in a UVM environment. The total power of the circuit has been reduced by 7.41% without degrading the performance of the original circuit. The power has been calculated using the XPower Analyzer tool of Xilinx ISE Design Suite 14.3.

  6. Printing. Performance Objectives. Basic Course.

    ERIC Educational Resources Information Center

    Seivert, Chester

    Several intermediate performance objectives and corresponding criterion measures are listed for each of 17 terminal objectives for a secondary level basic printing course. The materials were developed for a two-semester (2 hours daily) course with specialized classroom and shop experiences designed to enable the student to develop basic…

  7. 75 FR 32943 - Food and Drug Administration Modernization Act of 1997: Modifications to the List of Recognized...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-06-10

    ... basic safety and essential performance--Collateral standard: Electromagnetic compatibility--Requirements... standard: Electromagnetic compatibility--Requirements and tests 5-34 5-53 IEC 60601-1-2 Third edition 2007... for basic safety and essential performance--Collateral standard: Electromagnetic compatibility...

  8. Coupled rotor-body vibrations with inplane degrees of freedom

    NASA Technical Reports Server (NTRS)

    Ming-Sheng, H.; Peters, D. A.

    1985-01-01

    In an effort to understand the vibration mechanisms of helicopters, the following basic studies are considered. A coupled rotor-fuselage vibration analysis including inplane degrees of freedom of both rotor and airframe is performed by matching rotor and fuselage impedances at the hub. A rigid blade model including hub motion is used to set up the rotor flap-lag equations. For the airframe, 9 degrees of freedom and hub offsets are used. The equations are solved by harmonic balance. For a 4-bladed rotor, the coupled responses and hub loads are calculated for various parameters in forward flight. The results show that the addition of inplane degrees of freedom does not significantly affect the vertical vibrations for the cases considered, and that inplane vibrations show resonance trends similar to those of flapping vibrations.

  9. New software for 3D fracture network analysis and visualization

    NASA Astrophysics Data System (ADS)

    Song, J.; Noh, Y.; Choi, Y.; Um, J.; Hwang, S.

    2013-12-01

    This study presents new software to perform analysis and visualization of fracture network systems in 3D. The software modules for analysis and visualization, such as BOUNDARY, DISK3D, FNTWK3D, CSECT and BDM, have been developed using Microsoft Visual Basic.NET and the Visualization Toolkit (VTK) open-source library. Two case studies revealed that the modules handle construction of the analysis domain, visualization of fracture geometry in 3D, calculation of equivalent pipes, production of cross-section maps, and management of borehole data, respectively. The developed software for analysis and visualization of 3D fractured rock masses can be used to tackle geomechanical problems related to the strength, deformability and hydraulic behavior of fractured rock masses.

  10. On the physics of waves in the solar atmosphere: Wave heating and wind acceleration

    NASA Technical Reports Server (NTRS)

    Musielak, Z. E.

    1994-01-01

    New calculations of the acoustic wave energy fluxes generated in the solar convective zone have been performed. The treatment of convective turbulence in the sun and solar-like stars, in particular the precise nature of the turbulent power spectrum, has been recognized as one of the most important issues in the wave generation problem. Several different functional forms for spatial and temporal spectra have been considered in the literature, and the differences between the energy fluxes obtained for different forms often exceed two orders of magnitude. The basic criterion for choosing the appropriate spectrum has been the maximal efficiency of wave generation. We have used a different approach based on physical and empirical arguments as well as on results from numerical simulations of turbulent convection.

  11. Multiscale simulations of the early stages of the growth of graphene on copper

    NASA Astrophysics Data System (ADS)

    Gaillard, P.; Chanier, T.; Henrard, L.; Moskovkin, P.; Lucas, S.

    2015-07-01

    We have performed multiscale simulations of the growth of graphene on defect-free copper (111) in order to model the nucleation and growth of graphene flakes during chemical vapour deposition and potentially guide future experimental work. Basic activation energies for atomic surface diffusion were determined by ab initio calculations. Larger-scale growth was obtained within a kinetic Monte Carlo (KMC) approach with parameters based on the ab initio results. The KMC approach counts the first and second neighbours to determine the probability of surface diffusion. We report qualitative results on the size and shape of the graphene islands as a function of deposition flux. The dominance of graphene zigzag edges at low deposition flux, also observed experimentally, is explained by their greater dynamical stability, which the present model fully reproduces.

  12. Horizontal vectorization of electron repulsion integrals.

    PubMed

    Pritchard, Benjamin P; Chow, Edmond

    2016-10-30

    We present an efficient implementation of the Obara-Saika algorithm for the computation of electron repulsion integrals that utilizes vector intrinsics to calculate several primitive integrals concurrently in a SIMD vector. Initial benchmarks display a 2-4 times speedup with AVX instructions over comparable scalar code, depending on the basis set. Speedup over scalar code is found to be sensitive to the level of contraction of the basis set, and is best for (l_A l_B | l_C l_D) quartets when l_D = 0 or l_B = l_D = 0, which makes such a vectorization scheme particularly suitable for density fitting. The basic Obara-Saika algorithm, how it is vectorized, and the performance bottlenecks are analyzed and discussed. © 2016 Wiley Periodicals, Inc.
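
    The idea of packing several primitive integrals into one SIMD vector can be mimicked in array form. The sketch below evaluates s-type Gaussian overlap integrals, a far simpler quantity than electron repulsion integrals, for whole arrays of exponents at once; the exponent values are arbitrary examples, and the numpy broadcasting stands in for the AVX lanes.

```python
import numpy as np

def s_overlap_batch(alphas, betas, AB2):
    """Overlap integrals between s-type Gaussian primitives, evaluated for
    whole arrays of exponents at once -- a numpy stand-in for packing
    several primitive integrals into one SIMD vector.

    alphas, betas : arrays of primitive exponents (the 'vector lanes')
    AB2           : squared distance |A - B|**2 between the two centres
    """
    p = alphas + betas                  # total exponent, one value per lane
    mu = alphas * betas / p             # reduced exponent, one value per lane
    return (np.pi / p) ** 1.5 * np.exp(-mu * AB2)

# Four primitive pairs evaluated together, like one AVX vector of doubles
# (the exponent values are arbitrary examples):
alphas = np.array([13.0, 1.96, 0.44, 0.12])
betas = np.array([13.0, 1.96, 0.44, 0.12])
print(s_overlap_batch(alphas, betas, 0.0))
```

    Each arithmetic operation acts on all lanes at once, which is the same data-parallel structure the vector-intrinsic code exploits for the much longer Obara-Saika recurrences.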

  13. Transport of a high brightness proton beam through the Munich tandem accelerator

    NASA Astrophysics Data System (ADS)

    Moser, M.; Greubel, C.; Carli, W.; Peeper, K.; Reichart, P.; Urban, B.; Vallentin, T.; Dollinger, G.

    2015-04-01

    A basic requirement for ion microprobes with sub-μm beam focus is a high-brightness beam, to fill the small phase space usually accepted by the ion microprobe with enough ion current for the desired application. We performed beam transport simulations to optimize the beam brightness transported through the Munich tandem accelerator. This was done under the constraint of a maximum injected ion current of 10 μA, imposed by radiation safety regulations and beam power constraints. The main influence of the stripper foil, in conjunction with intrinsic astigmatism in the beam transport, on beam brightness is discussed. The calculations show possibilities for brightness enhancement by using astigmatism corrections and asymmetric filling of the phase space volume in the x- and y-directions.

  14. Electron Optics Cannot Be Taught through Computation?

    ERIC Educational Resources Information Center

    van der Merwe, J. P.

    1980-01-01

    Describes how certain concepts basic to electron optics may be introduced to undergraduate physics students by calculating trajectories of charged particles through electrostatic fields, which can be evaluated on minicomputers with a minimum of programming effort. (Author/SA)

  15. 49 CFR Appendix A to Part 531 - Example of Calculating Compliance Under § 531.5(c)

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ...) NATIONAL HIGHWAY TRAFFIC SAFETY ADMINISTRATION, DEPARTMENT OF TRANSPORTATION PASSENGER AUTOMOBILE AVERAGE... automobiles in MY 2012 as follows: Appendix A, Table 1 Model type Group Carline name Basic engine(L...

  16. 49 CFR Appendix A to Part 531 - Example of Calculating Compliance Under § 531.5(c)

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ...) NATIONAL HIGHWAY TRAFFIC SAFETY ADMINISTRATION, DEPARTMENT OF TRANSPORTATION PASSENGER AUTOMOBILE AVERAGE... automobiles in MY 2012 as follows: Appendix A, Table 1 Model type Group Carline name Basic engine(L...

  17. Fast calculation of the sensitivity matrix in magnetic induction tomography by tetrahedral edge finite elements and the reciprocity theorem.

    PubMed

    Hollaus, K; Magele, C; Merwa, R; Scharfetter, H

    2004-02-01

    Magnetic induction tomography of biological tissue is used to reconstruct the changes in the complex conductivity distribution by measuring the perturbation of an alternating primary magnetic field. To facilitate the sensitivity analysis and the solution of the inverse problem, a fast calculation of the sensitivity matrix, i.e. the Jacobian matrix that maps changes of the conductivity distribution onto changes of the voltage induced in a receiver coil, is needed. The use of finite differences to determine the entries of the sensitivity matrix is not a feasible solution because of the high computational cost of the basic eddy current problem. Therefore, the reciprocity theorem was exploited. The basic eddy current problem was simulated by the finite element method using symmetric second-order tetrahedral edge elements. To test the method, various simulations were carried out and discussed.
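
The cost argument against finite differences can be made concrete with a toy sketch: each column of the sensitivity matrix costs one additional forward solve, which is exactly what the reciprocity-based approach avoids. The forward model and names below are hypothetical stand-ins for the eddy current solver.

```python
import numpy as np

def fd_jacobian(forward, sigma, h=1e-6):
    """Sensitivity (Jacobian) matrix by one-sided finite differences.

    forward maps a conductivity vector onto the measured coil voltages.
    Each column costs one extra forward solve, so N conductivity
    parameters require N + 1 solutions of the eddy current problem --
    the cost that the reciprocity theorem avoids."""
    v0 = forward(sigma)
    J = np.empty((v0.size, sigma.size))
    for j in range(sigma.size):
        pert = sigma.copy()
        pert[j] += h               # perturb one conductivity entry
        J[:, j] = (forward(pert) - v0) / h
    return J
```

For a linear toy forward model the finite-difference result recovers the model matrix itself, which makes the sketch easy to verify.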

  18. Molecular modelling of protein-protein/protein-solvent interactions

    NASA Astrophysics Data System (ADS)

    Luchko, Tyler

    The inner workings of individual cells are based on intricate networks of protein-protein interactions. However, each of these individual protein interactions requires a complex physical interaction between proteins and their aqueous environment at the atomic scale. In this thesis, molecular dynamics simulations are used in three theoretical studies to gain insight at the atomic scale about protein hydration, protein structure and tubulin-tubulin (protein-protein) interactions, as found in microtubules. Also presented, in a fourth project, is a molecular model of solvation coupled with the Amber molecular modelling package, to facilitate further studies without the need of explicitly modelled water. Basic properties of a minimally solvated protein were calculated through an extended study of myoglobin hydration with explicit solvent, directly investigating water and protein polarization. Results indicate a close correlation between polarization of both water and protein and the onset of protein function. The methodology of explicit solvent molecular dynamics was further used to study tubulin and microtubules. Extensive conformational sampling of the carboxy-terminal tails of β-tubulin was performed via replica exchange molecular dynamics, allowing the characterisation of the flexibility, secondary structure and binding domains of the C-terminal tails through statistical analysis methods. Mechanical properties of tubulin and microtubules were calculated with adaptive biasing force molecular dynamics. The function of the M-loop in microtubule stability was demonstrated in these simulations. The flexibility of this loop allowed constant contacts between the protofilaments to be maintained during simulations while the smooth deformation provided a spring-like restoring force. 
Additionally, calculating the free energy profile between the straight and bent tubulin configurations was used to test the proposed conformational change in tubulin, thought to cause microtubule destabilization. No conformational change was observed but a nucleotide dependent 'softening' of the interaction was found instead, suggesting that an entropic force in a microtubule configuration could be the mechanism of microtubule collapse. Finally, to overcome much of the computational costs associated with explicit solvent calculations, a new combination of molecular dynamics with the 3D-reference interaction site model (3D-RISM) of solvation was integrated into the Amber molecular dynamics package. Our implementation of 3D-RISM shows excellent agreement with explicit solvent free energy calculations. Several optimisation techniques, including a new multiple time step method, provide a nearly 100-fold performance increase, giving similar computational performance to explicit solvent.
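
The multiple time step idea mentioned in the last paragraph can be sketched generically in the r-RESPA style: evaluate the cheap, fast-varying force every inner substep and the expensive, slowly varying force (in the thesis, the 3D-RISM solvation force) only once per outer step. This is a one-particle toy sketch, not Amber's actual integrator; all names are illustrative.

```python
def respa_step(x, v, fast_f, slow_f, dt, n_inner, m=1.0):
    """One outer step of a reversible multiple-time-step integrator.

    The slow force is applied as half-kicks at the outer time step dt,
    while the fast force is integrated with velocity Verlet at the
    inner step dt / n_inner."""
    v += 0.5 * (dt / m) * slow_f(x)     # slow-force half-kick
    h = dt / n_inner
    for _ in range(n_inner):            # fast-force velocity Verlet
        v += 0.5 * (h / m) * fast_f(x)
        x += h * v
        v += 0.5 * (h / m) * fast_f(x)
    v += 0.5 * (dt / m) * slow_f(x)     # slow-force half-kick
    return x, v
```

Splitting a harmonic force into a stiff "fast" part and a soft "slow" part, the integrator keeps the total energy bounded even though the slow force is evaluated ten times less often.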

  19. Effects of tissue conductivity and electrode area on internal electric fields in a numerical human model for ELF contact current exposures

    NASA Astrophysics Data System (ADS)

    Tarao, H.; Kuisti, H.; Korpinen, L.; Hayashi, N.; Isaka, K.

    2012-05-01

    Contact currents flow through the human body when a conducting object at a different potential is touched. Compared with electromagnetic field exposures, there are few reports on numerical dosimetry for contact current exposure. In this study, using an anatomical model of an adult male, we performed a numerical calculation of the internal electric fields resulting from a 60 Hz contact current flowing from the left hand to the left foot as a base case. Next, we performed a variety of similar calculations with varying tissue conductivity and contact area, and compared the results with the base case. We found that very low skin conductivity and a small electrode enhanced the internal fields in the muscle, subcutaneous fat and skin close to the contact region. The 99th percentile value of the fields in a particular tissue type did not reliably account for these fields near the electrode. In the arm and leg, the internal fields computed with anisotropic muscle conductivity were identical to those in the isotropic case using the conductivity value longitudinal to the muscle fibre. Furthermore, the internal fields in the tissues around joints such as the wrist and the elbow, including low-conductivity tissues, as well as at the electrode contact region, exceeded the ICNIRP basic restriction for the general public with the contact current at the reference level value.

  20. Two dimensional numerical prediction of deflagration-to-detonation transition in porous energetic materials.

    PubMed

    Narin, B; Ozyörük, Y; Ulas, A

    2014-05-30

    This paper describes a two-dimensional code developed for analyzing the two-phase deflagration-to-detonation transition (DDT) phenomenon in granular, energetic, solid explosive ingredients. The two-dimensional model is fully two-phase and is based on a highly coupled system of partial differential equations involving the basic flow conservation equations and constitutive relations borrowed from one-dimensional studies in the open literature. The whole system is solved using an optimized high-order accurate, explicit, central-difference scheme with a selective-filtering/shock-capturing (SF-SC) technique to augment the central differencing and prevent excessive dispersion. The source terms describing particle-gas momentum and energy transfer make the equation system quite stiff, and hence its explicit integration difficult. To ease the difficulties, a time-split approach is used, allowing larger time steps. In the paper, the physical model for the sources of the equation system is given for a typical explosive, and several numerical calculations are carried out to assess the developed code. Microscale intergranular and/or intragranular effects, including pore collapse, sublimation and pyrolysis, are not taken into account for ignition and growth, and a basic temperature switch is applied in the calculations to control ignition in the explosive domain. Results for the one-dimensional DDT phenomenon are in good agreement with experimental and computational results available in the literature. A typical shaped-charge wave-shaper case study is also performed to test the two-dimensional features of the code, and the results are in good agreement with those of commercial software. Copyright © 2014 Elsevier B.V. All rights reserved.
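
The time-split treatment of stiff source terms can be illustrated on a scalar model equation: advance the non-stiff "transport" part explicitly at the full time step, then integrate the stiff relaxation source with stable (here implicit Euler) substeps. This is a hedged sketch of the splitting idea only; the model equation and names are not the paper's two-phase system.

```python
def split_step(u, dt, advect_rate, k, u_eq, n_sub):
    """One time-split step for du/dt = advect_rate - k * (u - u_eq).

    The non-stiff 'transport' part is advanced explicitly over the full
    dt; the stiff relaxation source is then integrated with n_sub
    implicit Euler substeps, which remain stable for any k * dt."""
    u = u + dt * advect_rate                    # explicit, non-stiff part
    h = dt / n_sub
    for _ in range(n_sub):
        u = (u + h * k * u_eq) / (1.0 + h * k)  # implicit Euler substep
    return u
```

With k * dt = 5 a fully explicit update at the same time step diverges, while the split scheme relaxes to a small bounded value, which is why splitting permits the larger time steps mentioned in the abstract.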
