Science.gov

Sample records for additional computing power

  1. Calculators and Computers: Graphical Addition.

    ERIC Educational Resources Information Center

    Spero, Samuel W.

    1978-01-01

A computer program is presented that generates problem sets involving sketching graphs of trigonometric functions using graphical addition. The students use calculators to sketch the graphs, and a computer solution is used to check them. (MP)

  2. Computational Process Modeling for Additive Manufacturing

    NASA Technical Reports Server (NTRS)

    Bagg, Stacey; Zhang, Wei

    2014-01-01

Computational process and material modeling of powder-bed additive manufacturing of IN 718. The goals are to optimize material build parameters with reduced time and cost through modeling, increase understanding of build properties, increase the reliability of builds, decrease the time to adoption of the process for critical hardware, and potentially decrease post-build heat treatments. The approach is to conduct single-track and coupon builds at various build parameters; record build parameter information and QM Meltpool data; refine the Applied Optimization powder-bed AM process model using the data; report thermal modeling results; conduct metallography of build samples; calibrate STK models using the metallography findings; run STK models using AO thermal profiles and report STK modeling results; and validate the modeling with an additional build. Photodiode intensity measurements are highly linear with power input; melt pool intensity is highly correlated with melt pool size; and melt pool size and intensity increase with power. Applied Optimization will use the data to develop a powder-bed additive manufacturing process model.

  3. computePk: Power spectrum computation

    NASA Astrophysics Data System (ADS)

    L'Huillier, Benjamin

    2014-03-01

    ComputePk computes the power spectrum in cosmological simulations. It is MPI parallel and has been tested up to a 4096^3 mesh. It uses the FFTW library. It can read Gadget-3 and GOTPM outputs, and computes the dark matter component. The user may choose between NGP, CIC, and TSC for the mass assignment scheme.
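The mass-assignment step is the core of such a code. The sketch below is a minimal single-process Python illustration of CIC assignment followed by a spherically averaged power spectrum; it is not computePk's actual MPI/FFTW implementation, and the normalization convention is one common choice among several.

```python
import numpy as np

def cic_assign(positions, n_mesh, box_size):
    """Cloud-in-Cell (CIC) assignment: each particle's mass is shared
    among the 8 nearest mesh cells with trilinear weights."""
    delta = np.zeros((n_mesh,) * 3)
    scaled = positions / box_size * n_mesh      # positions in mesh units
    i0 = np.floor(scaled).astype(int)           # lower-left cell index
    frac = scaled - i0                          # offset within the cell
    for dx in (0, 1):
        for dy in (0, 1):
            for dz in (0, 1):
                w = (np.where(dx, frac[:, 0], 1.0 - frac[:, 0])
                     * np.where(dy, frac[:, 1], 1.0 - frac[:, 1])
                     * np.where(dz, frac[:, 2], 1.0 - frac[:, 2]))
                idx = (i0 + [dx, dy, dz]) % n_mesh   # periodic box
                np.add.at(delta, (idx[:, 0], idx[:, 1], idx[:, 2]), w)
    return delta

def power_spectrum(counts, box_size):
    """Spherically averaged P(k) of the overdensity field n/<n> - 1."""
    n = counts.shape[0]
    field = counts / counts.mean() - 1.0
    fk = np.fft.rfftn(field) * (box_size / n) ** 3   # approximate the FT integral
    pk3d = np.abs(fk) ** 2 / box_size ** 3
    k = 2 * np.pi * np.fft.fftfreq(n, d=box_size / n)
    kz = 2 * np.pi * np.fft.rfftfreq(n, d=box_size / n)
    kmag = np.sqrt(k[:, None, None] ** 2 + k[None, :, None] ** 2
                   + kz[None, None, :] ** 2)
    nbins = n // 2
    edges = np.linspace(0.0, kmag.max() + 1e-12, nbins + 1)
    which = np.clip(np.digitize(kmag.ravel(), edges) - 1, 0, nbins - 1)
    pk = np.bincount(which, weights=pk3d.ravel(), minlength=nbins)
    hits = np.bincount(which, minlength=nbins)
    return 0.5 * (edges[1:] + edges[:-1]), pk / np.maximum(hits, 1)
```

Because the eight trilinear weights sum to one per particle, the mesh total equals the particle count, which is a quick sanity check on any mass-assignment implementation.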

  4. Computer Maintenance Operations Center (CMOC), additional computer support equipment ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

Computer Maintenance Operations Center (CMOC), additional computer support equipment - Beale Air Force Base, Perimeter Acquisition Vehicle Entry Phased-Array Warning System, Technical Equipment Building, End of Spencer Paul Road, north of Warren Shingle Road (14th Street), Marysville, Yuba County, CA

  5. Power consumption monitoring using additional monitoring device

    SciTech Connect

Truşcă, M. R. C.; Albert, Ş.; Tudoran, C.; Soran, M. L.; Fărcaş, F.; Abrudean, M.

    2013-11-13

Today, emphasis is placed on reducing power consumption. Computers are large consumers; therefore it is important to know the total consumption of computing systems. Since their optimal functioning requires quite strict environmental conditions, without much variation in temperature and humidity, reducing energy consumption cannot be done without monitoring environmental parameters. Thus, the present work uses a multifunctional electric meter, the UPT 210, for power consumption monitoring. Two applications were developed: software that collects the readings provided by the electronic meter and facilitates remote programming of the device, and a device for temperature monitoring and control. By following the temperature variations that occur both in the cooling system and in the ambient environment, energy consumption can be reduced. For this purpose, some air conditioning units or some computers are stopped in different time slots. These intervals were set so that the savings are high but the datacenter's operation is not disturbed.

  6. Monotonic Weighted Power Transformations to Additivity

    ERIC Educational Resources Information Center

    Ramsay, J. O.

    1977-01-01

    A class of monotonic transformations which generalize the power transformation is fit to the independent and dependent variables in multiple regression so that the resulting additive relationship is optimized. Examples of analysis of real and artificial data are presented. (Author/JKS)

  7. World's Most Powerful Computer

    NASA Technical Reports Server (NTRS)

    1986-01-01

The use of the Cray 2 supercomputer, the fastest computer in the world, at ARC is detailed. The Cray 2 can perform 250 million calculations per second and has 10 times the memory of any other computer. Ames researchers are shown creating computer simulations of aircraft airflow, waterflow around a submarine, and fuel flow inside the Space Shuttle's engines. The video also details the Cray 2's use in calculating airflow around the Shuttle and its external rockets during liftoff for the first time and in the development of the National Aero Space Plane.

  8. Computational Process Modeling for Additive Manufacturing (OSU)

    NASA Technical Reports Server (NTRS)

    Bagg, Stacey; Zhang, Wei

    2015-01-01

Powder-Bed Additive Manufacturing (AM) through Direct Metal Laser Sintering (DMLS) or Selective Laser Melting (SLM) is being used by NASA and the Aerospace industry to "print" parts that traditionally are very complex, high cost, or long schedule lead items. The process spreads a thin layer of metal powder over a build platform, then melts the powder in a series of welds in a desired shape. The next layer of powder is applied, and the process is repeated until, layer by layer, a very complex part can be built. This reduces cost and schedule by eliminating very complex tooling and processes traditionally used in aerospace component manufacturing. To use the process to print end-use items, NASA seeks to understand SLM material well enough to develop a method of qualifying parts for space flight operation. Traditionally, a new material process takes many years and high investment to generate statistical databases and experiential knowledge, but computational modeling can truncate the schedule and cost: many experiments can be run quickly in a model that would take years and high material cost to run empirically. This project seeks to optimize material build parameters with reduced time and cost through modeling.

  9. Power throttling of collections of computing elements

    DOEpatents

    Bellofatto, Ralph E.; Coteus, Paul W.; Crumley, Paul G.; Gara, Alan G.; Giampapa, Mark E.; Gooding; Thomas M.; Haring, Rudolf A.; Megerian, Mark G.; Ohmacht, Martin; Reed, Don D.; Swetz, Richard A.; Takken, Todd

    2011-08-16

    An apparatus and method for controlling power usage in a computer includes a plurality of computers communicating with a local control device, and a power source supplying power to the local control device and the computer. A plurality of sensors communicate with the computer for ascertaining power usage of the computer, and a system control device communicates with the computer for controlling power usage of the computer.

  10. Changing computing paradigms towards power efficiency.

    PubMed

    Klavík, Pavel; Malossi, A Cristiano I; Bekas, Costas; Curioni, Alessandro

    2014-06-28

Power awareness is fast becoming immensely important in computing, ranging from the traditional high-performance computing applications to the new generation of data-centric workloads. In this work, we describe our efforts towards a power-efficient computing paradigm that combines low- and high-precision arithmetic. We showcase our ideas for the widely used kernel of solving systems of linear equations, which finds numerous applications in scientific and engineering disciplines as well as in large-scale data analytics, statistics and machine learning. Towards this goal, we developed tools for the seamless power profiling of applications at a fine-grain level. In addition, we verify here previous work on post-FLOPS/W metrics and show that these can shed much more light on the power/energy profile of important applications. PMID:24842033
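The paper's exact kernel is not reproduced here, but the low/high-precision idea it builds on can be illustrated with classic mixed-precision iterative refinement: do the expensive O(n^3) solve in float32 (faster and more energy-efficient on most hardware), then recover double-precision accuracy with cheap O(n^2) residual corrections in float64. This is a sketch of the principle, not the authors' implementation.

```python
import numpy as np

def mixed_precision_solve(A, b, tol=1e-12, max_iter=20):
    """Solve Ax = b with float32 inner solves and float64 refinement."""
    A32 = A.astype(np.float32)
    # Initial solve entirely in single precision
    x = np.linalg.solve(A32, b.astype(np.float32)).astype(np.float64)
    for _ in range(max_iter):
        r = b - A @ x                        # residual in double precision
        if np.linalg.norm(r) <= tol * np.linalg.norm(b):
            break
        # Correction solve again in single precision
        dx = np.linalg.solve(A32, r.astype(np.float32)).astype(np.float64)
        x += dx
    return x
```

For a reasonably conditioned system, a few refinement steps push the residual down to double-precision levels even though every factorization-sized operation ran in float32.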

  11. Shifted power method for computing tensor eigenvalues.

    SciTech Connect

    Mayo, Jackson R.; Kolda, Tamara Gibson

    2010-07-01

Recent work on eigenvalues and eigenvectors for tensors of order m >= 3 has been motivated by applications in blind source separation, magnetic resonance imaging, molecular conformation, and more. In this paper, we consider methods for computing real symmetric-tensor eigenpairs of the form A x^{m-1} = λ x subject to ||x|| = 1, which is closely related to optimal rank-1 approximation of a symmetric tensor. Our contribution is a shifted symmetric higher-order power method (SS-HOPM), which we show is guaranteed to converge to a tensor eigenpair. SS-HOPM can be viewed as a generalization of the power iteration method for matrices or of the symmetric higher-order power method. Additionally, using fixed point analysis, we can characterize exactly which eigenpairs can and cannot be found by the method. Numerical examples are presented, including examples from an extension of the method to finding complex eigenpairs.
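For a third-order symmetric tensor the SS-HOPM iteration is compact enough to sketch directly: repeatedly normalize A x x + α x. The code below is a simplified reading of the algorithm, with the shift α supplied as a constant rather than chosen adaptively as in the paper's later variants.

```python
import numpy as np

def ss_hopm(A, alpha=1.0, tol=1e-10, max_iter=2000, seed=0):
    """Shifted symmetric higher-order power method for an order-3
    symmetric tensor A: iterate x <- normalize(A x x + alpha * x).
    For a sufficiently large positive shift alpha the iteration
    converges to an eigenpair (lambda, x) with A x x = lambda x."""
    n = A.shape[0]
    x = np.random.default_rng(seed).standard_normal(n)
    x /= np.linalg.norm(x)
    for _ in range(max_iter):
        Axx = np.einsum('ijk,j,k->i', A, x, x)   # (A x^{m-1})_i for m = 3
        x_new = Axx + alpha * x                  # shifted power step
        x_new /= np.linalg.norm(x_new)
        if np.linalg.norm(x_new - x) < tol:
            x = x_new
            break
        x = x_new
    lam = x @ np.einsum('ijk,j,k->i', A, x, x)   # generalized Rayleigh quotient
    return lam, x
```

The returned pair can be checked against the defining equation: the residual A x x - λ x should be near zero at convergence.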

  12. Shifted power method for computing tensor eigenpairs.

    SciTech Connect

    Mayo, Jackson R.; Kolda, Tamara Gibson

    2010-10-01

Recent work on eigenvalues and eigenvectors for tensors of order m >= 3 has been motivated by applications in blind source separation, magnetic resonance imaging, molecular conformation, and more. In this paper, we consider methods for computing real symmetric-tensor eigenpairs of the form A x^{m-1} = λ x subject to ||x|| = 1, which is closely related to optimal rank-1 approximation of a symmetric tensor. Our contribution is a novel shifted symmetric higher-order power method (SS-HOPM), which we show is guaranteed to converge to a tensor eigenpair. SS-HOPM can be viewed as a generalization of the power iteration method for matrices or of the symmetric higher-order power method. Additionally, using fixed point analysis, we can characterize exactly which eigenpairs can and cannot be found by the method. Numerical examples are presented, including examples from an extension of the method to finding complex eigenpairs.

  13. Control and Power in Educational Computing.

    ERIC Educational Resources Information Center

    Kahn, Peter H., Jr.; Friedman, Batya

    Educational computing based on the primacy of human agency is explored, considering ways in which power can be apportioned and exercised in order to enhance educational computing. Ideas about power and control are situated epistemologically. A first consideration is educating for human control of computer technology. Research suggests that…

  14. 50 CFR 453.06 - Additional Committee powers.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... ADMINISTRATION, DEPARTMENT OF COMMERCE); ENDANGERED SPECIES COMMITTEE REGULATIONS ENDANGERED SPECIES EXEMPTION PROCESS ENDANGERED SPECIES COMMITTEE § 453.06 Additional Committee powers. (a) Secure information....

  15. 50 CFR 453.06 - Additional Committee powers.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... ADMINISTRATION, DEPARTMENT OF COMMERCE); ENDANGERED SPECIES COMMITTEE REGULATIONS ENDANGERED SPECIES EXEMPTION PROCESS ENDANGERED SPECIES COMMITTEE § 453.06 Additional Committee powers. (a) Secure information....

  16. 50 CFR 453.06 - Additional Committee powers.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... ADMINISTRATION, DEPARTMENT OF COMMERCE); ENDANGERED SPECIES COMMITTEE REGULATIONS ENDANGERED SPECIES EXEMPTION PROCESS ENDANGERED SPECIES COMMITTEE § 453.06 Additional Committee powers. (a) Secure information....

  17. 50 CFR 453.06 - Additional Committee powers.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... ADMINISTRATION, DEPARTMENT OF COMMERCE); ENDANGERED SPECIES COMMITTEE REGULATIONS ENDANGERED SPECIES EXEMPTION PROCESS ENDANGERED SPECIES COMMITTEE § 453.06 Additional Committee powers. (a) Secure information....

  18. Desktop Computing Power--Issues and Opportunities.

    ERIC Educational Resources Information Center

    Nelson, Therese A.; Porter, James H.

    1991-01-01

    This article explores the thesis that colleges and universities can leverage desktop computing power to address campus administration needs. It describes a study of administrative computing needs at the University of Chicago; identifies roadblocks to effective use of desktop computing, such as inadequate computing knowledge; and gives…

  19. Additional development of the XTRAN3S computer program

    NASA Technical Reports Server (NTRS)

    Borland, C. J.

    1989-01-01

    Additional developments and enhancements to the XTRAN3S computer program, a code for calculation of steady and unsteady aerodynamics, and associated aeroelastic solutions, for 3-D wings in the transonic flow regime are described. Algorithm improvements for the XTRAN3S program were provided including an implicit finite difference scheme to enhance the allowable time step and vectorization for improved computational efficiency. The code was modified to treat configurations with a fuselage, multiple stores/nacelles/pylons, and winglets. Computer program changes (updates) for error corrections and updates for version control are provided.

  20. Computer Power: Part 1: Distribution of Power (and Communications).

    ERIC Educational Resources Information Center

    Price, Bennett J.

    1988-01-01

    Discussion of the distribution of power to personal computers and computer terminals addresses options such as extension cords, perimeter raceways, and interior raceways. Sidebars explain: (1) the National Electrical Code; (2) volts, amps, and watts; (3) transformers, circuit breakers, and circuits; and (4) power vs. data wiring. (MES)

  1. Framework Resources Multiply Computing Power

    NASA Technical Reports Server (NTRS)

    2010-01-01

    As an early proponent of grid computing, Ames Research Center awarded Small Business Innovation Research (SBIR) funding to 3DGeo Development Inc., of Santa Clara, California, (now FusionGeo Inc., of The Woodlands, Texas) to demonstrate a virtual computer environment that linked geographically dispersed computer systems over the Internet to help solve large computational problems. By adding to an existing product, FusionGeo enabled access to resources for calculation- or data-intensive applications whenever and wherever they were needed. Commercially available as Accelerated Imaging and Modeling, the product is used by oil companies and seismic service companies, which require large processing and data storage capacities.

  2. Children, Computers, and Powerful Ideas

    ERIC Educational Resources Information Center

    Bull, Glen

    2005-01-01

    Today it is commonplace that computers and technology permeate almost every aspect of education. In the late 1960s, though, the idea that computers could serve as a catalyst for thinking about the way children learn was a radical concept. In the early 1960s, Seymour Papert joined the faculty of MIT and founded the Artificial Intelligence Lab with…

  3. Computed Tomography Inspection and Analysis for Additive Manufacturing Components

    NASA Technical Reports Server (NTRS)

    Beshears, Ronald D.

    2016-01-01

    Computed tomography (CT) inspection was performed on test articles additively manufactured from metallic materials. Metallic AM and machined wrought alloy test articles with programmed flaws were inspected using a 2MeV linear accelerator based CT system. Performance of CT inspection on identically configured wrought and AM components and programmed flaws was assessed using standard image analysis techniques to determine the impact of additive manufacturing on inspectability of objects with complex geometries.

  4. PERSPECTIVE VIEW OF EAST ELEVATION OF POWER BUILDING WITH ADDITION. ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    PERSPECTIVE VIEW OF EAST ELEVATION OF POWER BUILDING WITH ADDITION. NOTE WINDOW OPENINGS, WHICH ARE MERELY OPENINGS IN THE BOARD AND BATTEN SIDING AND REVEAL THE CONCRETE BLOCK CONSTRUCTION OF THE BUILDING. - Radar Station B-71, Power Building, Coastal Drive, Klamath, Del Norte County, CA

  5. Exploring human inactivity in computer power consumption

    NASA Astrophysics Data System (ADS)

    Candrawati, Ria; Hashim, Nor Laily Binti

    2016-08-01

Managing computer power consumption has become an important challenge for the computing community, consistent with a trend in which computer systems are ever more important to modern life while demands for computing power and functionality grow continuously. Unfortunately, previous approaches are still inadequate for handling the power consumption problem, because a system's workload is unpredictable when it is driven by unpredictable human behavior. This stems from a lack of knowledge within the software system, and software self-adaptation is one approach to dealing with this source of uncertainty. Human inactivity is handled by adapting to the behavioral changes of the users. This paper observes human inactivity during computer usage and finds that computer power usage can be reduced if idle periods can be intelligently sensed from user activities. The study introduces the Control, Learn and Knowledge model, which adapts the Monitor, Analyze, Plan, Execute control loop and integrates the Q-learning algorithm to learn human inactivity periods and minimize computer power consumption. An experiment to evaluate the model was conducted using three case studies with the same activities. The results show that the proposed model reduced power consumption in 5 out of 12 activities compared to the alternatives.
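The learning loop in such a scheme can be caricatured in a few lines of tabular Q-learning. The states, rewards, and user-return model below are invented for illustration and are not the paper's actual formulation: an agent learns, per idle-time bucket, whether staying awake or entering a low-power mode minimizes the expected power cost.

```python
import random

# Invented toy formulation: tabular Q-learning over idle-time buckets.
# States 0..4: minutes the machine has been idle (state 5 is terminal).
# Actions: 0 = stay awake, 1 = enter low-power mode.

ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.1
N_STATES, N_ACTIONS = 6, 2
Q = [[0.0] * N_ACTIONS for _ in range(N_STATES)]

def reward(action, user_returns_next):
    if action == 0:
        return -1.0                               # staying awake burns power
    return -5.0 if user_returns_next else 2.0     # sleep: penalty if user is back

def train(episodes=2000, seed=0):
    rng = random.Random(seed)
    for _ in range(episodes):
        state = rng.randrange(N_STATES - 1)       # start at a random idle bucket
        while state < N_STATES - 1:
            if rng.random() < EPSILON:            # epsilon-greedy exploration
                action = rng.randrange(N_ACTIONS)
            else:
                action = max(range(N_ACTIONS), key=lambda a: Q[state][a])
            # Users tend to return quickly from short idles, rarely from long ones
            user_returns = rng.random() < max(0.05, 0.5 - 0.1 * state)
            r = reward(action, user_returns)
            nxt = 0 if user_returns else state + 1
            Q[state][action] += ALPHA * (r + GAMMA * max(Q[nxt]) - Q[state][action])
            if action == 1 or user_returns:       # sleeping or user activity ends episode
                break
            state = nxt

train()
# Learned policy per idle bucket: 0 = stay awake, 1 = sleep
policy = [max(range(N_ACTIONS), key=lambda a: Q[s][a]) for s in range(N_STATES)]
```

Under these invented dynamics the agent learns to keep sleeping attractive at long idle times, which is the qualitative behavior the paper exploits.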

  6. Computing Efficiency Of Transfer Of Microwave Power

    NASA Technical Reports Server (NTRS)

    Pinero, L. R.; Acosta, R.

    1995-01-01

The BEAM computer program enables the user to calculate the microwave power-transfer efficiency between two circular apertures at arbitrary range. The power-transfer efficiency is obtained numerically. The two apertures have generally different sizes and arbitrary taper illuminations. BEAM also analyzes the effect of distance and taper illumination on transmission efficiency for two apertures of equal size. Written in FORTRAN.

  7. Computer Power. Part 2: Electrical Power Problems and Their Amelioration.

    ERIC Educational Resources Information Center

    Price, Bennett J.

    1989-01-01

    Describes electrical power problems that affect computer users, including spikes, sags, outages, noise, frequency variations, and static electricity. Ways in which these problems may be diagnosed and cured are discussed. Sidebars consider transformers; power distribution units; surge currents/linear and non-linear loads; and sizing the power…

  8. 50 CFR 453.06 - Additional Committee powers.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... 50 Wildlife and Fisheries 9 2011-10-01 2011-10-01 false Additional Committee powers. 453.06 Section 453.06 Wildlife and Fisheries JOINT REGULATIONS (UNITED STATES FISH AND WILDLIFE SERVICE, DEPARTMENT OF THE INTERIOR AND NATIONAL MARINE FISHERIES SERVICE, NATIONAL OCEANIC AND ATMOSPHERIC ADMINISTRATION, DEPARTMENT OF COMMERCE); ENDANGERED...

  9. Power of one qumode for quantum computation

    NASA Astrophysics Data System (ADS)

    Liu, Nana; Thompson, Jayne; Weedbrook, Christian; Lloyd, Seth; Vedral, Vlatko; Gu, Mile; Modi, Kavan

    2016-05-01

    Although quantum computers are capable of solving problems like factoring exponentially faster than the best-known classical algorithms, determining the resources responsible for their computational power remains unclear. An important class of problems where quantum computers possess an advantage is phase estimation, which includes applications like factoring. We introduce a computational model based on a single squeezed state resource that can perform phase estimation, which we call the power of one qumode. This model is inspired by an interesting computational model known as deterministic quantum computing with one quantum bit (DQC1). Using the power of one qumode, we identify that the amount of squeezing is sufficient to quantify the resource requirements of different computational problems based on phase estimation. In particular, we can use the amount of squeezing to quantitatively relate the resource requirements of DQC1 and factoring. Furthermore, we can connect the squeezing to other known resources like precision, energy, qudit dimensionality, and qubit number. We show the circumstances under which they can likewise be considered good resources.

  10. Additional extensions to the NASCAP computer code, volume 1

    NASA Technical Reports Server (NTRS)

    Mandell, M. J.; Katz, I.; Stannard, P. R.

    1981-01-01

    Extensions and revisions to a computer code that comprehensively analyzes problems of spacecraft charging (NASCAP) are documented. Using a fully three dimensional approach, it can accurately predict spacecraft potentials under a variety of conditions. Among the extensions are a multiple electron/ion gun test tank capability, and the ability to model anisotropic and time dependent space environments. Also documented are a greatly extended MATCHG program and the preliminary version of NASCAP/LEO. The interactive MATCHG code was developed into an extremely powerful tool for the study of material-environment interactions. The NASCAP/LEO, a three dimensional code to study current collection under conditions of high voltages and short Debye lengths, was distributed for preliminary testing.

  11. Software Support for Transiently Powered Computers

    SciTech Connect

    Van Der Woude, Joel Matthew

    2015-06-01

With the continued reduction in size and cost of computing, power becomes an increasingly heavy burden on system designers for embedded applications. While energy harvesting techniques are an increasingly desirable solution for many deeply embedded applications where size and lifetime are a priority, previous work has shown that energy harvesting provides insufficient power for long-running computation. We present Ratchet, which to the authors' knowledge is the first automatic, software-only checkpointing system for energy harvesting platforms. We show that Ratchet provides a means to extend computation across power cycles, consistent with those experienced by energy harvesting devices. We demonstrate the correctness of our system under frequent failures and show that it has an average overhead of 58.9% across a suite of benchmarks representative of embedded applications.
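Ratchet itself works at the compiler level; purely to illustrate the idea of extending computation across power cycles, here is a toy checkpoint-and-resume loop (an invented example, not Ratchet's mechanism): progress is persisted to "non-volatile" storage, and after a simulated power failure the task resumes from the last checkpoint rather than restarting.

```python
# Toy illustration: a dict stands in for non-volatile storage.
nonvolatile = {}

def checkpoint(state):
    """Persist the task's progress so it survives a power cycle."""
    nonvolatile["ckpt"] = dict(state)

def long_running_sum(n, fail_at=None):
    """Sum 0..n-1, checkpointing each step; optionally 'lose power' mid-run."""
    state = nonvolatile.get("ckpt", {"i": 0, "acc": 0})   # resume if possible
    while state["i"] < n:
        if fail_at is not None and state["i"] == fail_at:
            raise RuntimeError("power failure")           # volatile state lost here
        state["acc"] += state["i"]
        state["i"] += 1
        checkpoint(state)                                 # persist progress
    return state["acc"]

try:
    long_running_sum(100, fail_at=60)    # first power cycle dies mid-computation
except RuntimeError:
    pass
result = long_running_sum(100)           # second cycle resumes at i == 60
```

Checkpointing every iteration is deliberately naive; the overhead-versus-progress trade-off is exactly what systems like Ratchet must balance.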

  12. Reducing power consumption during execution of an application on a plurality of compute nodes

    SciTech Connect

    Archer, Charles J.; Blocksome, Michael A.; Peters, Amanda E.; Ratterman, Joseph D.; Smith, Brian E.

    2013-09-10

    Methods, apparatus, and products are disclosed for reducing power consumption during execution of an application on a plurality of compute nodes that include: powering up, during compute node initialization, only a portion of computer memory of the compute node, including configuring an operating system for the compute node in the powered up portion of computer memory; receiving, by the operating system, an instruction to load an application for execution; allocating, by the operating system, additional portions of computer memory to the application for use during execution; powering up the additional portions of computer memory allocated for use by the application during execution; and loading, by the operating system, the application into the powered up additional portions of computer memory.
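The claimed sequence can be paraphrased in a toy model. The bank sizes, names, and allocation policy below are invented for illustration; the patent covers the general method, not this code: power only the memory holding the operating system at boot, then power up additional portions on demand as an application is loaded.

```python
class ComputeNodeMemory:
    """Toy model of demand-driven memory power-up on a compute node."""

    def __init__(self, n_banks, bank_mb):
        self.bank_mb = bank_mb
        self.powered = [False] * n_banks
        self.owner = [None] * n_banks
        # Boot: power up only the bank that will hold the operating system
        self.powered[0] = True
        self.owner[0] = "os"

    def powered_mb(self):
        return sum(self.powered) * self.bank_mb

    def load_application(self, name, needed_mb):
        """Allocate and power up just enough additional banks for the app."""
        need = -(-needed_mb // self.bank_mb)            # ceiling division
        free = [i for i, o in enumerate(self.owner) if o is None]
        if len(free) < need:
            raise MemoryError("not enough unallocated banks")
        for i in free[:need]:
            self.powered[i] = True                      # power up on demand
            self.owner[i] = name

mem = ComputeNodeMemory(n_banks=8, bank_mb=512)
mem.load_application("app", 1200)    # needs 3 banks of 512 MB each
```

After loading, only four of the eight banks draw power: the OS bank plus the three the application actually needed.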

  13. 4. FLOOR PLAN AND SECTIONS, ADDITION TO POWER HOUSE. United ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

4. FLOOR PLAN AND SECTIONS, ADDITION TO POWER HOUSE. United Engineering Company Ltd., Alameda Shipyard. Also includes plot plan at 1 inch to 100 feet. John Hudspeth, architect, foot of Main Street, Alameda, California. Sheet 3. Plan no. 10,548. Scale 1/4 inch and ½ inch to the foot. April 30, 1945, last revised 6/22/45. pencil on vellum - United Engineering Company Shipyard, Boiler House, 2900 Main Street, Alameda, Alameda County, CA

  14. 3. ELEVATIONS, ADDITION TO POWER HOUSE. United Engineering Company Ltd., ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    3. ELEVATIONS, ADDITION TO POWER HOUSE. United Engineering Company Ltd., Alameda Shipyard. John Hudspeth, architect, foot of Main Street, Alameda, California. Sheet 4. Plan no. 10,548. Scale 1/4 inch to the foot, elevations, and one inch to the foot, sections and details. April 30, 1945, last revised 6/19/45. pencil on vellum - United Engineering Company Shipyard, Boiler House, 2900 Main Street, Alameda, Alameda County, CA

  15. Cloud Computing and the Power to Choose

    ERIC Educational Resources Information Center

    Bristow, Rob; Dodds, Ted; Northam, Richard; Plugge, Leo

    2010-01-01

    Some of the most significant changes in information technology are those that have given the individual user greater power to choose. The first of these changes was the development of the personal computer. The PC liberated the individual user from the limitations of the mainframe and minicomputers and from the rules and regulations of centralized…

  16. Additional support for the TDK/MABL computer program

    NASA Technical Reports Server (NTRS)

    Nickerson, G. R.; Dunn, Stuart S.

    1993-01-01

    An advanced version of the Two-Dimensional Kinetics (TDK) computer program was developed under contract and released to the propulsion community in early 1989. Exposure of the code to this community indicated a need for improvements in certain areas. In particular, the TDK code needed to be adapted to the special requirements imposed by the Space Transportation Main Engine (STME) development program. This engine utilizes injection of the gas generator exhaust into the primary nozzle by means of a set of slots. The subsequent mixing of this secondary stream with the primary stream with finite rate chemical reaction can have a major impact on the engine performance and the thermal protection of the nozzle wall. In attempting to calculate this reacting boundary layer problem, the Mass Addition Boundary Layer (MABL) module of TDK was found to be deficient in several respects. For example, when finite rate chemistry was used to determine gas properties, (MABL-K option) the program run times became excessive because extremely small step sizes were required to maintain numerical stability. A robust solution algorithm was required so that the MABL-K option could be viable as a rocket propulsion industry design tool. Solving this problem was a primary goal of the phase 1 work effort.

  17. X-ray computed tomography for additive manufacturing: a review

    NASA Astrophysics Data System (ADS)

    Thompson, A.; Maskery, I.; Leach, R. K.

    2016-07-01

    In this review, the use of x-ray computed tomography (XCT) is examined, identifying the requirement for volumetric dimensional measurements in industrial verification of additively manufactured (AM) parts. The XCT technology and AM processes are summarised, and their historical use is documented. The use of XCT and AM as tools for medical reverse engineering is discussed, and the transition of XCT from a tool used solely for imaging to a vital metrological instrument is documented. The current states of the combined technologies are then examined in detail, separated into porosity measurements and general dimensional measurements. In the conclusions of this review, the limitation of resolution on improvement of porosity measurements and the lack of research regarding the measurement of surface texture are identified as the primary barriers to ongoing adoption of XCT in AM. The limitations of both AM and XCT regarding slow speeds and high costs, when compared to other manufacturing and measurement techniques, are also noted as general barriers to continued adoption of XCT and AM.

  18. Lithium Dinitramide as an Additive in Lithium Power Cells

    NASA Technical Reports Server (NTRS)

    Gorkovenko, Alexander A.

    2007-01-01

Lithium dinitramide, LiN(NO2)2, has shown promise as an additive to nonaqueous electrolytes in rechargeable and non-rechargeable lithium-ion-based electrochemical power cells. Such non-aqueous electrolytes consist of lithium salts dissolved in mixtures of organic ethers, esters, carbonates, or acetals. The benefits of adding lithium dinitramide (which is also a lithium salt) include lower irreversible loss of capacity on the first charge/discharge cycle, higher cycle life, lower self-discharge, greater flexibility in selection of electrolyte solvents, and greater charge capacity. The need for a suitable electrolyte additive arises as follows: The metallic lithium in the anode of a lithium-ion-based power cell is so highly reactive that, in addition to the desired main electrochemical reaction, it engages in side reactions that cause formation of resistive films and dendrites, which degrade performance as quantified in terms of charge capacity, cycle life, shelf life, first-cycle irreversible capacity loss, specific power, and specific energy. The incidence of side reactions can be reduced through the formation of a solid-electrolyte interface (SEI), a thin film that prevents direct contact between the lithium anode material and the electrolyte. Ideally, an SEI should chemically protect the anode and the electrolyte from each other while exhibiting high conductivity for lithium ions and little or no conductivity for electrons. A suitable additive can act as an SEI promoter. Heretofore, most SEI promotion was thought to derive from organic molecules in electrolyte solutions. In contrast, lithium dinitramide is inorganic. Dinitramide compounds are known as oxidizers in rocket-fuel chemistry and until now were not known as SEI promoters in battery chemistry.
Although the exact reason for the improvement afforded by the addition of lithium dinitramide is not clear, it has been hypothesized that lithium dinitramide competes with other electrolyte constituents to react with

  19. Power of surface-based DNA computation

    SciTech Connect

    Cai, Weiping; Condon, A.E.; Corn, R.M.

    1997-12-01

    A new model of DNA computation that is based on surface chemistry is studied. Such computations involve the manipulation of DNA strands that are immobilized on a surface, rather than in solution as in the work of Adleman. Surface-based chemistry has been a critical technology in many recent advances in biochemistry and offers several advantages over solution-based chemistry, including simplified handling of samples and elimination of loss of strands, which reduce error in the computation. The main contribution of this paper is in showing that in principle, surface-based DNA chemistry can efficiently support general circuit computation on many inputs in parallel. To do this, an abstract model of computation that allows parallel manipulation of binary inputs is described. It is then shown that this model can be implemented by encoding inputs as DNA strands and repeatedly modifying the strands in parallel on a surface, using the chemical processes of hybridization, exonuclease degradation, polymerase extension, and ligation. Thirdly, it is shown that the model supports efficient circuit simulation in the following sense: exactly those inputs that satisfy a circuit can be isolated and the number of parallel operations needed to do this is proportional to the size of the circuit. Finally, results are presented on the power of the model when another resource of DNA computation is limited, namely strand length. 12 refs.
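A software analogue of that abstract model is easy to state: keep every n-bit input "on the surface," and for each circuit level MARK the satisfying strands and DESTROY the rest. The sketch below simulates this filtering for a small CNF circuit; it is an in-silico analogy of the parallel strand operations, not the wet-lab chemistry of hybridization and exonuclease degradation.

```python
from itertools import product

def filter_satisfying(n_bits, clauses):
    """Keep exactly the inputs satisfying every clause of a CNF circuit.
    Each clause is a list of (bit_index, wanted_value) literals."""
    surface = set(product((0, 1), repeat=n_bits))   # all inputs in parallel
    for clause in clauses:
        # MARK: strands with at least one satisfied literal in this clause
        marked = {s for s in surface
                  if any(s[i] == b for i, b in clause)}
        surface = marked    # DESTROY unmarked strands, UNMARK the survivors
    return surface

# Example circuit: (x0 OR x1) AND (NOT x0 OR x2)
clauses = [[(0, 1), (1, 1)], [(0, 0), (2, 1)]]
survivors = filter_satisfying(3, clauses)
```

The number of MARK/DESTROY rounds equals the number of clauses, mirroring the paper's claim that the count of parallel operations is proportional to circuit size.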

  20. Additional extensions to the NASCAP computer code, volume 3

    NASA Technical Reports Server (NTRS)

    Mandell, M. J.; Cooke, D. L.

    1981-01-01

    The ION computer code is designed to calculate charge exchange ion densities, electric potentials, plasma temperatures, and current densities external to a neutralized ion engine in R-Z geometry. The present version assumes the beam ion current and density to be known and specified, and the neutralizing electrons to originate from a hot-wire ring surrounding the beam orifice. The plasma is treated as being resistive, with an electron relaxation time comparable to the plasma frequency. Together with the thermal and electrical boundary conditions described below and other straightforward engine parameters, these assumptions suffice to determine the required quantities. The ION code, written in ASCII FORTRAN for UNIVAC 1100 series computers, is designed to be run interactively, although it can also be run in batch mode. The input is free-format, and the output is mainly graphical, using the machine-independent graphics developed for the NASCAP code. The executive routine calls the code's major subroutines in user-specified order, and the code allows great latitude for restart and parameter change.

  1. Additive Manufacturing of Anatomical Models from Computed Tomography Scan Data.

    PubMed

    Gür, Y

    2014-12-01

    The purpose of the study presented here was to investigate the manufacturability of human anatomical models from Computed Tomography (CT) scan data via a 3D desktop printer which uses fused deposition modelling (FDM) technology. First, Digital Imaging and Communications in Medicine (DICOM) CT scan data were converted to 3D Standard Triangle Language (STL) format by using the InVesalius digital imaging program. Once this STL file is obtained, a 3D physical version of the anatomical model can be fabricated by a desktop 3D FDM printer. As a case study, a patient's skull CT scan data was considered, and a tangible version of the skull was manufactured by a 3D FDM desktop printer. During the 3D printing process, the skull was built using acrylonitrile-butadiene-styrene (ABS) co-polymer plastic. The printed model showed that 3D FDM printing technology is able to fabricate anatomical models with high accuracy. As a result, the skull model can be used for preoperative surgical planning, medical training activities, and implant design and simulation, showing the potential of FDM technology in the medical field. It will also improve communication between medical staff and patients. The current results indicate that a 3D desktop printer which uses FDM technology can be used to obtain accurate anatomical models. PMID:26336695

  2. Additives

    NASA Technical Reports Server (NTRS)

    Smalheer, C. V.

    1973-01-01

    The chemistry of lubricant additives is discussed to show what the additives are chemically and what functions they perform in the lubrication of various kinds of equipment. Current theories regarding the mode of action of lubricant additives are presented. The additive groups discussed include the following: (1) detergents and dispersants, (2) corrosion inhibitors, (3) antioxidants, (4) viscosity index improvers, (5) pour point depressants, and (6) antifouling agents.

  3. A COMPUTATIONAL WORKBENCH ENVIRONMENT FOR VIRTUAL POWER PLANT SIMULATION

    SciTech Connect

    Mike Bockelie; Dave Swensen; Martin Denison; Connie Senior; Zumao Chen; Temi Linjewile; Adel Sarofim; Bene Risio

    2003-01-25

    This is the eighth Quarterly Technical Report for DOE Cooperative Agreement No: DE-FC26-00NT41047. The goal of the project is to develop and demonstrate a computational workbench for simulating the performance of Vision 21 Power Plant Systems. Within the last quarter, good progress has been made on all aspects of the project. Calculations for a full Vision 21 plant configuration have been performed for two coal types and two gasifier types. Good agreement with DOE computed values has been obtained for the Vision 21 configuration under ''baseline'' conditions. Additional model verification has been performed for the flowing slag model that has been implemented into the CFD based gasifier model. Comparisons for the slag, wall and syngas conditions predicted by our model versus values from predictive models that have been published by other researchers show good agreement. The software infrastructure of the Vision 21 workbench has been modified to use a recently released, upgraded version of SCIRun.

  4. A COMPUTATIONAL WORKBENCH ENVIRONMENT FOR VIRTUAL POWER PLANT SIMULATION

    SciTech Connect

    Mike Bockelie; Dave Swensen; Martin Denison; Zumao Chen; Temi Linjewile; Mike Maguire; Adel Sarofim; Connie Senior; Changguan Yang; Hong-Shig Shim

    2004-04-28

    This is the fourteenth Quarterly Technical Report for DOE Cooperative Agreement No: DE-FC26-00NT41047. The goal of the project is to develop and demonstrate a Virtual Engineering-based framework for simulating the performance of Advanced Power Systems. Within the last quarter, good progress has been made on all aspects of the project. Software development efforts have focused primarily on completing a prototype detachable user interface for the framework and on integrating Carnegie Mellon University's IECM model core with the computational engine. In addition to this work, progress has been made on several other development and modeling tasks for the program. These include: (1) improvements to the infrastructure code of the computational engine, (2) enhancements to the model interfacing specifications, (3) additional development to increase the robustness of all framework components, (4) enhanced coupling of the computational and visualization engine components, (5) a series of detailed simulations studying the effects of gasifier inlet conditions on the heat flux to the gasifier injector, and (6) detailed plans for implementing models of mercury capture for both warm and cold gas cleanup.

  5. System and method for high power diode based additive manufacturing

    DOEpatents

    El-Dasher, Bassem S.; Bayramian, Andrew; Demuth, James A.; Farmer, Joseph C.; Torres, Sharon G.

    2016-04-12

    A system is disclosed for performing an Additive Manufacturing (AM) fabrication process on a powdered material forming a substrate. The system may make use of a diode array for generating an optical signal sufficient to melt a powdered material of the substrate. A mask may be used for preventing a first predetermined portion of the optical signal from reaching the substrate, while allowing a second predetermined portion to reach the substrate. At least one processor may be used for controlling an output of the diode array.

  6. IBM Cloud Computing Powering a Smarter Planet

    NASA Astrophysics Data System (ADS)

    Zhu, Jinzy; Fang, Xing; Guo, Zhe; Niu, Meng Hua; Cao, Fan; Yue, Shuang; Liu, Qin Yu

    With the increasing need for intelligent systems supporting the world's businesses, Cloud Computing has emerged as a dominant trend to provide a dynamic infrastructure to make such intelligence possible. This article introduces how to build a smarter planet with cloud computing technology. First, it explains why the cloud is needed and traces the evolution of cloud technology. Second, it analyzes the value of cloud computing and how to apply cloud technology. Finally, it predicts the future of the cloud in the smarter planet.

  7. Computer memory power control for the Galileo spacecraft

    NASA Technical Reports Server (NTRS)

    Detwiler, R. C.

    1983-01-01

    The developmental history, major design drivers, and final topology of the computer memory power system on the Galileo spacecraft are described. A unique method of generating memory backup power directly from the fault current drawn during a spacecraft power overload or fault condition allows this system to provide continuous memory power. This concept solves the problem of volatile memory loss without the use of a battery or other large energy storage elements usually associated with uninterrupted power supply designs.

  8. Computing and cognition in future power-plant operations

    SciTech Connect

    Kisner, R.A.; Sheridan, T.B.

    1983-01-01

    The intent of this paper is to speculate on the nature of future interactions between people and computers in the operation of power plants. In particular, the authors offer a taxonomy for examining the differing functions of operators in interacting with the plant and its computers, and the differing functions of the computers in interacting with the plant and its operators.

  9. "Old" tail lobes provide significant additional substorm power

    NASA Astrophysics Data System (ADS)

    Mishin, V.; Mishin, V. V.; Karavaev, Y.

    2012-12-01

    In each polar cap (PC) we mark out the "old PC", observed during quiet time before the event under consideration, and the "new PC" that emerges while surrounding the old one and expanding the total PC area. Old and new PCs correspond in the magnetosphere to the old and new tail lobes, respectively. The new lobe's variable magnetic flux Ψ1 is usually assumed to be active, i.e. it provides transport of the electromagnetic energy flux (Poynting flux) ɛ' from the solar wind into the magnetosphere. The old lobe's magnetic flux Ψ2 is usually supposed to be passive, i.e. it remains constant during the disturbance and does not participate in the transport process, which would mean the old PC electric field is absolutely screened from the convection electric field created by magnetopause reconnection. In fact, screening is observed, but it is far from absolute. We suggest a model of screening and determine its quantitative characteristics in the selected superstorm. The screening coefficient is β = Ψ2/Ψ02, where Ψ02 = const is the open magnetic flux through the old PC measured prior to the substorm, and Ψ2 is the variable magnetic flux during the substorm. We consider three different regimes of disturbance. In each, the coefficient β decreased during the loading phase and increased during the unloading phase, but the rates and amplitudes of the variations exhibited a strong dependence on the regime. We interpreted the decrease in β as a result of involving the old PC magnetic flux Ψ2, which was earlier considered to be constant, in the transport process of the Poynting flux from the solar wind into the magnetosphere. A weakening of the transport process at the subsequent unloading phase causes the increase in β. Estimates showed that the coefficient β during each regime and the computed Poynting flux varied many-fold. In general, unlike the existing substorm conception, the new scenario describes a previously unknown tail lobe activation process during the substorm growth phase that effectively

  10. "Using Power Tables to Compute Statistical Power in Multilevel Experimental Designs"

    ERIC Educational Resources Information Center

    Konstantopoulos, Spyros

    2009-01-01

    Power computations for one-level experimental designs that assume simple random samples are greatly facilitated by power tables such as those presented in Cohen's book about statistical power analysis. However, in education and the social sciences experimental designs have naturally nested structures and multilevel models are needed to compute the…
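    The point the abstract makes can be seen in a small sketch. The normal-approximation formula and every number below are illustrative assumptions, not from the article: for a two-arm cluster-randomized design, the standard error of the treatment effect is inflated by the design effect 1 + (n - 1)ρ, so the same total sample size yields much less power when the intraclass correlation ρ is nonzero.

```python
# Illustrative sketch (not from the article): approximate power for a
# two-arm cluster-randomized trial via the normal approximation, with the
# design effect 1 + (n - 1) * rho accounting for the nested structure.
from statistics import NormalDist

def cluster_power(delta, n_clusters, cluster_size, icc, alpha=0.05):
    z = NormalDist()
    deff = 1 + (cluster_size - 1) * icc                 # design effect
    se = (4 * deff / (n_clusters * cluster_size)) ** 0.5
    z_crit = z.inv_cdf(1 - alpha / 2)
    return z.cdf(delta / se - z_crit)                   # two-sided test

# Same effect size (0.30) and total N (40 clusters of 20) in both calls;
# only the intraclass correlation differs.
print(round(cluster_power(0.30, 40, 20, 0.00), 3))
print(round(cluster_power(0.30, 40, 20, 0.10), 3))
```

    With ρ = 0 the design behaves like a simple random sample; with ρ = 0.10 the power drops substantially, which is why multilevel-specific tables are needed.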

  11. Parallel Computing Environments and Methods for Power Distribution System Simulation

    SciTech Connect

    Lu, Ning; Taylor, Zachary T.; Chassin, David P.; Guttromson, Ross T.; Studham, Scott S.

    2005-11-10

    The development of cost-effective high-performance parallel computing on multi-processor supercomputers makes it attractive to port excessively time-consuming simulation software from personal computers (PCs) to supercomputers. The power distribution system simulator (PDSS) takes a bottom-up approach and simulates load at the appliance level, where detailed thermal models for appliances are used. This approach works well for a small power distribution system consisting of a few thousand appliances. When the number of appliances increases, the simulation uses up the PC memory and its run time increases to a point where the approach is no longer feasible for modeling a practical large power distribution system. This paper presents an effort to port a PC-based power distribution system simulator (PDSS) to a 128-processor shared-memory supercomputer. The paper offers an overview of the parallel computing environment and a description of the modifications made to the PDSS model. The performance of the PDSS running on a standalone PC and on the supercomputer is compared. Future research directions for utilizing parallel computing in power distribution system simulation are also addressed.
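    The bottom-up, partition-across-processors strategy can be sketched in miniature. All names and the toy appliance model below are hypothetical, not taken from the PDSS code: appliances are split across worker processes and their loads aggregated.

```python
# Minimal sketch (hypothetical, not the PDSS implementation): evaluate
# appliance-level load models in parallel by partitioning the appliance
# list across worker processes, then aggregate the feeder demand.
from multiprocessing import Pool

def appliance_load(params):
    """Toy model: duty cycle sets average power draw in watts."""
    rated_watts, duty_cycle = params
    return rated_watts * duty_cycle

def feeder_load(appliances, workers=4):
    """Total feeder demand, with appliance models evaluated in parallel."""
    with Pool(workers) as pool:
        return sum(pool.map(appliance_load, appliances, chunksize=256))

if __name__ == "__main__":
    # 1000 water heaters cycling at 25% plus 1000 always-on refrigerators.
    fleet = [(3500.0, 0.25)] * 1000 + [(250.0, 1.0)] * 1000
    print(feeder_load(fleet))   # aggregate demand in watts
```

    Real appliance thermal models are far more detailed, but the structure (embarrassingly parallel per-appliance evaluation followed by aggregation) is what makes the port to a shared-memory machine attractive.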

  12. 19 CFR 201.14 - Computation of time, additional hearings, postponements, continuances, and extensions of time.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... 19 Customs Duties 3 2010-04-01 2010-04-01 false Computation of time, additional hearings, postponements, continuances, and extensions of time. 201.14 Section 201.14 Customs Duties UNITED STATES INTERNATIONAL TRADE COMMISSION GENERAL RULES OF GENERAL APPLICATION Initiation and Conduct of Investigations § 201.14 Computation of time,...

  13. Computer optimization of reactor-thermoelectric space power systems

    NASA Technical Reports Server (NTRS)

    Maag, W. L.; Finnegan, P. M.; Fishbach, L. H.

    1973-01-01

    A computer simulation and optimization code that has been developed for nuclear space power systems is described. The results of using this code to analyze two reactor-thermoelectric systems are presented.

  14. DETAIL VIEW OF THE POWER CONNECTIONS (FRONT) AND COMPUTER PANELS ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    DETAIL VIEW OF THE POWER CONNECTIONS (FRONT) AND COMPUTER PANELS (REAR), ROOM 8A - Cape Canaveral Air Force Station, Launch Complex 39, Mobile Launcher Platforms, Launcher Road, East of Kennedy Parkway North, Cape Canaveral, Brevard County, FL

  15. Controlling High Power Devices with Computers or TTL Logic Circuits

    ERIC Educational Resources Information Center

    Carlton, Kevin

    2002-01-01

    Computers are routinely used to control experiments in modern science laboratories. This should be reflected in laboratories in an educational setting. There is a mismatch between the power that can be delivered by a computer interfacing card or a TTL logic circuit and that required by many practical pieces of laboratory equipment. One common way…

  16. Saving Energy and Money: A Lesson in Computer Power Management

    ERIC Educational Resources Information Center

    Lazaros, Edward J.; Hua, David

    2012-01-01

    In this activity, students will develop an understanding of the economic impact of technology by estimating the cost savings of power management strategies in the classroom. Students will learn how to adjust computer display settings to influence the impact that the computer has on the financial burden to the school. They will use mathematics to…
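    The arithmetic such a lesson involves can be sketched as follows. All figures below are illustrative assumptions, not from the article: savings scale with the number of machines, the power difference between on and sleeping displays, the idle hours, and the electricity price.

```python
# Back-of-envelope sketch (numbers are illustrative assumptions, not from
# the article): annual savings from sleeping classroom displays when idle.

def annual_savings(n_computers, display_watts, idle_hours_per_day,
                   sleep_watts=1.0, days=180, price_per_kwh=0.12):
    """Dollars saved per school year by sleeping displays when idle."""
    saved_watts = display_watts - sleep_watts
    kwh = n_computers * saved_watts * idle_hours_per_day * days / 1000.0
    return kwh * price_per_kwh

# 30 machines, 25 W displays idle 6 h/day over a 180-day school year.
print(round(annual_savings(30, 25.0, 6.0), 2))
```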

  17. The Utility of Computer-Assisted Power Analysis Lab Instruction

    ERIC Educational Resources Information Center

    Petrocelli, John V.

    2007-01-01

    Undergraduate students (N = 47), enrolled in 2 separate psychology research methods classes, evaluated a power analysis lab demonstration and homework assignment. Students attended 1 of 2 lectures that included a basic introduction to power analysis and sample size analysis. One lecture included a demonstration of how to use a computer-based power…

  18. ``Cloud computations'' for chemical departments of power stations

    NASA Astrophysics Data System (ADS)

    Ochkov, V. F.; Chudova, Yu. V.; Minaeva, E. A.

    2009-07-01

    The notion of “cloud computations” is defined, and examples of such computations carried out at the Moscow Power Engineering Institute are given. Calculations of emissions discharged into the atmosphere from steam and hot-water boilers, as well as other calculations presented on the Internet that are of interest for power stations, are shown.

  19. Power Measurement for High Performance Computing: State of the Art

    SciTech Connect

    Hsu, Chung-Hsing; Poole, Stephen W

    2011-01-01

    Power utilization is a primary concern for high performance computing (HPC). Understanding it through physical measurements provides the critical first step to developing effective control techniques, yet obtaining power measurements remains an ad hoc process. In this paper, we survey popular measurement methods for HPC in terms of their measurement domains. We point out that the measurement process is slowly being standardized, and the real challenge lies in the real-time analysis of massive power data.

  20. A Computational Workbench Environment For Virtual Power Plant Simulation

    SciTech Connect

    Bockelie, Michael J.; Swensen, David A.; Denison, Martin K.; Sarofim, Adel F.

    2001-11-06

    In this paper we describe our progress toward creating a computational workbench for performing virtual simulations of Vision 21 power plants. The workbench provides a framework for incorporating a full complement of models, ranging from simple heat/mass balance reactor models that run in minutes to detailed models that can require several hours to execute. The workbench is being developed using the SCIRun software system. To leverage a broad range of visualization tools the OpenDX visualization package has been interfaced to the workbench. In Year One our efforts have focused on developing a prototype workbench for a conventional pulverized coal fired power plant. The prototype workbench uses a CFD model for the radiant furnace box and reactor models for downstream equipment. In Year Two and Year Three, the focus of the project will be on creating models for gasifier based systems and implementing these models into an improved workbench. In this paper we describe our work effort for Year One and outline our plans for future work. We discuss the models included in the prototype workbench and the software design issues that have been addressed to incorporate such a diverse range of models into a single software environment. In addition, we highlight our plans for developing the energyplex based workbench that will be developed in Year Two and Year Three.

  1. A COMPUTATIONAL WORKBENCH ENVIRONMENT FOR VIRTUAL POWER PLANT SIMULATION

    SciTech Connect

    Mike Bockelie; Dave Swensen; Martin Denison; Zumao Chen; Mike Maguire; Adel Sarofim; Changguan Yang; Hong-Shig Shim

    2004-01-28

    This is the thirteenth Quarterly Technical Report for DOE Cooperative Agreement No: DE-FC26-00NT41047. The goal of the project is to develop and demonstrate a Virtual Engineering-based framework for simulating the performance of Advanced Power Systems. Within the last quarter, good progress has been made on all aspects of the project. Software development efforts have focused on a preliminary detailed software design for the enhanced framework. Given the complexity of the individual software tools from each team (i.e., Reaction Engineering International, Carnegie Mellon University, Iowa State University), a robust, extensible design is required for the success of the project. In addition to achieving a preliminary software design, significant progress has been made on several development tasks for the program. These include: (1) the enhancement of the controller user interface to support detachment from the Computational Engine and support for multiple computer platforms, (2) modification of the Iowa State University interface-to-kernel communication mechanisms to meet the requirements of the new software design, (3) decoupling of the Carnegie Mellon University computational models from their parent IECM (Integrated Environmental Control Model) user interface for integration with the new framework and (4) development of a new CORBA-based model interfacing specification. A benchmarking exercise to compare process and CFD based models for entrained flow gasifiers was completed. A summary of our work on intrinsic kinetics for modeling coal gasification has been completed. Plans for implementing soot and tar models into our entrained flow gasifier models are outlined. Plans for implementing a model for mercury capture based on conventional capture technology, but applied to an IGCC system, are outlined.

  2. Future Computing Platforms for Science in a Power Constrained Era

    SciTech Connect

    Abdurachmanov, David; Elmer, Peter; Eulisse, Giulio; Knight, Robert

    2015-12-23

    Power consumption will be a key constraint on the future growth of Distributed High Throughput Computing (DHTC) as used by High Energy Physics (HEP). This makes performance-per-watt a crucial metric for selecting cost-efficient computing solutions. For this paper, we have done a wide survey of current and emerging architectures becoming available on the market including x86-64 variants, ARMv7 32-bit, ARMv8 64-bit, Many-Core and GPU solutions, as well as newer System-on-Chip (SoC) solutions. We compare performance and energy efficiency using an evolving set of standardized HEP-related benchmarks and power measurement techniques we have been developing. We evaluate the potential for use of such computing solutions in the context of DHTC systems, such as the Worldwide LHC Computing Grid (WLCG).
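    The selection metric the authors argue for reduces to a simple ranking. A minimal sketch, with made-up platform names and numbers (nothing here is from the paper's survey):

```python
# Hedged sketch (platform names and figures are invented for illustration):
# rank candidate platforms by performance-per-watt given benchmark
# throughput and measured power draw.

def rank_by_perf_per_watt(results):
    """results: {platform: (events_per_second, watts)} -> sorted ranking."""
    scored = {name: tput / watts for name, (tput, watts) in results.items()}
    return sorted(scored.items(), key=lambda kv: kv[1], reverse=True)

benchmarks = {
    "x86-64 server": (5200.0, 400.0),   # high throughput, high draw
    "ARMv8 SoC":     (900.0, 30.0),     # modest throughput, very low draw
    "GPU node":      (21000.0, 900.0),
}
for name, score in rank_by_perf_per_watt(benchmarks):
    print(f"{name}: {score:.1f} events/s per watt")
```

    The interesting cases are exactly the ones illustrated: a low-power SoC can win on this metric despite losing badly on raw throughput.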

  3. Future Computing Platforms for Science in a Power Constrained Era

    NASA Astrophysics Data System (ADS)

    Abdurachmanov, David; Elmer, Peter; Eulisse, Giulio; Knight, Robert

    2015-12-01

    Power consumption will be a key constraint on the future growth of Distributed High Throughput Computing (DHTC) as used by High Energy Physics (HEP). This makes performance-per-watt a crucial metric for selecting cost-efficient computing solutions. For this paper, we have done a wide survey of current and emerging architectures becoming available on the market including x86-64 variants, ARMv7 32-bit, ARMv8 64-bit, Many-Core and GPU solutions, as well as newer System-on-Chip (SoC) solutions. We compare performance and energy efficiency using an evolving set of standardized HEP-related benchmarks and power measurement techniques we have been developing. We evaluate the potential for use of such computing solutions in the context of DHTC systems, such as the Worldwide LHC Computing Grid (WLCG).

  4. Future computing platforms for science in a power constrained era

    DOE PAGESBeta

    Abdurachmanov, David; Elmer, Peter; Eulisse, Giulio; Knight, Robert

    2015-01-01

    Power consumption will be a key constraint on the future growth of Distributed High Throughput Computing (DHTC) as used by High Energy Physics (HEP). This makes performance-per-watt a crucial metric for selecting cost-efficient computing solutions. For this paper, we have done a wide survey of current and emerging architectures becoming available on the market including x86-64 variants, ARMv7 32-bit, ARMv8 64-bit, Many-Core and GPU solutions, as well as newer System-on-Chip (SoC) solutions. We compare performance and energy efficiency using an evolving set of standardized HEP-related benchmarks and power measurement techniques we have been developing. In conclusion, we evaluate the potential for use of such computing solutions in the context of DHTC systems, such as the Worldwide LHC Computing Grid (WLCG).

  5. Future computing platforms for science in a power constrained era

    SciTech Connect

    Abdurachmanov, David; Elmer, Peter; Eulisse, Giulio; Knight, Robert

    2015-01-01

    Power consumption will be a key constraint on the future growth of Distributed High Throughput Computing (DHTC) as used by High Energy Physics (HEP). This makes performance-per-watt a crucial metric for selecting cost-efficient computing solutions. For this paper, we have done a wide survey of current and emerging architectures becoming available on the market including x86-64 variants, ARMv7 32-bit, ARMv8 64-bit, Many-Core and GPU solutions, as well as newer System-on-Chip (SoC) solutions. We compare performance and energy efficiency using an evolving set of standardized HEP-related benchmarks and power measurement techniques we have been developing. In conclusion, we evaluate the potential for use of such computing solutions in the context of DHTC systems, such as the Worldwide LHC Computing Grid (WLCG).

  6. CHARMM additive and polarizable force fields for biophysics and computer-aided drug design

    PubMed Central

    Vanommeslaeghe, K.

    2014-01-01

    Background Molecular Mechanics (MM) is the method of choice for computational studies of biomolecular systems owing to its modest computational cost, which makes it possible to routinely perform molecular dynamics (MD) simulations on chemical systems of biophysical and biomedical relevance. Scope of Review As one of the main factors limiting the accuracy of MD results is the empirical force field used, the present paper offers a review of recent developments in the CHARMM additive force field, one of the most popular biomolecular force fields. Additionally, we present a detailed discussion of the CHARMM Drude polarizable force field, anticipating a growth in the importance and utilization of polarizable force fields in the near future. Throughout the discussion emphasis is placed on the force fields' parametrization philosophy and methodology. Major Conclusions Recent improvements in the CHARMM additive force field are mostly related to newly found weaknesses in the previous generation of additive force fields. Beyond the additive approximation is the newly available CHARMM Drude polarizable force field, which allows for MD simulations of up to 1 microsecond on proteins, DNA, lipids and carbohydrates. General Significance Addressing the limitations ensures the reliability of the new CHARMM36 additive force field for the types of calculations that are presently coming into routine computational reach, while the availability of the Drude polarizable force field offers an inherently more accurate model of the underlying physical forces driving macromolecular structures and dynamics. PMID:25149274

  7. Harmonic analysis of spacecraft power systems using a personal computer

    NASA Technical Reports Server (NTRS)

    Williamson, Frank; Sheble, Gerald B.

    1989-01-01

    The effects that nonlinear devices such as ac/dc converters, HVDC transmission links, and motor drives have on spacecraft power systems are discussed. The nonsinusoidal currents, along with the corresponding voltages, are calculated by a harmonic power flow which decouples and solves for each harmonic component individually using an iterative Newton-Raphson algorithm. The sparsity of the harmonic equations and of the overall Jacobian matrix is used to advantage to save computer memory and to reduce computation time. The algorithm could also be modified to analyze each harmonic separately instead of all at the same time.
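    The per-harmonic Newton-Raphson solve can be illustrated in the scalar case. This is a stand-in for the full Jacobian system, with an assumed exponential device law (not the paper's converter models): given a target branch current, Newton iteration recovers the voltage at which the nonlinear device passes it.

```python
# Simplified sketch (a scalar stand-in for the paper's full Jacobian):
# Newton-Raphson on a nonlinear branch equation i(v) = Is*(exp(v/Vt) - 1),
# iterating on the current mismatch until the update is negligible.
import math

def newton_voltage(i_target, i_s=1e-9, v_t=0.025, v0=0.5, tol=1e-12):
    v = v0
    for _ in range(100):
        f = i_s * (math.exp(v / v_t) - 1.0) - i_target   # current mismatch
        df = i_s * math.exp(v / v_t) / v_t               # scalar "Jacobian"
        step = f / df
        v -= step
        if abs(step) < tol:
            break
    return v

print(round(newton_voltage(0.1), 6))   # voltage giving 100 mA
```

    In the harmonic power flow, the same mismatch/Jacobian/update cycle runs over a sparse vector of harmonic voltage components instead of a single scalar.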

  8. 19 CFR 210.6 - Computation of time, additional hearings, postponements, continuances, and extensions of time.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... 19 Customs Duties 3 2010-04-01 2010-04-01 false Computation of time, additional hearings, postponements, continuances, and extensions of time. 210.6 Section 210.6 Customs Duties UNITED STATES INTERNATIONAL TRADE COMMISSION INVESTIGATIONS OF UNFAIR PRACTICES IN IMPORT TRADE ADJUDICATION AND ENFORCEMENT Rules of General Applicability §...

  9. Jaguar: The World's Most Powerful Computer

    SciTech Connect

    Bland, Arthur S Buddy; Rogers, James H; Kendall, Ricky A; Kothe, Douglas B; Shipman, Galen M

    2009-01-01

    The Cray XT system at ORNL is the world's most powerful computer, with several applications exceeding one-petaflops performance. This paper describes the architecture of Jaguar with combined XT4 and XT5 nodes, along with an external Lustre file system and external login nodes. We also present some early results from Jaguar.

  10. Modeling and Analysis of Power Processing Systems. [use of a digital computer for designing power plants

    NASA Technical Reports Server (NTRS)

    Fegley, K. A.; Hayden, J. H.; Rehmann, D. W.

    1974-01-01

    The feasibility of formulating a methodology for the modeling and analysis of aerospace electrical power processing systems is investigated. It is shown that a digital computer may be used in an interactive mode for the design, modeling, analysis, and comparison of power processing systems.

  11. Flash on disk for low-power multimedia computing

    NASA Astrophysics Data System (ADS)

    Singleton, Leo; Nathuji, Ripal; Schwan, Karsten

    2007-01-01

    Mobile multimedia computers require large amounts of data storage, yet must consume low power in order to prolong battery life. Solid-state storage offers low power consumption, but its capacity is an order of magnitude smaller than the hard disks needed for high-resolution photos and digital video. In order to create a device with the space of a hard drive, yet the low power consumption of solid-state storage, hardware manufacturers have proposed using flash memory as a write buffer on mobile systems. This paper evaluates the power savings of such an approach and also considers other possible flash allocation algorithms, using both hardware- and software-level flash management. Its contributions also include a set of typical multimedia-rich workloads for mobile systems and power models based upon current disk and flash technology. Based on these workloads, we demonstrate an average power savings of 267 mW (53% of disk power) using hardware-only approaches. Next, we propose another algorithm, termed Energy-efficient Virtual Storage using Application-Level Framing (EVS-ALF), which uses both hardware and software for power management. By collecting information from the applications and using this metadata to perform intelligent flash allocation and prefetching, EVS-ALF achieves an average power savings of 307 mW (61%), another 8% improvement over hardware-only techniques.
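    A back-of-envelope version of the savings argument can be written down directly. All power figures below are assumptions for illustration, not the paper's measurements: with a flash buffer absorbing writes, the disk spins up only for periodic flushes, so average power is a duty-cycle blend of active and idle draw plus the flash overhead.

```python
# Toy model (parameters are assumptions, not measurements from the paper):
# average disk-subsystem power when a flash write buffer lets the disk
# stay idle except during periodic flushes.

def avg_power_with_buffer(disk_active_w, disk_idle_w, flash_w,
                          flush_duty_cycle):
    """Disk is active only during flushes; flash absorbs writes otherwise."""
    disk = (disk_active_w * flush_duty_cycle
            + disk_idle_w * (1 - flush_duty_cycle))
    return disk + flash_w

always_on = 0.505                          # W, disk kept spinning (assumed)
buffered = avg_power_with_buffer(disk_active_w=2.0, disk_idle_w=0.1,
                                 flash_w=0.05, flush_duty_cycle=0.05)
print(round((always_on - buffered) * 1000))   # mW saved
```

    The interesting design variable is the flush duty cycle, which is exactly what buffer allocation and prefetching policies such as the paper's EVS-ALF try to drive down.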

  12. GridPACK Toolkit for Developing Power Grid Simulations on High Performance Computing Platforms

    SciTech Connect

    Palmer, Bruce J.; Perkins, William A.; Glass, Kevin A.; Chen, Yousu; Jin, Shuangshuang; Callahan, Charles D.

    2013-11-30

    This paper describes the GridPACK™ framework, which is designed to help power grid engineers develop modeling software capable of running on today's high performance computers. The framework contains modules for setting up distributed power grid networks, assigning buses and branches with arbitrary behaviors to the network, creating distributed matrices and vectors, using parallel linear and non-linear solvers to solve algebraic equations, and mapping functionality to create matrices and vectors based on properties of the network. The framework also includes functionality to support IO and to manage errors.

  13. High Performance Computing - Power Application Programming Interface Specification.

    SciTech Connect

    Laros, James H.,; Kelly, Suzanne M.; Pedretti, Kevin; Grant, Ryan; Olivier, Stephen Lecler; Levenhagen, Michael J.; DeBonis, David

    2014-08-01

    Measuring and controlling the power and energy consumption of high performance computing systems by various components in the software stack is an active research area [13, 3, 5, 10, 4, 21, 19, 16, 7, 17, 20, 18, 11, 1, 6, 14, 12]. Implementations in lower level software layers are beginning to emerge in some production systems, which is very welcome. To be most effective, a portable interface to measurement and control features would significantly facilitate participation by all levels of the software stack. We present a proposal for a standard power Application Programming Interface (API) that endeavors to cover the entire software space, from generic hardware interfaces to the input from the computer facility manager.
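    The shape of such a portable interface can be sketched as follows. The class and method names are invented here for illustration; this is not the actual Power API specification, only a minimal picture of a uniform measurement/control surface over a power-manageable component.

```python
# Hypothetical sketch (names invented, not the actual specification): one
# object per node, socket, or device, exposing a measurement side (energy
# counters) and a control side (power caps) through a uniform interface.
class PowerObject:
    """A power-manageable component with measurement and control."""
    def __init__(self, name, power_cap_watts=None):
        self.name = name
        self.power_cap = power_cap_watts
        self._energy_joules = 0.0

    def record_sample(self, watts, seconds):
        self._energy_joules += watts * seconds   # integrate measurements

    def get_energy(self):                        # measurement side
        return self._energy_joules

    def set_cap(self, watts):                    # control side
        self.power_cap = watts

node = PowerObject("node0")
node.record_sample(watts=250.0, seconds=2.0)
node.set_cap(200.0)
print(node.get_energy(), node.power_cap)
```

    The point of standardizing such a surface is that a scheduler, runtime, or facility manager can program against it without knowing which vendor counters or control knobs sit underneath.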

  14. Design of Power Quality Monitor Based on Embedded Industrial Computer

    NASA Astrophysics Data System (ADS)

    Junfeng, Huang; Hao, Sun; Xiaolin, Wei

A design for a power quality monitoring device based on an embedded industrial computer is proposed, and the framework and algorithms of the device are introduced. Because harmonic disturbances degrade measurement accuracy, a windowing scheme combined with interpolation is used to improve detection accuracy; in addition, a user interface was designed using the Delphi programming tool. Experiments show that the device offers good reliability and practicality.
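The windowing-plus-interpolation scheme mentioned in the abstract can be illustrated with a generic windowed-FFT frequency estimator. This is a minimal sketch of the general technique (Hann window plus parabolic interpolation on the log spectrum), not the device's actual algorithm; the sampling rate and test tone are invented for illustration.

```python
import numpy as np

def estimate_frequency(x, fs):
    """Estimate a dominant tone's frequency via a Hann-windowed FFT
    with parabolic interpolation around the spectral peak."""
    n = len(x)
    w = np.hanning(n)
    spec = np.abs(np.fft.rfft(x * w))
    k = int(np.argmax(spec[1:-1])) + 1   # peak bin (skip DC and Nyquist)
    # Parabolic interpolation on the log magnitude refines the bin offset,
    # mitigating the leakage caused by asynchronous sampling.
    a, b, c = np.log(spec[k - 1]), np.log(spec[k]), np.log(spec[k + 1])
    delta = 0.5 * (a - c) / (a - 2 * b + c)
    return (k + delta) * fs / n

fs = 3200.0                        # 64 samples per nominal 50 Hz cycle
t = np.arange(4096) / fs           # non-integer number of cycles -> leakage
x = np.sin(2 * np.pi * 50.3 * t)   # fundamental slightly off-nominal
print(estimate_frequency(x, fs))   # close to 50.3 Hz despite leakage
```

Without the window and interpolation, the raw FFT bin spacing here (0.78 Hz) would limit the estimate to the nearest bin; the interpolated estimate recovers the off-nominal frequency to a small fraction of a bin.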

  15. Value of Faster Computation for Power Grid Operation

    SciTech Connect

    Chen, Yousu; Huang, Zhenyu; Elizondo, Marcelo A.

    2012-09-30

    As a result of the grid evolution meeting the information revolution, the power grid is becoming far more complex than it used to be. How to feed data in, perform analysis, and extract information in a real-time manner is a fundamental challenge in today’s power grid operation, not to mention the significantly increased complexity in the smart grid environment. Therefore, high performance computing (HPC) becomes one of the advanced technologies used to meet the requirement of real-time operation. This paper presents benefit case studies to show the value of fast computation for operation. Two fundamental operation functions, state estimation (SE) and contingency analysis (CA), are used as examples. In contrast with today’s tools, fast SE can estimate system status in a few seconds—comparable to measurement cycles. Fast CA can solve more contingencies in a shorter period, reducing the possibility of missing critical contingencies. The benefit case study results clearly show the value of faster computation for increasing the reliability and efficiency of power system operation.
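Of the two operation functions named above, contingency analysis is the easier to sketch. The following is a toy N-1 screen on a 3-bus network using the DC power-flow approximation; the network, injections, and flow limit are invented for illustration and bear no relation to the paper's test systems.

```python
import numpy as np

def dc_flows(lines, injections, n_bus, slack=0):
    """Solve the DC power flow B*theta = P and return per-line flows (p.u.)."""
    B = np.zeros((n_bus, n_bus))
    for i, j, x in lines:
        b = 1.0 / x
        B[i, i] += b; B[j, j] += b
        B[i, j] -= b; B[j, i] -= b
    keep = [k for k in range(n_bus) if k != slack]   # remove slack bus
    theta = np.zeros(n_bus)
    theta[keep] = np.linalg.solve(B[np.ix_(keep, keep)], injections[keep])
    return [(theta[i] - theta[j]) / x for i, j, x in lines]

def n_minus_1_violations(lines, injections, n_bus, limit):
    """Drop each line in turn, re-solve, and flag contingencies that
    overload any surviving line."""
    bad = []
    for k in range(len(lines)):
        remaining = lines[:k] + lines[k + 1:]
        flows = dc_flows(remaining, injections, n_bus)
        if any(abs(f) > limit for f in flows):
            bad.append(k)
    return bad

# Toy 3-bus system: generator at bus 0, loads at buses 1 and 2.
lines = [(0, 1, 0.1), (0, 2, 0.1), (1, 2, 0.1)]   # (from, to, reactance)
P = np.array([1.5, -0.9, -0.6])                   # injections, p.u.
print(n_minus_1_violations(lines, P, 3, limit=1.0))
```

The point of the paper is that loops like `n_minus_1_violations` scale as one power-flow solve per contingency, so faster computation directly translates into more contingencies screened per operating cycle.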

  16. Rotating Detonation Combustion: A Computational Study for Stationary Power Generation

    NASA Astrophysics Data System (ADS)

    Escobar, Sergio

The increased availability of gaseous fossil fuels in the US has led to the substantial growth of stationary Gas Turbine (GT) usage for electrical power generation. In fact, from 2013 to 2014, out of the 11 terawatt-hours per day produced from fossil fuels, approximately 27% was generated through the combustion of natural gas in stationary GTs. The thermodynamic efficiency of simple-cycle GTs has increased from 20% to 40% during the last six decades, mainly due to research and development in the fields of combustion science, material science and machine design. However, additional improvements have become more costly and more difficult to obtain as technology is further refined. An alternative way to improve GT thermal efficiency is the implementation of a combustion regime leading to pressure gain, rather than pressure loss, across the combustor. One concept being considered for this purpose is Rotating Detonation Combustion (RDC). RDC refers to a combustion regime in which a detonation wave propagates continuously in the azimuthal direction of a cylindrical annular chamber. In RDC, the fuel and oxidizer, injected from separate streams, are mixed near the injection plane and are then consumed by the detonation front traveling inside the annular gap of the combustion chamber. The detonation products then expand in the azimuthal and axial directions away from the detonation front and exit through the combustion chamber outlet. In the present study Computational Fluid Dynamics (CFD) is used to predict the performance of RDC at operating conditions relevant to GT applications. As part of this study, a modeling strategy for RDC simulations was developed. The validation of the model was performed using benchmark cases with different levels of complexity. First, 2D simulations of non-reactive shock tubes and detonation tubes were performed. The numerical predictions that were obtained using different modeling parameters were compared with

  17. Power/energy use cases for high performance computing.

    SciTech Connect

Laros, James H.; Kelly, Suzanne M.; Hammond, Steven; Elmore, Ryan; Munch, Kristin

    2013-12-01

Power and Energy have been identified as a first order challenge for future extreme scale high performance computing (HPC) systems. In practice the breakthroughs will need to be provided by the hardware vendors. But to make the best use of the solutions in an HPC environment, it will likely require periodic tuning by facility operators and software components. This document describes the actions and interactions needed to maximize power resources. It strives to cover the entire operational space that an HPC system occupies. The descriptions are presented as formal use cases, as documented in the Unified Modeling Language Specification [1]. The document is intended to provide a common understanding to the HPC community of the necessary management and control capabilities. Assuming a common understanding can be achieved, the next step will be to develop a set of Application Programming Interfaces (APIs) that hardware vendors and software developers can use to steer power consumption.

  18. A micro-computer program package for the computation of intraocular lens powers.

    PubMed

    Etienne, C E

    1984-05-01

An accurate micro-computer software package has been designed to assist in the calculation of IOL powers. This program set can be used on any Z80 computer running the CP/M operating system. It accepts measurement data and calculates the implant powers and their estimated postoperative refractions, using the same formulas as the Binkhorst IOL Power Module for the TI-58C programmable calculator. One advantage is that the method of data entry helps eliminate clerical errors such as number transposition and impossible value entry. The program also allows patient information to be stored in a disc file for future use. This can speed the search for a patient's records and help increase the efficiency of the office. The program can be executed in much less time and with greater accuracy and legibility than is possible with the TI-58C programs. PMID:6547223

  19. Addition of flexible body option to the TOLA computer program, part 1

    NASA Technical Reports Server (NTRS)

    Dick, J. W.; Benda, B. J.

    1975-01-01

This report describes a flexible body option that was developed and added to the Takeoff and Landing Analysis (TOLA) computer program. The addition of the flexible body option to TOLA allows it to be used to study essentially any conventional type of airplane in the ground operating environment. It provides the capability to predict the total motion of selected points on the airplane. The analytical methods incorporated in the program and operating instructions for the option are described. A program listing is included along with several example problems to aid in interpretation of the operating instructions and to illustrate program usage.

  20. Quantum ring-polymer contraction method: Including nuclear quantum effects at no additional computational cost in comparison to ab initio molecular dynamics

    NASA Astrophysics Data System (ADS)

    John, Christopher; Spura, Thomas; Habershon, Scott; Kühne, Thomas D.

    2016-04-01

    We present a simple and accurate computational method which facilitates ab initio path-integral molecular dynamics simulations, where the quantum-mechanical nature of the nuclei is explicitly taken into account, at essentially no additional computational cost in comparison to the corresponding calculation using classical nuclei. The predictive power of the proposed quantum ring-polymer contraction method is demonstrated by computing various static and dynamic properties of liquid water at ambient conditions using density functional theory. This development will enable routine inclusion of nuclear quantum effects in ab initio molecular dynamics simulations of condensed-phase systems.

  1. Profiling an application for power consumption during execution on a plurality of compute nodes

    DOEpatents

    Archer, Charles J.; Blocksome, Michael A.; Peters, Amanda E.; Ratterman, Joseph D.; Smith, Brian E.

    2012-08-21

    Methods, apparatus, and products are disclosed for profiling an application for power consumption during execution on a compute node that include: receiving an application for execution on a compute node; identifying a hardware power consumption profile for the compute node, the hardware power consumption profile specifying power consumption for compute node hardware during performance of various processing operations; determining a power consumption profile for the application in dependence upon the application and the hardware power consumption profile for the compute node; and reporting the power consumption profile for the application.
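The profiling flow claimed above combines a per-node hardware power profile with the application's operation mix. The following is a schematic sketch of that combination step only; the operation classes, wattages, and trace durations are invented for illustration and are not from the patent.

```python
# Hypothetical per-operation power figures (watts) for one compute node,
# i.e. the "hardware power consumption profile".
hardware_profile = {"fpu": 62.0, "memory": 48.0, "network": 35.0, "idle": 20.0}

# Time the application spends in each operation class, in seconds
# (in practice this would come from an instrumented execution trace).
app_trace = {"fpu": 120.0, "memory": 45.0, "network": 30.0, "idle": 5.0}

def power_profile(trace, profile):
    """Combine an application's operation mix with the node's hardware
    power profile into energy per operation class and average power."""
    energy = {op: t * profile[op] for op, t in trace.items()}   # joules
    total_time = sum(trace.values())
    avg_power = sum(energy.values()) / total_time               # watts
    return energy, avg_power

energy, avg = power_profile(app_trace, hardware_profile)
print(energy, avg)   # per-class energy and average draw in watts
```

The reported profile (here, `energy` and `avg`) is what the patent's final "reporting" step would emit for the application.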

  2. Reducing power consumption during execution of an application on a plurality of compute nodes

    DOEpatents

    Archer, Charles J.; Blocksome, Michael A.; Peters, Amanda E.; Ratterman, Joseph D.; Smith, Brian E.

    2012-06-05

    Methods, apparatus, and products are disclosed for reducing power consumption during execution of an application on a plurality of compute nodes that include: executing, by each compute node, an application, the application including power consumption directives corresponding to one or more portions of the application; identifying, by each compute node, the power consumption directives included within the application during execution of the portions of the application corresponding to those identified power consumption directives; and reducing power, by each compute node, to one or more components of that compute node according to the identified power consumption directives during execution of the portions of the application corresponding to those identified power consumption directives.

  3. Profiling an application for power consumption during execution on a compute node

    DOEpatents

    Archer, Charles J; Blocksome, Michael A; Peters, Amanda E; Ratterman, Joseph D; Smith, Brian E

    2013-09-17

    Methods, apparatus, and products are disclosed for profiling an application for power consumption during execution on a compute node that include: receiving an application for execution on a compute node; identifying a hardware power consumption profile for the compute node, the hardware power consumption profile specifying power consumption for compute node hardware during performance of various processing operations; determining a power consumption profile for the application in dependence upon the application and the hardware power consumption profile for the compute node; and reporting the power consumption profile for the application.

  4. Computational study of the rate constants and free energies of intramolecular radical addition to substituted anilines

    PubMed Central

    Seddiqzai, Meriam; Dahmen, Tobias; Sure, Rebecca

    2013-01-01

The intramolecular radical addition to aniline derivatives was investigated by DFT calculations. The computational methods were benchmarked by comparing the calculated values of the rate constant for the 5-exo cyclization of the hexenyl radical with the experimental values. The dispersion-corrected PW6B95-D3 functional provided very good results with deviations for the free activation barrier compared to the experimental values of only about 0.5 kcal mol−1 and was therefore employed in further calculations. Corrections for intramolecular London dispersion and solvation effects in the quantum chemical treatment are essential to obtain consistent and accurate theoretical data. For the investigated radical addition reaction it turned out that the polarity of the molecules is important and that a combination of electrophilic radicals with preferably nucleophilic arenes results in the highest rate constants. This is opposite to the Minisci reaction where the radical acts as nucleophile and the arene as electrophile. The substitution at the N-atom of the aniline is crucial. Methyl substitution leads to slower addition than phenyl substitution. Carbamates as substituents are suitable only when the radical center is not too electrophilic. No correlations between free reaction barriers and energies (ΔG‡ and ΔGR) are found. Addition reactions leading to indanes or dihydrobenzofurans are too slow to be useful synthetically. PMID:24062821

  5. Computational study of the rate constants and free energies of intramolecular radical addition to substituted anilines.

    PubMed

    Gansäuer, Andreas; Seddiqzai, Meriam; Dahmen, Tobias; Sure, Rebecca; Grimme, Stefan

    2013-01-01

The intramolecular radical addition to aniline derivatives was investigated by DFT calculations. The computational methods were benchmarked by comparing the calculated values of the rate constant for the 5-exo cyclization of the hexenyl radical with the experimental values. The dispersion-corrected PW6B95-D3 functional provided very good results with deviations for the free activation barrier compared to the experimental values of only about 0.5 kcal mol−1 and was therefore employed in further calculations. Corrections for intramolecular London dispersion and solvation effects in the quantum chemical treatment are essential to obtain consistent and accurate theoretical data. For the investigated radical addition reaction it turned out that the polarity of the molecules is important and that a combination of electrophilic radicals with preferably nucleophilic arenes results in the highest rate constants. This is opposite to the Minisci reaction where the radical acts as nucleophile and the arene as electrophile. The substitution at the N-atom of the aniline is crucial. Methyl substitution leads to slower addition than phenyl substitution. Carbamates as substituents are suitable only when the radical center is not too electrophilic. No correlations between free reaction barriers and energies (ΔG‡ and ΔGR) are found. Addition reactions leading to indanes or dihydrobenzofurans are too slow to be useful synthetically. PMID:24062821

  6. PSD computations using Welch's method. [Power Spectral Density (PSD)

    SciTech Connect

    Solomon, Jr, O M

    1991-12-01

This report describes Welch's method for computing Power Spectral Densities (PSDs). We first describe the bandpass filter method, which uses filtering, squaring, and averaging operations to estimate a PSD. Second, we delineate the relationship of Welch's method to the bandpass filter method. Third, the frequency domain signal-to-noise ratio for a sine wave in white noise is derived. This derivation includes the computation of the noise floor due to quantization noise. The signal-to-noise ratio and noise floor depend on the FFT length and window. Fourth, the variance of the Welch PSD estimate is discussed via chi-square random variables and degrees of freedom. This report contains many examples, figures and tables to illustrate the concepts. 26 refs.
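Welch's method as summarized above (segment, window, square, average) can be sketched in a few lines. This is a minimal implementation with a Hann window, 50% overlap, and one-sided density scaling; it illustrates the technique, not the report's code, and the test signal is invented.

```python
import numpy as np

def welch_psd(x, fs, nperseg=256):
    """Welch PSD estimate: split the signal into 50%-overlapping
    Hann-windowed segments and average the modified periodograms."""
    step = nperseg // 2
    w = np.hanning(nperseg)
    scale = 1.0 / (fs * np.sum(w ** 2))   # density (per-Hz) scaling
    segs = [x[i:i + nperseg] for i in range(0, len(x) - nperseg + 1, step)]
    psds = []
    for s in segs:
        X = np.fft.rfft(w * s)
        p = scale * np.abs(X) ** 2
        p[1:-1] *= 2.0                    # fold to a one-sided spectrum
        psds.append(p)
    f = np.fft.rfftfreq(nperseg, 1.0 / fs)
    return f, np.mean(psds, axis=0)       # averaging reduces variance

rng = np.random.default_rng(0)
fs = 1024.0
t = np.arange(8192) / fs
x = np.sin(2 * np.pi * 100.0 * t) + 0.1 * rng.standard_normal(t.size)
f, p = welch_psd(x, fs)
print(f[np.argmax(p)])   # peak lands on the 100 Hz sine
```

Averaging the segment periodograms is what drives the chi-square degrees-of-freedom argument in the report: more averaged segments means lower variance per frequency bin, at the cost of frequency resolution.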

  7. A COMPUTATIONAL WORKBENCH ENVIRONMENT FOR VIRTUAL POWER PLANT SIMULATION

    SciTech Connect

    Mike Bockelie; Dave Swensen; Martin Denison; Connie Senior; Adel Sarofim; Bene Risio

    2002-07-28

This is the seventh Quarterly Technical Report for DOE Cooperative Agreement No.: DE-FC26-00NT41047. The goal of the project is to develop and demonstrate a computational workbench for simulating the performance of Vision 21 Power Plant Systems. Within the last quarter, good progress has been made on the development of the IGCC workbench. A series of parametric CFD simulations for single stage and two stage generic gasifier configurations have been performed. An advanced flowing slag model has been implemented into the CFD based gasifier model. A literature review has been performed on published gasification kinetics. Reactor models have been developed and implemented into the workbench for the majority of the heat exchangers, gas clean up system and power generation system for the Vision 21 reference configuration. Modifications to the software infrastructure of the workbench have begun to allow interfacing to the workbench reactor models that utilize the CAPE-Open software interface protocol.

  8. HMcode: Halo-model matter power spectrum computation

    NASA Astrophysics Data System (ADS)

    Mead, Alexander

    2015-08-01

    HMcode computes the halo-model matter power spectrum. It is written in Fortran90 and has been designed to quickly (~0.5s for 200 k-values across 16 redshifts on a single core) produce matter spectra for a wide range of cosmological models. In testing it was shown to match spectra produced by the 'Coyote Emulator' to an accuracy of 5 per cent for k less than 10h Mpc^-1. However, it can also produce spectra well outside of the parameter space of the emulator.

  9. Computing the acoustic radiation force exerted on a sphere using the translational addition theorem.

    PubMed

    Silva, Glauber T; Baggio, André L; Lopes, J Henrique; Mitri, Farid G

    2015-03-01

    In this paper, the translational addition theorem for spherical functions is employed to calculate the acoustic radiation force produced by an arbitrary shaped beam on a sphere arbitrarily suspended in an inviscid fluid. The procedure is also based on the partial-wave expansion method, which depends on the beam-shape and scattering coefficients. Given a set of beam-shape coefficients (BSCs) for an acoustic beam relative to a reference frame, the translational addition theorem can be used to obtain the BSCs relative to the sphere positioned anywhere in the medium. The scattering coefficients are obtained from the acoustic boundary conditions across the sphere's surface. The method based on the addition theorem is particularly useful to avoid quadrature schemes to obtain the BSCs. We use it to compute the acoustic radiation force exerted by a spherically focused beam (in the paraxial approximation) on a silicone-oil droplet (compressible fluid sphere). The analysis is carried out in the Rayleigh (i.e., the particle diameter is much smaller than the wavelength) and Mie (i.e., the particle diameter is of the order of the wavelength or larger) scattering regimes. The obtained results show that the paraxial focused beam can only trap particles in the Rayleigh scattering regime. PMID:25768823

  10. Additive Manufacturing and High-Performance Computing: a Disruptive Latent Technology

    NASA Astrophysics Data System (ADS)

    Goodwin, Bruce

    2015-03-01

This presentation will discuss the relationship between recent advances in Additive Manufacturing (AM) technology, High-Performance Computing (HPC) simulation and design capabilities, and related advances in Uncertainty Quantification (UQ), and then examines their impacts upon national and international security. The presentation surveys how AM accelerates the fabrication process, while HPC combined with UQ provides a fast track for the engineering design cycle. The combination of AM and HPC/UQ almost eliminates the engineering design and prototype iterative cycle, thereby dramatically reducing cost of production and time-to-market. These methods thereby present significant benefits for US national interests, both civilian and military, in an age of austerity. Finally, considering cyber security issues and the advent of the "cloud," these disruptive, currently latent technologies may well enable proliferation and so challenge both nuclear and non-nuclear aspects of international security.

  11. Laser powder bed fusion additive manufacturing of metals; physics, computational, and materials challenges

    SciTech Connect

King, W. E.; Anderson, A. T.; Ferencz, R. M.; Hodge, N. E.; Kamath, C.; Khairallah, S. A.; Rubenchik, A. M.

    2015-12-29

    The production of metal parts via laser powder bed fusion additive manufacturing is growing exponentially. However, the transition of this technology from production of prototypes to production of critical parts is hindered by a lack of confidence in the quality of the part. Confidence can be established via a fundamental understanding of the physics of the process. It is generally accepted that this understanding will be increasingly achieved through modeling and simulation. However, there are significant physics, computational, and materials challenges stemming from the broad range of length and time scales and temperature ranges associated with the process. In this study, we review the current state of the art and describe the challenges that need to be met to achieve the desired fundamental understanding of the physics of the process.

  12. Laser powder bed fusion additive manufacturing of metals; physics, computational, and materials challenges

    SciTech Connect

    King, W. E.; Anderson, A. T.; Ferencz, R. M.; Hodge, N. E.; Khairallah, S. A.; Kamath, C.; Rubenchik, A. M.

    2015-12-15

    The production of metal parts via laser powder bed fusion additive manufacturing is growing exponentially. However, the transition of this technology from production of prototypes to production of critical parts is hindered by a lack of confidence in the quality of the part. Confidence can be established via a fundamental understanding of the physics of the process. It is generally accepted that this understanding will be increasingly achieved through modeling and simulation. However, there are significant physics, computational, and materials challenges stemming from the broad range of length and time scales and temperature ranges associated with the process. In this paper, we review the current state of the art and describe the challenges that need to be met to achieve the desired fundamental understanding of the physics of the process.

  13. Laser powder bed fusion additive manufacturing of metals; physics, computational, and materials challenges

    NASA Astrophysics Data System (ADS)

    King, W. E.; Anderson, A. T.; Ferencz, R. M.; Hodge, N. E.; Kamath, C.; Khairallah, S. A.; Rubenchik, A. M.

    2015-12-01

    The production of metal parts via laser powder bed fusion additive manufacturing is growing exponentially. However, the transition of this technology from production of prototypes to production of critical parts is hindered by a lack of confidence in the quality of the part. Confidence can be established via a fundamental understanding of the physics of the process. It is generally accepted that this understanding will be increasingly achieved through modeling and simulation. However, there are significant physics, computational, and materials challenges stemming from the broad range of length and time scales and temperature ranges associated with the process. In this paper, we review the current state of the art and describe the challenges that need to be met to achieve the desired fundamental understanding of the physics of the process.

  14. Laser powder bed fusion additive manufacturing of metals; physics, computational, and materials challenges

    DOE PAGESBeta

King, W. E.; Anderson, A. T.; Ferencz, R. M.; Hodge, N. E.; Kamath, C.; Khairallah, S. A.; Rubenchik, A. M.

    2015-12-29

The production of metal parts via laser powder bed fusion additive manufacturing is growing exponentially. However, the transition of this technology from production of prototypes to production of critical parts is hindered by a lack of confidence in the quality of the part. Confidence can be established via a fundamental understanding of the physics of the process. It is generally accepted that this understanding will be increasingly achieved through modeling and simulation. However, there are significant physics, computational, and materials challenges stemming from the broad range of length and time scales and temperature ranges associated with the process. In this study, we review the current state of the art and describe the challenges that need to be met to achieve the desired fundamental understanding of the physics of the process.

  15. Budget-based power consumption for application execution on a plurality of compute nodes

    DOEpatents

    Archer, Charles J; Inglett, Todd A; Ratterman, Joseph D

    2012-10-23

    Methods, apparatus, and products are disclosed for budget-based power consumption for application execution on a plurality of compute nodes that include: assigning an execution priority to each of one or more applications; executing, on the plurality of compute nodes, the applications according to the execution priorities assigned to the applications at an initial power level provided to the compute nodes until a predetermined power consumption threshold is reached; and applying, upon reaching the predetermined power consumption threshold, one or more power conservation actions to reduce power consumption of the plurality of compute nodes during execution of the applications.
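The budget-based scheme claimed above can be sketched as a small simulation: applications run in priority order at an initial power level until cumulative consumption crosses the budget threshold, after which a conservation action (here, throttling to a lower power level) applies. This is a deliberately simplified per-application model of the patent's per-portion throttling; all numbers are invented.

```python
def run_with_budget(apps, node_power, budget, throttled_power):
    """Run apps in priority order (lower number = higher priority);
    once cumulative energy reaches the budget threshold, apply the
    conservation action (throttled power) to the remaining work."""
    consumed = 0.0   # joules so far
    log = []         # (app, power level it ran at)
    for name, priority, seconds in sorted(apps, key=lambda a: a[1]):
        power = node_power if consumed < budget else throttled_power
        consumed += power * seconds
        log.append((name, power))
    return consumed, log

apps = [("solver", 0, 100.0), ("viz", 2, 50.0), ("io", 1, 30.0)]
consumed, log = run_with_budget(apps, node_power=200.0, budget=25000.0,
                                throttled_power=120.0)
print(log)   # the lowest-priority app is the one that gets throttled
```

Because scheduling is priority-ordered, the conservation action naturally falls on the lowest-priority work once the budget is exhausted, which is the intent of assigning execution priorities in the first place.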

  16. Budget-based power consumption for application execution on a plurality of compute nodes

    DOEpatents

    Archer, Charles J; Blocksome, Michael A; Peters, Amanda E; Ratterman, Joseph D; Smith, Brian E

    2013-02-05

    Methods, apparatus, and products are disclosed for budget-based power consumption for application execution on a plurality of compute nodes that include: assigning an execution priority to each of one or more applications; executing, on the plurality of compute nodes, the applications according to the execution priorities assigned to the applications at an initial power level provided to the compute nodes until a predetermined power consumption threshold is reached; and applying, upon reaching the predetermined power consumption threshold, one or more power conservation actions to reduce power consumption of the plurality of compute nodes during execution of the applications.

  17. Computing Power and Sample Size for Informational Odds Ratio †

    PubMed Central

    Efird, Jimmy T.

    2013-01-01

    The informational odds ratio (IOR) measures the post-exposure odds divided by the pre-exposure odds (i.e., information gained after knowing exposure status). A desirable property of an adjusted ratio estimate is collapsibility, wherein the combined crude ratio will not change after adjusting for a variable that is not a confounder. Adjusted traditional odds ratios (TORs) are not collapsible. In contrast, Mantel-Haenszel adjusted IORs, analogous to relative risks (RRs) generally are collapsible. IORs are a useful measure of disease association in case-referent studies, especially when the disease is common in the exposed and/or unexposed groups. This paper outlines how to compute power and sample size in the simple case of unadjusted IORs. PMID:24157518
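Power for an odds-ratio test can be sketched with the standard normal approximation to a Wald test on the log odds ratio. Note this is a generic textbook approximation, not the IOR-specific formulas derived in the paper; the exposure probability, odds ratio, and group size below are invented for illustration.

```python
import math

def phi(z):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def or_power(odds_ratio, p0, n_per_group, alpha=0.05):
    """Approximate power of a two-sided Wald test of a log odds ratio,
    given exposure probability p0 in the referent group."""
    # Exposure probability in cases implied by the odds ratio.
    odds1 = odds_ratio * p0 / (1.0 - p0)
    p1 = odds1 / (1.0 + odds1)
    # Standard error of log OR from expected 2x2 cell counts.
    se = math.sqrt(1.0 / (n_per_group * p1) + 1.0 / (n_per_group * (1 - p1))
                   + 1.0 / (n_per_group * p0) + 1.0 / (n_per_group * (1 - p0)))
    z_alpha = 1.959963984540054   # Phi^-1(1 - alpha/2) for alpha = 0.05
    return phi(abs(math.log(odds_ratio)) / se - z_alpha)

print(or_power(2.0, 0.3, 200))   # power to detect OR = 2 with n = 200/group
```

Doubling the group size raises the power, as expected from the 1/n scaling inside the standard error.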

  18. Intrinsic universality and the computational power of self-assembly.

    PubMed

    Woods, Damien

    2015-07-28

    Molecular self-assembly, the formation of large structures by small pieces of matter sticking together according to simple local interactions, is a ubiquitous phenomenon. A challenging engineering goal is to design a few molecules so that large numbers of them can self-assemble into desired complicated target objects. Indeed, we would like to understand the ultimate capabilities and limitations of this bottom-up fabrication process. We look to theoretical models of algorithmic self-assembly, where small square tiles stick together according to simple local rules in order to carry out a crystal growth process. In this survey, we focus on the use of simulation between such models to classify and separate their computational and expressive powers. Roughly speaking, one model simulates another if they grow the same structures, via the same dynamical growth processes. Our journey begins with the result that there is a single intrinsically universal tile set that, with appropriate initialization and spatial scaling, simulates any instance of Winfree's abstract Tile Assembly Model. This universal tile set exhibits something stronger than Turing universality: it captures the geometry and dynamics of any simulated system in a very direct way. From there we find that there is no such tile set in the more restrictive non-cooperative model, proving it weaker than the full Tile Assembly Model. In the two-handed model, where large structures can bind together in one step, we encounter an infinite set of infinite hierarchies of strictly increasing simulation power. Towards the end of our trip, we find one tile to rule them all: a single rotatable flipable polygonal tile that simulates any tile assembly system. We find another tile that aperiodically tiles the plane (but with small gaps). These and other recent results show that simulation is giving rise to a kind of computational complexity theory for self-assembly. It seems this could be the beginning of a much longer journey

  19. Thermoelectric Power Generation from Lanthanum Strontium Titanium Oxide at Room Temperature through the Addition of Graphene.

    PubMed

    Lin, Yue; Norman, Colin; Srivastava, Deepanshu; Azough, Feridoon; Wang, Li; Robbins, Mark; Simpson, Kevin; Freer, Robert; Kinloch, Ian A

    2015-07-29

The applications of strontium titanium oxide based thermoelectric materials are currently limited by their high operating temperatures of >700 °C. Herein, we show that the thermal operating window of lanthanum strontium titanium oxide (LSTO) can be reduced to room temperature by the addition of a small amount of graphene. This increase in operating performance will enable future applications such as generators in vehicles and other sectors. The LSTO composites incorporated one percent or less of graphene and were sintered under an argon/hydrogen atmosphere. The resultant materials were reduced and possessed a multiphase structure with nanosized grains. The thermal conductivity of the nanocomposites decreased upon the addition of graphene, whereas the electrical conductivity and power factor both increased significantly. These factors, together with a moderate Seebeck coefficient, meant that a high power factor of ∼2500 μW m⁻¹ K⁻² was reached at room temperature at a loading of 0.6 wt % graphene. The highest thermoelectric figure of merit (ZT) was achieved when 0.6 wt % graphene was added (ZT = 0.42 at room temperature and 0.36 at 750 °C), with >280% enhancement compared to that of pure LSTO. A preliminary 7-couple device was produced using bismuth strontium cobalt oxide/graphene-LSTO pucks. This device had a Seebeck coefficient of ∼1500 μV/K and an open voltage of 600 mV at a mean temperature of 219 °C. PMID:26095083
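The power factor and figure of merit quoted above follow from two standard definitions, PF = S²σ and ZT = S²σT/κ. The sketch below evaluates them with illustrative values chosen only to match the magnitudes reported in the abstract; they are not measurements from the paper.

```python
def power_factor(seebeck, sigma):
    """Thermoelectric power factor PF = S^2 * sigma, in W m^-1 K^-2."""
    return seebeck ** 2 * sigma

def figure_of_merit(seebeck, sigma, kappa, T):
    """Dimensionless figure of merit ZT = S^2 * sigma * T / kappa."""
    return power_factor(seebeck, sigma) * T / kappa

# Illustrative values (hypothetical, not from the paper): Seebeck
# coefficient 250 uV/K, conductivity 4e4 S/m, kappa 1.8 W/(m K), 300 K.
S, sigma, kappa, T = 250e-6, 4.0e4, 1.8, 300.0
pf = power_factor(S, sigma)
print(pf * 1e6)                              # ~2500 uW m^-1 K^-2
print(figure_of_merit(S, sigma, kappa, T))   # ~0.42
```

The definitions make the trade-off in the abstract explicit: adding graphene raises σ (and hence PF) while lowering κ, and both changes push ZT up.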

  20. Dynamic effect of sodium-water reaction in fast flux test facility power addition sodium pipes

    SciTech Connect

    Huang, S.N.; Anderson, M.J.

    1990-03-01

The Fast Flux Test Facility (FFTF) is a demonstration and test facility for the sodium-cooled fast breeder reactor. A "power addition" to the facility is being considered to convert some of the dumped, unused heat into electricity. Components and piping systems to be added are sodium-water steam generators, sodium loop extensions from the existing dump heat exchangers to the sodium-water steam generators, and conventional water/steam loops. The sodium loops can be subjected to the dynamic loadings of pressure pulses that are caused by postulated sodium leaks and the subsequent sodium-water reaction in the steam generator. The existing FFTF secondary pipes and the new power addition sodium loops were evaluated for exposure to the dynamic effect of the sodium-water reaction. Elastic and simplified inelastic dynamic analyses were used in this feasibility study. The results indicate that both the maximum strain and the strain range are within the allowable limits. Several cycles of the sodium-water reaction can be sustained by sodium pipes that are supported by ordinary pipe supports and seismic restraints. Expensive axial pipe restraints to withstand the sodium-water reaction loads are not needed, because the pressure-pulse-induced alternating bending stresses act as secondary stresses and the pressure pulse dynamic effect is a deformation-controlled, self-limiting quantity. 14 refs., 7 figs., 3 tabs.

  1. A COMPUTATIONAL WORKBENCH ENVIRONMENT FOR VIRTUAL POWER PLANT SIMULATION

    SciTech Connect

    Mike Bockelie; Dave Swensen; Martin Denison; Connie Senior; Zumao Chen; Temi Linjewile; Adel Sarofim; Bene Risio

    2003-04-25

    This is the tenth Quarterly Technical Report for DOE Cooperative Agreement No: DE-FC26-00NT41047. The goal of the project is to develop and demonstrate a computational workbench for simulating the performance of Vision 21 Power Plant Systems. Within the last quarter, good progress has been made on all aspects of the project. Calculations for a full Vision 21 plant configuration have been performed for two gasifier types. An improved process model for simulating entrained flow gasifiers has been implemented into the workbench. Model development has focused on: a pre-processor module to compute global gasification parameters from standard fuel properties and intrinsic rate information; a membrane-based water-gas shift reactor; and reactors to oxidize fuel cell exhaust gas. The data visualization capabilities of the workbench have been extended by implementing the VTK visualization software, which supports advanced visualization methods, including inexpensive Virtual Reality techniques. The ease-of-use, functionality, and plug-and-play features of the workbench were highlighted through demonstrations of the workbench at a DOE-sponsored coal utilization conference. A white paper has been completed that contains recommendations on the use of component architectures, model interface protocols, and software frameworks for developing a Vision 21 plant simulator.

  2. Complex additive systems for Mn-Zn ferrites with low power loss

    SciTech Connect

    Töpfer, J.; Angermann, A.

    2015-05-07

    Mn-Zn ferrites were prepared via an oxalate-based wet-chemical synthesis process. Nanocrystalline ferrite powders with a particle size of 50 nm were sintered at 1150 °C with 500 ppm CaO and 100 ppm SiO{sub 2} as standard additives. A fine-grained, dense microstructure with a grain size of 4–5 μm was obtained. Simultaneous addition of Nb{sub 2}O{sub 5}, ZrO{sub 2}, V{sub 2}O{sub 5}, and SnO{sub 2} results in low power losses, e.g., 65 mW/cm{sup 3} (500 kHz, 50 mT, 80 °C) and 55 mW/cm{sup 3} (1 MHz, 25 mT, 80 °C). Loss analysis shows that eddy current and residual losses were minimized through the formation of insulating grain boundary phases, which is confirmed by transmission electron microscopy. Addition of SnO{sub 2} increases the ferrous ion concentration and affects the anisotropy, as reflected in permeability measurements μ(T).

  3. Complex additive systems for Mn-Zn ferrites with low power loss

    NASA Astrophysics Data System (ADS)

    Töpfer, J.; Angermann, A.

    2015-05-01

    Mn-Zn ferrites were prepared via an oxalate-based wet-chemical synthesis process. Nanocrystalline ferrite powders with a particle size of 50 nm were sintered at 1150 °C with 500 ppm CaO and 100 ppm SiO2 as standard additives. A fine-grained, dense microstructure with a grain size of 4-5 μm was obtained. Simultaneous addition of Nb2O5, ZrO2, V2O5, and SnO2 results in low power losses, e.g., 65 mW/cm3 (500 kHz, 50 mT, 80 °C) and 55 mW/cm3 (1 MHz, 25 mT, 80 °C). Loss analysis shows that eddy current and residual losses were minimized through the formation of insulating grain boundary phases, which is confirmed by transmission electron microscopy. Addition of SnO2 increases the ferrous ion concentration and affects the anisotropy, as reflected in permeability measurements μ(T).
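The loss analysis mentioned above separates contributions by their frequency dependence: hysteresis loss grows linearly with frequency, eddy-current loss quadratically. A minimal sketch of that classical two-term separation (the coefficients are invented for illustration, not fitted to the paper's data):

```python
# Classical core-loss separation: hysteresis (~f) + eddy-current (~f^2) terms.
# k_h and k_e are made-up illustrative coefficients, not fitted values.
def core_loss_mw_per_cm3(f_hz, b_mt, k_h=2.0e-8, k_e=1.5e-14):
    hysteresis = k_h * f_hz * b_mt**2      # scales linearly with frequency
    eddy = k_e * f_hz**2 * b_mt**2         # scales quadratically with frequency
    return hysteresis + eddy
```

Because the eddy-current term quadruples when frequency doubles, insulating grain-boundary phases (which raise resistivity and shrink k_e) matter most at the high frequencies quoted in the abstract.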

  4. Systems analysis of the space shuttle. [communication systems, computer systems, and power distribution

    NASA Technical Reports Server (NTRS)

    Schilling, D. L.; Oh, S. J.; Thau, F.

    1975-01-01

    Developments in communications systems, computer systems, and power distribution systems for the space shuttle are described. The use of high speed delta modulation for bit rate compression in the transmission of television signals is discussed. Simultaneous Multiprocessor Organization, an approach to computer organization, is presented. Methods of computer simulation and automatic malfunction detection for the shuttle power distribution system are also described.

  5. 47 CFR 15.32 - Test procedures for CPU boards and computer power supplies.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... 47 Telecommunication 1 2014-10-01 2014-10-01 false Test procedures for CPU boards and computer... FREQUENCY DEVICES General § 15.32 Test procedures for CPU boards and computer power supplies. Power supplies and CPU boards used with personal computers and for which separate authorizations are required to...

  6. 47 CFR 15.102 - CPU boards and power supplies used in personal computers.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... computers. 15.102 Section 15.102 Telecommunication FEDERAL COMMUNICATIONS COMMISSION GENERAL RADIO FREQUENCY DEVICES Unintentional Radiators § 15.102 CPU boards and power supplies used in personal computers. (a... modifications that must be made to a personal computer, peripheral device, CPU board or power supply...

  7. 47 CFR 15.102 - CPU boards and power supplies used in personal computers.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... computers. 15.102 Section 15.102 Telecommunication FEDERAL COMMUNICATIONS COMMISSION GENERAL RADIO FREQUENCY DEVICES Unintentional Radiators § 15.102 CPU boards and power supplies used in personal computers. (a... modifications that must be made to a personal computer, peripheral device, CPU board or power supply...

  8. 47 CFR 15.32 - Test procedures for CPU boards and computer power supplies.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... 47 Telecommunication 1 2010-10-01 2010-10-01 false Test procedures for CPU boards and computer... FREQUENCY DEVICES General § 15.32 Test procedures for CPU boards and computer power supplies. Power supplies and CPU boards used with personal computers and for which separate authorizations are required to...

  9. 47 CFR 15.32 - Test procedures for CPU boards and computer power supplies.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... 47 Telecommunication 1 2011-10-01 2011-10-01 false Test procedures for CPU boards and computer... FREQUENCY DEVICES General § 15.32 Test procedures for CPU boards and computer power supplies. Power supplies and CPU boards used with personal computers and for which separate authorizations are required to...

  10. 47 CFR 15.32 - Test procedures for CPU boards and computer power supplies.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... 47 Telecommunication 1 2013-10-01 2013-10-01 false Test procedures for CPU boards and computer... FREQUENCY DEVICES General § 15.32 Test procedures for CPU boards and computer power supplies. Power supplies and CPU boards used with personal computers and for which separate authorizations are required to...

  11. 47 CFR 15.102 - CPU boards and power supplies used in personal computers.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... computers. 15.102 Section 15.102 Telecommunication FEDERAL COMMUNICATIONS COMMISSION GENERAL RADIO FREQUENCY DEVICES Unintentional Radiators § 15.102 CPU boards and power supplies used in personal computers. (a... modifications that must be made to a personal computer, peripheral device, CPU board or power supply...

  12. 47 CFR 15.102 - CPU boards and power supplies used in personal computers.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... computers. 15.102 Section 15.102 Telecommunication FEDERAL COMMUNICATIONS COMMISSION GENERAL RADIO FREQUENCY DEVICES Unintentional Radiators § 15.102 CPU boards and power supplies used in personal computers. (a... modifications that must be made to a personal computer, peripheral device, CPU board or power supply...

  13. 47 CFR 15.102 - CPU boards and power supplies used in personal computers.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... computers. 15.102 Section 15.102 Telecommunication FEDERAL COMMUNICATIONS COMMISSION GENERAL RADIO FREQUENCY DEVICES Unintentional Radiators § 15.102 CPU boards and power supplies used in personal computers. (a... modifications that must be made to a personal computer, peripheral device, CPU board or power supply...

  14. 47 CFR 15.32 - Test procedures for CPU boards and computer power supplies.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... 47 Telecommunication 1 2012-10-01 2012-10-01 false Test procedures for CPU boards and computer... FREQUENCY DEVICES General § 15.32 Test procedures for CPU boards and computer power supplies. Power supplies and CPU boards used with personal computers and for which separate authorizations are required to...

  15. Reducing power consumption while synchronizing a plurality of compute nodes during execution of a parallel application

    DOEpatents

    Archer, Charles J.; Blocksome, Michael A.; Peters, Amanda E.; Ratterman, Joseph D.; Smith, Brian E.

    2012-04-17

    Methods, apparatus, and products are disclosed for reducing power consumption while synchronizing a plurality of compute nodes during execution of a parallel application that include: beginning, by each compute node, performance of a blocking operation specified by the parallel application, each compute node beginning the blocking operation asynchronously with respect to the other compute nodes; reducing, for each compute node, power to one or more hardware components of that compute node in response to that compute node beginning the performance of the blocking operation; and restoring, for each compute node, the power to the hardware components having power reduced in response to all of the compute nodes beginning the performance of the blocking operation.
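The claimed sequence — cut power as each node enters the blocking operation, restore it once every node has entered — can be sketched with a barrier, here using threads as a simplified stand-in for real compute nodes:

```python
import threading

NUM_NODES = 4
all_begun = threading.Barrier(NUM_NODES)   # trips once every node has begun
power_state = {}

def compute_node(node_id):
    # Each node begins the blocking operation asynchronously w.r.t. the others.
    power_state[node_id] = "reduced"       # cut power to idle hardware components
    all_begun.wait()                       # block until all nodes have begun
    power_state[node_id] = "restored"      # restore power once everyone arrived

threads = [threading.Thread(target=compute_node, args=(i,)) for i in range(NUM_NODES)]
for t in threads:
    t.start()
for t in threads:
    t.join()

assert all(state == "restored" for state in power_state.values())
```

The barrier plays the role of the collective synchronization point: no node restores power before the last straggler has begun the operation.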

  16. Reducing power consumption while synchronizing a plurality of compute nodes during execution of a parallel application

    DOEpatents

    Archer, Charles J.; Blocksome, Michael A.; Peters, Amanda A.; Ratterman, Joseph D.; Smith, Brian E.

    2012-01-10

    Methods, apparatus, and products are disclosed for reducing power consumption while synchronizing a plurality of compute nodes during execution of a parallel application that include: beginning, by each compute node, performance of a blocking operation specified by the parallel application, each compute node beginning the blocking operation asynchronously with respect to the other compute nodes; reducing, for each compute node, power to one or more hardware components of that compute node in response to that compute node beginning the performance of the blocking operation; and restoring, for each compute node, the power to the hardware components having power reduced in response to all of the compute nodes beginning the performance of the blocking operation.

  17. Cooperative Learning, Computers, and Writing: Maximizing Instructional Power.

    ERIC Educational Resources Information Center

    Male, Mary

    1992-01-01

    Discusses cooperative learning strategies that lend themselves to working on computers. Advocates the use of learner-centered software. Describes the essential ingredients of cooperative computer lessons. (SR)

  18. A COMPUTATIONAL WORKBENCH ENVIRONMENT FOR VIRTUAL POWER PLANT SIMULATION

    SciTech Connect

    Mike Bockelie; Dave Swensen; Martin Denison; Adel Sarofim; Connie Senior

    2004-12-22

    , immersive environment. The Virtual Engineering Framework (VEF), in effect a prototype framework, was developed through close collaboration with NETL supported research teams from Iowa State University Virtual Reality Applications Center (ISU-VRAC) and Carnegie Mellon University (CMU). The VEF is open source, compatible across systems ranging from inexpensive desktop PCs to large-scale, immersive facilities and provides support for heterogeneous distributed computing of plant simulations. The ability to compute plant economics through an interface that coupled the CMU IECM tool to the VEF was demonstrated, and the ability to couple the VEF to Aspen Plus, a commercial flowsheet modeling tool, was demonstrated. Models were interfaced to the framework using VES-Open. Tests were performed for interfacing CAPE-Open-compliant models to the framework. Where available, the developed models and plant simulations have been benchmarked against data from the open literature. The VEF has been installed at NETL. The VEF provides simulation capabilities not available in commercial simulation tools. It provides DOE engineers, scientists, and decision makers with a flexible and extensible simulation system that can be used to reduce the time, technical risk, and cost to develop the next generation of advanced, coal-fired power systems that will have low emissions and high efficiency. Furthermore, the VEF provides a common simulation system that NETL can use to help manage Advanced Power Systems Research projects, including both combustion- and gasification-based technologies.

  19. Computational design of an experimental laser-powered thruster

    NASA Technical Reports Server (NTRS)

    Jeng, San-Mou; Litchford, Ronald; Keefer, Dennis

    1988-01-01

    An extensive numerical experiment, using the developed computer code, was conducted to design an optimized laser-sustained hydrogen plasma thruster. The plasma was sustained using a 30 kW CO2 laser beam, operated at 10.6 micrometers, focused inside the thruster. The adopted physical model considers the two-dimensional compressible Navier-Stokes equations coupled with the laser power absorption process, geometric ray tracing for the laser beam, and the local thermodynamic equilibrium (LTE) assumption for the plasma thermophysical and optical properties. A pressure-based Navier-Stokes solver using body-fitted coordinates was used to calculate the laser-supported rocket flow, which consists of both recirculating and transonic flow regions. The computer code was used to study the behavior of laser-sustained plasmas within a pipe over a wide range of forced convection and optical arrangements before it was applied to the thruster design, and these theoretical calculations agree well with existing experimental results. Several thrusters with different throat sizes, operated at 150 and 300 kPa chamber pressure, were evaluated in the numerical experiment. It was found that the thruster performance (vacuum specific impulse) is highly dependent on the operating conditions, and that an adequately designed laser-supported thruster can have a specific impulse of around 1500 s. The heat loading on the walls of the calculated thrusters was also estimated, and it is comparable to the heat loading of a conventional chemical rocket. It was also found that the specific impulse of the calculated thrusters can be reduced by 200 s due to the finite chemical reaction rate.

  20. Computing an operating parameter of a unified power flow controller

    DOEpatents

    Wilson, David G; Robinett, III, Rush D

    2015-01-06

    A Unified Power Flow Controller described herein comprises a sensor that outputs at least one sensed condition, a processor that receives the at least one sensed condition, a memory that comprises control logic that is executable by the processor; and power electronics that comprise power storage, wherein the processor causes the power electronics to selectively cause the power storage to act as one of a power generator or a load based at least in part upon the at least one sensed condition output by the sensor and the control logic, and wherein at least one operating parameter of the power electronics is designed to facilitate maximal transmittal of electrical power generated at a variable power generation system to a grid system while meeting power constraints set forth by the electrical power grid.
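The control loop described — read a sensed condition, then switch the power storage between acting as a generator and acting as a load — might be sketched as a simple decision rule. The frequency-deadband criterion below is a hypothetical stand-in, not the patent's actual control logic:

```python
def storage_mode(sensed_frequency_hz, nominal_hz=60.0, deadband_hz=0.05):
    """Decide whether the power storage acts as a generator or a load.

    The frequency-deadband rule here is a hypothetical illustration of
    'selectively cause the power storage to act as generator or load';
    the patent's real logic is not specified in this abstract.
    """
    if sensed_frequency_hz < nominal_hz - deadband_hz:
        return "generator"   # grid short of power: discharge storage
    if sensed_frequency_hz > nominal_hz + deadband_hz:
        return "load"        # excess generation: absorb into storage
    return "idle"            # within deadband: do nothing
```

In a real controller this decision would also respect the grid's power constraints and the variable generator's output, as the abstract notes.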

  1. A COMPUTATIONAL WORKBENCH ENVIRONMENT FOR VIRTUAL POWER PLANT SIMULATION

    SciTech Connect

    Mike Bockelie; Dave Swensen; Martin Denison

    2002-01-31

    This is the fifth Quarterly Technical Report for DOE Cooperative Agreement No: DE-FC26-00NT41047. The goal of the project is to develop and demonstrate a computational workbench for simulating the performance of Vision 21 Power Plant Systems. Within the last quarter, our efforts have become focused on developing an improved workbench for simulating a gasifier based Vision 21 energyplex. To provide for interoperability of models developed under Vision 21 and other DOE programs, discussions have been held with DOE and other organizations developing plant simulator tools to review the possibility of establishing a common software interface or protocol to use when developing component models. A component model that employs the CCA protocol has successfully been interfaced to our CCA enabled workbench. To investigate the software protocol issue, DOE has selected a gasifier based Vision 21 energyplex configuration for use in testing and evaluating the impacts of different software interface methods. A Memo of Understanding with the Cooperative Research Centre for Coal in Sustainable Development (CCSD) in Australia has been completed that will enable collaborative research efforts on gasification issues. Preliminary results have been obtained for a CFD model of a pilot scale, entrained flow gasifier. A paper was presented at the Vision 21 Program Review Meeting at NETL (Morgantown) that summarized our accomplishments for Year One and plans for Year Two and Year Three.

  2. A COMPUTATIONAL WORKBENCH ENVIRONMENT FOR VIRTUAL POWER PLANT SIMULATION

    SciTech Connect

    Mike Bockelie; Dave Swensen; Martin Denison

    2002-04-30

    This is the sixth Quarterly Technical Report for DOE Cooperative Agreement No: DE-FC26-00NT41047. The goal of the project is to develop and demonstrate a computational workbench for simulating the performance of Vision 21 Power Plant Systems. Within the last quarter, good progress has been made on the development of our IGCC workbench. Preliminary CFD simulations for single stage and two stage ''generic'' gasifiers using firing conditions based on the Vision 21 reference configuration have been performed. Work is continuing on implementing an advanced slagging model into the CFD based gasifier model. An investigation into published gasification kinetics has highlighted a wide variance in predicted performance due to the choice of kinetic parameters. A plan has been outlined for developing the reactor models required to simulate the heat transfer and gas clean up equipment downstream of the gasifier. Three models that utilize the CCA software protocol have been integrated into a version of the IGCC workbench. Tests of a CCA implementation of our CFD code into the workbench demonstrated that the CCA CFD module can execute on a geographically remote PC (linked via the Internet) in a manner that is transparent to the user. Software tools to create ''walk-through'' visualizations of the flow field within a gasifier have been demonstrated.

  3. Addition of flexible body option to the TOLA computer program. Part 2: User and programmer documentation

    NASA Technical Reports Server (NTRS)

    Dick, J. W.; Benda, B. J.

    1975-01-01

    User and programmer oriented documentation for the flexible body option of the Takeoff and Landing Analysis (TOLA) computer program are provided. The user information provides sufficient knowledge of the development and use of the option to enable the engineering user to successfully operate the modified program and understand the results. The programmer's information describes the option structure and logic enabling a programmer to make major revisions to this part of the TOLA computer program.

  4. Reducing power consumption while performing collective operations on a plurality of compute nodes

    DOEpatents

    Archer, Charles J.; Blocksome, Michael A.; Peters, Amanda E.; Ratterman, Joseph D.; Smith, Brian E.

    2011-10-18

    Methods, apparatus, and products are disclosed for reducing power consumption while performing collective operations on a plurality of compute nodes that include: receiving, by each compute node, instructions to perform a type of collective operation; selecting, by each compute node from a plurality of collective operations for the collective operation type, a particular collective operation in dependence upon power consumption characteristics for each of the plurality of collective operations; and executing, by each compute node, the selected collective operation.
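The selection step — each node picks, for a given collective type, the implementation with the best power profile — reduces to a table lookup. A sketch with invented per-implementation costs (the operation names and numbers are illustrative, not from the patent):

```python
# Hypothetical per-node energy costs (joules) for implementations of each
# collective operation type; the names and numbers are illustrative only.
POWER_PROFILE = {
    "allreduce": {"ring": 1.8, "tree": 2.4, "butterfly": 3.1},
    "broadcast": {"binomial-tree": 0.9, "flat": 1.6},
}

def select_collective(op_type, profile=POWER_PROFILE):
    """Choose the implementation with the lowest power cost for op_type."""
    candidates = profile[op_type]
    return min(candidates, key=candidates.get)
```

Every node consults the same table, so all nodes independently arrive at the same choice before executing the collective.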

  5. [A novel compact digital amplifier to control a medical equipment (Graseby 3500 pump) by Macintosh computer without external power supply].

    PubMed

    Nakao, M

    1998-05-01

    Most medical equipment now incorporates a digital RS-232C interface, which enables computers to log data and/or control the equipment. Although the standard voltage specification of RS-232C is a minimum of ±3 volts, certain devices, such as the Graseby syringe-driven pump model 3500 (U.K.), need a voltage higher than the standard to reduce the risk of electromagnetic interference. Since the output voltage of the serial port of Macintosh (Apple Computer Inc.) computers and other portable computers is around ±5 volts, an additional voltage amplifier is necessary to control such external devices. A compact digital signal amplifier was developed using a general RS-232C driver IC (MAX 232 E, Maxim, USA). A bulky external power supply could be eliminated, because power was obtained through the digital signal lines themselves. PMID:9621678

  6. Can Computer-Assisted Discovery Learning Foster First Graders' Fluency with the Most Basic Addition Combinations?

    ERIC Educational Resources Information Center

    Baroody, Arthur J.; Eiland, Michael D.; Purpura, David J.; Reid, Erin E.

    2013-01-01

    In a 9-month training experiment, 64 first graders with a risk factor were randomly assigned to computer-assisted structured discovery of the add-1 rule (e.g., the sum of 7 + 1 is the number after "seven" when we count), unstructured discovery learning of this regularity, or an active-control group. Planned contrasts revealed that the add-1…

  7. Identification of Students' Intuitive Mental Computational Strategies for 1, 2 and 3 Digits Addition and Subtraction: Pedagogical and Curricular Implications

    ERIC Educational Resources Information Center

    Ghazali, Munirah; Alias, Rohana; Ariffin, Noor Asrul Anuar; Ayub, Ayminsyadora

    2010-01-01

    This paper reports on a study to examine mental computation strategies used by Year 1, Year 2, and Year 3 students to solve addition and subtraction problems. The participants in this study were twenty five 7 to 9 year-old students identified as excellent, good and satisfactory in their mathematics performance from a school in Penang, Malaysia.…

  8. The Effects of Computer-Assisted Instruction on Student Achievement in Addition and Subtraction at First Grade Level.

    ERIC Educational Resources Information Center

    Spivey, Patsy M.

    This study was conducted to determine whether the traditional classroom approach to instruction involving the addition and subtraction of number facts (digits 0-6) is more or less effective than the traditional classroom approach plus a commercially-prepared computer game. A pretest-posttest control group design was used with two groups of first…

  9. 17 CFR Appendix B to Part 4 - Adjustments for Additions and Withdrawals in the Computation of Rate of Return

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... 17 Commodity and Securities Exchanges 1 2010-04-01 2010-04-01 false Adjustments for Additions and Withdrawals in the Computation of Rate of Return B Appendix B to Part 4 Commodity and Securities Exchanges COMMODITY FUTURES TRADING COMMISSION COMMODITY POOL OPERATORS AND COMMODITY TRADING ADVISORS Pt. 4, App....
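The adjustment this appendix addresses is, in spirit, time-weighted chaining: break the period at each addition or withdrawal, compute each sub-period's return on the adjusted capital base, and compound the sub-period factors. A sketch of that general idea (this is the standard time-weighted method, not the regulation's literal text, which is truncated above):

```python
def time_weighted_return(start_value, periods):
    """Illustrative time-weighted rate of return with additions/withdrawals.

    periods: list of (net_flow_at_period_start, end_value) tuples, where
    net_flow is positive for an addition and negative for a withdrawal.
    This is the generic textbook method, not the CFR's exact formula.
    """
    cumulative = 1.0
    value = start_value
    for net_flow, end_value in periods:
        base = value + net_flow            # capital at work after the flow
        cumulative *= end_value / base     # sub-period growth factor
        value = end_value
    return cumulative - 1.0

# 10% growth, then a 50 addition, then 10% growth again: 21% compounded,
# unaffected by the size or timing of the addition.
r = time_weighted_return(100.0, [(0.0, 110.0), (50.0, 176.0)])
```

Chaining sub-period factors is what keeps the reported rate of return from being distorted by pool participants' cash flows.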

  10. A dc model for power switching transistors suitable for computer-aided design and analysis

    NASA Technical Reports Server (NTRS)

    Wilson, P. M.; George, R. T., Jr.; Owen, H. A., Jr.; Wilson, T. G.

    1979-01-01

    The proposed dc model for bipolar junction power switching transistors is based on measurements which may be made with standard laboratory equipment. Those nonlinearities which are of importance to power electronics design are emphasized. Measurements procedures are discussed in detail. A model formulation adapted for use with a computer program is presented, and a comparison between actual and computer-generated results is made.

  11. The Power of Qutrit Logic for Quantum Computation

    NASA Astrophysics Data System (ADS)

    Luo, Ming-Xing; Ma, Song-Ya; Chen, Xiu-Bo; Yang, Yi-Xian

    2013-08-01

    Quantum computation derives its critical advantages from running in parallel, a benefit that previous multi-level extensions do not provide and that is exactly our purpose. In this paper, general quantum computation on qutrit subsystems is reduced to qutrit gates and their controlled operations. This extension is parallelizable and integrable, with the same construction independent of the number of qutrits. Qutrit swapping, the basic operation for controlling, can be integrated into quantum computers with present physical techniques. Our generalizations avoid enlarging the system spaces and are feasible for universal computation.

  12. Subsonic flutter analysis addition to NASTRAN. [for use with CDC 6000 series digital computers

    NASA Technical Reports Server (NTRS)

    Doggett, R. V., Jr.; Harder, R. L.

    1973-01-01

    A subsonic flutter analysis capability has been developed for NASTRAN, and a developmental version of the program has been installed on the CDC 6000 series digital computers at the Langley Research Center. The flutter analysis is of the modal type, uses doublet lattice unsteady aerodynamic forces, and solves the flutter equations by using the k-method. Surface and one-dimensional spline functions are used to transform from the aerodynamic degrees of freedom to the structural degrees of freedom. Some preliminary applications of the method to a beamlike wing, a platelike wing, and a platelike wing with a folded tip are compared with existing experimental and analytical results.

  13. Restructuring the introductory physics lab with the addition of computer-based laboratories

    PubMed Central

    Pierri-Galvao, Monica

    2011-01-01

    Nowadays, data acquisition software and sensors are widely used in introductory physics laboratories. This allows the student to spend more time exploring the data collected by the computer, hence focusing more on the physical concept. Very often, a faculty member is faced with the challenge of updating or introducing a microcomputer-based laboratory (MBL) at his or her institution. This article provides a list of experiments and equipment needed to convert about half of the traditional labs in a 1-year introductory physics course into MBLs. PMID:22346229

  14. Companies Reaching for the Clouds for Computing Power

    SciTech Connect

    Madison, Alison L.

    2012-10-07

    By now, we’ve likely all at least heard of cloud computing, and to some extent may grasp what it’s all about. But after delving into a recent article in The New York Times, I came to realize just how big of a deal it is--much bigger than my own limited experience with it had allowed me to see. Cloud computing is the use of hardware or software computing resources that are delivered as a service over a network, typically via the web. The gist of it is, almost anything you can imagine doing with your computer system doesn’t have to physically exist on your system or in your office in order to be accessible to you. You can entrust remote services with your data, software, and computation. It’s easier, and also much less expensive.

  15. Development and Evaluation of the Diagnostic Power for a Computer-Based Two-Tier Assessment

    ERIC Educational Resources Information Center

    Lin, Jing-Wen

    2016-01-01

    This study adopted a quasi-experimental design with follow-up interview to develop a computer-based two-tier assessment (CBA) regarding the science topic of electric circuits and to evaluate the diagnostic power of the assessment. Three assessment formats (i.e., paper-and-pencil, static computer-based, and dynamic computer-based tests) using…

  16. Addition of higher order plate and shell elements into NASTRAN computer program

    NASA Technical Reports Server (NTRS)

    Narayanaswami, R.; Goglia, G. L.

    1976-01-01

    Two higher order plate elements, the linear strain triangular membrane element and the quintic bending element, along with a shallow shell element, suitable for inclusion into the NASTRAN (NASA Structural Analysis) program are described. Additions to the NASTRAN Theoretical Manual, Users' Manual, Programmers' Manual and the NASTRAN Demonstration Problem Manual, for inclusion of these elements into the NASTRAN program are also presented.

  17. Indolyne Experimental and Computational Studies: Synthetic Applications and Origins of Selectivities of Nucleophilic Additions

    PubMed Central

    Im, G-Yoon J.; Bronner, Sarah M.; Goetz, Adam E.; Paton, Robert S.; Cheong, Paul H.-Y.; Houk, K. N.; Garg, Neil K.

    2010-01-01

    Efficient syntheses of 4,5-, 5,6-, and 6,7-indolyne precursors beginning from commercially available hydroxyindole derivatives are reported. The synthetic routes are versatile and allow access to indolyne precursors that remain unsubstituted on the pyrrole ring. Indolynes can be generated under mild fluoride-mediated conditions, trapped by a variety of nucleophilic reagents, and used to access a number of novel substituted indoles. Nucleophilic addition reactions to indolynes proceed with varying degrees of regioselectivity; distortion energies control regioselectivity and provide a simple model to predict the regioselectivity in the nucleophilic additions to indolynes and other unsymmetrical arynes. This model has led to the design of a substituted 4,5-indolyne that exhibits enhanced nucleophilic regioselectivity. PMID:21114321

  18. Addition of visual noise boosts evoked potential-based brain-computer interface.

    PubMed

    Xie, Jun; Xu, Guanghua; Wang, Jing; Zhang, Sicong; Zhang, Feng; Li, Yeping; Han, Chengcheng; Li, Lili

    2014-01-01

    Although noise has a proven beneficial role in brain functions, there have not been any attempts on the dedication of stochastic resonance effect in neural engineering applications, especially in researches of brain-computer interfaces (BCIs). In our study, a steady-state motion visual evoked potential (SSMVEP)-based BCI with periodic visual stimulation plus moderate spatiotemporal noise can achieve better offline and online performance due to enhancement of periodic components in brain responses, which was accompanied by suppression of high harmonics. Offline results behaved with a bell-shaped resonance-like functionality and 7-36% online performance improvements can be achieved when identical visual noise was adopted for different stimulation frequencies. Using neural encoding modeling, these phenomena can be explained as noise-induced input-output synchronization in human sensory systems which commonly possess a low-pass property. Our work demonstrated that noise could boost BCIs in addressing human needs. PMID:24828128
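Stochastic resonance, the effect behind the bell-shaped curve described above, can be illustrated with a sub-threshold signal: with no noise the weak signal never crosses a detection threshold, while moderate additive noise lifts it over on a fraction of trials. The parameters below are arbitrary, chosen only to show the mechanism, not to model the SSMVEP experiment:

```python
import random

def detection_rate(noise_sd, amplitude=0.8, threshold=1.0, trials=2000):
    """Fraction of trials in which a sub-threshold signal peak
    (amplitude < threshold) crosses the detection threshold when
    zero-mean Gaussian noise is added to it."""
    rng = random.Random(0)  # fixed seed for reproducibility
    hits = sum(1 for _ in range(trials)
               if amplitude + rng.gauss(0.0, noise_sd) >= threshold)
    return hits / trials

# Without noise the weak signal is never detected; moderate noise helps.
assert detection_rate(0.0) == 0.0
assert detection_rate(0.5) > 0.2
```

Too much noise eventually swamps the signal rather than helping it, which is what produces the resonance-like bell shape reported offline.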

  19. Large Advanced Space Systems (LASS) computer-aided design program additions

    NASA Technical Reports Server (NTRS)

    Farrell, C. E.

    1982-01-01

    The LSS preliminary and conceptual design requires extensive interactive analysis because of the effects of structural, thermal, and control intercoupling. A computer-aided design program that permits integrating and interfacing the required large space system (LSS) analyses is discussed. The primary objective of this program is the implementation of modeling techniques and analysis algorithms that permit interactive design and tradeoff studies of LSS concepts. Eight software modules were added to the program. The existing rigid-body controls module was modified to include solar pressure effects. The new model generator modules and the appendage synthesizer module are integrated (interfaced) to permit interactive definition and generation of LSS concepts. The mass properties module permits interactive specification of discrete masses and their locations. The other modules permit interactive analysis of orbital transfer requirements, the antenna primary beam, and attitude control requirements.

  20. Measurements and computations of electromagnetic fields in electric power substations

    SciTech Connect

    Daily, W.K.; Dawalibi, F.

    1994-01-01

    The magnetic fields generated by a typical distribution substation were measured and calculated based on a computer model which takes into account currents in the grounding systems, distribution feeder neutrals, and overhead ground wires, as well as induced currents in equipment structures and ground grid loops. Both measured and computed results indicate that magnetic fields are significantly influenced by ground currents, as well as by induced currents in structures and ground system loops. All currents in the network modeled were computed based on the measured currents impressed at the boundary points (ends of the conductor network). The agreement between the measured and computed values is good. Small differences were observed and are attributed mainly to uncertainties in the geometry of the network model and in the phase angles of some of the currents in the neutral conductors, which were not measured in the field. Further measurements, including more accurate geometrical information and phase angles, are planned.

  1. Solid-state Isotopic Power Source for Computer Memory Chips

    NASA Technical Reports Server (NTRS)

    Brown, Paul M.

    1993-01-01

    Recent developments in materials technology now make it possible to fabricate nonthermal thin-film radioisotopic energy converters (REC) with a specific power of 24 W/kg and a 10-year working life at 5 to 10 watts. This creates applications never before possible, such as placing the power supply directly on integrated circuit chips. The efficiency of the REC is about 25 percent, which is two to three times greater than the 6 to 8 percent capability of current thermoelectric systems. Radioisotopic energy converters have the potential to meet many future space power requirements for a wide variety of applications with less mass, better efficiency, and less total area than other power conversion options. These benefits result in significant dollar savings over the projected mission lifetime.
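A quick sizing sketch follows from the figures quoted in this abstract. The function names and the 10 W example below are illustrative choices of ours; only the 24 W/kg specific power and 25 percent efficiency come from the record:

```python
# Sizing sketch for a radioisotopic energy converter (REC), assuming the
# specific power (24 W/kg) and conversion efficiency (25%) quoted above.

def rec_mass_kg(electrical_watts, specific_power_w_per_kg=24.0):
    """Converter mass needed to deliver a given electrical output."""
    return electrical_watts / specific_power_w_per_kg

def isotope_thermal_watts(electrical_watts, efficiency=0.25):
    """Decay heat the isotope must supply at the stated efficiency."""
    return electrical_watts / efficiency

# A hypothetical 10 W chip-level supply:
print(f"mass: {rec_mass_kg(10.0):.2f} kg")           # ~0.42 kg of converter
print(f"heat: {isotope_thermal_watts(10.0):.0f} W")  # 40 W of decay heat
```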

  2. Five Mass Power Transmission Line of a Ship Computer Modelling

    NASA Astrophysics Data System (ADS)

    Kazakoff, Alexander Borisoff; Marinov, Boycho Ivanov

    2016-03-01

    The work presented in this paper is a natural continuation of previously reported work on the design of a ship power transmission line, now with a different multi-mass model. Data from the previous investigations, chiefly the analytical frequency and modal analysis of a five-mass model of a ship power transmission line developed in the earlier study, serve as reference data. In this paper, a thorough dynamic analysis of a concrete five-mass dynamic model of the ship power transmission line is performed using Finite Element Analysis (FEA), based on the model recommended and investigated in the previous research. The five-mass model, partially validated by frequency analysis, is thus subjected to dynamic analysis. The objectives of the work presented in this paper are the dynamic modelling of a five-mass transmission line of a ship, partial validation of the model, calculation of the von Mises stresses with the help of FEA, and comparison of the derived results with the analytically calculated values. The partially validated five-mass model can be used to determine many dynamic parameters, in particular amplitudes of displacement, velocity, and acceleration in the time and frequency domains. The frequency behaviour of the model parameters is investigated in the frequency domain and corresponds to the predicted one.

  3. Computational tool for simulation of power and refrigeration cycles

    NASA Astrophysics Data System (ADS)

    Córdoba Tuta, E.; Reyes Orozco, M.

    2016-07-01

    Small improvements in the thermal efficiency of power cycles bring large cost savings in electricity production, so a simulation tool for power cycles makes it possible to model the changes that yield the best performance. There is also growing research interest in the Organic Rankine Cycle (ORC), which aims to generate electricity at low power through cogeneration, usually with a refrigerant as the working fluid. A tool for designing the elements of an ORC cycle and selecting the working fluid would be helpful, because cogeneration heat sources vary widely and each case calls for a custom design. This work presents the development of multiplatform software for the simulation of power and refrigeration cycles, implemented in C++ with a graphical interface built in the multiplatform Qt environment and running on Windows and Linux. The tool allows the design of custom power cycles and the selection of the type of working fluid (thermodynamic properties are calculated through the CoolProp library); it calculates the plant efficiency, identifies the flow fractions in each branch, and finally generates an instructive report in PDF format via LaTeX.
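As a sketch of the kind of calculation such a tool automates, the snippet below computes the thermal efficiency of a simple Rankine cycle from four state-point enthalpies. The enthalpy values are placeholder assumptions for a generic steam cycle, not outputs of the CoolProp library used by the authors:

```python
# Minimal Rankine-cycle efficiency calculation. State points:
#   1: boiler/superheater exit, 2: turbine exit,
#   3: condenser exit,          4: pump exit.
# Enthalpies in kJ/kg (assumed illustrative values).

def rankine_efficiency(h1, h2, h3, h4):
    w_turbine = h1 - h2   # specific turbine work out
    w_pump = h4 - h3      # specific pump work in
    q_in = h1 - h4        # heat added in the boiler
    return (w_turbine - w_pump) / q_in

eta = rankine_efficiency(h1=3450.0, h2=2250.0, h3=190.0, h4=200.0)
print(f"thermal efficiency = {eta:.1%}")  # ~36.6% for these assumed states
```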

  4. Stack and dump: Peak-power scaling by coherent pulse addition in passive cavities

    NASA Astrophysics Data System (ADS)

    Breitkopf, S.; Eidam, T.; Klenke, A.; Carstens, H.; Holzberger, S.; Fill, E.; Schreiber, T.; Krausz, F.; Tünnermann, A.; Pupeza, I.; Limpert, J.

    2015-10-01

    During the last decades, femtosecond lasers have proven their vast benefit in both scientific and technological tasks. Nevertheless, one laser capability bearing tremendous potential for high-field applications, the simultaneous delivery of extremely high peak and average powers, is still not accessible. This is the performance regime that several upcoming applications, such as laser particle acceleration, require, and it therefore challenges laser technology to the fullest. On the one hand, some state-of-the-art canonical bulk amplifier systems provide pulse peak powers in the range of multi-terawatt to petawatt. On the other hand, concepts for advanced solid-state lasers, specifically thin-disk, slab, or fiber systems, have shown their capability of emitting high average powers in the kilowatt range with high wall-plug efficiency while maintaining excellent spatial and temporal quality of the output beam. In this article, a brief introduction is given to a concept for a compact laser system capable of simultaneously providing high peak and average powers along with high wall-plug efficiency. The concept relies on stacking the pulse train emitted from a high-repetition-rate femtosecond laser system in a passive enhancement cavity, also referred to as temporal coherent combining. In this manner, the repetition rate is decreased in favor of a pulse energy enhancement by the same factor, while the average power is almost preserved. The key challenge of this concept is a fast, purely reflective switching element that allows the enhanced pulse to be dumped out of the cavity. Addressing this challenge could, for the first time, allow the highly efficient extraction of joule-class pulses at megawatt average power levels and thus open a whole new area of applications for ultrafast laser systems.
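The energy bookkeeping behind this scheme can be sketched in a few lines: stacking N pulses divides the output repetition rate by N and multiplies the pulse energy by roughly N times an enhancement efficiency. The seed-laser parameters and the 90% efficiency below are illustrative assumptions, not figures from the article:

```python
# "Stack and dump" energy bookkeeping: N stacked pulses -> repetition rate
# divided by N, pulse energy multiplied by ~N * efficiency, average power
# nearly preserved. All input numbers here are assumed for illustration.

def stack_and_dump(p_avg_w, f_rep_hz, n_stacked, efficiency=0.9):
    e_seed = p_avg_w / f_rep_hz               # seed pulse energy (J)
    e_out = n_stacked * e_seed * efficiency   # dumped pulse energy (J)
    f_out = f_rep_hz / n_stacked              # dump rate (Hz)
    p_out = e_out * f_out                     # average power after dumping (W)
    return e_out, f_out, p_out

# 1 kW average power at 10 MHz, stacking 1000 pulses:
e_out, f_out, p_out = stack_and_dump(1e3, 10e6, 1000)
print(e_out, f_out, p_out)  # roughly 0.09 J pulses at 10 kHz, ~900 W average
```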

  5. Designing high power targets with computational fluid dynamics (CFD)

    SciTech Connect

    Covrig, S. D.

    2013-11-07

    High power liquid hydrogen (LH2) targets, up to 850 W, have been widely used at Jefferson Lab for the 6 GeV physics program. The typical luminosity loss of a 20 cm long LH2 target was 20% for a beam current of 100 μA rastered on a square of side 2 mm on the target. The 35 cm long, 2500 W LH2 target for the Qweak experiment had a luminosity loss of 0.8% at 180 μA beam rastered on a square of side 4 mm at the target. The Qweak target was the highest power liquid hydrogen target in the world and with the lowest noise figure. The Qweak target was the first one designed with CFD at Jefferson Lab. A CFD facility is being established at Jefferson Lab to design, build and test a new generation of low noise high power targets.

  7. Powering Down from the Bottom up: Greener Client Computing

    ERIC Educational Resources Information Center

    O'Donnell, Tom

    2009-01-01

    A decade ago, people wanting to practice "green computing" recycled their printer paper, turned their personal desktop systems off from time to time, and tried their best to donate old equipment to a nonprofit instead of throwing it away. A campus IT department can shave a few watts off just about any IT process--the real trick is planning and…

  8. Power and execution performance tradeoffs of GPGPU computing: a case study employing stereo matching

    NASA Astrophysics Data System (ADS)

    Arunagiri, Sarala; Jaloma, Jaime; Portillo, Ricardo; Argueta, Arturo

    2013-03-01

    GPGPUs and multicore processors have become commonplace, with wide usage in traditional high performance computing systems as well as mobile computing devices. A significant speedup can be achieved for a variety of general-purpose applications by using these technologies. Unfortunately, this speedup is often accompanied by high power and/or energy consumption. As a result, energy conservation is increasingly becoming a major concern in designing these computing devices. For large-scale systems such as massive data centers, the cost and environmental impact of powering and cooling computer systems is the main driver for energy efficiency. For the mobile computing sector, on the other hand, energy conservation is driven by the need to extend battery life, and power capping is mandated by the restrictive power budget of mobile platforms such as Unmanned Aerial Vehicles (UAVs). Our focus is to understand the power-performance tradeoffs in executing Army applications on portable or tactical computing platforms. For a GPGPU computing platform, this study investigates how host processors (CPUs) with different Thermal Design Power (TDP) might affect the execution time and power consumption of an Army-relevant stereo-matching code accelerated by a GPGPU. For image pairs of approximately one megapixel, we observed a decrease in execution time of nearly 50% and a decrease in average power of 5% when executed on a low-TDP Intel Xeon host processor. The decrease in energy consumption was over 50%. For a larger image pair, although there was no substantial decrease in execution time, there was a decrease in power and energy consumption of approximately 6%. Although we cannot make general conclusions based on a case study, it points to the possibility that for some tactical-HPC GPGPU-accelerated applications, a host processor with a lower TDP might provide better system performance in terms of power consumption while not degrading execution-time performance.
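Since energy is power integrated over time, the reported figures can be sanity-checked directly: a roughly 50% shorter runtime at roughly 5% lower average power implies an energy reduction of about 52.5%, consistent with the "over 50%" the abstract reports. A minimal sketch, using rounded illustrative ratios rather than the study's raw measurements:

```python
# Energy = average power x runtime, so relative energy follows from the
# product of the runtime and power ratios (new/old). The 0.5 and 0.95
# ratios below are rounded illustrative stand-ins for the reported savings.

def energy_saving(time_ratio, power_ratio):
    """Fractional energy saving given runtime and power ratios (new/old)."""
    return 1.0 - time_ratio * power_ratio

saving = energy_saving(time_ratio=0.5, power_ratio=0.95)
print(f"energy saving: {saving:.1%}")  # 52.5%, consistent with "over 50%"
```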

  9. PowerPoint Presentations: A Creative Addition to the Research Process.

    ERIC Educational Resources Information Center

    Perry, Alan E.

    2003-01-01

    Contends that the requirement of a PowerPoint presentation as part of the research process would benefit students in the following ways: learning how to conduct research; starting their research project sooner; honing presentation and public speaking skills; improving cooperative and social skills; and enhancing technology skills. Outlines the…

  10. Computer program for afterheat temperature distribution for mobile nuclear power plant

    NASA Technical Reports Server (NTRS)

    Parker, W. G.; Vanbibber, L. E.

    1972-01-01

    ESATA computer program was developed to analyze thermal safety aspects of post-impacted mobile nuclear power plants. Program is written in FORTRAN 4 and designed for IBM 7094/7044 direct coupled system.

  11. The on-board computer in diagnosis of satellite power unit

    NASA Astrophysics Data System (ADS)

    Bel'giy, V. V.; Bugrovskiy, V. V.; Kovachich, Yu. V.; Petrov, B. N.; Shevyakov, A. A.

    Diagnosis of a space thermoemission power unit incorporating a Topaz-type reactor-converter is hindered by the limited capability of the measurement system. The missing information is reconstructed computationally from the measurement data. Examples of dynamic-mode diagnosis with reconstruction of the temperature field information are given. The power unit diagnosis algorithms are implemented in the onboard computer, whose speed is about 200,000 operations per second. Memory and computing requirements are determined from algorithms of different diagnosis depths. Results of a study of the necessary computer component redundancy are given for different models of system degradation. The redundancy level should ensure that the nucleus of the computer system, with a minimally necessary 4K-word memory, remains in operation three years into the mission.

  12. Parametric Powered-Lift Navier-Stokes Computations

    NASA Technical Reports Server (NTRS)

    Chaderjian, Neal M.; Murman, Scott; Pandya, Shishir; Ahmad, Jasim

    2002-01-01

    The goal of this work is to enable the computation of large numbers of unsteady high-fidelity flow simulations for a YAV-8B Harrier aircraft in ground effect by improving the solution process and taking advantage of NASA parallel supercomputers. The YAV-8B Harrier aircraft can take off and land vertically, or utilize short runways by directing its four exhaust nozzles toward the ground. Transition to forward flight is achieved by rotating these nozzles into a horizontal position.

  13. Computational power and generative capacity of genetic systems.

    PubMed

    Igamberdiev, Abir U; Shklovskiy-Kordi, Nikita E

    2016-01-01

    Semiotic characteristics of genetic sequences are based on the general principles of linguistics formulated by Ferdinand de Saussure, such as the arbitrariness of the sign and the linear nature of the signifier. Besides these semiotic features, which are attributable to the basic structure of the genetic code, the principle of generativity of genetic language is important for understanding biological transformations. The problem of generativity in genetic systems arises from the possibility of different interpretations of genetic texts, and corresponds to what Alexander von Humboldt called "the infinite use of finite means". These interpretations appear in individual development as the spatiotemporal sequences of realizations of different textual meanings, as well as in the emergence of hyper-textual statements about the text itself, which underlies the process of biological evolution. These interpretations are accomplished at the level of the readout of genetic texts by the structures defined by Efim Liberman as "the molecular computer of the cell", which includes DNA, RNA and the corresponding enzymes operating with molecular addresses. The molecular computer performs physically manifested mathematical operations and possesses both reading and writing capacities. Generativity paradoxically resides in the biological computational system as a possibility to incorporate meta-statements about the system, and thus establishes the internal capacity for its evolution. PMID:26829769

  14. MORT: a powerful foundational library for computational biology and CADD

    PubMed Central

    2014-01-01

    Background: A foundational library called MORT (Molecular Objects and Relevant Templates) for the development of new software packages and tools employed in computational biology and computer-aided drug design (CADD) is described here. Results: MORT offers several advantages over other libraries. First, MORT is written in C++ and natively supports the object-oriented design paradigm, so it can be understood and extended easily. Second, MORT employs the relational model to represent a molecule, which is more convenient and flexible than the traditional hierarchical model employed by many other libraries. Third, the library includes a large set of functions, and a molecule can be manipulated easily at different levels. For example, it can parse a variety of popular molecular formats (MOL/SDF, MOL2, PDB/ENT, SMILES/SMARTS, etc.), create the topology and coordinate files for simulations supported by AMBER, and calculate the energy of a specific molecule based on the AMBER force fields. Conclusions: We believe that MORT can be used as a foundational library by programmers developing new programs and applications for computational biology and CADD. Source code of MORT is available at http://cadd.suda.edu.cn/MORT/index.htm.

  15. Computations on the primary photoreaction of Br2 with CO2: stepwise vs concerted addition of Br atoms.

    PubMed

    Xu, Kewei; Korter, Timothy M; Braiman, Mark S

    2015-04-01

    It was proposed previously that Br2-sensitized photolysis of liquid CO2 proceeds through a metastable primary photoproduct, CO2Br2. Possible mechanisms for such a photoreaction are explored here computationally. First, it is shown that the CO2Br radical is not stable in any geometry. This rules out a free-radical mechanism, for example, photochemical splitting of Br2 followed by stepwise addition of Br atoms to CO2, which in turn accounts for the lack of previously observed Br2 + CO2 photochemistry in the gas phase. A possible alternative mechanism in the liquid phase is formation of a weakly bound CO2:Br2 complex, followed by concerted photoaddition of Br2. This hypothesis is suggested by the previously published spectroscopic detection of a binary CO2:Br2 complex in the supersonically cooled gas phase. We compute a global binding-energy minimum of -6.2 kJ/mol for such complexes, in a linear geometry. Two additional local minima were computed for perpendicular (C2v) and nearly parallel asymmetric planar geometries, both with binding energies near -5.4 kJ/mol. In these two latter geometries, C-Br and O-Br bond distances are simultaneously in the range of 3.5-3.8 Å, that is, perhaps suitable for a concerted photoaddition under the temperature and pressure conditions where Br2 + CO2 photochemistry has been observed. PMID:25767936

  16. Proceedings: Workshop on advanced mathematics and computer science for power systems analysis

    SciTech Connect

    Esselman, W.H.; Iveson, R.H.

    1991-08-01

    The Mathematics and Computer Workshop on Power System Analysis was held February 21--22, 1989, in Palo Alto, California. The workshop was the first in a series sponsored by EPRI's Office of Exploratory Research as part of its effort to develop ways in which recent advances in mathematics and computer science can be applied to the problems of the electric utility industry. The purpose of this workshop was to identify research objectives in the field of advanced computational algorithms needed for the application of advanced parallel processing architecture to problems of power system control and operation. Approximately 35 participants heard six presentations on power flow problems, transient stability, power system control, electromagnetic transients, user-machine interfaces, and database management. In the discussions that followed, participants identified five areas warranting further investigation: system load flow analysis, transient power and voltage analysis, structural instability and bifurcation, control systems design, and proximity to instability. 63 refs.

  17. A dc model for power switching transistors suitable for computer-aided design and analysis

    NASA Technical Reports Server (NTRS)

    Wilson, P. M.; George, R. T., Jr.; Owen, H. A.; Wilson, T. G.

    1979-01-01

    A model for bipolar junction power switching transistors whose parameters can be readily obtained by the circuit design engineer, and which can be conveniently incorporated into standard computer-based circuit analysis programs is presented. This formulation results from measurements which may be made with standard laboratory equipment. Measurement procedures, as well as a comparison between actual and computed results, are presented.

  18. Energy Use and Power Levels in New Monitors and Personal Computers

    SciTech Connect

    Roberson, Judy A.; Homan, Gregory K.; Mahajan, Akshay; Nordman, Bruce; Webber, Carrie A.; Brown, Richard E.; McWhinney, Marla; Koomey, Jonathan G.

    2002-07-23

    Our research was conducted in support of the EPA ENERGY STAR Office Equipment program, whose goal is to reduce the amount of electricity consumed by office equipment in the U.S. The most energy-efficient models in each office equipment category are eligible for the ENERGY STAR label, which consumers can use to identify and select efficient products. As the efficiency of each category improves over time, the ENERGY STAR criteria need to be revised accordingly. The purpose of this study was to provide reliable data on the energy consumption of the newest personal computers and monitors that the EPA can use to evaluate revisions to current ENERGY STAR criteria as well as to improve the accuracy of ENERGY STAR program savings estimates. We report the results of measuring the power consumption and power management capabilities of a sample of new monitors and computers. These results will be used to improve estimates of program energy savings and carbon emission reductions, and to inform revisions of the ENERGY STAR criteria for these products. Our sample consists of 35 monitors and 26 computers manufactured between July 2000 and October 2001; it includes cathode ray tube (CRT) and liquid crystal display (LCD) monitors, Macintosh and Intel-architecture computers, desktop and laptop computers, and integrated computer systems, in which power consumption of the computer and monitor cannot be measured separately. For each machine we measured power consumption when off, on, and in each low-power level. We identify trends in and opportunities to reduce power consumption in new personal computers and monitors. Our results include a trend among monitor manufacturers to provide a single very low low-power level, well below the current ENERGY STAR criteria for sleep power consumption. These very low sleep power results mean that energy consumed when monitors are off or in active use has become more important in terms of contribution to the overall unit energy consumption (UEC
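The unit energy consumption (UEC) mentioned above is a duty-cycle-weighted sum of per-mode power draws. The sketch below shows the calculation with illustrative mode powers and annual hours, which are assumptions rather than measurements from this study:

```python
# Unit energy consumption (UEC) as a duty-cycle-weighted sum of per-mode
# power draws. Mode powers and hours below are illustrative assumptions.

HOURS_PER_YEAR = 8760.0

def uec_kwh(mode_power_w, mode_hours):
    """Annual kWh given per-mode power draw (W) and hours per year per mode."""
    assert abs(sum(mode_hours.values()) - HOURS_PER_YEAR) < 1e-6
    return sum(mode_power_w[m] * mode_hours[m] for m in mode_power_w) / 1000.0

# A hypothetical CRT monitor: on 2000 h/yr, sleep 3000 h/yr, off the rest.
power_w = {"on": 70.0, "sleep": 2.0, "off": 1.0}
hours = {"on": 2000.0, "sleep": 3000.0, "off": 3760.0}
print(f"UEC = {uec_kwh(power_w, hours):.1f} kWh/yr")
```

With these assumed numbers the "on" mode dominates, which mirrors the abstract's observation that once sleep power becomes very low, on and off modes drive the overall UEC.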

  19. Evaluation of Different Power of Near Addition in Two Different Multifocal Intraocular Lenses

    PubMed Central

    Unsal, Ugur; Baser, Gonen

    2016-01-01

    Purpose. To compare near, intermediate, and distance vision and quality of vision when refractive rotational multifocal intraocular lenses with 3.0 diopters or diffractive multifocal intraocular lenses with 2.5 diopters of near addition are implanted. Methods. 41 eyes of 41 patients in whom rotational +3.0 diopter near-addition IOLs were implanted and 30 eyes of 30 patients in whom diffractive +2.5 diopter near-addition IOLs were implanted after cataract surgery were reviewed. Uncorrected and corrected distance visual acuity, intermediate visual acuity, near visual acuity, and patient satisfaction were evaluated 6 months later. Results. Corrected and uncorrected distance visual acuity were the same in both groups (p = 0.50 and p = 0.509, resp.). Uncorrected intermediate and corrected intermediate and near visual acuities were better in the +2.5 diopter near-addition IOL group (p = 0.049, p = 0.005, and p = 0.001, resp.), and uncorrected near visual acuity was better in the +3.0 diopter near-addition IOL group (p = 0.001). Patient satisfaction was similar in both groups. Conclusion. The +2.5 diopter near addition could be a better choice in younger patients with more distance and intermediate visual requirements (driving, outdoor activities), whereas the +3.0 diopter addition should be considered for patients requiring more near vision correction (reading). PMID:27340560

  20. CIDER: Enabling Robustness-Power Tradeoffs on a Computational Eyeglass

    PubMed Central

    Mayberry, Addison; Tun, Yamin; Hu, Pan; Smith-Freedman, Duncan; Ganesan, Deepak; Marlin, Benjamin; Salthouse, Christopher

    2016-01-01

    The human eye offers a fascinating window into an individual’s health, cognitive attention, and decision making, but we lack the ability to continually measure these parameters in the natural environment. The challenges lie in: a) handling the complexity of continuous high-rate sensing from a camera and processing the image stream to estimate eye parameters, and b) dealing with the wide variability in illumination conditions in the natural environment. This paper explores the power–robustness tradeoffs inherent in the design of a wearable eye tracker, and proposes a novel staged architecture that enables graceful adaptation across the spectrum of real-world illumination. We propose CIDER, a system that operates in a highly optimized low-power mode under indoor settings by using a fast Search-Refine controller to track the eye, but detects when the environment switches to more challenging outdoor sunlight and switches models to operate robustly under this condition. Our design is holistic and tackles a) power consumption in digitizing pixels, estimating pupillary parameters, and illuminating the eye via near-infrared, b) error in estimating pupil center and pupil dilation, and c) model training procedures that involve zero effort from a user. We demonstrate that CIDER can estimate pupil center with error less than two pixels (0.6°), and pupil diameter with error of one pixel (0.22mm). Our end-to-end results show that we can operate at power levels of roughly 7mW at a 4Hz eye tracking rate, or roughly 32mW at rates upwards of 250Hz. PMID:27042165
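One tradeoff implicit in the reported power figures can be computed directly: energy per tracked frame is average power divided by the tracking rate. Using the abstract's numbers (roughly 7 mW at 4 Hz and 32 mW at 250 Hz), the high-rate mode is actually cheaper per frame:

```python
# Per-frame energy implied by the power figures reported above:
# energy per frame (mJ) = average power (mW) / frame rate (Hz).

def energy_per_frame_mj(power_mw, rate_hz):
    return power_mw / rate_hz

low_rate = energy_per_frame_mj(7.0, 4.0)      # 1.75 mJ/frame at 4 Hz
high_rate = energy_per_frame_mj(32.0, 250.0)  # 0.128 mJ/frame at 250 Hz
print(low_rate, high_rate)
```

The low-rate mode still wins on total power, of course; the per-frame view simply shows where the fixed overheads sit.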

  1. Computer simulation of magnetization-controlled shunt reactors for calculating electromagnetic transients in power systems

    SciTech Connect

    Karpov, A. S.

    2013-01-15

    A computer procedure for simulating magnetization-controlled dc shunt reactors is described, which enables the electromagnetic transients in electric power systems to be calculated. It is shown that, by taking technically simple measures in the control system, one can obtain high-speed reactors sufficient for many purposes, and dispense with the use of high-power devices for compensating higher harmonic components.

  2. Phosphoric acid fuel cell power plant system performance model and computer program

    NASA Technical Reports Server (NTRS)

    Alkasab, K. A.; Lu, C. Y.

    1984-01-01

    A FORTRAN computer program was developed for analyzing the performance of phosphoric acid fuel cell power plant systems. Energy, mass, and electrochemical analyses of the reformer, the shift converters, the heat exchangers, and the fuel cell stack were combined to develop a mathematical model of the power plant for both atmospheric and pressurized conditions and for several commercial fuels.

  3. Optimization of fluid line sizes with pumping power penalty IBM-360 computer program

    NASA Technical Reports Server (NTRS)

    Jelinek, D.

    1972-01-01

    A computer program has been developed to calculate and total the weights of tubing, fluid in the tubing, and the fuel cell power source needed to drive the pump, based on flow rate and pressure drop. The program can be used for fluid systems in any type of aircraft, spacecraft, truck, ship, refinery, or chemical processing plant.

  4. Negative capacitance for ultra-low power computing

    NASA Astrophysics Data System (ADS)

    Khan, Asif Islam

    Owing to the fundamental physics of the Boltzmann distribution, the ever-increasing power dissipation in nanoscale transistors threatens an end to the almost-four-decade-old cadence of continued performance improvement in complementary metal-oxide-semiconductor (CMOS) technology. It is now agreed that the introduction of new physics into the operation of field-effect transistors---in other words, "reinventing the transistor'"--- is required to avert such a bottleneck. In this dissertation, we present the experimental demonstration of a novel physical phenomenon, called the negative capacitance effect in ferroelectric oxides, which could dramatically reduce power dissipation in nanoscale transistors. It was theoretically proposed in 2008 that by introducing a ferroelectric negative capacitance material into the gate oxide of a metal-oxide-semiconductor field-effect transistor (MOSFET), the subthreshold slope could be reduced below the fundamental Boltzmann limit of 60 mV/dec, which, in turn, could arbitrarily lower the power supply voltage and the power dissipation. The research presented in this dissertation establishes the theoretical concept of ferroelectric negative capacitance as an experimentally verified fact. The main results presented in this dissertation are threefold. To start, we present the first direct measurement of negative capacitance in isolated, single crystalline, epitaxially grown thin film capacitors of ferroelectric Pb(Zr0.2Ti0.8)O3. By constructing a simple resistor-ferroelectric capacitor series circuit, we show that, during ferroelectric switching, the ferroelectric voltage decreases, while the stored charge in it increases, which directly shows a negative slope in the charge-voltage characteristics of a ferroelectric capacitor. Such a situation is completely opposite to what would be observed in a regular resistor-positive capacitor series circuit. This measurement could serve as a canonical test for negative capacitance in any novel

  5. Computing Confidence Bounds for Power and Sample Size of the General Linear Univariate Model

    PubMed Central

    Taylor, Douglas J.; Muller, Keith E.

    2013-01-01

    The power of a test, the probability of rejecting the null hypothesis in favor of an alternative, may be computed using estimates of one or more distributional parameters. Statisticians frequently fix mean values and calculate power or sample size using a variance estimate from an existing study. Hence computed power becomes a random variable for a fixed sample size. Likewise, the sample size necessary to achieve a fixed power varies randomly. Standard statistical practice requires reporting uncertainty associated with such point estimates. Previous authors studied an asymptotically unbiased method of obtaining confidence intervals for noncentrality and power of the general linear univariate model in this setting. We provide exact confidence intervals for noncentrality, power, and sample size. Such confidence intervals, particularly one-sided intervals, help in planning a future study and in evaluating existing studies. PMID:24039272
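The point that computed power is itself a random variable can be made concrete with a small sketch. Below, the power of a two-sided one-sample z-test (a simpler stand-in for the general linear univariate model treated in the paper) is evaluated at a variance point estimate and at the endpoints of a hypothetical confidence interval for sigma; all numbers are illustrative assumptions:

```python
import math

def norm_cdf(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def z_test_power(delta, sigma, n, z_crit=1.959963984540054):
    """Approximate power of a two-sided one-sample z-test (alpha = 0.05)
    for detecting a mean shift delta, treating sigma as known."""
    ncp = delta / (sigma / math.sqrt(n))  # noncentrality parameter
    return (1.0 - norm_cdf(z_crit - ncp)) + norm_cdf(-z_crit - ncp)

# Power as a function of the variance estimate: evaluating at the endpoints
# of a (hypothetical) confidence interval for sigma brackets the power,
# which is the idea the paper formalizes for the general linear model.
for sigma in (0.8, 1.0, 1.3):  # lower bound, point estimate, upper bound
    print(f"sigma={sigma}: power={z_test_power(0.5, sigma, 25):.3f}")
```

Power decreases as the assumed sigma grows, so the sigma interval maps monotonically onto a power interval in this simple case.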

  6. The effectiveness of power-generating complexes constructed on the basis of nuclear power plants combined with additional sources of energy determined taking risk factors into account

    NASA Astrophysics Data System (ADS)

    Aminov, R. Z.; Khrustalev, V. A.; Portyankin, A. V.

    2015-02-01

    The effectiveness of combining nuclear power plants equipped with water-cooled water-moderated power-generating reactors (VVER) with other sources of energy within unified power-generating complexes is analyzed. Such power-generating complexes make it possible to achieve the necessary load pickup capability and flexibility in performing mandatory selective primary and emergency load control, as well as participation in passing the night minimums of electric load curves, while retaining a high capacity utilization factor for the entire complex at higher steam-turbine efficiency. Versions involving the combined use of nuclear power plants with hydrogen toppings and gas turbine units for generating electricity are considered. Because hydrogen is an unsafe energy carrier whose use introduces additional elements of risk, a procedure for evaluating these risks under different conditions of implementing the fuel-and-hydrogen cycle at nuclear power plants is proposed. A risk accounting technique based on statistical data is considered, including the characteristics of hydrogen and gas pipelines and the occurrence rate of tightness loss in process pipeline equipment. The expected rates of fires and explosions at nuclear power plants fitted with hydrogen toppings and gas turbine units are calculated. US statistical data were used in estimating the damage inflicted by fires and explosions occurring in nuclear power plant turbine buildings. Conservative scenarios of fires and explosions of hydrogen-air mixtures in nuclear power plant turbine buildings are presented. Results are given for the ratio of the introduced annual risk to the attained net annual profit in comparable versions. This ratio can be used to select projects characterized by the most technically attainable and socially acceptable safety.

  7. Carbon monoxide exposures from propane-powered floor burnishers following addition of emissions controls

    SciTech Connect

    Demer, F.R.

    1998-11-01

    Previous published work by this author suggests that propane-powered floor burnisher use represents a potentially serious health hazard from carbon monoxide exposures, particularly for susceptible individuals. This earlier study was repeated using burnishers retrofitted with emission controls consisting of self-aspirating catalytic mufflers and computerized air/fuel monitors and alarms. Real-time carbon monoxide detectors with data-logging capabilities were placed on the burnishers in the breathing zones of operators during burnisher use. Carbon monoxide levels were recorded every 30 seconds. Ventilation and physical characteristics of the spaces of burnisher use were characterized, as were burnisher maintenance practices. Thirteen burnishing events were monitored under conditions comparable to previously published monitoring. All carbon monoxide exposures were well below even the most conservative recommended limits from the American Conference of Governmental Industrial Hygienists. Potential failures of the emission controls were also identified and included air filter blockage, spark plug malfunction, and faulty alarm function design.

  8. Accurate Computation of Gaussian Quadrature for Tension Powers

    NASA Astrophysics Data System (ADS)

    Singer, Saša

    2007-09-01

We consider Gaussian quadrature formulas which exactly integrate a system of tension powers 1, x, x^2, …, x^(n-3), sinh(px), cosh(px) on a given interval [a,b], where n ⩾ 4 is an even integer and p > 0 is a given tension parameter. In some applications it is essential that p can be changed dynamically, and we need an efficient "on-demand" algorithm that calculates the nodes and weights of Gaussian quadrature formulas for many different values of p, which are not known in advance. It is an interesting numerical challenge to achieve the required full machine precision accuracy in such an algorithm for all possible values of p. By exploiting various analytic and numerical techniques, we show that this can be done efficiently for all reasonably low values of n that are of any practical importance.
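The paper's tension-power algorithm is not reproduced here, but the same node-and-weight machinery can be illustrated for the classical polynomial case (the p → 0 limit of the tension system) via Newton iteration on Legendre polynomials; the function names below are illustrative, not from the paper:

```python
import math

def legendre(n, x):
    """Evaluate the Legendre polynomial P_n(x) and its derivative
    via the three-term recurrence."""
    p0, p1 = 1.0, x
    for k in range(2, n + 1):
        p0, p1 = p1, ((2 * k - 1) * x * p1 - (k - 1) * p0) / k
    # standard identity: P_n'(x) = n (x P_n - P_{n-1}) / (x^2 - 1)
    return p1, n * (x * p1 - p0) / (x * x - 1.0)

def gauss_legendre(n):
    """n-point Gauss-Legendre nodes and weights on [-1, 1], found by
    Newton iteration from Chebyshev-like initial guesses."""
    nodes, weights = [], []
    for i in range(1, n + 1):
        x = math.cos(math.pi * (i - 0.25) / (n + 0.5))
        for _ in range(100):
            p, dp = legendre(n, x)
            dx = -p / dp
            x += dx
            if abs(dx) < 1e-15:
                break
        _, dp = legendre(n, x)
        nodes.append(x)
        weights.append(2.0 / ((1.0 - x * x) * dp * dp))
    return nodes, weights
```

An n-point rule is exact for polynomials up to degree 2n - 1, so the 5-point rule reproduces ∫ x^8 dx = 2/9 on [-1, 1] to machine precision.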

  9. On the Computational Power of Spiking Neural P Systems with Self-Organization

    PubMed Central

    Wang, Xun; Song, Tao; Gong, Faming; Zheng, Pan

    2016-01-01

Neural-like computing models are versatile computing mechanisms in the field of artificial intelligence. Spiking neural P systems (SN P systems for short) are one of the recently developed spiking neural network models inspired by the way neurons communicate. The communications among neurons are essentially achieved by spikes, i.e., short electrical pulses. In terms of motivation, SN P systems fall into the third generation of neural network models. In this study, a novel variant of SN P systems, namely SN P systems with self-organization, is introduced, and the computational power of the system is investigated and evaluated. It is proved that SN P systems with self-organization are capable of computing and accepting the family of sets of Turing computable natural numbers. Moreover, with 87 neurons the system can compute any Turing computable recursive function, thus achieving Turing universality. These results demonstrate promising initiatives toward solving an open problem raised by Gh. Păun. PMID:27283843

  10. On the Computational Power of Spiking Neural P Systems with Self-Organization.

    PubMed

    Wang, Xun; Song, Tao; Gong, Faming; Zheng, Pan

    2016-01-01

Neural-like computing models are versatile computing mechanisms in the field of artificial intelligence. Spiking neural P systems (SN P systems for short) are one of the recently developed spiking neural network models inspired by the way neurons communicate. The communications among neurons are essentially achieved by spikes, i.e., short electrical pulses. In terms of motivation, SN P systems fall into the third generation of neural network models. In this study, a novel variant of SN P systems, namely SN P systems with self-organization, is introduced, and the computational power of the system is investigated and evaluated. It is proved that SN P systems with self-organization are capable of computing and accepting the family of sets of Turing computable natural numbers. Moreover, with 87 neurons the system can compute any Turing computable recursive function, thus achieving Turing universality. These results demonstrate promising initiatives toward solving an open problem raised by Gh. Păun. PMID:27283843

  11. Evaluation of computer-aided design and drafting for the electric power industry. Final report

    SciTech Connect

    Anuskiewicz, T.; Barduhn, G.; Lowther, B.; Osman, I.

    1984-01-01

This report reviews current and future computer-aided design and drafting (CADD) technology relative to utility needs and identifies useful development projects that may be undertaken by EPRI. The principal conclusions are that computer aids offer substantial cost and time savings and that computer systems are being developed to take advantage of these savings. Databases for direct communication between computers used by the power industry are not yet available, and their absence will limit benefits to the industry. Recommendations are made for EPRI to take the initiative in developing these databases and in researching, developing, and demonstrating new applications within the industry. Key components of a CADD system are described. The state of the art of two- and three-dimensional CADD systems for graphics and project management control functions is assessed. Three-dimensional electronic models are compared with plastic models.

  12. Biologically relevant molecular transducer with increased computing power and iterative abilities.

    PubMed

    Ratner, Tamar; Piran, Ron; Jonoska, Natasha; Keinan, Ehud

    2013-05-23

    As computing devices, which process data and interconvert information, transducers can encode new information and use their output for subsequent computing, offering high computational power that may be equivalent to a universal Turing machine. We report on an experimental DNA-based molecular transducer that computes iteratively and produces biologically relevant outputs. As a proof of concept, the transducer accomplished division of numbers by 3. The iterative power was demonstrated by a recursive application on an obtained output. This device reads plasmids as input and processes the information according to a predetermined algorithm, which is represented by molecular software. The device writes new information on the plasmid using hardware that comprises DNA-manipulating enzymes. The computation produces dual output: a quotient, represented by newly encoded DNA, and a remainder, represented by E. coli phenotypes. This device algorithmically manipulates genetic codes. PMID:23706637
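The division-by-3 computation described above is, abstractly, a finite-state transducer: the state is the running remainder mod 3, each input digit updates it, and quotient digits are emitted along the way. A minimal electronic sketch of that logic (plain binary digits rather than plasmid-encoded DNA, so purely illustrative):

```python
def divide_by_3(bits):
    """Finite-state transducer dividing by 3: reads the binary digits of n
    (most significant first); the state is the running remainder mod 3,
    and one quotient bit is emitted per input bit."""
    r = 0
    quotient = []
    for b in bits:
        v = 2 * r + b        # shift in the next digit
        quotient.append(v // 3)
        r = v % 3            # new state
    return quotient, r

# 11 decimal = 1011 binary: quotient 0011 (= 3), remainder 2
print(divide_by_3([1, 0, 1, 1]))  # -> ([0, 0, 1, 1], 2)
```

Applying the transducer again to its own quotient output gives the iterative behavior the abstract describes.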

  13. Application of modern computer technology to EPRI (Electric Power Research Institute) nuclear computer programs: Final report

    SciTech Connect

    Feinauer, L.R.

    1989-08-01

    Many of the nuclear analysis programs in use today were designed and developed well over a decade ago. Within this time frame, tremendous changes in hardware and software technologies have made it necessary to revise and/or restructure most of the analysis programs to take advantage of these changes. As computer programs mature from the development phase to being production programs, program maintenance and portability become very important issues. The maintenance costs associated with a particular computer program can generally be expected to exceed the total development costs by as much as a factor of two. Many of the problems associated with high maintenance costs can be traced back to either poorly designed coding structure, or ''quick fix'' modifications which do not preserve the original coding structure. The lack of standardization between hardware designs presents an obstacle to the software designer in providing 100% portable coding; however, conformance to certain guidelines can ensure portability between a wide variety of machines and operating systems. This report presents guidelines for upgrading EPRI nuclear computer programs to conform to current programming standards while maintaining flexibility for accommodating future hardware and software design trends. Guidelines for development of new computer programs are also presented. 22 refs., 10 figs.

  14. Effect of ferrite addition above the base ferrite on the coupling factor of wireless power transfer for vehicle applications

    NASA Astrophysics Data System (ADS)

    Batra, T.; Schaltz, E.; Ahn, S.

    2015-05-01

Power transfer capability of wireless power transfer systems is highly dependent on the magnetic design of the primary and secondary inductors and is measured quantitatively by the coupling factor. The inductors are designed by placing the coil over a ferrite base to increase the coupling factor and reduce magnetic emissions to the surroundings. The effect of adding extra ferrite above the base ferrite at different physical locations on the self-inductance, mutual inductance, and coupling factor is investigated in this paper. The addition can increase or decrease the mutual inductance depending on the placement of the ferrite. The addition of ferrite also increases the self-inductance of the coils, so an overall decrease in the coupling factor is possible. Correct placement of ferrite, on the other hand, can increase the coupling factor relative to the base ferrite alone, since the added ferrite is closer to the other inductor. Because ferrite is a heavy iron compound that increases the inductor weight significantly, it needs to be added judiciously. Four zones are identified in the paper, each showing a different sensitivity to the addition of ferrite in terms of the two inductances and the coupling factor. Simulation and measurement results are presented for different air gaps between the coils and different gap distances between the ferrite base and the added ferrite. This paper is beneficial in improving the coupling factor while adding minimum weight to the wireless power transfer system.

  15. Enhancing Specific Energy and Power in Asymmetric Supercapacitors - A Synergetic Strategy based on the Use of Redox Additive Electrolytes.

    PubMed

    Singh, Arvinder; Chandra, Amreesh

    2016-01-01

The strategy of using a redox additive electrolyte in combination with multiwall carbon nanotube/metal oxide composites leads to substantial improvements in the specific energy and power of asymmetric supercapacitors (ASCs). When the pure electrolyte is optimally modified with a redox additive, viz. KI, a ~105% increase in the specific energy is obtained, with good cyclic stability over 3,000 charge-discharge cycles and ~14.7% capacitance fade. This increase is a direct consequence of the iodine/iodide redox pairs that strongly modify the faradaic and non-faradaic reactions occurring on the surface of the electrodes. Contrary to what is shown in a few earlier reports, it is established that an indiscriminate increase in the concentration of redox additives will lead to performance loss. Suitable explanations are given based on theoretical laws. The specific energy and power values reported for the fabricated ASCs are comparable to or higher than those reported for ASCs based on toxic acetonitrile or expensive ionic liquids. The paper shows that the use of a redox additive is an economically favorable strategy for obtaining cost-effective and environmentally friendly ASCs. PMID:27184260

  16. Enhancing Specific Energy and Power in Asymmetric Supercapacitors - A Synergetic Strategy based on the Use of Redox Additive Electrolytes

    NASA Astrophysics Data System (ADS)

    Singh, Arvinder; Chandra, Amreesh

    2016-05-01

The strategy of using a redox additive electrolyte in combination with multiwall carbon nanotube/metal oxide composites leads to substantial improvements in the specific energy and power of asymmetric supercapacitors (ASCs). When the pure electrolyte is optimally modified with a redox additive, viz. KI, a ~105% increase in the specific energy is obtained, with good cyclic stability over 3,000 charge-discharge cycles and ~14.7% capacitance fade. This increase is a direct consequence of the iodine/iodide redox pairs that strongly modify the faradaic and non-faradaic reactions occurring on the surface of the electrodes. Contrary to what is shown in a few earlier reports, it is established that an indiscriminate increase in the concentration of redox additives will lead to performance loss. Suitable explanations are given based on theoretical laws. The specific energy and power values reported for the fabricated ASCs are comparable to or higher than those reported for ASCs based on toxic acetonitrile or expensive ionic liquids. The paper shows that the use of a redox additive is an economically favorable strategy for obtaining cost-effective and environmentally friendly ASCs.

  17. Enhancing Specific Energy and Power in Asymmetric Supercapacitors - A Synergetic Strategy based on the Use of Redox Additive Electrolytes

    PubMed Central

    Singh, Arvinder; Chandra, Amreesh

    2016-01-01

The strategy of using a redox additive electrolyte in combination with multiwall carbon nanotube/metal oxide composites leads to substantial improvements in the specific energy and power of asymmetric supercapacitors (ASCs). When the pure electrolyte is optimally modified with a redox additive, viz. KI, a ~105% increase in the specific energy is obtained, with good cyclic stability over 3,000 charge-discharge cycles and ~14.7% capacitance fade. This increase is a direct consequence of the iodine/iodide redox pairs that strongly modify the faradaic and non-faradaic reactions occurring on the surface of the electrodes. Contrary to what is shown in a few earlier reports, it is established that an indiscriminate increase in the concentration of redox additives will lead to performance loss. Suitable explanations are given based on theoretical laws. The specific energy and power values reported for the fabricated ASCs are comparable to or higher than those reported for ASCs based on toxic acetonitrile or expensive ionic liquids. The paper shows that the use of a redox additive is an economically favorable strategy for obtaining cost-effective and environmentally friendly ASCs. PMID:27184260

  18. Characterization of Steel-Ta Dissimilar Metal Builds Made Using Very High Power Ultrasonic Additive Manufacturing (VHP-UAM)

    NASA Astrophysics Data System (ADS)

    Sridharan, Niyanth; Norfolk, Mark; Babu, Sudarsanam Suresh

    2016-05-01

Ultrasonic additive manufacturing is a solid-state additive manufacturing technique that utilizes ultrasonic vibrations to bond metal tapes into near net-shaped components. The major advantage of this process is the ability to manufacture layered structures with dissimilar materials without any intermetallic formation. The majority of the published literature has focused only on the bond formation mechanism in aluminum alloys. The current work explains the microstructure evolution during dissimilar joining of iron and tantalum using very high power ultrasonic additive manufacturing, with characterization of the interfaces by electron backscatter diffraction and nanoindentation measurements. The results showed extensive grain refinement at the bonded interfaces of these metals. This phenomenon was attributed to a continuous dynamic recrystallization process driven by high strain rate plastic deformation and the associated adiabatic heating, which remains well below 50 pct of the melting point of both iron and Ta.

  19. Turbulence computations with 3-D small-scale additive turbulent decomposition and data-fitting using chaotic map combinations

    SciTech Connect

    Mukerji, S.

    1997-12-31

Although the equations governing turbulent fluid flow, the Navier-Stokes (N-S) equations, have been known for well over a century and there is a clear technological necessity in obtaining solutions to these equations, turbulence remains one of the principal unsolved problems in physics today. It is still not possible to make accurate quantitative predictions about turbulent flows without relying heavily on empirical data. In principle, it is possible to obtain turbulent solutions from a direct numerical simulation (DNS) of the N-S equations. The author first provides a brief introduction to the dynamics of turbulent flows. The N-S equations, which govern fluid flow, are described thereafter. He then gives a brief overview of DNS calculations and where they stand at present. He next introduces the two most popular approaches for turbulent computations currently in use, namely, Reynolds averaging of the N-S equations (RANS) and large-eddy simulation (LES). Approximations, often ad hoc ones, are present in these methods because use is made of heuristic models for turbulence quantities (the Reynolds stresses) which are otherwise unknown. He then introduces a new computational method called additive turbulent decomposition (ATD), the small-scale version of which is the topic of this research. The rest of the thesis is organized as follows. In Chapter 2 he describes the ATD procedure in greater detail: how dependent variables are split and the decomposition into large- and small-scale sets of equations. In Chapter 3 the spectral projection of the small-scale momentum equations is derived in detail. In Chapter 4 results of the computations with the small-scale ATD equations are presented. In Chapter 5 he describes the data-fitting procedure which can be used to directly specify the parameters of a chaotic-map turbulence model.

  20. A Latency-Tolerant Partitioner for Distributed Computing on the Information Power Grid

    NASA Technical Reports Server (NTRS)

    Das, Sajal K.; Harvey, Daniel J.; Biwas, Rupak; Kwak, Dochan (Technical Monitor)

    2001-01-01

NASA's Information Power Grid (IPG) is an infrastructure designed to harness the power of geographically distributed computers, databases, and human expertise, in order to solve large-scale realistic computational problems. This type of meta-computing environment is necessary to present a unified virtual machine to application developers that hides the intricacies of a highly heterogeneous environment and yet maintains adequate security. In this paper, we present a novel partitioning scheme, called MinEX, that dynamically balances processor workloads while minimizing data movement and runtime communication, for applications that are executed in a parallel distributed fashion on the IPG. We also analyze the conditions that are required for the IPG to be an effective tool for such distributed computations. Our results show that MinEX is a viable load balancer provided the nodes of the IPG are connected by a high-speed asynchronous interconnection network.
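MinEX itself weighs data movement and communication costs alongside processor load; as a much simpler illustration of the balancing half of the problem, here is a greedy heaviest-task-first assignment (hypothetical function names, not the MinEX algorithm):

```python
import heapq

def greedy_balance(task_weights, n_procs):
    """Assign tasks (heaviest first) to the currently least-loaded
    processor; returns {task_index: processor}.  A toy stand-in for a
    dynamic load balancer -- unlike MinEX it ignores data movement
    and communication costs entirely."""
    heap = [(0.0, p) for p in range(n_procs)]  # (load, processor) min-heap
    heapq.heapify(heap)
    assignment = {}
    for t, w in sorted(enumerate(task_weights), key=lambda tw: -tw[1]):
        load, p = heapq.heappop(heap)
        assignment[t] = p
        heapq.heappush(heap, (load + w, p))
    return assignment
```

For six tasks of weights 5, 3, 3, 2, 2, 1 on two processors this yields two perfectly balanced loads of 8.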

  1. Power levels in office equipment: Measurements of new monitors and personal computers

    SciTech Connect

    Roberson, Judy A.; Brown, Richard E.; Nordman, Bruce; Webber, Carrie A.; Homan, Gregory H.; Mahajan, Akshay; McWhinney, Marla; Koomey, Jonathan G.

    2002-05-14

Electronic office equipment has proliferated rapidly over the last twenty years and is projected to continue growing in the future. Efforts to reduce the growth in office equipment energy use have focused on power management to reduce power consumption of electronic devices when not being used for their primary purpose. The EPA ENERGY STAR[registered trademark] program has been instrumental in gaining widespread support for power management in office equipment, and accurate information about the energy used by office equipment in all power levels is important to improving program design and evaluation. This paper presents the results of a field study conducted during 2001 to measure the power levels of new monitors and personal computers. We measured off, on, and low-power levels in about 60 units manufactured since July 2000. The paper summarizes power data collected, explores differences within the sample (e.g., between CRT and LCD monitors), and discusses some issues that arise in metering office equipment. We also present conclusions to help improve the success of future power management programs. Our findings include a trend among monitor manufacturers to provide a single, very low low-power level, and a need to standardize methods for measuring monitor on power in order to more accurately estimate the annual energy consumption of office equipment, as well as the actual and potential energy savings from power management.

  2. A Power Efficient Exaflop Computer Design for Global Cloud System Resolving Climate Models.

    NASA Astrophysics Data System (ADS)

    Wehner, M. F.; Oliker, L.; Shalf, J.

    2008-12-01

    Exascale computers would allow routine ensemble modeling of the global climate system at the cloud system resolving scale. Power and cost requirements of traditional architecture systems are likely to delay such capability for many years. We present an alternative route to the exascale using embedded processor technology to design a system optimized for ultra high resolution climate modeling. These power efficient processors, used in consumer electronic devices such as mobile phones, portable music players, cameras, etc., can be tailored to the specific needs of scientific computing. We project that a system capable of integrating a kilometer scale climate model a thousand times faster than real time could be designed and built in a five year time scale for US$75M with a power consumption of 3MW. This is cheaper, more power efficient and sooner than any other existing technology.

  3. Computational Research Challenges and Opportunities for the Optimization of Fossil Energy Power Generation System

    SciTech Connect

    Zitney, S.E.

    2007-06-01

Emerging fossil energy power generation systems must operate with unprecedented efficiency and near-zero emissions, while optimizing profitably amid cost fluctuations for raw materials, finished products, and energy. To help address these challenges, the fossil energy industry will have to rely increasingly on the use of advanced computational tools for modeling and simulating complex process systems. In this paper, we present the computational research challenges and opportunities for the optimization of fossil energy power generation systems across the plant lifecycle, from process synthesis and design to plant operations. We also look beyond the plant gates to discuss research challenges and opportunities for enterprise-wide optimization, including planning, scheduling, and supply chain technologies.

  4. A digital computer simulation and study of a direct-energy-transfer power-conditioning system

    NASA Technical Reports Server (NTRS)

    Burns, W. W., III; Owen, H. A., Jr.; Wilson, T. G.; Rodriguez, G. E.; Paulkovich, J.

    1974-01-01

    A digital computer simulation technique, which can be used to study such composite power-conditioning systems, was applied to a spacecraft direct-energy-transfer power-processing system. The results obtained duplicate actual system performance with considerable accuracy. The validity of the approach and its usefulness in studying various aspects of system performance such as steady-state characteristics and transient responses to severely varying operating conditions are demonstrated experimentally.

  5. Linking process, structure, property, and performance for metal-based additive manufacturing: computational approaches with experimental support

    NASA Astrophysics Data System (ADS)

    Smith, Jacob; Xiong, Wei; Yan, Wentao; Lin, Stephen; Cheng, Puikei; Kafka, Orion L.; Wagner, Gregory J.; Cao, Jian; Liu, Wing Kam

    2016-04-01

    Additive manufacturing (AM) methods for rapid prototyping of 3D materials (3D printing) have become increasingly popular with a particular recent emphasis on those methods used for metallic materials. These processes typically involve an accumulation of cyclic phase changes. The widespread interest in these methods is largely stimulated by their unique ability to create components of considerable complexity. However, modeling such processes is exceedingly difficult due to the highly localized and drastic material evolution that often occurs over the course of the manufacture time of each component. Final product characterization and validation are currently driven primarily by experimental means as a result of the lack of robust modeling procedures. In the present work, the authors discuss primary detrimental hurdles that have plagued effective modeling of AM methods for metallic materials while also providing logical speculation into preferable research directions for overcoming these hurdles. The primary focus of this work encompasses the specific areas of high-performance computing, multiscale modeling, materials characterization, process modeling, experimentation, and validation for final product performance of additively manufactured metallic components.

  6. Thread selection according to predefined power characteristics during context switching on compute nodes

    DOEpatents

    None

    2013-06-04

    Methods, apparatus, and products are disclosed for thread selection during context switching on a plurality of compute nodes that includes: executing, by a compute node, an application using a plurality of threads of execution, including executing one or more of the threads of execution; selecting, by the compute node from a plurality of available threads of execution for the application, a next thread of execution in dependence upon power characteristics for each of the available threads; determining, by the compute node, whether criteria for a thread context switch are satisfied; and performing, by the compute node, the thread context switch if the criteria for a thread context switch are satisfied, including executing the next thread of execution.

  7. GLIMMPSE: Online Power Computation for Linear Models with and without a Baseline Covariate.

    PubMed

    Kreidler, Sarah M; Muller, Keith E; Grunwald, Gary K; Ringham, Brandy M; Coker-Dukowitz, Zacchary T; Sakhadeo, Uttara R; Barón, Anna E; Glueck, Deborah H

    2013-09-01

    GLIMMPSE is a free, web-based software tool that calculates power and sample size for the general linear multivariate model with Gaussian errors (http://glimmpse.SampleSizeShop.org/). GLIMMPSE provides a user-friendly interface for the computation of power and sample size. We consider models with fixed predictors, and models with fixed predictors and a single Gaussian covariate. Validation experiments demonstrate that GLIMMPSE matches the accuracy of previously published results, and performs well against simulations. We provide several online tutorials based on research in head and neck cancer. The tutorials demonstrate the use of GLIMMPSE to calculate power and sample size. PMID:24403868
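GLIMMPSE handles the general linear multivariate model; the core idea of a power computation can be shown on the far simpler two-sample z-test using only the Python standard library (an illustration of the concept, not GLIMMPSE's algorithm):

```python
import math
from statistics import NormalDist

def two_sample_power(delta, sigma, n_per_group, alpha=0.05):
    """Power of a two-sided two-sample z-test to detect a mean
    difference `delta` with common standard deviation `sigma`."""
    nd = NormalDist()
    z_crit = nd.inv_cdf(1.0 - alpha / 2.0)
    # noncentrality: standardized difference over the standard error
    ncp = delta / (sigma * math.sqrt(2.0 / n_per_group))
    return nd.cdf(ncp - z_crit) + nd.cdf(-ncp - z_crit)
```

With delta = 0.5, sigma = 1, and 64 subjects per group the power is about 0.81; the inverse problem GLIMMPSE also solves (sample size for a target power) is then a one-dimensional search over n.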

  8. New methods for computing a closest saddle node bifurcation and worst case load power margin for voltage collapse

    SciTech Connect

Dobson, I.; Lu, Liming

    1993-08-01

    Voltage collapse and blackout can occur in an electric power system when load powers vary so that the system loses stability in a saddle node bifurcation. The authors propose new iterative and direct methods to compute load powers at which bifurcation occurs and which are locally closest to the current operating load powers. The distance in load power parameter space to this locally closest bifurcation is an index of voltage collapse. The pattern of load power increase need not be predicted; instead the index is a worst case load power margin. The computations are illustrated in the 6 dimensional load power parameter space of a 5 bus power system. The normal vector and curvature of a hypersurface of critical load powers at which bifurcation occurs are also computed. The sensitivity of the index to parameters and controls is easily obtained from the normal vector.
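The paper's computation is in a multi-dimensional load power space, but the saddle-node (nose) point and the load power margin can be illustrated on the textbook two-bus model, where the load-voltage equation is a quadratic in V^2 and the bifurcation occurs where its discriminant vanishes (a scalar sketch, not the authors' method):

```python
import math

def max_load_power(E, X, Q):
    """Nose-point (saddle-node) active power for a two-bus system:
    source voltage E feeds a load P + jQ through reactance X.  The
    load-voltage equation V^4 + (2QX - E^2)V^2 + X^2(P^2 + Q^2) = 0
    loses its real solutions where the discriminant vanishes."""
    b = 2.0 * Q * X - E * E
    return math.sqrt(b * b / (4.0 * X * X) - Q * Q)

def load_power_margin(E, X, Q, P0):
    """Distance in load power from the operating point P0 to the nose."""
    return max_load_power(E, X, Q) - P0
```

For a unity-power-factor load (Q = 0) this reduces to the familiar Pmax = E^2 / (2X).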

  9. Manual of phosphoric acid fuel cell power plant cost model and computer program

    NASA Technical Reports Server (NTRS)

    Lu, C. Y.; Alkasab, K. A.

    1984-01-01

Cost analysis of a phosphoric acid fuel cell power plant includes two parts: a method for estimating system capital costs, and an economic analysis that determines the levelized annual cost of operating the system used in the capital cost estimation. A FORTRAN computer program has been developed for this cost analysis.
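The levelization step — converting a one-time capital cost into an equivalent annual charge — rests on the standard capital recovery factor. A minimal stand-in for that part of the report's FORTRAN program (illustrative names and inputs, not the actual model):

```python
def capital_recovery_factor(rate, years):
    """Fraction of a capital cost charged each year so that the charges,
    discounted at `rate`, exactly repay the cost over `years`."""
    a = (1.0 + rate) ** years
    return rate * a / (a - 1.0)

def levelized_annual_cost(capital, rate, years, annual_om, annual_fuel):
    """Levelized annual cost = annualized capital + O&M + fuel."""
    return capital * capital_recovery_factor(rate, years) + annual_om + annual_fuel
```

At a 10% discount rate over 20 years the capital recovery factor is about 0.1175, so each dollar of capital contributes roughly 11.75 cents to the levelized annual cost.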

  10. The Power of Computer-aided Tomography to Investigate Marine Benthic Communities

    EPA Science Inventory

    Utilization of Computer-aided-Tomography (CT) technology is a powerful tool to investigate benthic communities in aquatic systems. In this presentation, we will attempt to summarize our 15 years of experience in developing specific CT methods and applications to marine benthic co...

  11. Evolutionary computing for the design search and optimization of space vehicle power subsystems

    NASA Technical Reports Server (NTRS)

    Kordon, M.; Klimeck, G.; Hanks, D.

    2004-01-01

    Evolutionary computing has proven to be a straightforward and robust approach for optimizing a wide range of difficult analysis and design problems. This paper discusses the application of these techniques to an existing space vehicle power subsystem resource and performance analysis simulation in a parallel processing environment.
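As an illustration of the evolutionary approach (not the JPL tool itself), here is a compact real-coded genetic algorithm minimizing an arbitrary objective, which could stand in for a power subsystem cost or performance function:

```python
import random

def evolve(fitness, bounds, pop_size=40, generations=60, seed=1):
    """Minimal real-coded genetic algorithm (minimization): keep the best
    half of the population as elites, breed children by blending two
    elite parents and adding Gaussian mutation, and iterate."""
    rng = random.Random(seed)
    pop = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop_size)]
    for _ in range(generations):
        elite = sorted(pop, key=fitness)[: pop_size // 2]
        children = []
        while len(children) < pop_size - len(elite):
            a, b = rng.sample(elite, 2)
            # blend crossover plus Gaussian mutation, clipped to bounds
            child = [min(max((x + y) / 2.0 + rng.gauss(0.0, 0.02 * (hi - lo)), lo), hi)
                     for x, y, (lo, hi) in zip(a, b, bounds)]
            children.append(child)
        pop = elite + children
    return min(pop, key=fitness)
```

Because elites always survive, the best solution found never regresses; parallelizing the fitness evaluations across a population maps naturally onto the parallel processing environment the abstract mentions.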

  12. Current status and future trends in computer modeling of high-power travelling-wave tubes

    SciTech Connect

    DeHope, W.J.

    1996-12-31

    The interaction of a slow electromagnetic wave and a linear propagating electron stream has been utilized for many years for microwave amplification. Pulsed devices of high peak and average power typically are based on periodic, filter-type circuits and interaction takes place on the first forward-wave branch of a fundamental backward-wave dispersion curve. These devices have served as useful test vehicles over the years in the development of advanced computational methods and models. A working relationship has thereby developed between the plasma computation community and the microwave tube industry. The talk will describe the operational principles and design steps in modern, high-power TWT development. The major computational stages that the industry has seen over the last four decades in both 2-d and 3-d modeling will be reviewed and comments made on their relevancy to current work and future trends.

  13. Computed lateral power spectral density response of conventional and STOL airplanes to random atmospheric turbulence

    NASA Technical Reports Server (NTRS)

    Lichtenstein, J. H.

    1974-01-01

    A method of computing the power spectral densities of the lateral response of airplanes to random atmospheric turbulence was adapted to an electronic digital computer. By use of this program, the power spectral densities of the lateral roll, yaw, and sideslip angular displacement of several conventional and STOL airplanes were computed. The results show that for the conventional airplanes, the roll response is more prominent than that for yaw or sideslip response. For the STOL airplanes, on the other hand, the yaw and sideslip responses were larger than the roll response. The response frequency of the STOL airplanes generally is higher than that for the conventional airplanes. This combination of greater sensitivity of the STOL airplanes in yaw and sideslip and the frequency at which they occur could be a factor causing the poor riding qualities of this class of airplanes.
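The central quantity in the study, a power spectral density, can be sketched as a bare-bones one-sided periodogram via a direct DFT (stdlib only; a real analysis of turbulence responses would use an FFT with windowing and averaging):

```python
import cmath
import math

def periodogram(x, dt=1.0):
    """One-sided power spectral density estimate of a real sequence via
    a direct DFT (O(N^2), fine for short records)."""
    n = len(x)
    freqs, psd = [], []
    for k in range(n // 2 + 1):
        s = sum(x[j] * cmath.exp(-2j * math.pi * k * j / n) for j in range(n))
        one_sided = 2.0 if 0 < k < n / 2 else 1.0  # fold negative frequencies
        freqs.append(k / (n * dt))
        psd.append(one_sided * dt * abs(s) ** 2 / n)
    return freqs, psd
```

A pure sinusoid completing 8 cycles in a 64-sample record appears as a single spike in bin k = 8, and summing psd times the frequency spacing recovers the signal's mean-square value (Parseval).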

  14. Computer-based procedure for field activities: Results from three evaluations at nuclear power plants

    SciTech Connect

    Oxstrand, Johanna; Bly, Aaron; LeBlanc, Katya

    2014-09-01

    Nearly all activities that involve human interaction with the systems of a nuclear power plant are guided by procedures. The paper-based procedures (PBPs) currently used by industry have a demonstrated history of ensuring safety; however, improving procedure use could yield tremendous gains in efficiency and safety. One potential way to improve procedure-based activities is through the use of computer-based procedures (CBPs). Computer-based procedures provide the opportunity to incorporate context-driven job aids, such as drawings, photos, and just-in-time training, into the CBP system. One obvious advantage of this capability is reducing the time spent tracking down the applicable documentation. Additionally, human performance tools can be integrated into the CBP system in such a way that they help the worker focus on the task rather than on the tools. Some tools can be completely incorporated into the CBP system, such as pre-job briefs, placekeeping, correct component verification, and peer checks. Other tools can be partly integrated in a fashion that reduces the time and labor required, such as concurrent and independent verification. Another benefit of CBPs compared to PBPs is dynamic procedure presentation. PBPs are static documents, which limits the degree to which the information presented can be tailored to the task and the conditions under which the procedure is executed. A CBP system can be configured to display only the relevant steps based on operating mode, plant status, and the task at hand. A dynamic presentation of the procedure (also known as context-sensitive procedures) guides the user down the path of relevant steps based on the current conditions. This feature reduces the user’s workload and inherently reduces both the risk of incorrectly marking a step as not applicable and the risk of performing a step that should have been marked as not applicable. As part of the Department of Energy’s (DOE) Light Water Reactors Sustainability Program

  15. Computer models and simulations of IGCC power plants with Canadian coals

    SciTech Connect

    Zheng, L.; Furimsky, E.

    1999-07-01

    In this paper, three steady-state computer models for the simulation of IGCC power plants with Shell, Texaco, and BGL (British Gas Lurgi) gasifiers are presented. All models were based on a study by Bechtel for Nova Scotia Power Corporation. They were built using the Advanced System for Process Engineering (ASPEN) steady-state simulation software together with Fortran programs developed in house. Each model was integrated from several sections that can be simulated independently, such as coal preparation, gasification, gas cooling, acid gas removal, sulfur recovery, gas turbine, heat recovery steam generation, and steam cycle. A general description of each process, the model's overall structure and capability, testing results, and background references are given. The performance of some Canadian coals in these models is discussed as well. The authors also built a computer model of an IGCC power plant with a Kellogg-Rust-Westinghouse gasifier; however, due to length limitations, it is not presented here.

  16. Computational models of an inductive power transfer system for electric vehicle battery charge

    NASA Astrophysics Data System (ADS)

    Anele, A. O.; Hamam, Y.; Chassagne, L.; Linares, J.; Alayli, Y.; Djouani, K.

    2015-09-01

    One of the issues to be solved for electric vehicles (EVs) to become a success is the technical solution of their charging systems. In this paper, computational models of an inductive power transfer (IPT) system for EV battery charging are presented. Based on the fundamental principles behind IPT systems, 3 kW single-phase and 22 kW three-phase IPT systems for the Renault ZOE are designed in MATLAB/Simulink. The results obtained, based on the technical specifications of the lithium-ion battery and charger type of the Renault ZOE, show that the models are able to provide the total voltage required by the battery. Also, considering the charging time of each IPT model, they are capable of delivering the electricity needed to power the ZOE. In conclusion, this study shows that the designed computational IPT models may be employed as a supporting framework for effectively powering any viable EV.

  17. The Meaning and Computation of Causal Power: Comment on Cheng (1997) and Novick and Cheng (2004)

    PubMed Central

    Luhmann, Christian C.; Ahn, Woo-kyoung

    2009-01-01

    D. Hume (1739/1987) argued that causality is not observable. P. W. Cheng (1997) claimed to present “a theoretical solution to the problem of causal induction first posed by Hume more than two and a half centuries ago” (p. 398) in the form of the power PC theory (L. R. Novick & P. W. Cheng, 2004). This theory claims that people's goal in causal induction is to estimate causal powers from observable covariation and outlines how this can be done in specific conditions. The authors first demonstrate that if the necessary assumptions were ever met, causal powers would be self-evident to a reasoner—they are either 0 or 1—making the theory unnecessary. The authors further argue that the assumptions the power PC theory requires to compute causal power are unobtainable in the real world and, furthermore, people are aware that requisite assumptions are violated. Therefore, the authors argue that people do not attempt to compute causal power. PMID:16060763

  18. System and method for controlling power consumption in a computer system based on user satisfaction

    SciTech Connect

    Yang, Lei; Dick, Robert P; Chen, Xi; Memik, Gokhan; Dinda, Peter A; Shy, Alex; Ozisikyilmaz, Berkin; Mallik, Arindam; Choudhary, Alok

    2014-04-22

    Systems and methods for controlling power consumption in a computer system. For each of a plurality of interactive applications, the method changes a frequency at which a processor of the computer system runs, receives an indication of user satisfaction, determines a relationship between the changed frequency and the user satisfaction of the interactive application, and stores the determined relationship information. The determined relationship can distinguish between different users and different interactive applications. A frequency may be selected from the discrete frequencies at which the processor of the computer system runs based on the determined relationship information for a particular user and a particular interactive application running on the processor of the computer system. The processor may be adapted to run at the selected frequency.

  19. Power and Performance Management in Nonlinear Virtualized Computing Systems via Predictive Control.

    PubMed

    Wen, Chengjian; Mu, Yifen

    2015-01-01

    The problem of power and performance management attracts growing research interest in both academic and industrial fields. Virtualization, as an advanced technology for conserving energy, has become the basic architecture for most data centers. Accordingly, more sophisticated and finer control is desired in virtualized computing systems, where multiple types of control actions exist along with time-delay effects, which makes the problem complicated to formulate and solve. Furthermore, because of improvements in chips and reductions in idle power, power consumption in modern machines shows significant nonlinearity, making linear power models (commonly adopted in previous work) no longer suitable. To deal with this, we build a discrete system state model in which all control actions and time-delay effects are captured by state transitions, and performance and power can be defined on each state. We then design a predictive controller through which a quadratic cost function integrating performance and power can be dynamically optimized. Experimental results show the effectiveness of the controller. By choosing a moderate weight, a good balance can be achieved between performance and power: 99.76% of requirements can be met, and power consumption can be reduced by 33% compared to the case with an open-loop controller. PMID:26225769
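    As a toy illustration of the idea (not the authors' model), the sketch below picks a processor frequency state by minimizing a quadratic cost that weights performance deviation against a nonlinear power draw. Every number here is invented:

    ```python
    # Hypothetical states: CPU frequency (GHz) -> power draw (W) and
    # expected request latency (ms); power grows nonlinearly with frequency.
    states = {
        1.0: {"power": 20.0, "latency": 40.0},
        2.0: {"power": 45.0, "latency": 22.0},
        3.0: {"power": 90.0, "latency": 15.0},
    }

    def predict_cost(state, weight, latency_target=15.0):
        """Quadratic cost combining performance deviation and power draw."""
        perf_err = state["latency"] - latency_target
        return weight * perf_err ** 2 + (1 - weight) * state["power"]

    def choose_frequency(weight):
        """One predictive step: pick the frequency with minimal predicted cost."""
        return min(states, key=lambda f: predict_cost(states[f], weight))
    ```

    A performance-heavy weight selects the fastest state and a power-heavy weight the slowest; a moderate weight lands in between, which is the kind of balance the abstract reports.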

  20. [Restoration filtering based on projection power spectrum for single-photon emission computed tomography].

    PubMed

    Kubo, N

    1995-04-01

    To improve the quality of single-photon emission computed tomographic (SPECT) images, a restoration filter has been developed. This filter was designed according to practical "least squares filter" theory, which requires knowledge of the object power spectrum and the noise power spectrum. The object power spectrum is estimated from the power spectrum of a projection, when the high-frequency power spectrum of a projection is adequately approximated by a polynomial-exponential expression. A study of restoration with the filter based on a projection power spectrum was conducted and compared with the "Butterworth" filtering method (cut-off frequency of 0.15 cycles/pixel) and "Wiener" filtering (with a constant signal-to-noise power spectrum ratio). Normalized mean-squared errors (NMSE) for a phantom, two line sources located in a 99mTc-filled cylinder, were used. The NMSE of the "Butterworth" filter, the "Wiener" filter, and the filtering based on a projection power spectrum were 0.77, 0.83, and 0.76, respectively. Clinically, brain SPECT images restored with this new filter showed improved contrast. Thus, this filter may be useful in the diagnostic interpretation of SPECT images. PMID:7776546
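    The Butterworth reference filter and the NMSE figure of merit used in the comparison can be sketched as follows; the cut-off and order are illustrative, and this is not the paper's own restoration filter:

    ```python
    import math

    def butterworth(freq, cutoff=0.15, order=4):
        """Low-pass Butterworth gain at spatial frequency `freq` (cycles/pixel)."""
        return 1.0 / math.sqrt(1.0 + (freq / cutoff) ** (2 * order))

    def nmse(reference, estimate):
        """Normalized mean-squared error between two images (flattened)."""
        num = sum((r - e) ** 2 for r, e in zip(reference, estimate))
        den = sum(r ** 2 for r in reference)
        return num / den
    ```

    The gain is 1 at zero frequency and 1/sqrt(2) at the cut-off; lower NMSE against the known phantom means better restoration, which is how the three filters were ranked.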

  1. Power and Performance Management in Nonlinear Virtualized Computing Systems via Predictive Control

    PubMed Central

    Wen, Chengjian; Mu, Yifen

    2015-01-01

    The problem of power and performance management attracts growing research interest in both academic and industrial fields. Virtualization, as an advanced technology for conserving energy, has become the basic architecture for most data centers. Accordingly, more sophisticated and finer control is desired in virtualized computing systems, where multiple types of control actions exist along with time-delay effects, which makes the problem complicated to formulate and solve. Furthermore, because of improvements in chips and reductions in idle power, power consumption in modern machines shows significant nonlinearity, making linear power models (commonly adopted in previous work) no longer suitable. To deal with this, we build a discrete system state model in which all control actions and time-delay effects are captured by state transitions, and performance and power can be defined on each state. We then design a predictive controller through which a quadratic cost function integrating performance and power can be dynamically optimized. Experimental results show the effectiveness of the controller. By choosing a moderate weight, a good balance can be achieved between performance and power: 99.76% of requirements can be met, and power consumption can be reduced by 33% compared to the case with an open-loop controller. PMID:26225769

  2. Computation of the Mutual Inductance between Air-Cored Coils of Wireless Power Transformer

    NASA Astrophysics Data System (ADS)

    Anele, A. O.; Hamam, Y.; Chassagne, L.; Linares, J.; Alayli, Y.; Djouani, K.

    2015-09-01

    Wireless power transfer is a modern technology that allows the transfer of electric power between the air-cored coils of its transformer via high-frequency magnetic fields. However, due to coil separation distance and misalignment, maximum power transfer is not guaranteed. Based on a more efficient and general model available in the literature, rederived mathematical models for evaluating the mutual inductance between circular coils with and without lateral and angular misalignment are presented. Rather than being presented numerically, the computed results are plotted using MATLAB code. The results are compared with the published ones, and clarification regarding the errors made is presented. In conclusion, this study shows that the power transfer efficiency of the system can be improved if a higher-frequency alternating current is supplied to the primary coil, the reactive parts of the coils are compensated with capacitors, and ferrite cores are added to the coils.
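    For the simplest case considered above (coaxial circular loops, no misalignment), the mutual inductance follows from the Neumann double integral, which symmetry reduces to a 1-D integral. The sketch below evaluates it numerically with a midpoint rule; the loop radii and spacing are arbitrary examples, not values from the paper:

    ```python
    import math

    MU0 = 4e-7 * math.pi  # vacuum permeability, H/m

    def mutual_inductance(r1, r2, z, n=2000):
        """Neumann-formula mutual inductance of two coaxial circular loops
        of radii r1, r2 (m) separated axially by z (m), no misalignment.
        By symmetry one integration angle is fixed, leaving a 1-D integral."""
        dphi = 2.0 * math.pi / n
        total = 0.0
        for i in range(n):
            phi = (i + 0.5) * dphi  # midpoint rule avoids the phi = 0 point
            d = math.sqrt(r1 * r1 + r2 * r2
                          - 2.0 * r1 * r2 * math.cos(phi) + z * z)
            total += math.cos(phi) / d * dphi
        return MU0 * r1 * r2 / 2.0 * total
    ```

    The coupling falls off rapidly with separation, which is why the paper's remedies (higher drive frequency, capacitive compensation, ferrite cores) matter for transfer efficiency.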

  3. Dynamics of global supply chain and electric power networks: Models, pricing analysis, and computations

    NASA Astrophysics Data System (ADS)

    Matsypura, Dmytro

    In this dissertation, I develop a new theoretical framework for the modeling, pricing analysis, and computation of solutions to electric power supply chains with power generators, suppliers, transmission service providers, and the inclusion of consumer demands. In particular, I advocate the application of finite-dimensional variational inequality theory, projected dynamical systems theory, game theory, network theory, and other tools that have been recently proposed for the modeling and analysis of supply chain networks (cf. Nagurney (2006)) to electric power markets. This dissertation contributes to the extant literature on the modeling, analysis, and solution of supply chain networks, including global supply chains, in general, and electric power supply chains, in particular, in the following ways. It develops a theoretical framework for modeling, pricing analysis, and computation of electric power flows/transactions in electric power systems using the rationale for supply chain analysis. The models developed include both static and dynamic ones. The dissertation also adds a new dimension to the methodology of the theory of projected dynamical systems by proving that, irrespective of the speeds of adjustment, the equilibrium of the system remains the same. Finally, I include alternative fuel suppliers, along with their behavior into the supply chain modeling and analysis framework. This dissertation has strong practical implications. In an era in which technology and globalization, coupled with increasing risk and uncertainty, complicate electricity demand and supply within and between nations, the successful management of electric power systems and pricing become increasingly pressing topics with relevance not only for economic prosperity but also national security. This dissertation addresses such related topics by providing models, pricing tools, and algorithms for decentralized electric power supply chains. This dissertation is based heavily on the following

  4. Building ceramics with an addition of pulverized combustion fly ash from the thermal power plant Nováky

    NASA Astrophysics Data System (ADS)

    Húlan, Tomáš; Trník, Anton; Medved, Igor; Štubňa, Igor; Kaljuvee, Tiit

    2016-07-01

    Pulverized combustion fly ash (PFA) from the power plant Nováky (Slovakia) is analyzed for its potential use in the production of building ceramics. Three materials are used to prepare the mixtures: illite-rich clay (IRC), PFA, and IRC fired at 1000 °C (called grog). The mixtures contain 60 % IRC and 40 % of a non-plastic component (grog or PFA). Various amounts of the grog are replaced by PFA, and the effect of this substitution is studied. Thermal analyses (TGA, DTA, thermodilatometry, and dynamic thermomechanical analysis) are used to analyze the processes occurring during firing. The flexural strength and thermal conductivity are determined at room temperature after firing in the temperature interval from 800 to 1100 °C. The results show that an addition of PFA slightly decreases the flexural strength. The thermal conductivity and porosity are practically unaffected by the presence of PFA. Thus, PFA from the power plant Nováky is a suitable non-plastic component for manufacturing building ceramics.

  5. Optimal welding parameters for very high power ultrasonic additive manufacturing of smart structures with aluminum 6061 matrix

    NASA Astrophysics Data System (ADS)

    Wolcott, Paul J.; Hehr, Adam; Dapino, Marcelo J.

    2014-03-01

    Ultrasonic additive manufacturing (UAM) is a recent solid-state manufacturing process that combines additive joining of thin metal tapes with subtractive milling operations to generate near-net-shape metallic parts. Due to the minimal heating during the process, UAM is a proven method of embedding Ni-Ti, Fe-Ga, and PVDF to create active metal-matrix composites. Recently, advances in the UAM process utilizing 9 kW very high power (VHP) welding have improved bonding properties, enabling the joining of high-strength materials previously unweldable with 1 kW low-power UAM. Consequently, a design-of-experiments study was conducted to optimize welding conditions for aluminum 6061 components. This understanding is critical in the design of UAM parts containing smart materials. Build parameters, including weld force, weld speed, amplitude, and temperature, were varied based on a Taguchi experimental design matrix and tested for mechanical strength. Optimal weld parameters were identified with statistical methods including a generalized linear model for analysis of variance (ANOVA), mean effects plots, and interaction effects plots.

  6. The Effect of Emphasizing Mathematical Structure in the Acquisition of Whole Number Computation Skills (Addition and Subtraction) By Seven- and Eight-Year Olds: A Clinical Investigation.

    ERIC Educational Resources Information Center

    Uprichard, A. Edward; Collura, Carolyn

    This investigation sought to determine the effect of emphasizing mathematical structure in the acquisition of computational skills by seven- and eight-year-olds. The meaningful development-of-structure approach emphasized closure, commutativity, associativity, and the identity element of addition; the inverse relationship between addition and…

  7. Computer-aided optimization of grid design for high-power lead-acid batteries

    NASA Astrophysics Data System (ADS)

    Yamada, Keizo; Maeda, Ken-ichi; Sasaki, Kazuya; Hirasawa, Tokiyoshi

    Several high-power lead-acid batteries have been developed for automotive applications. A computer-aided optimization (CAO) technique has been used to obtain a low-resistance grid design. Unlike conventional computer simulation, the CAO technique does not require an unduly large number of designs to yield a good result. After introducing a pair of differential equations that are expected to be valid for the optimized design, the grid thickness is optimized by solving the boundary value problem of the coupled differential equations. When applied to the grids of JIS B-size batteries, this technique reduces the resistive potential drop in an electrode by 11-14%.

  8. Manual of phosphoric acid fuel cell power plant optimization model and computer program

    NASA Technical Reports Server (NTRS)

    Lu, C. Y.; Alkasab, K. A.

    1984-01-01

    An optimized cost and performance model for a phosphoric acid fuel cell (PAFC) power plant system was derived and developed into a modular FORTRAN computer code. Cost, energy, mass, and electrochemical analyses were combined to develop a mathematical model for optimizing the steam-to-methane ratio in the reformer and hydrogen utilization in the PAFC plates per stack. The nonlinear programming code COMPUTE was used to solve this model; the method of mixed penalty functions combined with Hooke and Jeeves pattern search was chosen for this optimization problem.
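    The derivative-free search used here can be illustrated with a minimal sketch. This is a simplified coordinate-search variant in the Hooke and Jeeves family (exploratory moves with step shrinking, without the pattern-move acceleration), applied to a made-up smooth objective rather than the fuel-cell plant model:

    ```python
    def pattern_search(f, x, step=0.5, shrink=0.5, tol=1e-6):
        """Derivative-free minimization: try +/- step moves along each
        coordinate, keep improvements, and shrink the step when none help."""
        fx = f(x)
        while step > tol:
            improved = False
            for i in range(len(x)):
                for d in (step, -step):
                    y = list(x)
                    y[i] += d
                    fy = f(y)
                    if fy < fx:
                        x, fx, improved = y, fy, True
                        break
            if not improved:
                step *= shrink
        return x, fx

    # Made-up quadratic objective standing in for the plant cost model.
    best, cost = pattern_search(lambda v: (v[0] - 1.0) ** 2 + (v[1] + 2.0) ** 2,
                                [0.0, 0.0])
    ```

    In the paper's setting the penalty-function terms fold the constraints into the objective, so the same kind of unconstrained search applies.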

  9. A High Performance Computing Network and System Simulator for the Power Grid: NGNS^2

    SciTech Connect

    Villa, Oreste; Tumeo, Antonino; Ciraci, Selim; Daily, Jeffrey A.; Fuller, Jason C.

    2012-11-11

    Designing and planning next-generation power grid systems composed of large power distribution networks, monitoring and control networks, autonomous generators, and consumers of power requires advanced simulation infrastructures. The objective is to predict and analyze in time the behavior of networks of systems under unexpected events such as loss of connectivity, malicious attacks, and power loss scenarios. This ultimately allows one to answer questions such as: “What could happen to the power grid if ...”. We want to be able to answer as many questions as possible in the shortest possible time for the largest possible systems. In this paper we present a new High Performance Computing (HPC) oriented simulation infrastructure named the Next Generation Network and System Simulator (NGNS2). NGNS2 allows the distribution of a single simulation among multiple computing elements by using MPI and OpenMP threads. NGNS2 provides the extensive configuration, fault tolerance, and load balancing capabilities needed to simulate large and dynamic systems for long periods of time. We show preliminary results of the simulator running approximately two million simulated entities on both a 64-node commodity InfiniBand cluster and a 48-core SMP workstation.

  10. Adaptive controller for dynamic power and performance management in the virtualized computing systems.

    PubMed

    Wen, Chengjian; Long, Xiang; Mu, Yifen

    2013-01-01

    The power and performance management problem in large-scale computing systems such as data centers has attracted considerable interest from both enterprises and academic researchers, as power saving has become more and more important in many fields. Because of the multiple objectives, multiple influential factors, and hierarchical structure of such systems, the problem is complex and hard. In this paper, the problem is investigated in a virtualized computing system. Specifically, it is formulated as a power optimization problem with constraints on performance. An adaptive controller based on a least-squares self-tuning regulator (LS-STR) is designed to track performance in the first step; in the second step, the resource allocation computed by the controller is applied in order to minimize power consumption. Simulations were designed to test the effectiveness of this method and to compare it with other controllers. The simulation results show that the adaptive controller is generally effective: it is applicable to different performance metrics, different workloads, and single and multiple workloads; it can track the performance requirement effectively and reduce power consumption significantly. PMID:23451241
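    The performance-tracking step of a self-tuning regulator rests on recursive least squares (RLS). A scalar sketch, with an invented plant gain standing in for the latency-vs-resource relationship (not the paper's actual model):

    ```python
    import random

    def rls_step(a_hat, p, u, y, lam=0.99):
        """One scalar recursive-least-squares update for the model y = a*u:
        a_hat is the running estimate, p its covariance, and lam a forgetting
        factor that lets the estimate track slow drift in the plant."""
        k = p * u / (lam + u * p * u)       # gain
        a_hat += k * (y - a_hat * u)        # correct by prediction error
        p = (1.0 - k * u) * p / lam         # covariance update
        return a_hat, p

    # Identify a hypothetical plant gain (e.g. latency per unit of load).
    random.seed(1)
    a_true, a_hat, p = 2.5, 0.0, 100.0
    for _ in range(200):
        u = random.uniform(0.5, 1.5)
        y = a_true * u + random.gauss(0.0, 0.05)
        a_hat, p = rls_step(a_hat, p, u, y)
    ```

    Once the plant parameters are identified online, the regulator can compute the resource allocation that meets the performance constraint, which is the controller's second step.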

  11. 78 FR 47011 - Software Unit Testing for Digital Computer Software Used in Safety Systems of Nuclear Power Plants

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-08-02

    ... identification as Draft Regulatory Guide, DG-1208 on August 22, 2012 (77 FR 50722) for a 60-day public comment... COMMISSION Software Unit Testing for Digital Computer Software Used in Safety Systems of Nuclear Power Plants..., ``Software Unit Testing for Digital Computer Software Used in Safety Systems of Nuclear Power Plants.''...

  12. 78 FR 47805 - Test Documentation for Digital Computer Software Used in Safety Systems of Nuclear Power Plants

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-08-06

    ... issued with a temporary identification as Draft Regulatory Guide, DG-1207 on August 22, 2012 (77 FR 50720... COMMISSION Test Documentation for Digital Computer Software Used in Safety Systems of Nuclear Power Plants..., ``Test Documentation for Digital Computer Software Used in Safety Systems of Nuclear Power Plants.''...

  13. Evolutionary computing for the design search and optimization of space vehicle power subsystems

    NASA Technical Reports Server (NTRS)

    Kordon, Mark; Klimeck, Gerhard; Hanks, David; Hua, Hook

    2004-01-01

    Evolutionary computing has proven to be a straightforward and robust approach for optimizing a wide range of difficult analysis and design problems. This paper discusses the application of these techniques to an existing space vehicle power subsystem resource and performance analysis simulation in a parallel processing environment. Our preliminary results demonstrate that this approach has the potential to improve the space system trade study process by allowing engineers to statistically weight subsystem goals of mass, cost, and performance and then automatically size power elements based on anticipated performance of the subsystem rather than on worst-case estimates.
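    A minimal evolutionary-sizing sketch in the same spirit: an elitist search sizes a single hypothetical design variable (solar array area) against a weighted mass/cost objective with a power-requirement penalty. The objective, weights, and parameters are all invented and bear no relation to the JPL simulation:

    ```python
    import random

    def fitness(area):
        """Hypothetical subsystem objective: minimize array mass + cost,
        with a penalty when the 500 W power requirement is not met."""
        power = 120.0 * area   # W produced per m^2 of array
        mass = 3.0 * area      # kg
        cost = 10.0 * area     # arbitrary cost units
        penalty = 10.0 * max(0.0, 500.0 - power)
        return mass + cost + penalty

    def evolve(pop_size=20, gens=80, sigma=0.5):
        """Elitist evolutionary search over the single design variable."""
        random.seed(3)
        pop = [random.uniform(0.0, 20.0) for _ in range(pop_size)]
        for _ in range(gens):
            pop.sort(key=fitness)
            parents = pop[: pop_size // 2]          # keep the better half
            children = [max(0.0, random.choice(parents)
                            + random.gauss(0.0, sigma))
                        for _ in range(pop_size - len(parents))]
            pop = parents + children
        return min(pop, key=fitness)

    best = evolve()  # optimum is near 500/120 = 4.17 m^2
    ```

    The penalty term plays the role of the statistically weighted goals in the abstract: shrinking the array below the requirement saves mass and cost but is dominated by the penalty, so the search settles just above the requirement.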

  14. Accelerating the Gauss-Seidel Power Flow Solver on a High Performance Reconfigurable Computer

    SciTech Connect

    Byun, Jong-Ho; Ravindran, Arun; Mukherjee, Arindam; Joshi, Bharat; Chassin, David P.

    2009-09-01

    The computationally intensive power flow problem determines the voltage magnitude and phase angle at each bus in a power system, for hundreds of thousands of buses, under balanced three-phase steady-state conditions. We report an FPGA acceleration of the Gauss-Seidel based power flow solver employed in the transmission module of the GridLAB-D power distribution simulator and analysis tool. The prototype hardware is implemented on an SGI Altix-RASC system equipped with a Xilinx Virtex II 6000 FPGA. Due to capacity limitations of the FPGA, only the bus voltage calculations of the power network are implemented in hardware, while the branch current calculations are implemented in software. For a 200,000-bus system, the bus voltage calculation on the FPGA achieves a 48x speed-up for PQ buses and a 62x speed-up for PV buses over an equivalent sequential software implementation. The average overall speed-up of the FPGA-CPU implementation with 100 iterations of the Gauss-Seidel power flow solver is 2.6x over a software implementation, with the branch calculations on the CPU accounting for 85% of the total execution time. The FPGA-CPU implementation also shows linear scaling with increasing size of the input power network.
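    The bus-voltage update being accelerated is, for PQ buses, V_i <- (S_i*/V_i* - sum_{j!=i} Y_ij V_j) / Y_ii. A tiny two-bus software sketch of the iteration (the line impedance and load are invented, and real solvers add acceleration factors and convergence checks):

    ```python
    def gauss_seidel(Y, S, V, slack=0, iters=100):
        """Gauss-Seidel power flow for PQ buses: Y is the bus admittance
        matrix, S the complex power injections (loads negative), V the
        initial complex voltages; bus `slack` keeps its fixed voltage."""
        n = len(V)
        for _ in range(iters):
            for i in range(n):
                if i == slack:
                    continue
                sigma = sum(Y[i][j] * V[j] for j in range(n) if j != i)
                V[i] = (S[i].conjugate() / V[i].conjugate() - sigma) / Y[i][i]
        return V

    # Two buses joined by one line: slack held at 1.0 pu, a PQ load at bus 1.
    y = 1.0 / (0.01 + 0.05j)              # line admittance (pu)
    Y = [[y, -y], [-y, y]]
    S = [0.0 + 0.0j, -0.5 - 0.2j]         # bus 1 draws 0.5 + j0.2 pu
    V = gauss_seidel(Y, S, [1.0 + 0.0j, 1.0 + 0.0j])
    ```

    Each bus update depends only on Y, S, and the current voltage vector, which is what makes the per-bus arithmetic amenable to a hardware pipeline.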

  15. Analysis of contingency tables based on generalised median polish with power transformations and non-additive models.

    PubMed

    Klawonn, Frank; Jayaram, Balasubramaniam; Crull, Katja; Kukita, Akiko; Pessler, Frank

    2013-01-01

    Contingency tables are a very common basis for the investigation of the effects of different treatments or influences on a disease or the health state of patients. Many journals put a strong emphasis on p-values to support the validity of results. Therefore, even small contingency tables are analysed by techniques like the t-test or ANOVA. Both of these concepts are based on normality assumptions for the underlying data. For larger data sets, this assumption is not so critical, since the underlying statistics are based on sums of (independent) random variables which can be assumed to follow approximately a normal distribution, at least for a larger number of summands. But for smaller data sets, the normality assumption can often not be justified. Robust methods like the Wilcoxon-Mann-Whitney U test or the Kruskal-Wallis test do not lead to statistically significant p-values for small samples. Median polish is a robust alternative for analysing contingency tables that provides much more insight than just a p-value: it explains the contingency table in terms of an overall effect, row and column effects, and residuals. The underlying model for median polish is an additive model, which is sometimes too restrictive. In this paper, we propose two related approaches to generalise median polish. A power transformation can be applied to the values in the table, so that better results for median polish can be achieved; we propose a graphical method for finding a suitable power transformation. If the original data should be preserved, one can apply other transformations - based on so-called additive generators - that have an inverse transformation. In this way, median polish can be applied to the original data, but based on a non-additive model. The non-linearity of such a model can also be visualised to better understand the joint effects of rows and columns in a contingency table. PMID:25825662
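    The additive decomposition underlying median polish (overall + row effect + column effect + residual) can be sketched directly. This is a generic implementation of classical median polish, not the authors' generalised version:

    ```python
    import statistics

    def median_polish(table, sweeps=10):
        """Decompose table[i][j] into overall + row[i] + col[j] + residual
        by alternately sweeping out row and column medians."""
        rows, cols = len(table), len(table[0])
        res = [list(r) for r in table]
        overall, row_eff, col_eff = 0.0, [0.0] * rows, [0.0] * cols
        for _ in range(sweeps):
            for i in range(rows):                       # sweep row medians
                m = statistics.median(res[i])
                row_eff[i] += m
                res[i] = [v - m for v in res[i]]
            m = statistics.median(row_eff)
            overall += m
            row_eff = [v - m for v in row_eff]
            for j in range(cols):                       # sweep column medians
                m = statistics.median(res[i][j] for i in range(rows))
                col_eff[j] += m
                for i in range(rows):
                    res[i][j] -= m
            m = statistics.median(col_eff)
            overall += m
            col_eff = [v - m for v in col_eff]
        return overall, row_eff, col_eff, res
    ```

    For a purely additive table the residuals vanish; systematic residual structure is what motivates the paper's power transformations and additive-generator-based generalisations.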

  16. Problem-Oriented Simulation Packages and Computational Infrastructure for Numerical Studies of Powerful Gyrotrons

    NASA Astrophysics Data System (ADS)

    Damyanova, M.; Sabchevski, S.; Zhelyazkov, I.; Vasileva, E.; Balabanova, E.; Dankov, P.; Malinov, P.

    2016-05-01

    Powerful gyrotrons are necessary as sources of strong microwaves for electron cyclotron resonance heating (ECRH) and electron cyclotron current drive (ECCD) of magnetically confined plasmas in various reactors (most notably ITER) for controlled thermonuclear fusion. Adequate physical models and efficient problem-oriented software packages are essential tools for numerical studies, analysis, optimization and computer-aided design (CAD) of such high-performance gyrotrons operating in a CW mode and delivering output power of the order of 1-2 MW. In this report we present the current status of our simulation tools (physical models, numerical codes, pre- and post-processing programs, etc.) as well as the computational infrastructure on which they are being developed, maintained and executed.

  17. POPCYCLE: a computer code for calculating nuclear and fossil plant levelized life-cycle power costs

    SciTech Connect

    Hardie, R.W.

    1982-02-01

    POPCYCLE, a computer code designed to calculate levelized life-cycle power costs for nuclear and fossil electrical generating plants, is described. Included are (1) derivations of the equations and a discussion of the methodology used by POPCYCLE, (2) a description of the input required by the code, (3) a listing of the input for a sample case, and (4) the output for a sample case.
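    A levelized life-cycle power cost is, in essence, present-valued lifetime costs divided by present-valued lifetime generation. A generic sketch of that calculation (POPCYCLE's actual equations and inputs are more detailed; the cost categories here are simplified placeholders):

    ```python
    def levelized_cost(capital, annual_om, annual_fuel, annual_mwh,
                       rate=0.08, years=30):
        """Levelized life-cycle power cost in $/MWh: discounted lifetime
        costs divided by discounted lifetime generation."""
        pv_cost = float(capital)      # capital is spent up front
        pv_energy = 0.0
        for t in range(1, years + 1):
            disc = (1.0 + rate) ** -t
            pv_cost += (annual_om + annual_fuel) * disc
            pv_energy += annual_mwh * disc
        return pv_cost / pv_energy
    ```

    Because the capital outlay is not discounted while the generation is, a higher discount rate raises the levelized cost of capital-heavy plants, which is central to nuclear-versus-fossil comparisons.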

  18. Computer study of emergency shutdowns of a 60-kilowatt reactor Brayton space power system

    NASA Technical Reports Server (NTRS)

    Tew, R. C.; Jefferies, K. S.

    1974-01-01

    A digital computer study of emergency shutdowns of a 60-kWe reactor Brayton power system was conducted. Malfunctions considered were (1) loss of reactor coolant flow, (2) loss of Brayton system gas flow, (3) turbine overspeed, and (4) a reactivity insertion error. Loss of reactor coolant flow was the most serious malfunction for the reactor. Methods for moderating the reactor transients due to this malfunction are considered.

  19. A 10-kW SiC Inverter with A Novel Printed Metal Power Module With Integrated Cooling Using Additive Manufacturing

    SciTech Connect

    Chinthavali, Madhu Sudhan; Ayers, Curtis William; Campbell, Steven L; Wiles, Randy H; Ozpineci, Burak

    2014-01-01

    With efforts to reduce the cost, size, and thermal management requirements of the power electronics drivetrain in hybrid electric vehicles (HEVs) and plug-in hybrid electric vehicles (PHEVs), wide-bandgap semiconductors, including silicon carbide (SiC), have been identified as a possible partial solution. This paper focuses on the development of a 10-kW all-SiC inverter using a high-power-density, integrated printed metal power module with integrated cooling, produced using additive manufacturing techniques. This is the first heat sink ever printed for a power electronics application. About 50% of the inverter was built using additive manufacturing techniques.

  20. Reliable ISR algorithms for a very-low-power approximate computer

    NASA Astrophysics Data System (ADS)

    Eaton, Ross S.; McBride, Jonah C.; Bates, Joseph

    2013-05-01

    The Office of Naval Research (ONR) is looking for methods to perform higher levels of sensor processing onboard UAVs to alleviate the need to transmit full motion video to ground stations over constrained data links. Charles River Analytics is particularly interested in performing intelligence, surveillance, and reconnaissance (ISR) tasks using UAV sensor feeds. Computing with approximate arithmetic can provide 10,000x improvement in size, weight, and power (SWAP) over desktop CPUs, thereby enabling ISR processing onboard small UAVs. Charles River and Singular Computing are teaming on an ONR program to develop these low-SWAP ISR capabilities using a small, low power, single chip machine, developed by Singular Computing, with many thousands of cores. Producing reliable results efficiently on massively parallel approximate machines requires adapting the core kernels of algorithms. We describe a feature-aided tracking algorithm adapted for the novel hardware architecture, which will be suitable for use onboard a UAV. Tests have shown the algorithm produces results equivalent to state-of-the-art traditional approaches while achieving a 6400x improvement in speed/power ratio.
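    The core adaptation, making an image-matching kernel tolerate low-precision arithmetic, can be illustrated with a crude fixed-point model. The bit width, template size, and tolerance below are invented, and real approximate hardware behaves differently (errors are not simple rounding):

    ```python
    import random

    def quantize(x, bits=8, scale=1.0):
        """Model reduced-precision arithmetic by keeping `bits` of fraction."""
        step = scale / (1 << bits)
        return round(x / step) * step

    # Sum-of-squared-differences match score, exact vs. approximate inputs.
    random.seed(2)
    template = [random.random() for _ in range(64)]
    patch = [t + random.gauss(0.0, 0.01) for t in template]

    exact = sum((a - b) ** 2 for a, b in zip(template, patch))
    approx = sum((quantize(a) - quantize(b)) ** 2
                 for a, b in zip(template, patch))
    ```

    The point of such kernels is that the ranking of candidate matches survives the per-element precision loss, which is what lets the tracker report results equivalent to the exact implementation.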

  1. Integrated Computing, Communication, and Distributed Control of Deregulated Electric Power Systems

    SciTech Connect

    Bajura, Richard; Feliachi, Ali

    2008-09-24

    Restructuring of the electricity market has affected all aspects of the power industry, from generation to transmission, distribution, and consumption. Transmission circuits in particular are often stressed beyond their stability limits, because environmental concerns and financial risk make it difficult to build new transmission lines. Deregulation has created the need for tighter control strategies that maintain reliability even through considerable structural changes, such as the loss of a large generating unit or a transmission line, and through changes in loading conditions due to continuously varying power consumption. Our research efforts under the DOE EPSCoR Grant focused on integrated computing, communication, and distributed control of deregulated electric power systems; this research is applicable to operating and controlling modern electric energy systems. The controls developed by APERC provide for more efficient, economical, reliable, and secure operation of these systems. Under this program, we developed distributed control algorithms suitable for large-scale, geographically dispersed power systems, along with economic tools to evaluate their effectiveness and impact on power markets. Progress was made in the development of distributed intelligent control agents for reliable and automated operation of integrated electric power systems; the methodologies employed combine information technology, control and communication, agent technology, and power systems engineering. In the event of scheduled load changes or unforeseen disturbances, the power system is expected to minimize the effects and costs of disturbances and to keep critical infrastructure operational.

  2. Stellar wind-magnetosphere interaction at exoplanets: computations of auroral radio powers

    NASA Astrophysics Data System (ADS)

    Nichols, J. D.; Milan, S. E.

    2016-09-01

    We present calculations of the auroral radio powers expected from exoplanets with magnetospheres driven by an Earth-like magnetospheric interaction with the solar wind. Specifically, we compute the twin-cell vortical ionospheric flows, currents, and resulting radio powers arising from a Dungey-cycle process driven by dayside and nightside magnetic reconnection, as a function of planetary orbital distance and magnetic field strength. We include saturation of the magnetospheric convection, as observed at the terrestrial magnetosphere, and we present power-law approximations for the convection potentials, radio powers, and spectral flux densities. We specifically consider a solar-age system and a young (1 Gyr) system. We show that the radio power increases with magnetic field strength for magnetospheres with saturated convection potential, and broadly decreases with increasing orbital distance. We show that the magnetospheric convection at hot Jupiters will be saturated, and thus unable to dissipate the full available incident Poynting flux, such that the magnetic Radiometric Bode's Law (RBL) substantially overestimates the radio powers for hot Jupiters. Our radio powers are ˜5-1300 TW for hot Jupiters with field strengths of 0.1-10 B_J orbiting a Sun-like star, while we find that competing effects yield essentially identical powers for hot Jupiters orbiting a young Sun-like star. However, particularly for planets with weaker magnetic fields, our powers at larger orbital distances are higher than given by the RBL, and many planetary configurations are expected to be detectable using SKA.

  3. Computer Assisted Fluid Power Instruction: A Comparison of Hands-On and Computer-Simulated Laboratory Experiences for Post-Secondary Students

    ERIC Educational Resources Information Center

    Wilson, Scott B.

    2005-01-01

    The primary purpose of this study was to examine the effectiveness of utilizing a combination of lecture and computer resources to train personnel to assume roles as hydraulic system technicians and specialists in the fluid power industry. This study compared computer simulated laboratory instruction to traditional hands-on laboratory instruction,…

  4. Computer-Aided Modeling and Analysis of Power Processing Systems (CAMAPPS), phase 1

    NASA Technical Reports Server (NTRS)

    Kim, S.; Lee, J.; Cho, B. H.; Lee, F. C.

    1986-01-01

    The large-signal behavior of a regulator depends largely on the type of power circuit topology and control. Thus, for maximum flexibility, it is best to develop models for each functional block as independent modules. A regulator can then be configured by collecting appropriate pre-defined modules for each functional block. To complete the component model generation for a comprehensive spacecraft power system, the following modules were developed: solar array switching unit and control, shunt regulators, and battery discharger. The capability of each module is demonstrated using a simplified Direct Energy Transfer (DET) system. Large-signal behaviors of solar array power systems were analyzed, including the stability of the solar array system operating points with a nonlinear load. The state-plane analysis illustrates trajectories of the system operating point under various conditions. Stability and transient responses of the system operating near the solar array's maximum power point are also analyzed. The solar array system modes of operation are described using the DET spacecraft power system, and the DET system is simulated for various operating conditions. Transfer of the software program CAMAPPS (Computer Aided Modeling and Analysis of Power Processing Systems) to NASA/GSFC (Goddard Space Flight Center) was accomplished.

  5. Measured energy savings and performance of power-managed personal computers and monitors

    SciTech Connect

    Nordman, B.; Piette, M.A.; Kinney, K.

    1996-08-01

    Personal computers and monitors are estimated to use 14 billion kWh/year of electricity, with power management potentially saving $600 million/year by the year 2000. The effort to capture these savings is led by the US Environmental Protection Agency's Energy Star program, which specifies a 30 W maximum demand for the computer and for the monitor when in a "sleep" or idle mode. In this paper the authors discuss measured energy use and estimated savings for power-managed (Energy Star compliant) PCs and monitors. They collected electricity use measurements of six power-managed PCs and monitors in their office and five from two other research projects. The devices are diverse in machine type, use patterns, and context. The analysis method estimates the time spent in each system operating mode (off, low-, and full-power) and combines these with real power measurements to derive hours of use per mode, energy use, and energy savings. Three schedules are explored in the "As-operated," "Standardized," and "Maximum" savings estimates. Energy savings are established by comparing the measurements to a baseline with power management disabled. As-operated energy savings for the eleven PCs and monitors ranged from zero to 75 kWh/year. Under the standard operating schedule (on 20% of nights and weekends), the savings are about 200 kWh/year. An audit of power management features and configurations for several dozen Energy Star machines found only 11% of CPUs fully enabled, while about two thirds of monitors were successfully power managed. The highest priority for greater power management savings is to enable monitors rather than CPUs, since monitors are generally easier to configure, less likely to interfere with system operation, and offer greater savings. The difficulty of properly configuring PCs and monitors is the largest current barrier to achieving the savings potential from power management.
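
    The mode-weighted method described above (hours per operating mode combined with real power measurements) can be sketched as follows; the wattages and schedules are illustrative assumptions, not measurements from the study.

```python
# Sketch of the mode-weighted energy estimate described above.
# Power draws and hours are illustrative assumptions, not measured
# values from the study.

def annual_energy_kwh(hours_per_year, watts):
    """Combine hours spent in each operating mode with real power
    measurements to estimate annual energy use (kWh/year)."""
    return sum(hours_per_year[mode] * watts[mode] for mode in watts) / 1000.0

# Hypothetical monitor: 8760 h/year split across off, low-, and full-power.
hours = {"off": 5000, "low": 2260, "full": 1500}
power = {"off": 0, "low": 25, "full": 80}  # watts (Energy Star sleep <= 30 W)

# Baseline: power management disabled, so "low" hours revert to full power.
baseline = annual_energy_kwh({"off": 5000, "low": 0, "full": 3760},
                             {"off": 0, "low": 25, "full": 80})
managed = annual_energy_kwh(hours, power)
savings = baseline - managed  # kWh/year saved by power management
```

    Comparing a managed schedule against the disabled-management baseline is exactly how the paper's savings figures are framed.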

  6. Towards Real-Time High Performance Computing For Power Grid Analysis

    SciTech Connect

    Hui, Peter SY; Lee, Barry; Chikkagoudar, Satish

    2012-11-16

    Real-time computing has traditionally been considered largely in the context of single-processor and embedded systems; indeed, the terms real-time computing, embedded systems, and control systems are often mentioned in closely related contexts. However, real-time computing in the context of multinode systems, specifically high-performance cluster-computing systems, remains relatively unexplored. Imposing real-time constraints on a parallel (cluster) computing environment introduces a variety of challenges with respect to the formal verification of the system's timing properties. In this paper, we give a motivating example that demonstrates the need for such a system (an application to estimate the electromechanical states of the power grid), and we introduce a formal method for verifying certain temporal properties within a system of parallel processes. We describe our work towards a full real-time implementation of the target application: our progress towards extracting a key mathematical kernel from the application, the formal process by which we analyze the intricate timing behavior of the processes on the cluster, and timing measurements taken on our test cluster to demonstrate the use of these concepts.

  7. High accuracy digital image correlation powered by GPU-based parallel computing

    NASA Astrophysics Data System (ADS)

    Zhang, Lingqi; Wang, Tianyi; Jiang, Zhenyu; Kemao, Qian; Liu, Yiping; Liu, Zejia; Tang, Liqun; Dong, Shoubin

    2015-06-01

    A sub-pixel digital image correlation (DIC) method with a path-independent displacement tracking strategy has been implemented on NVIDIA compute unified device architecture (CUDA) for graphics processing unit (GPU) devices. Powered by parallel computing technology, this parallel DIC (paDIC) method, combining an inverse compositional Gauss-Newton (IC-GN) algorithm for sub-pixel registration with a fast Fourier transform-based cross correlation (FFT-CC) algorithm for integer-pixel initial guess estimation, achieves a computation efficiency far superior to a DIC method running purely on the CPU. In experiments using simulated and real speckle images, the paDIC reaches computation speeds of 1.66×10^5 POI/s (points of interest per second) and 1.13×10^5 POI/s respectively, 57-76 times faster than its sequential counterpart, without sacrificing accuracy or precision. To the best of our knowledge, this is the fastest computation speed reported to date for a sub-pixel DIC method.
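
    The FFT-CC step for the integer-pixel initial guess can be sketched with NumPy as below; the sub-pixel IC-GN refinement and the GPU port are omitted, and the test images are synthetic.

```python
import numpy as np

# Sketch of FFT-based cross correlation (FFT-CC) for the integer-pixel
# initial guess used in DIC. The correlation is computed via the
# cross-correlation theorem; the peak location gives the displacement.

def integer_shift(ref, target):
    """Estimate the integer-pixel displacement of `target` relative
    to `ref` from the location of the cross-correlation peak."""
    corr = np.fft.ifft2(np.fft.fft2(ref).conj() * np.fft.fft2(target)).real
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    # Map peak indices to signed shifts (DFT wrap-around convention).
    return tuple(int(p) - s if p > s // 2 else int(p)
                 for p, s in zip(peak, corr.shape))

rng = np.random.default_rng(0)
ref = rng.random((64, 64))                          # synthetic speckle
target = np.roll(ref, shift=(3, -5), axis=(0, 1))   # known displacement
print(integer_shift(ref, target))                   # -> (3, -5)
```

    In the full method this integer estimate seeds the IC-GN iteration at each point of interest, which is what makes the tracking path-independent and embarrassingly parallel.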

  8. Unraveling the Fundamental Mechanisms of Solvent-Additive-Induced Optimization of Power Conversion Efficiencies in Organic Photovoltaic Devices.

    PubMed

    Herath, Nuradhika; Das, Sanjib; Zhu, Jiahua; Kumar, Rajeev; Chen, Jihua; Xiao, Kai; Gu, Gong; Browning, James F; Sumpter, Bobby G; Ivanov, Ilia N; Lauter, Valeria

    2016-08-10

    The realization of controllable morphologies of bulk heterojunctions (BHJs) in organic photovoltaics (OPVs) is one of the key factors enabling high-efficiency devices. We provide new insights into the fundamental mechanisms essential for the optimization of power conversion efficiencies (PCEs) by additive processing of the PBDTTT-CF:PC71BM system. We have studied the underlying mechanisms by monitoring the 3D nanostructural modifications in BHJs and correlating them with optical analysis and theoretical modeling of charge transport. Our results demonstrate profound effects of diiodooctane (DIO) on morphology and charge transport in the active layers. At small amounts of DIO (<3 vol %), DIO promotes the formation of a well-mixed donor-acceptor compact film and augments charge transfer and PCE. In contrast, at large amounts of DIO (>3 vol %), DIO facilitates a loosely packed mixed morphology with large clusters of PC71BM, leading to deterioration in PCE. Theoretical modeling of charge transport reveals that DIO increases the mobility of electrons and holes (the charge carriers) by affecting the energetic disorder and the electric-field dependence of the mobility. Our findings show the implications of phase separation and carrier transport pathways for achieving optimal device performance. PMID:27403964

  9. Influences of Bi2O3 additive on the microstructure, permeability, and power loss characteristics of Ni-Zn ferrites

    NASA Astrophysics Data System (ADS)

    Su, Hua; Tang, Xiaoli; Zhang, Huaiwu; Jia, Lijun; Zhong, Zhiyong

    2009-10-01

    Nickel-zinc ferrite materials containing different Bi2O3 concentrations have been prepared by the conventional ceramic technique. Micrographs have clearly revealed that the Bi2O3 additive promoted grain growth. When the Bi2O3 content reached 0.15 wt%, a dual microstructure with both small grains (<5 μm) and some extremely large grains (>50 μm) appeared. With higher Bi2O3 content, the samples exhibited a very large average grain size of more than 30 μm. The initial permeability gradually decreased with increasing Bi2O3 content. When the Bi2O3 content exceeded 0.15 wt%, the permeability gradually decreased with frequency due to the low-frequency resonance induced by the large grain size. Neither the sintering density nor the saturation magnetization was obviously influenced by the Bi2O3 content or microstructure of the samples. However, power loss (Pcv) characteristics were evidently influenced. At low flux density, the sample with 0.10 wt% Bi2O3, which was characterized by an average grain size of 3-4 μm and few closed pores, displayed the lowest Pcv, irrespective of frequency. When the flux density was equal to or greater than the critical value of 40 mT, the sample with 0.20 wt% Bi2O3, which had the largest average grain size, displayed the lowest Pcv.

  10. Development and Evaluation of the Diagnostic Power for a Computer-Based Two-Tier Assessment

    NASA Astrophysics Data System (ADS)

    Lin, Jing-Wen

    2016-06-01

    This study adopted a quasi-experimental design with follow-up interviews to develop a computer-based two-tier assessment (CBA) on the science topic of electric circuits and to evaluate the diagnostic power of the assessment. Three assessment formats (i.e., paper-and-pencil, static computer-based, and dynamic computer-based tests) using two-tier items were administered to Grade 4 (n = 90) and Grade 5 (n = 86) students, respectively. One-way ANCOVA was conducted to investigate whether the different assessment formats affected these students' posttest scores on both the phenomenon and reason tiers, and confidence ratings for answers were assessed to diagnose the nature of students' responses (i.e., scientific answer, guessing, alternative conceptions, or knowledge deficiency). Follow-up interviews were used to explore whether and how the various CBA representations influenced the responses of both grades. Results showed that the CBA, in particular the dynamic representation format, allowed students who lacked prior knowledge (Grade 4) to easily understand the question stems. The various CBA representations also potentially encouraged students who already had learning experience (Grade 5) to enhance the metacognitive judgment of their responses. Therefore, CBA can reduce students' use of test-taking strategies and provides better diagnostic power for a two-tier instrument than the traditional paper-based version.

  11. Use of Transition Modeling to Enable the Computation of Losses for Variable-Speed Power Turbine

    NASA Technical Reports Server (NTRS)

    Ameri, Ali A.

    2012-01-01

    To investigate the penalties associated with using a variable-speed power turbine (VSPT) in a rotorcraft capable of vertical takeoff and landing, various analysis tools are required. Such tools must model the flow accurately within the operating envelope of the VSPT. For power turbines, this envelope is characterized by low Reynolds numbers and a wide range of incidence angles, both positive and negative, due to the variation in shaft speed at relatively fixed corrected flows. The flow in the turbine passage is expected to be transitional and separated at high incidence. The turbulence model of Walters and Leylek was implemented in the NASA Glenn-HT code to enable more accurate analysis of such flows. Two-dimensional heat transfer predictions of flat-plate flow and two- and three-dimensional heat transfer predictions on a turbine blade were performed and are reported herein. Heat transfer computations were performed because heat transfer is a good marker for transition. The final goal is to compute the aerodynamic losses. Armed with the new transition model, total pressure losses for three-dimensional flow of an Energy Efficient Engine (E3) tip-section cascade were computed for a range of incidence angles, in anticipation of the experimental data. The results obtained form a loss bucket for the chosen blade.

  13. Sonochemical degradation of Coomassie Brilliant Blue: effect of frequency, power density, pH and various additives.

    PubMed

    Rayaroth, Manoj P; Aravind, Usha K; Aravindakumar, Charuvila T

    2015-01-01

    Coomassie Brilliant Blue (CBB), discharged mainly from textile industries, is an identified water pollutant. Ultrasound-initiated degradation of organic pollutants is one of the promising techniques among the Advanced Oxidation Processes (AOPs). Ultrasonic degradation of CBB under different experimental conditions has been investigated in the present work. The effects of frequency (200 kHz, 350 kHz, 620 kHz, and 1 MHz) and power density (3.5 W mL⁻¹, 9.8 W mL⁻¹, and 19.6 W mL⁻¹) on the degradation profile were evaluated; the optimum performance was obtained at 350 kHz and 19.6 W mL⁻¹. As with other sonolytic degradation of organic pollutants, maximum degradation of CBB was observed under acidic pH. The degradation profile indicated pseudo-first-order kinetics. The addition of ferrous ion (1×10⁻⁴ M), hydrogen peroxide (1×10⁻⁴ M), or peroxodisulphate (1×10⁻⁴ M) had a positive effect on the degradation efficiency. The influence of certain important additives such as SDS and humic acid on the sonolytic degradation of CBB was also investigated; both compounds suppress the degradation efficiency. LC-Q-TOF-MS was used to identify the stable intermediate products. Nearly 13 transformed products were identified during 10 min of sonication using the optimized operational parameters. The product profile demonstrated that most of the products are formed mainly by OH radical attack. On the basis of these results, a degradation mechanism is proposed. PMID:25222624
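
    A pseudo-first-order profile C(t) = C0·exp(-kt) yields the rate constant from the slope of ln(C0/C) versus t. A minimal sketch with synthetic data (not the study's measurements):

```python
import math

# Pseudo-first-order fit: the degradation profile follows
# C(t) = C0 * exp(-k t), so ln(C0/C) versus t is linear with slope k.
# Concentration values below are synthetic, not data from the study.

def rate_constant(times_min, concentrations):
    """Least-squares slope of ln(C0/C) versus t gives k (min^-1)."""
    c0 = concentrations[0]
    y = [math.log(c0 / c) for c in concentrations]
    n = len(times_min)
    t_mean = sum(times_min) / n
    y_mean = sum(y) / n
    num = sum((t - t_mean) * (yi - y_mean) for t, yi in zip(times_min, y))
    den = sum((t - t_mean) ** 2 for t in times_min)
    return num / den

# Synthetic profile with k = 0.10 min^-1 over a 10 min sonication window.
times = [0, 2, 4, 6, 8, 10]
conc = [10.0 * math.exp(-0.10 * t) for t in times]
k = rate_constant(times, conc)  # recovers ~0.10 min^-1
```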

  14. Power-law defect energy in a single-crystal gradient plasticity framework: a computational study

    NASA Astrophysics Data System (ADS)

    Bayerschen, E.; Böhlke, T.

    2016-03-01

    A single-crystal gradient plasticity model is presented that includes a power-law type defect energy depending on the gradient of an equivalent plastic strain. Numerical regularization for the case of vanishing gradients is employed in the finite element discretization of the theory. Three exemplary choices of the defect energy exponent are compared in finite element simulations of elastic-plastic tricrystals under tensile loading. The influence of the power-law exponent is discussed in relation to the distribution of gradients and with regard to size effects. In addition, an analytical solution is presented for the single-slip case, supporting the numerical results. The influence of the power-law exponent is contrasted with the influence of the normalization constant.
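
    A defect energy of the power-law type described might take the following form; the notation (normalization constant c0, exponent p, equivalent plastic strain gradient) is assumed here for illustration, not taken from the paper:

```latex
% Illustrative power-law defect energy (assumed notation):
% c_0 is a normalization constant, p > 1 the power-law exponent,
% and \gamma_{\mathrm{eq}} the equivalent plastic strain.
W_D \;=\; \frac{c_0}{p}\,\lVert \nabla \gamma_{\mathrm{eq}} \rVert^{p},
\qquad p > 1 .
```

    Varying p changes how strongly the stored energy penalizes sharp gradients, which is what drives the gradient distributions and size effects compared in the simulations.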

  16. Large-Scale Distributed Computational Fluid Dynamics on the Information Power Grid Using Globus

    NASA Technical Reports Server (NTRS)

    Barnard, Stephen; Biswas, Rupak; Saini, Subhash; VanderWijngaart, Robertus; Yarrow, Maurice; Zechtzer, Lou; Foster, Ian; Larsson, Olle

    1999-01-01

    This paper describes an experiment in which a large-scale scientific application developed for tightly-coupled parallel machines is adapted to the distributed execution environment of the Information Power Grid (IPG). A brief overview of the IPG and a description of the computational fluid dynamics (CFD) algorithm are given. The Globus metacomputing toolkit is used as the enabling device for the geographically-distributed computation. Modifications related to latency hiding and load balancing were required for an efficient implementation of the CFD application in the IPG environment. Performance results on a pair of SGI Origin 2000 machines indicate that real scientific applications can be effectively implemented on the IPG; however, a significant amount of continued effort is required to make such an environment useful and accessible to scientists and engineers.

  17. Computer program for calculating flow parameters and power requirements for cryogenic wind tunnels

    NASA Technical Reports Server (NTRS)

    Dress, D. A.

    1985-01-01

    A computer program has been written that performs the flow parameter calculations for cryogenic wind tunnels which use nitrogen as a test gas. The flow parameters calculated include static pressure, static temperature, compressibility factor, ratio of specific heats, dynamic viscosity, total and static density, velocity, dynamic pressure, mass-flow rate, and Reynolds number. Simplifying assumptions have been made so that the calculations of Reynolds number, as well as the other flow parameters, can be made on relatively small desktop digital computers. The program, which also includes various power calculations, has been developed to the point where it has become a very useful tool for the users and possible future designers of fan-driven continuous-flow cryogenic wind tunnels.
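
    One of the flow-parameter chains such a program evaluates (density from the gas law, viscosity, velocity, Reynolds number) can be sketched as below. The nitrogen constants are textbook values and the Sutherland coefficients are assumptions; the NASA program uses its own correlations and accounts for real-gas effects that this ideal-gas sketch ignores.

```python
# Sketch of a Reynolds-number calculation for gaseous nitrogen.
# Constants are textbook values, not coefficients from the NASA program,
# and real-gas (compressibility) corrections are omitted.

R_N2 = 296.8   # J/(kg K), specific gas constant of nitrogen
GAMMA = 1.4    # ratio of specific heats (ideal diatomic gas)

def sutherland_mu(t_k, mu_ref=1.663e-5, t_ref=273.15, s=107.0):
    """Dynamic viscosity of N2 (Pa s) via Sutherland's formula
    (reference values are assumed, approximate fits)."""
    return mu_ref * (t_k / t_ref) ** 1.5 * (t_ref + s) / (t_k + s)

def reynolds_per_meter(p_pa, t_k, mach):
    """Unit Reynolds number rho*V/mu at given pressure, temperature, Mach."""
    rho = p_pa / (R_N2 * t_k)           # ideal-gas density
    a = (GAMMA * R_N2 * t_k) ** 0.5     # speed of sound
    return rho * mach * a / sutherland_mu(t_k)

# Cooling from 300 K to 100 K at fixed pressure and Mach number raises
# the unit Reynolds number severalfold -- the point of cryogenic tunnels.
re_warm = reynolds_per_meter(101325.0, 300.0, 0.8)
re_cold = reynolds_per_meter(101325.0, 100.0, 0.8)
print(re_cold / re_warm)  # roughly a factor of 4-5 with these fits
```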

  18. Computer modeling of a regenerative solar-assisted Rankine power cycle

    NASA Technical Reports Server (NTRS)

    Lansing, F. L.

    1977-01-01

    A detailed interpretation is presented of the computer program that describes the performance of one of these cycles, namely a regenerative Rankine power cycle. Water is used as the working medium throughout the cycle. The solar energy, collected at a relatively low temperature level, represents 75 to 80% of the total heat demand and provides mainly the latent heat of vaporization. Another energy source at a high temperature level superheats the steam and supplements the solar energy share. A program summary and a numerical example showing the sequence of computations are included. The output from the model comprises line temperatures, component heat rates, specific steam consumption, percentage of solar energy contribution, and the overall thermal efficiency.

  19. Computer simulations of low noise states in a high-power crossed-field amplifier

    SciTech Connect

    Chernin, D.P.

    1996-11-01

    A large body of experimental data has been accumulated over the past 15 years or so on the remarkable ability of both magnetrons and CFAs to operate, under certain conditions, at noise levels comparable to those achieved in linear beam tubes. The physical origins of these low-noise states have been the subject of considerable speculation, fueled at least in part by results from computer simulation. While computer models have long been able to predict basic operating parameters like gain, efficiency, and peak power dissipation on electrode surfaces with reasonable accuracy, it is only within the past few years that any success could be reported on the simulation of noise. SAIC's MASK code, a 2.5-D particle-in-cell code, has been able to compute total integrated noise power to an accuracy of ± a few dB in a high-power CFA operating with a typical intra-pulse spectral noise density of ~47-50 dB/MHz. Under conditions that produced low noise (~60-100 dB/MHz) in laboratory experiments, the MASK code had been, until now, unable to reproduce similar results. The present paper reports the first successful production of a very low noise state in a CFA simulation using the MASK code. The onset of this low-noise state is quite sudden, appearing abruptly as the current is raised to a point near which the cathode operates as nearly emission limited. This behavior is similar to an experimentally observed transition between low-noise and high-noise operation in the SFD-266, a Varian [CPI] low-noise CFA. Some comments are made concerning the nature of the noise as observed in the simulation and in the laboratory.

  20. Simple and effective calculations about spectral power distributions of outdoor light sources for computer vision.

    PubMed

    Tian, Jiandong; Duan, Zhigang; Ren, Weihong; Han, Zhi; Tang, Yandong

    2016-04-01

    The spectral power distributions (SPDs) of outdoor light sources are not constant over time and atmospheric conditions, which causes the appearance of a scene to vary and gives rise to common natural illumination phenomena such as twilight, shadow, and haze/fog. Calculating the SPD of outdoor light sources at different times (or zenith angles) and under different atmospheric conditions is therefore of interest to physically-based vision. In this paper, for computer vision and its applications, we propose a feasible, simple, and effective SPD calculation method based on analyzing the transmittance functions of absorption and scattering along the path of solar radiation through the atmosphere in the visible spectrum. Compared with previous SPD calculation methods, our model has fewer parameters and is accurate enough to be applied directly in computer vision. It can be applied in computer vision tasks including spectral inverse calculation, lighting conversion, and shadowed image processing. The experimental results of the applications demonstrate that our calculation methods have practical value in computer vision, establishing a bridge between images and physical environmental information, e.g., time, location, and weather conditions. PMID:27137018
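
    A toy version of the transmittance-product idea: direct spectral irradiance as an extraterrestrial spectrum attenuated by absorption and scattering terms along the solar path. The plane-parallel air mass and the component optical depths below are placeholder assumptions, not the paper's model.

```python
import math

# Toy sketch: direct spectral irradiance attenuated by the product of
# transmittance terms along the solar path. Optical depths are made-up
# placeholders, not values from the paper's model.

def direct_spd(e0, optical_depths, zenith_deg):
    """E(lambda) = E0(lambda) * exp(-sum(tau_i) * m), with a simple
    plane-parallel air mass m = 1 / cos(zenith)."""
    m = 1.0 / math.cos(math.radians(zenith_deg))
    tau = sum(optical_depths)
    return [e * math.exp(-tau * m) for e in e0]

e0 = [1.6, 1.9, 1.8, 1.4]   # W m^-2 nm^-1 at a few sample wavelengths
taus = [0.10, 0.05, 0.02]   # e.g. Rayleigh, aerosol, ozone (placeholders)

noon = direct_spd(e0, taus, zenith_deg=30.0)
dusk = direct_spd(e0, taus, zenith_deg=75.0)
# Larger zenith angle -> longer path -> lower irradiance at every wavelength.
```

    Making the optical depths wavelength-dependent is what would reshape the spectrum with zenith angle (the reddening at twilight), rather than merely scaling it as here.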

  1. Assessment of computer codes for VVER-440/213-type nuclear power plants

    SciTech Connect

    Szabados, L.; Ezsol, Gy.; Perneczky

    1995-09-01

    Nuclear power plants of the VVER-440/213 type, designed in the former USSR, have a number of special features. As a consequence of these features, the transient behaviour of such a reactor system differs from that of a PWR system. To study the transient behaviour of the Hungarian Paks Nuclear Power Plant of the VVER-440/213 type, both analytical and experimental activities have been performed. The experimental basis of the research is the PMK-2 integral-type test facility, which is a scaled-down model of the plant. Experiments performed on this facility have been used to assess thermal-hydraulic system codes. Four tests were selected for the "Standard Problem Exercises" of the International Atomic Energy Agency. Results of the 4th Exercise, of high international interest, are presented in the paper, focusing on the essential findings of the assessment of computer codes.

  2. Evaluation of concentrated space solar arrays using computer modeling. [for spacecraft propulsion and power supplies

    NASA Technical Reports Server (NTRS)

    Rockey, D. E.

    1979-01-01

    A general approach is developed for predicting the power output of a concentrator enhanced photovoltaic space array. A ray trace routine determines the concentrator intensity arriving at each solar cell. An iterative calculation determines the cell's operating temperature since cell temperature and cell efficiency are functions of one another. The end result of the iterative calculation is that the individual cell's power output is determined as a function of temperature and intensity. Circuit output is predicted by combining the individual cell outputs using the single diode model of a solar cell. Concentrated array characteristics such as uniformity of intensity and operating temperature at various points across the array are examined using computer modeling techniques. An illustrative example is given showing how the output of an array can be enhanced using solar concentration techniques.
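
    The single-diode model mentioned above can be sketched minimally as follows, with series and shunt resistance omitted and the saturation current held fixed. In a real cell the saturation current rises steeply with temperature, which is what drives the efficiency loss the iterative temperature calculation captures; all parameter values here are illustrative.

```python
import math

# Minimal single-diode solar cell sketch (series/shunt resistance
# omitted, saturation current held fixed). Parameter values are
# illustrative, not taken from the paper.

Q = 1.602176634e-19   # electron charge, C
KB = 1.380649e-23     # Boltzmann constant, J/K

def cell_current(v, i_ph, i_0, t_k, n=1.0):
    """Single-diode equation: I = I_ph - I_0 (exp(qV / nkT) - 1)."""
    return i_ph - i_0 * (math.exp(Q * v / (n * KB * t_k)) - 1.0)

def max_power(i_ph, i_0, t_k, n=1.0, steps=2000):
    """Scan the I-V curve up to the open-circuit voltage for the
    maximum power point."""
    v_oc = n * KB * t_k / Q * math.log(i_ph / i_0 + 1.0)
    return max(v * cell_current(v, i_ph, i_0, t_k, n)
               for v in (v_oc * i / steps for i in range(steps)))

# Concentration raises I_ph (~ intensity) but the cell also runs hotter;
# the iterative temperature/efficiency loop in the abstract balances these.
p_1sun = max_power(i_ph=0.035, i_0=1e-10, t_k=300.0)
p_2sun_hot = max_power(i_ph=0.070, i_0=1e-10, t_k=330.0)
```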

  3. Computation of the power spectrum in chaotic ¼λφ⁴ inflation

    SciTech Connect

    Rojas, Clara; Villalba, Víctor M. E-mail: Victor.Villalba@monash.edu

    2012-01-01

    The phase-integral approximation devised by Fröman and Fröman is used for computing cosmological perturbations in the quartic chaotic inflationary model. The phase-integral formulas for the scalar power spectrum are obtained explicitly up to the fifth order of the phase-integral approximation. As in previous reports (Rojas 2007b, 2007c, and 2009), we point out that the accuracy of the phase-integral approximation compares favorably with the numerical results and with those obtained using the slow-roll and uniform approximation methods.

  4. Computer Aided Design of Depressed Collectors for High Power Electron Tubes

    NASA Astrophysics Data System (ADS)

    Singh, A.; Valfells, A.; Kolander, M.; Granatstein, V. L.

    2003-12-01

    We present an overview of techniques and computer codes developed by us for systematic design of depressed collectors with special reference to devices that use gyrating electron beams. These techniques facilitate achievement of high power levels in electron tubes. ProfilEM is an aid to controlling the trajectories of primary electrons. BSCAT provides for tracing the trajectories of backscattered electrons. Multiple generations of backscatter can be obtained, while keeping the number of rays to be tracked within manageable limits. We describe examples of applying these codes to the case of two-stage depressed collectors for a 1.5 MW 110 GHz gyrotron.

  5. Sensor system and powerful computer system for controlling a microrobot-based micromanipulation station

    NASA Astrophysics Data System (ADS)

    Fischer, Thomas; Santa, Karoly; Fatikow, Sergej

    1997-09-01

Mobile microrobots, which are capable of performing microscopic motions, have become a subject of great interest all over the world. They have the potential to be used for a variety of applications: in industry for the assembly of microsystems or for the testing of silicon chips; in medicine for handling biological cells, etc. A new model of an automated micromanipulation station, which includes piezoelectric microrobots, is now being built by an interdisciplinary research group at the University of Karlsruhe, Germany. This paper describes a sensor system and a powerful tailorable computer system for controlling the micromanipulation station.

  6. Computer program for thermodynamic analysis of open cycle multishaft power system with multiple reheat and intercool

    NASA Technical Reports Server (NTRS)

    Glassman, A. J.

    1974-01-01

A computer program to analyze power systems having any number of shafts up to a maximum of five is presented. On each shaft there can be as many as five compressors and five turbines, along with any specified number of intervening intercoolers and reheaters. A recuperator can be included, and turbine coolant flow can be accounted for. Any fuel consisting entirely of hydrogen and/or carbon can be used. The program is valid for maximum temperatures up to about 2000 K (3600 R). The system description, the analysis method, a detailed explanation of program input and output including an illustrative example, a dictionary of program variables, and the program listing are presented.
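As a rough illustration of the kind of cycle arithmetic such a program performs, the sketch below evaluates an ideal-gas Brayton cycle with equal-pressure-ratio compressor stages (intercooled back to the inlet temperature) and turbine stages (reheated back to the peak temperature). It is a hedged simplification of a multishaft analysis, and every temperature, pressure ratio, and component efficiency is an assumed illustrative value.

```python
def brayton_net_work(t_in=300.0, t_max=1400.0, pr_total=16.0,
                     n_compressors=2, n_turbines=2,
                     eta_c=0.85, eta_t=0.88, gamma=1.4, cp=1005.0):
    """Ideal-gas Brayton cycle with intercooling and reheat.

    Each compressor stage takes the same pressure ratio and is
    intercooled back to t_in; each turbine stage is reheated back to
    t_max. Returns (net specific work, heat input) in J/kg. All
    parameter values are illustrative assumptions.
    """
    k = (gamma - 1.0) / gamma
    pr_c = pr_total ** (1.0 / n_compressors)   # per-stage compressor PR
    pr_t = pr_total ** (1.0 / n_turbines)      # per-stage turbine PR
    # Actual compression and expansion work, with isentropic efficiencies
    w_c = n_compressors * cp * t_in * (pr_c ** k - 1.0) / eta_c
    w_t = n_turbines * cp * t_max * (1.0 - pr_t ** (-k)) * eta_t
    # Heat input: main combustor plus (n_turbines - 1) reheaters
    t_comp_out = t_in * (1.0 + (pr_c ** k - 1.0) / eta_c)
    t_turb_out = t_max * (1.0 - (1.0 - pr_t ** (-k)) * eta_t)
    q_in = cp * (t_max - t_comp_out) \
        + (n_turbines - 1) * cp * (t_max - t_turb_out)
    return w_t - w_c, q_in
```

In this toy model, splitting the compression into two intercooled stages reduces the compression work relative to a single stage, which is the basic motivation for the intercool/reheat options the program supports.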

  7. Controlling the phase locking of stochastic magnetic bits for ultra-low power computation.

    PubMed

    Mizrahi, Alice; Locatelli, Nicolas; Lebrun, Romain; Cros, Vincent; Fukushima, Akio; Kubota, Hitoshi; Yuasa, Shinji; Querlioz, Damien; Grollier, Julie

    2016-01-01

    When fabricating magnetic memories, one of the main challenges is to maintain the bit stability while downscaling. Indeed, for magnetic volumes of a few thousand nm(3), the energy barrier between magnetic configurations becomes comparable to the thermal energy at room temperature. Then, switches of the magnetization spontaneously occur. These volatile, superparamagnetic nanomagnets are generally considered useless. But what if we could use them as low power computational building blocks? Remarkably, they can oscillate without the need of any external dc drive, and despite their stochastic nature, they can beat in unison with an external periodic signal. Here we show that the phase locking of superparamagnetic tunnel junctions can be induced and suppressed by electrical noise injection. We develop a comprehensive model giving the conditions for synchronization, and predict that it can be achieved with a total energy cost lower than 10(-13) J. Our results open the path to ultra-low power computation based on the controlled synchronization of oscillators. PMID:27457034

  8. Controlling the phase locking of stochastic magnetic bits for ultra-low power computation

    PubMed Central

    Mizrahi, Alice; Locatelli, Nicolas; Lebrun, Romain; Cros, Vincent; Fukushima, Akio; Kubota, Hitoshi; Yuasa, Shinji; Querlioz, Damien; Grollier, Julie

    2016-01-01

    When fabricating magnetic memories, one of the main challenges is to maintain the bit stability while downscaling. Indeed, for magnetic volumes of a few thousand nm3, the energy barrier between magnetic configurations becomes comparable to the thermal energy at room temperature. Then, switches of the magnetization spontaneously occur. These volatile, superparamagnetic nanomagnets are generally considered useless. But what if we could use them as low power computational building blocks? Remarkably, they can oscillate without the need of any external dc drive, and despite their stochastic nature, they can beat in unison with an external periodic signal. Here we show that the phase locking of superparamagnetic tunnel junctions can be induced and suppressed by electrical noise injection. We develop a comprehensive model giving the conditions for synchronization, and predict that it can be achieved with a total energy cost lower than 10−13 J. Our results open the path to ultra-low power computation based on the controlled synchronization of oscillators. PMID:27457034

  9. Controlling the phase locking of stochastic magnetic bits for ultra-low power computation

    NASA Astrophysics Data System (ADS)

    Mizrahi, Alice; Locatelli, Nicolas; Lebrun, Romain; Cros, Vincent; Fukushima, Akio; Kubota, Hitoshi; Yuasa, Shinji; Querlioz, Damien; Grollier, Julie

    2016-07-01

    When fabricating magnetic memories, one of the main challenges is to maintain the bit stability while downscaling. Indeed, for magnetic volumes of a few thousand nm3, the energy barrier between magnetic configurations becomes comparable to the thermal energy at room temperature. Then, switches of the magnetization spontaneously occur. These volatile, superparamagnetic nanomagnets are generally considered useless. But what if we could use them as low power computational building blocks? Remarkably, they can oscillate without the need of any external dc drive, and despite their stochastic nature, they can beat in unison with an external periodic signal. Here we show that the phase locking of superparamagnetic tunnel junctions can be induced and suppressed by electrical noise injection. We develop a comprehensive model giving the conditions for synchronization, and predict that it can be achieved with a total energy cost lower than 10‑13 J. Our results open the path to ultra-low power computation based on the controlled synchronization of oscillators.
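The phase-locking mechanism described in these records can be caricatured with a two-state Néel-Arrhenius model in which a weak periodic signal tilts the energy barrier, so the bit preferentially occupies the state aligned with the drive. This is a hedged toy model, not the authors' simulation; the attempt frequency, barrier height, drive frequency, and time step are all arbitrary illustrative values.

```python
import math
import random

def simulate_bit(drive_amp, steps=200000, dt=1e-9, f0=1e9,
                 barrier=2.0, f_drive=5e5, seed=1):
    """Two-state superparamagnetic bit with Arrhenius switching.

    A periodic drive of amplitude drive_amp (in units of kT) modulates
    the escape barrier: leaving the state aligned with the drive is
    harder, leaving the anti-aligned state is easier. Returns the
    time-averaged correlation between magnetization and drive signal.
    All parameters are illustrative assumptions.
    """
    rng = random.Random(seed)
    state = 1          # magnetization: +1 or -1
    corr = 0.0
    for i in range(steps):
        s = math.sin(2.0 * math.pi * f_drive * i * dt)
        # escape rate out of the current state (barrier in units of kT)
        rate = f0 * math.exp(-(barrier + state * drive_amp * s))
        if rng.random() < rate * dt:
            state = -state
        corr += state * s
    return corr / steps
```

With the drive on, the magnetization beats in step with the periodic signal (positive correlation); with the drive off, the stochastic switching averages to roughly zero correlation, mirroring the induced/suppressed locking the abstract reports.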

  10. Direct Methanol Fuel Cell Power Supply For All-Day True Wireless Mobile Computing

    SciTech Connect

    Brian Wells

    2008-11-30

PolyFuel has developed state-of-the-art portable fuel cell technology for the portable computing market. A novel approach to passive water recycling within the MEA has led to significant system simplification and size reduction. Miniature stack technology with very high area utilization and minimalist seals has been developed. A highly integrated balance of plant with very low parasitic losses has been constructed around the new stack design. Demonstration prototype systems integrated with laptop computers have been shown in recent months to leading OEM computer manufacturers. PolyFuel intends to provide this technology to its customers as a reference design as a means of accelerating the commercialization of portable fuel cell technology. The primary goal of the project was to match the energy density of a commercial lithium-ion battery for laptop computers. PolyFuel made large strides toward this goal and has now demonstrated 270 Wh/liter, compared with lithium-ion energy densities of 300 Wh/liter. Further incremental improvements are envisioned, with additional energy density gains of 20-30% possible in each of the next two years given further research and development.

  11. Improved operating scenarios of the DIII-D tokamak as a result of the addition of UNIX computer systems

    SciTech Connect

    Henline, P.A.

    1995-10-01

The increased use of UNIX-based computer systems for machine control, data handling, and analysis has greatly enhanced the operating scenarios and operating efficiency of the DIII-D tokamak. This paper describes some of these UNIX systems and their specific uses, including the plasma control system, the electron cyclotron heating control system, the analysis of electron temperature and density measurements, and the general data acquisition system (which collects over 130 Mbytes of data). The speed and total capability of these systems have dramatically improved the ability to operate DIII-D. The improved operating scenarios include better plasma shape control, due to the more thorough MHD calculations done between shots, and the new ability to see the time dependence of profile data as it relates across different spatial locations in the tokamak. Other analyses that enable improved operation are also described.

  12. Stochastic optimal control methods for investigating the power of morphological computation.

    PubMed

    Rückert, Elmar A; Neumann, Gerhard

    2013-01-01

    One key idea behind morphological computation is that many difficulties of a control problem can be absorbed by the morphology of a robot. The performance of the controlled system naturally depends on the control architecture and on the morphology of the robot. Because of this strong coupling, most of the impressive applications in morphological computation typically apply minimalistic control architectures. Ideally, adapting the morphology of the plant and optimizing the control law interact so that finally, optimal physical properties of the system and optimal control laws emerge. As a first step toward this vision, we apply optimal control methods for investigating the power of morphological computation. We use a probabilistic optimal control method to acquire control laws, given the current morphology. We show that by changing the morphology of our robot, control problems can be simplified, resulting in optimal controllers with reduced complexity and higher performance. This concept is evaluated on a compliant four-link model of a humanoid robot, which has to keep balance in the presence of external pushes. PMID:23186345

  13. Computation and Experiment: A Powerful Combination to Understand and Predict Reactivities.

    PubMed

    Sperger, Theresa; Sanhueza, Italo A; Schoenebeck, Franziska

    2016-06-21

    Computational chemistry has become an established tool for the study of the origins of chemical phenomena and examination of molecular properties. Because of major advances in theory, hardware and software, calculations of molecular processes can nowadays be done with reasonable accuracy on a time-scale that is competitive or even faster than experiments. This overview will highlight broad applications of computational chemistry in the study of organic and organometallic reactivities, including catalytic (NHC-, Cu-, Pd-, Ni-catalyzed) and noncatalytic examples of relevance to organic synthesis. The selected examples showcase the ability of computational chemistry to rationalize and also predict reactivities of broad significance. A particular emphasis is placed on the synergistic interplay of computations and experiments. It is discussed how this approach allows one to (i) gain greater insight than the isolated techniques, (ii) inspire novel chemistry avenues, and (iii) assist in reaction development. Examples of successful rationalizations of reactivities are discussed, including the elucidation of mechanistic features (radical versus polar) and origins of stereoselectivity in NHC-catalyzed reactions as well as the rationalization of ligand effects on ligation states and selectivity in Pd- and Ni-catalyzed transformations. Beyond explaining, the synergistic interplay of computation and experiments is then discussed, showcasing the identification of the likely catalytically active species as a function of ligand, additive, and solvent in Pd-catalyzed cross-coupling reactions. These may vary between mono- or bisphosphine-bound or even anionic Pd complexes in polar media in the presence of coordinating additives. These fundamental studies also inspired avenues in catalysis via dinuclear Pd(I) cycles. Detailed mechanistic studies supporting the direct reactivity of Pd(I)-Pd(I) with aryl halides as well as applications of air-stable dinuclear Pd(I) catalysts are

  14. PowerGrid - A Computation Engine for Large-Scale Electric Networks

    SciTech Connect

    Chika Nwankpa

    2011-01-31

This Final Report discusses work on an approach for analog emulation of large-scale power systems using Analog Behavioral Models (ABMs) and analog devices in the PSpice design environment. ABMs are models based on sets of mathematical equations or transfer functions describing the behavior of a circuit element or an analog building block. The ABM concept provides an efficient strategy for feasibility analysis, quick insight into a developing top-down design methodology for large systems, and model verification prior to full structural design and implementation. Analog emulation in this report uses an electric circuit equivalent of the mathematical equations and scaled relationships that describe the states and behavior of a real power system to create its solution trajectory. Analog solutions are as quick as the responses of the circuit itself. Emulation, therefore, is the representation of the desired physical characteristics of a real-life object using an electric circuit equivalent. The circuit equivalent contains both the model of a real system and the method of solution. This report presents a methodology for the core computation through the development of ABMs for generators, transmission lines, and loads. Results of ABMs for 3-, 6-, and 14-bus power systems are presented and compared with industrial-grade numerical simulators for validation.

  15. (Advanced materials, robotics, and advanced computers for use in nuclear power plants)

    SciTech Connect

    White, J.D.

    1989-11-17

The aim of the IAEA Technical Committee Workshop was to provide an opportunity to exchange information on the status of advances in technologies such as improved materials, robotics, and advanced computers already used or expected to be used in the design of nuclear power plants, and to review possible applications of advanced technologies in future reactor designs. Papers were given in these areas by Belgium, France, Mexico, Canada, Russia, India, and the United States. Notably absent from this meeting were Japan, Germany, Italy, Spain, the United Kingdom, and the Scandinavian countries -- all of whom are working in the areas of interest to this meeting. Most of the workshop discussion, however, focused on advanced controls (including the human-machine interface and software development and testing) and electronic descriptions of power plants. Verification and validation of design was also a topic of considerable discussion. The traveler was surprised at the progress made in 3-D electronic images of nuclear power plants and the automatic updating of these images to reflect as-built conditions. Canadian plants and one Mexican plant have used photogrammetry to update electronic drawings automatically. The Canadians have also started attaching other electronic databases to the electronic drawings, including parts information and maintenance records. The traveler observed that the Advanced Controls Program is better balanced and more forward-looking than the other nuclear controls R&D activities described. The French participants made this observation in the meeting and expressed interest in collaborative work in this area.

  16. HTSFinder: Powerful Pipeline of DNA Signature Discovery by Parallel and Distributed Computing

    PubMed Central

    Karimi, Ramin; Hajdu, Andras

    2016-01-01

Comprehensive efforts toward low-cost sequencing in the past few years have led to the growth of complete genome databases. In parallel with this effort, fast and cost-effective methods and applications have been developed to accelerate sequence analysis. Identification is the very first step of this task. Due to the difficulties, high costs, and computational challenges of alignment-based approaches, an alternative universal identification method is highly desirable. As an alignment-free approach, DNA signatures provide new opportunities for the rapid identification of species. In this paper, we present an effective pipeline, HTSFinder (high-throughput signature finder), with a corresponding k-mer generator, GkmerG (genome k-mers generator). Using this pipeline, we determine the frequency of k-mers from the available complete genome databases for the detection of extensive DNA signatures in a reasonably short time. Our application can detect both unique and common signatures in arbitrarily selected target and nontarget databases. Hadoop and MapReduce, as parallel and distributed computing tools running on commodity hardware, are used in this pipeline. This approach brings the power of high-performance computing to ordinary desktop personal computers for discovering DNA signatures in large databases such as bacterial genomes. The considerable number of unique and common DNA signatures detected in the target database opens opportunities to improve the identification process not only for polymerase chain reaction and microarray assays but also for more complex scenarios such as metagenomics and next-generation sequencing analysis. PMID:26884678
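The unique-signature search at the core of such a pipeline reduces to counting k-mers in a target database and discarding any k-mer that also occurs in a nontarget database. A minimal local sketch of that map/reduce pattern (a plain-Python stand-in for the Hadoop/MapReduce implementation; the function names and toy sequences are hypothetical):

```python
from collections import Counter

def kmers(seq, k):
    """Map step: emit every overlapping k-mer of one sequence."""
    return (seq[i:i + k] for i in range(len(seq) - k + 1))

def unique_signatures(target_seqs, nontarget_seqs, k):
    """Reduce step: count k-mers per database, then keep only those
    present in the target but absent from the nontarget database."""
    target = Counter(km for s in target_seqs for km in kmers(s, k))
    nontarget = {km for s in nontarget_seqs for km in kmers(s, k)}
    return {km: n for km, n in target.items() if km not in nontarget}

# Toy example: "ACGT" appears in both databases, so it is excluded.
sigs = unique_signatures(["ACGTACGT"], ["TTACGTTT"], k=4)
```

In a distributed setting, the map step would run per genome on separate workers and the reduce step would merge the partial counts, but the set operation that defines a "unique signature" is the same.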

  17. HTSFinder: Powerful Pipeline of DNA Signature Discovery by Parallel and Distributed Computing.

    PubMed

    Karimi, Ramin; Hajdu, Andras

    2016-01-01

Comprehensive efforts toward low-cost sequencing in the past few years have led to the growth of complete genome databases. In parallel with this effort, fast and cost-effective methods and applications have been developed to accelerate sequence analysis. Identification is the very first step of this task. Due to the difficulties, high costs, and computational challenges of alignment-based approaches, an alternative universal identification method is highly desirable. As an alignment-free approach, DNA signatures provide new opportunities for the rapid identification of species. In this paper, we present an effective pipeline, HTSFinder (high-throughput signature finder), with a corresponding k-mer generator, GkmerG (genome k-mers generator). Using this pipeline, we determine the frequency of k-mers from the available complete genome databases for the detection of extensive DNA signatures in a reasonably short time. Our application can detect both unique and common signatures in arbitrarily selected target and nontarget databases. Hadoop and MapReduce, as parallel and distributed computing tools running on commodity hardware, are used in this pipeline. This approach brings the power of high-performance computing to ordinary desktop personal computers for discovering DNA signatures in large databases such as bacterial genomes. The considerable number of unique and common DNA signatures detected in the target database opens opportunities to improve the identification process not only for polymerase chain reaction and microarray assays but also for more complex scenarios such as metagenomics and next-generation sequencing analysis. PMID:26884678

  18. The clinical significance and management of patients with incomplete coronary angiography and the value of additional computed tomography coronary angiography.

    PubMed

    Pregowski, Jerzy; Kepka, Cezary; Kruk, Mariusz; Mintz, Gary S; Kalinczuk, Lukasz; Ciszewski, Michal; Kochanowski, Lukasz; Wolny, Rafal; Chmielak, Zbigniew; Jastrzębski, Jan; Klopotowski, Mariusz; Zalewska, Joanna; Demkow, Marcin; Karcz, Maciej; Witkowski, Adam

    2014-04-01

To assess the anatomical background and significance of incomplete invasive coronary angiography (ICA) and to evaluate the value of coronary computed tomography angiography (CTA) in this scenario. The current study is an analysis of a high-volume center's experience with a prospective registry of coronary CTA and ICA. The target population was identified through a review of the electronic database. We included consecutive patients referred for coronary CTA after an ICA that did not visualize at least one native coronary artery or by-pass graft. Between January 2009 and April 2013, 13,603 diagnostic ICAs were performed. There were 45 (0.3 %) patients referred for coronary CTA after incomplete ICA. Patients were divided into 3 groups: angina symptoms without previous coronary artery by-pass grafting (CABG) (n = 11,212), angina symptoms with previous CABG (n = 986), and patients prior to valvular surgery (n = 925). ICA did not identify by-pass grafts in 21 (2.2 %) patients and native arteries in 24 (0.2 %) cases. The explanations for an incomplete ICA included: 11 ostium anomalies, 2 left main spasms, 5 access site problems, 5 ascending aorta aneurysms, and 2 tortuous take-offs of a subclavian artery. However, in 20 (44 %) patients no specific reason for the incomplete ICA was identified. After coronary CTA, revascularization was performed in 11 (24 %) patients: 6 successful repeat ICAs with percutaneous intervention and 5 CABG. Incomplete ICA is rare but constitutes a significant clinical problem. Coronary CTA provides adequate clinical information in these patients. PMID:24623270

  19. RESIDUAL OXIDANTS REMOVAL FROM COASTAL POWER PLANT COOLING SYSTEM DISCHARGES: FIELD EVALUATION OF SO2 ADDITION SYSTEM

    EPA Science Inventory

    The report gives results of an evaluation of the performance of a dechlorination system that uses SO2 to remove residual oxidants from chlorinated sea water in a power plant cooling system. Samples of unchlorinated, chlorinated, and dechlorinated cooling water were obtained at Pa...

  20. StringFast: Fast Code to Compute CMB Power Spectra induced by Cosmic Strings

    NASA Astrophysics Data System (ADS)

    Foreman, Simon; Moss, Adam; Scott, Douglas

    2011-06-01

StringFast implements a method for efficient computation of the C_l spectra induced by a network of strings, which is fast enough to be used in Markov Chain Monte Carlo analyses of future data. This code allows the user to calculate TT, EE, and BB power spectra (scalar [for TT and EE], vector, and tensor modes) for "wiggly" cosmic strings. StringFast uses the output of the public code CMBACT. The properties of the strings are described by four parameters: Gμ, the dimensionless string tension; v, the rms transverse velocity (as a fraction of c); α, the "wiggliness"; and ξ, the comoving correlation length of the string network. It is written as a Fortran 90 module.

  1. Laser bar code applied in computer aided design of power fittings

    NASA Astrophysics Data System (ADS)

    Yang, Xiaohong; Yang, Fan

    2010-10-01

A computer-aided process planning system based on laser bar code technology is developed to automate and standardize the preparation of process papers. The system sorts fittings by analyzing their types, structures, dimensions, materials, and process characteristics, and groups and encodes fittings with similar process characteristics based on the theory of Group Technology (GT). The system produces standard technology procedures using the integrative-parts method and stores them in a process database. To work out the technology procedure for a fitting, the user only needs to scan the fitting's bar code with a laser code reader. The system can then produce the process paper using a decision-tree method and print the process cards automatically. The software has already been applied in several power stations and has been praised by its users.

  2. Computational analysis of the curvature distribution and power losses of metal strip in tension levellers

    NASA Astrophysics Data System (ADS)

    Steinwender, L.; Kainz, A.; Krimpelstätter, K.; Zeman, K.

    2010-06-01

Tension levelling is employed in strip processing lines to minimise residual stresses and improve strip flatness by inducing small elasto-plastic deformations. To improve the design of such machines, precise calculation models are essential to reliably predict tension losses due to plastic dissipation, power requirements of the driven bridle rolls (located upstream and downstream), reaction forces on the levelling rolls, and strains and stresses in the strip. FEM (Finite Element Method) simulations of the tension levelling process (based on Updated Lagrangian concepts) incur high computational costs due to the very fine meshes required and the severely non-linear characteristics of the contact, material, and geometry. In an evaluation of hierarchical models (models at different levels of detail), the reliability of both 3D and 2D modelling concepts (based on continuum and structural elements) was proved by extensive analyses and by consistency checks against measurement data from an industrial tension leveller. To exploit the potential for computational cost savings, a customised modelling approach based on the principle of virtual work was elaborated, which yields a drastic reduction in degrees of freedom compared with simulations using commercial FEM packages.

  3. Computation of inflationary cosmological perturbations in the power-law inflationary model using the phase-integral method

    SciTech Connect

    Rojas, Clara; Villalba, Victor M.

    2007-03-15

The phase-integral approximation devised by Fröman and Fröman is used for computing cosmological perturbations in the power-law inflationary model. The phase-integral formulas for the scalar and tensor power spectra are explicitly obtained up to the ninth order of the phase-integral approximation. We show that the phase-integral approximation exactly reproduces the shape of the power spectra for scalar and tensor perturbations as well as the spectral indices. We compare the accuracy of the phase-integral approximation with the results for the power spectrum obtained with the slow-roll and uniform-approximation methods.
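For context, power-law inflation is a standard benchmark precisely because it admits well-known exact results against which approximation schemes can be checked. In standard notation (these are the textbook results, not the paper's phase-integral formulas):

```latex
a(t) \propto t^{p} \quad (p > 1), \qquad
\mathcal{P}(k) \propto k^{\,n_s - 1}, \qquad
n_s - 1 = n_t = -\frac{2}{p - 1},
```

so both the scalar and tensor spectra are exact power laws with a common tilt that vanishes as $p \to \infty$ (the de Sitter limit).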

  4. Additive Manufacturing/Diagnostics via the High Frequency Induction Heating of Metal Powders: The Determination of the Power Transfer Factor for Fine Metallic Spheres

    SciTech Connect

    Rios, Orlando; Radhakrishnan, Balasubramaniam; Caravias, George; Holcomb, Matthew

    2015-03-11

Grid Logic Inc. is developing a method for sintering and melting fine metallic powders for additive manufacturing using spatially-compact, high-frequency magnetic fields called Micro-Induction Sintering (MIS). One of the challenges in advancing MIS technology for additive manufacturing is understanding the power transfer to the particles in a powder bed. This knowledge is important to achieving the efficient power transfer, control, and selective particle heating during the MIS process needed for commercialization of the technology. The project's work provided a rigorous physics-based model for induction heating of fine spherical particles as a function of frequency and particle size. This simulation improved upon Grid Logic's earlier models and provides guidance that will make the MIS technology more effective. The project model will be incorporated into the power control circuit of Grid Logic's MIS 3D printer product and its diagnostics technology to optimize the sintering process for part quality and energy efficiency.

  5. Computational Fluid Dynamics Ventilation Study for the Human Powered Centrifuge at the International Space Station

    NASA Technical Reports Server (NTRS)

    Son, Chang H.

    2012-01-01

The Human Powered Centrifuge (HPC) is a facility planned for installation on board the International Space Station (ISS) to enable crew exercise under artificial gravity conditions. The HPC equipment includes a "bicycle" for long-term exercise by a crewmember, which provides the power to rotate the HPC at a speed of 30 rpm. A crewmember exercising vigorously on the centrifuge generates about twice as much carbon dioxide as a crewmember under ordinary conditions. The goal of the study is to analyze the airflow and carbon dioxide distribution within the Pressurized Multipurpose Module (PMM) cabin when the HPC is operating. A fully unsteady formulation is used for the CFD-based modeling of airflow and CO2 transport, with the so-called sliding mesh concept: the HPC equipment and the adjacent Bay 4 cabin volume are considered in a rotating reference frame while the rest of the cabin volume is considered in a stationary reference frame. The rotating part of the computational domain also includes a human body model. Localized effects of carbon dioxide dispersion are examined, and the strong influence of the rotating HPC equipment on the detected CO2 distribution is discussed.

  6. SAMPSON Parallel Computation for Sensitivity Analysis of TEPCO's Fukushima Daiichi Nuclear Power Plant Accident

    NASA Astrophysics Data System (ADS)

    Pellegrini, M.; Bautista Gomez, L.; Maruyama, N.; Naitoh, M.; Matsuoka, S.; Cappello, F.

    2014-06-01

On March 11th 2011 a high-magnitude earthquake and consequent tsunami struck the east coast of Japan, resulting in a nuclear accident unprecedented in duration and extent. After scram was initiated at all power stations affected by the earthquake, diesel generators began operation as designed until the tsunami waves reached the power plants located on the east coast. This had a catastrophic impact on the availability of plant safety systems at TEPCO's Fukushima Daiichi, leading to a station black-out condition in units 1 to 3. In this article the accident scenario is studied with the SAMPSON code. SAMPSON is a severe accident computer code composed of hierarchical modules that account for the diverse physics involved in the various phases of the accident evolution. A preliminary parallelization analysis of the code was performed using state-of-the-art tools, and we demonstrate how this work can benefit nuclear safety analysis. This paper shows that inter-module parallelization can reduce the time to solution by more than 20%. Furthermore, the parallel code was applied to a sensitivity study of the alternative water injection into TEPCO's Fukushima Daiichi unit 3. Results show that the core melting progression is extremely sensitive to the amount and timing of water injection, resulting in a high probability of partial core melting for unit 3.

  7. Optimization of Acetylene Black Conductive Additive andPolyvinylidene Difluoride Composition for High Power RechargeableLithium-Ion Cells

    SciTech Connect

    Liu, G.; Zheng, H.; Battaglia, V.S.; Simens, A.S.; Minor, A.M.; Song, X.

    2007-07-01

Fundamental electrochemical methods were applied to study the effect of the acetylene black (AB) conductive additive and the polyvinylidene difluoride (PVDF) polymer binder on the performance of high-power rechargeable lithium-ion cells. A systematic study of the AB/PVDF long-range electronic conductivity at different weight ratios was performed using four-probe direct-current tests, and the results are reported. There is a wide range of AB/PVDF ratios that satisfy the long-range electronic conductivity requirement of the lithium-ion cathode electrode; however, a significant improvement in cell power performance is observed at small AB/PVDF composition ratios that are far from the long-range conductivity optimum of 1 to 1.25. Electrochemical impedance spectroscopy (EIS) tests indicate that the interfacial impedance decreases significantly with increasing binder content. The hybrid power pulse characterization results agree with the EIS tests and also show improvement for cells with a high PVDF content. The AB to PVDF composition plays a significant role in the interfacial resistance. We believe the higher binder contents lead to a more cohesive conductive carbon-particle network that results in better overall local electronic conductivity on the active material surface and hence reduced charge-transfer impedance.
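The four-probe DC measurement mentioned in the abstract converts a measured voltage-to-current ratio into a sheet resistance via a geometry factor, from which the film resistivity follows. A minimal sketch for the standard collinear thin-film case (sample much wider than the probe spacing, thickness much smaller); the numeric inputs are purely illustrative:

```python
import math

def four_probe_resistivity(voltage_v, current_a, thickness_m):
    """Resistivity of a thin film from a collinear four-point-probe
    measurement, using the standard thin-sample geometry factor
    pi / ln(2). Valid when the film is much thinner and much wider
    than the probe spacing. Returns resistivity in ohm-meters."""
    sheet_resistance = (math.pi / math.log(2.0)) * voltage_v / current_a
    return sheet_resistance * thickness_m
```

For example, 1 mV across the inner probes at 1 mA through the outer probes on a 100 um film gives a sheet resistance of about 4.53 ohm/sq and a resistivity of about 4.5e-4 ohm-m.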

  8. Microstructure and properties of the low-power-laser clad coatings on magnesium alloy with different amount of rare earth addition

    NASA Astrophysics Data System (ADS)

    Zhu, Rundong; Li, Zhiyong; Li, Xiaoxi; Sun, Qi

    2015-10-01

    Due to the low melting point and high evaporation rate of magnesium at elevated temperatures, high-power laser cladding on magnesium tends to cause subsidence and surface deterioration. A low-power laser reduces evaporation but brings problems such as reduced coating thickness, incomplete fusion, and unsatisfactory performance. Therefore, a low-power laser with carefully selected parameters was used in our research to obtain Al-Cu coatings with Y2O3 addition on AZ91D magnesium alloy. The addition of Y2O3 markedly increases the thickness of the coating and improves the melting efficiency. Furthermore, the effect of Y2O3 addition on the microstructure of laser-clad Al-Cu coatings was investigated by scanning electron microscopy. Energy-dispersive spectrometry (EDS) and X-ray diffraction (XRD) were used to examine the elemental and phase compositions of the coatings. The properties were investigated by micro-hardness testing, dry wear testing, and electrochemical corrosion. It was found that the addition of Y2O3 refined the microstructure. The micro-hardness, abrasion resistance, and corrosion resistance of the coatings were greatly improved compared with the magnesium matrix, especially for the Al-Cu coating with Y2O3 addition.

  9. Computer program for design and performance analysis of navigation-aid power systems. Program documentation. Volume 1: Software requirements document

    NASA Technical Reports Server (NTRS)

    Goltz, G.; Kaiser, L. M.; Weiner, H.

    1977-01-01

    A computer program has been developed for designing and analyzing the performance of solar array/battery power systems for the U.S. Coast Guard Navigational Aids. This program is called the Design Synthesis/Performance Analysis (DSPA) Computer Program. The basic function of the Design Synthesis portion of the DSPA program is to evaluate functional and economic criteria to provide specifications for viable solar array/battery power systems. The basic function of the Performance Analysis portion of the DSPA program is to simulate the operation of solar array/battery power systems under specific loads and environmental conditions. This document establishes the software requirements for the DSPA computer program, discusses the processing that occurs within the program, and defines the necessary interfaces for operation.

  10. A High Performance Computing Platform for Performing High-Volume Studies With Windows-based Power Grid Tools

    SciTech Connect

    Chen, Yousu; Huang, Zhenyu

    2014-08-31

    Serial Windows-based programs are widely used in power utilities. For applications that require high-volume simulations, the single-CPU runtime can be on the order of days or weeks. The lengthy runtime, along with the availability of low-cost hardware, is leading utilities to seriously consider High Performance Computing (HPC) techniques. However, the vast majority of HPC computers are still Linux-based, and many HPC applications have been custom developed external to the core simulation engine without consideration for ease of use. This has created a technical gap for applying HPC-based tools to today's power grid studies. To fill this gap and accelerate the acceptance and adoption of HPC for power grid applications, this paper presents a prototype of a generic HPC platform for running Windows-based power grid programs in a Linux-based HPC environment. The preliminary results show that the runtime can be reduced from weeks to hours, improving work efficiency.

  11. A method of obtaining signal components of residual carrier signal with their power content and computer simulation

    NASA Technical Reports Server (NTRS)

    Kantak, Anil V.

    1993-01-01

    A novel algorithm to obtain all signal components of a residual carrier signal with any number of channels is presented. The phase modulation type may be NRZ-L or split phase (Manchester). The algorithm also provides a simple way to obtain the power contents of the signal components. Steps to recognize the signal components that influence the carrier tracking loop and the data tracking loop at the receiver are given. A computer program for numerical computation is also provided.
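    For square-wave phase modulation (NRZ-L or Manchester symbols take values of plus or minus one), the fractional power in the residual carrier and in each channel's first-order data component follows from expanding the modulated phase. A sketch of that standard bookkeeping (the paper's algorithm also enumerates higher-order cross-modulation components, which are omitted here):

```python
import math

def component_powers(mod_indices_rad):
    """Fractional power in the residual carrier and in each channel's
    first-order data component for square-wave phase modulation.

    With NRZ-L or Manchester data d(t) = +/-1, cos(theta*d) = cos(theta)
    and sin(theta*d) = d*sin(theta), so the carrier retains
    prod_i cos^2(theta_i) of the total power and channel i's data term
    carries sin^2(theta_i) * prod_{j != i} cos^2(theta_j).
    """
    cos2 = [math.cos(t) ** 2 for t in mod_indices_rad]
    sin2 = [math.sin(t) ** 2 for t in mod_indices_rad]
    carrier = math.prod(cos2)
    data = []
    for i in range(len(mod_indices_rad)):
        p = sin2[i]
        for j, c in enumerate(cos2):
            if j != i:
                p *= c
        data.append(p)
    return carrier, data

# Two channels with modulation indices of 1.0 rad and 0.5 rad:
carrier, data = component_powers([1.0, 0.5])
# The remainder (1 - carrier - sum(data)) sits in cross-modulation terms.
```

    For a single channel the carrier and data fractions sum to one; with multiple channels the balance is carried by the cross-modulation components the paper's algorithm enumerates.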

  12. The role of additional computed tomography in the decision-making process on the secondary prevention in patients after systemic cerebral thrombolysis

    PubMed Central

    Sobolewski, Piotr; Kozera, Grzegorz; Szczuchniak, Wiktor; Nyka, Walenty M

    2016-01-01

    Introduction Patients with ischemic stroke undergoing intravenous (iv)-thrombolysis are routinely controlled with computed tomography on the second day to assess stroke evolution and hemorrhagic transformation (HT). However, the benefits of an additional computed tomography (aCT) performed over the next days after iv-thrombolysis have not been determined. Methods We retrospectively screened 287 Caucasian patients with ischemic stroke who were consecutively treated with iv-thrombolysis from 2008 to 2012. The results of computed tomography performed on the second (control computed tomography) and seventh (aCT) day after iv-thrombolysis were compared in 274 patients (95.5%); 13 subjects (4.5%), who died before the seventh day from admission were excluded from the analysis. Results aCTs revealed a higher incidence of HT than control computed tomographies (14.2% vs 6.6%; P=0.003). Patients with HT in aCT showed higher median of National Institutes of Health Stroke Scale score on admission than those without HT (13.0 vs 10.0; P=0.01) and higher presence of ischemic changes >1/3 middle cerebral artery territory (66.7% vs 35.2%; P<0.01). Correlations between presence of HT in aCT and National Institutes of Health Stroke Scale score on admission (rpbi 0.15; P<0.01), and the ischemic changes >1/3 middle cerebral artery (phi=0.03) existed, and the presence of HT in aCT was associated with 3-month mortality (phi=0.03). Conclusion aCT after iv-thrombolysis enables higher detection of HT, which is related to higher 3-month mortality. Thus, patients with severe middle cerebral artery infarction may benefit from aCT in the decision-making process on the secondary prophylaxis. PMID:26730196

  13. Computational investigations of low-emission burner facilities for char gas burning in a power boiler

    NASA Astrophysics Data System (ADS)

    Roslyakov, P. V.; Morozov, I. V.; Zaychenko, M. N.; Sidorkin, V. T.

    2016-04-01

    Various variants for the structure of low-emission burner facilities, which are meant for char gas burning in an operating TP-101 boiler of the Estonia power plant, are considered. The planned increase in volumes of shale reprocessing and, correspondingly, a rise in char gas volumes cause the necessity in their cocombustion. In this connection, there was a need to develop a burner facility with a given capacity, which yields effective char gas burning with the fulfillment of reliability and environmental requirements. For this purpose, the burner structure base was based on the staging burning of fuel with the gas recirculation. As a result of the preliminary analysis of possible structure variants, three types of early well-operated burner facilities were chosen: vortex burner with the supply of recirculation gases into the secondary air, vortex burner with the baffle supply of recirculation gases between flows of the primary and secondary air, and burner facility with the vortex pilot burner. Optimum structural characteristics and operation parameters were determined using numerical experiments. These experiments using ANSYS CFX bundled software of computational hydrodynamics were carried out with simulation of mixing, ignition, and burning of char gas. Numerical experiments determined the structural and operation parameters, which gave effective char gas burning and corresponded to required environmental standard on nitrogen oxide emission, for every type of the burner facility. The burner facility for char gas burning with the pilot diffusion burner in the central part was developed and made subject to computation results. Preliminary verification nature tests on the TP-101 boiler showed that the actual content of nitrogen oxides in burner flames of char gas did not exceed a claimed concentration of 150 ppm (200 mg/m3).

  14. Dorsal Column Steerability with Dual Parallel Leads using Dedicated Power Sources: A Computational Model

    PubMed Central

    Lee, Dongchul; Gillespie, Ewan; Bradley, Kerry

    2011-01-01

    In spinal cord stimulation (SCS), concordance of stimulation-induced paresthesia over painful body regions is a necessary condition for therapeutic efficacy. Since patient pain patterns can be unique, a common stimulation configuration is the placement of two leads in parallel in the dorsal epidural space. This construct provides flexibility in steering stimulation current mediolaterally over the dorsal column to achieve better pain-paresthesia overlap. Using a mathematical model with an accurate fiber diameter distribution, we studied the ability of dual parallel leads to steer stimulation between adjacent contacts using (1) a single-source system, and (2) a multi-source system with a dedicated current source for each contact. The volume conductor model of a low-thoracic spinal cord with epidurally positioned dual parallel (2 mm separation) percutaneous leads was first created, and the electric field was calculated using ANSYS, a finite element modeling tool. The activating function for 10 μm fibers was computed as the second difference of the extracellular potential along the nodes of Ranvier on the nerve fibers in the dorsal column. The volume of activation (VOA) and the central point of the VOA were computed using a predetermined threshold of the activating function. The model compared field-steering results for the single-source and dedicated-power-source systems on dual 8-contact stimulation leads. The model predicted that the multi-source system can target more central points of stimulation on the dorsal column than a single-source system (100 vs. 3) and that the mean mediolateral steering step is 0.02 mm for multi-source systems vs 1 mm for single-source systems, a 50-fold improvement. The ability to center stimulation regions in the dorsal column with high resolution may allow for better optimization of paresthesia-pain overlap in patients. PMID:21339729
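    The activating-function step described above can be sketched in isolation. The sketch below replaces the paper's finite-element field with an idealized point-source potential in a homogeneous medium; the conductivity, geometry, and cathodic current value are illustrative assumptions, not the study's parameters:

```python
import math

def activating_function(node_z_mm, electrode_xyz_mm, sigma=0.3, current_ma=-1.0):
    """Second spatial difference of the extracellular potential along a
    straight fiber whose nodes of Ranvier lie on the z-axis.

    Assumes an idealized point source in a homogeneous medium,
    V = I / (4*pi*sigma*r); a cathodic (negative) current makes the
    activating function positive, i.e. depolarizing, under the electrode.
    """
    ex, ey, ez = electrode_xyz_mm

    def v(z):
        r = math.sqrt(ex ** 2 + ey ** 2 + (z - ez) ** 2)
        return current_ma / (4 * math.pi * sigma * r)

    pot = [v(z) for z in node_z_mm]
    # f_n = V_{n-1} - 2*V_n + V_{n+1} at each interior node
    return [pot[n - 1] - 2 * pot[n] + pot[n + 1] for n in range(1, len(pot) - 1)]

# ~1 mm internodal spacing (roughly matching 10 um fibers); electrode 3 mm away:
nodes = [float(z) for z in range(-10, 11)]
af = activating_function(nodes, (0.0, 3.0, 0.0))
```

    For a cathodic source centered over a node, the largest positive second difference, and thus the likeliest site of excitation, falls at the node nearest the electrode.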

  15. A computer package for optimal multi-objective VAR planning in large scale power systems

    SciTech Connect

    Chiang, H.D.; Liu, C.C.; Chen, Y.L.; Hsiao, Y.T.

    1994-05-01

    This paper presents a simulated-annealing-based computer package for multi-objective VAR planning in large-scale power systems, SAMVAR. This computer package has three distinct features. First, the optimal VAR planning is reformulated as a constrained, multi-objective, non-differentiable optimization problem. The new formulation considers four different objective functions related to system investment, system operational efficiency, system security, and system service quality. The new formulation also takes into consideration load, operation, and contingency constraints. Second, it allows both the objective functions and the equality and inequality constraints to be non-differentiable, making the problem formulation more realistic. Third, the package employs a two-stage solution algorithm based on an extended simulated annealing technique and the ε-constraint method. The first stage of the solution algorithm uses an extended simulated annealing technique to find a global, non-inferior solution. The results obtained from the first stage provide a basis for planners to prioritize the objective functions such that a primary objective function is chosen and trade-off tolerances for the other objective functions are set. The primary objective function and the trade-off tolerances are then used to transform the constrained multi-objective optimization problem into a single-objective optimization problem with additional constraints by employing the ε-constraint method. The second stage uses the simulated annealing technique to find the global optimal solution. A salient feature of SAMVAR is that it allows planners to find an acceptable, global non-inferior solution for the VAR problem. Simulation results indicate that SAMVAR can handle the multi-objective VAR planning problem and meet the planner's requirements.
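    The second-stage transformation can be sketched generically: keep the primary objective, turn the remaining objectives into inequality constraints at the chosen trade-off tolerances, and search with simulated annealing. This is a toy illustration of the ε-constraint idea, not SAMVAR's extended algorithm:

```python
import math
import random

def epsilon_constraint_sa(f_primary, f_others, epsilons, neighbor, x0,
                          t0=1.0, cooling=0.995, steps=2000, seed=0):
    """Minimize the primary objective subject to f_i(x) <= eps_i for the
    remaining objectives (the epsilon-constraint method), using a basic
    simulated-annealing accept/reject rule; infeasible moves are rejected.
    """
    rng = random.Random(seed)

    def feasible(x):
        return all(f(x) <= e for f, e in zip(f_others, epsilons))

    x, t = x0, t0
    best, best_val = x0, f_primary(x0)
    for _ in range(steps):
        cand = neighbor(x, rng)
        if not feasible(cand):
            continue
        delta = f_primary(cand) - f_primary(x)
        if delta <= 0 or rng.random() < math.exp(-delta / t):
            x = cand
            if f_primary(x) < best_val:
                best, best_val = x, f_primary(x)
        t *= cooling
    return best, best_val

# Toy instance: minimize (x - 3)^2 subject to x^2 <= 4; the optimum is x = 2.
sol, val = epsilon_constraint_sa(
    lambda x: (x - 3.0) ** 2,
    [lambda x: x ** 2], [4.0],
    lambda x, rng: x + rng.uniform(-0.5, 0.5),
    x0=0.0)
```

    Rejecting infeasible moves is the simplest constraint-handling choice; penalty terms are a common alternative when the feasible region is hard to hit.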

  16. Additivity of factor effects in reading tasks is still a challenge for computational models: Reply to Ziegler, Perry, and Zorzi (2009).

    PubMed

    Besner, Derek; O'Malley, Shannon

    2009-01-01

    J. C. Ziegler, C. Perry, and M. Zorzi (2009) have claimed that their connectionist dual process model (CDP+) can simulate the data reported by S. O'Malley and D. Besner. Most centrally, they have claimed that the model simulates additive effects of stimulus quality and word frequency on the time to read aloud when words and nonwords are randomly intermixed. This work represents an important attempt given that computational models of reading processes have to date largely ignored the issue of whether it is possible to simulate additive effects. Despite CDP+'s success at capturing many other phenomena, it is clear that CDP+ fails to capture the full pattern seen with skilled readers in these experiments. PMID:19210105

  17. User's manual for the Shuttle Electric Power System analysis computer program (SEPS), volume 2 of program documentation

    NASA Technical Reports Server (NTRS)

    Bains, R. W.; Herwig, H. A.; Luedeman, J. K.; Torina, E. M.

    1974-01-01

    The Shuttle Electric Power System Analysis (SEPS) computer program, which performs detailed load analysis, including prediction of the energy demands and consumables requirements of the shuttle electric power system, along with parametric and special-case studies of that system, is described. The functional flow diagram of the SEPS program is presented along with data base requirements and formats, procedure and activity definitions, and mission timeline input formats. Distribution circuit input and fixed data requirements are included. Run procedures and deck setups are described.

  18. Digital computer study of nuclear reactor thermal transients during startup of 60-kWe Brayton power conversion system

    NASA Technical Reports Server (NTRS)

    Jefferies, K. S.; Tew, R. C.

    1974-01-01

    A digital computer study was made of reactor thermal transients during startup of the Brayton power conversion loop of a 60-kWe reactor Brayton power system. A startup procedure requiring the least Brayton system complication was tried first; this procedure caused violations of design limits on key reactor variables. Several modifications of this procedure were then found which caused no design limit violations. These modifications involved: (1) using a slower rate of increase in gas flow; (2) increasing the initial reactor power level to make the reactor respond faster; and (3) appropriate reactor control drum manipulation during the startup transient.

  19. Technical basis for environmental qualification of computer-based safety systems in nuclear power plants

    SciTech Connect

    Korsah, K.; Wood, R.T.; Tanaka, T.J.; Antonescu, C.E.

    1997-10-01

    This paper summarizes the results of research sponsored by the US Nuclear Regulatory Commission (NRC) to provide the technical basis for environmental qualification of computer-based safety equipment in nuclear power plants. This research was conducted by the Oak Ridge National Laboratory (ORNL) and Sandia National Laboratories (SNL). ORNL investigated potential failure modes and vulnerabilities of microprocessor-based technologies to environmental stressors, including electromagnetic/radio-frequency interference, temperature, humidity, and smoke exposure. An experimental digital safety channel (EDSC) was constructed for the tests. SNL performed smoke exposure tests on digital components and circuit boards to determine failure mechanisms and the effect of different packaging techniques on smoke susceptibility. These studies are expected to provide recommendations for environmental qualification of digital safety systems by addressing the following: (1) adequacy of the present preferred test methods for qualification of digital I and C systems; (2) preferred standards; (3) recommended stressors to be included in the qualification process during type testing; (4) resolution of need for accelerated aging in qualification testing for equipment that is to be located in mild environments; and (5) determination of an appropriate approach to address smoke in a qualification program.

  20. Requirements for Large Eddy Simulation Computations of Variable-Speed Power Turbine Flows

    NASA Technical Reports Server (NTRS)

    Ameri, Ali A.

    2016-01-01

    Variable-speed power turbines (VSPTs) operate at low Reynolds numbers and over a wide range of incidence angles. Transition, separation, and the relevant physics leading to them are important to VSPT flow. Higher fidelity tools such as large eddy simulation (LES) may be needed to resolve the flow features necessary for accurate predictive capability and design of such turbines. A survey conducted for this report explores the requirements for such computations. The survey is limited to the simulation of two-dimensional flow cases, and endwalls are not included. It suggests that the grid resolution necessary for this type of simulation to accurately represent the physics may be of the order of Delta(x)+ = 45, Delta(y)+ = 2, and Delta(z)+ = 17. Various subgrid-scale (SGS) models have been used, and except for the Smagorinsky model, all seem to perform well; in some instances the simulations worked well without SGS modeling. A method of specifying the inlet conditions, such as synthetic eddy modeling (SEM), is necessary to correctly represent the inlet conditions.
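    Wall-unit spacings translate to physical grid spacings via Delta = Delta+ · ν / u_τ. A small sketch using the resolutions quoted above (the viscosity and friction velocity are illustrative assumptions, not values from the report):

```python
def wall_unit_to_physical(delta_plus, nu, u_tau):
    """Convert a spacing in wall units, Delta+ = Delta * u_tau / nu,
    to a physical spacing: Delta = Delta+ * nu / u_tau."""
    return delta_plus * nu / u_tau

# Illustrative values only: air-like kinematic viscosity and a friction
# velocity assumed plausible for a low-Reynolds-number turbine passage.
nu = 1.5e-5    # kinematic viscosity, m^2/s
u_tau = 1.0    # friction velocity, m/s
dx, dy, dz = (wall_unit_to_physical(dp, nu, u_tau) for dp in (45.0, 2.0, 17.0))
# Streamwise dx is then 0.675 mm and wall-normal dy is 30 microns.
```

    The strong anisotropy (coarse streamwise, very fine wall-normal) is what drives the cell counts, and thus the cost, of such LES computations.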

  1. Program manual for the Shuttle Electric Power System analysis computer program (SEPS), volume 1 of program documentation

    NASA Technical Reports Server (NTRS)

    Bains, R. W.; Herwig, H. A.; Luedeman, J. K.; Torina, E. M.

    1974-01-01

    The Shuttle Electric Power System (SEPS) computer program is considered in terms of the program manual, programmer guide, and program utilization. The main objective is to provide the information necessary to interpret and use the routines comprising the SEPS program. Subroutine descriptions including the name, purpose, method, variable definitions, and logic flow are presented.

  2. 77 FR 50722 - Software Unit Testing for Digital Computer Software Used in Safety Systems of Nuclear Power Plants

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-08-22

    ...The U.S. Nuclear Regulatory Commission (NRC or the Commission) is issuing for public comment draft regulatory guide (DG), DG-1208, ``Software Unit Testing for Digital Computer Software used in Safety Systems of Nuclear Power Plants.'' The DG-1208 is proposed Revision 1 of RG 1.171, dated September 1997. This revision endorses, with clarifications, the enhanced consensus practices for testing......

  3. 77 FR 50720 - Test Documentation for Digital Computer Software Used in Safety Systems of Nuclear Power Plants

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-08-22

    ...The U.S. Nuclear Regulatory Commission (NRC or the Commission) is issuing for public comment draft regulatory guide (DG), DG-1207, ``Test Documentation for Digital Computer Software used in Safety Systems of Nuclear Power Plants.'' The DG-1207 is proposed Revision 1 of RG 1.170, dated September 1997. This revision endorses, with clarifications, the enhanced consensus practices for test......

  4. IMES-Ural: the system of the computer programs for operational analysis of power flow distribution using telemetric data

    SciTech Connect

    Bogdanov, V.A.; Bol'shchikov, A.A.; Zifferman, E.O.

    1981-02-01

    A system of computer programs was described which enabled the user to perform real-time calculation and analysis of the current flow in the 500 kV network of the Ural Regional Electric Power Plant for all possible variations of the network, based on teleinformation and correctable equivalent parameters of the 220 to 110 kV network.

  5. Computer Aided Design of Ka-Band Waveguide Power Combining Architectures for Interplanetary Spacecraft

    NASA Technical Reports Server (NTRS)

    Vaden, Karl R.

    2006-01-01

    Communication systems for future NASA interplanetary spacecraft require transmitter power ranging from several hundred watts to kilowatts. Several hybrid junctions are considered as elements within a corporate combining architecture for high power Ka-band space traveling-wave tube amplifiers (TWTAs). This report presents the simulated transmission characteristics of several hybrid junctions designed for a low loss, high power waveguide based power combiner.
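    The power budget of a corporate (binary-tree) combiner is easy to sketch: N amplifiers feed log2(N) combining stages, each with some insertion loss. The amplifier power and per-stage loss below are illustrative assumptions, not values from the report:

```python
import math

def corporate_combiner_output(p_in_w, n_amps, stage_loss_db):
    """Output power of a corporate (binary-tree) combiner of n_amps
    identical amplifiers with a fixed insertion loss per combining stage.
    n_amps must be a power of two."""
    stages = int(math.log2(n_amps))
    if 2 ** stages != n_amps:
        raise ValueError("corporate combiner needs a power-of-two amplifier count")
    per_stage_eff = 10.0 ** (-stage_loss_db / 10.0)
    return n_amps * p_in_w * per_stage_eff ** stages

# Illustrative: eight 150 W TWTAs combined through three 0.2 dB stages.
p_out = corporate_combiner_output(150.0, 8, 0.2)  # ~1045 W of the ideal 1200 W
```

    Because the loss compounds once per stage, low per-junction insertion loss matters more as the amplifier count grows, which is why the transmission characteristics of the hybrid junctions are the focus of the report.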

  6. CONC/11: A computer program for calculating the performance of dish-type solar thermal collectors and power systems

    NASA Technical Reports Server (NTRS)

    Jaffe, L. D.

    1984-01-01

    The CONC/11 computer program, designed for calculating the performance of dish-type solar thermal collectors and power systems, is discussed. This program is intended to aid the system or collector designer in evaluating the performance to be expected with possible design alternatives. From design or test data on the characteristics of the various subsystems, CONC/11 calculates the efficiencies of the collector and the overall power system as functions of the receiver temperature for a specified insolation. If desired, CONC/11 will also determine the receiver aperture and the receiver temperature that will provide the highest efficiencies at a given insolation. The program handles both simple and compound concentrators. CONC/11 is written in Athena Extended FORTRAN (similar to FORTRAN 77) to operate primarily in an interactive mode on a Sperry 1100/81 computer. It could also be used on many small computers. A user's manual is also provided for this program.
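    The efficiency-versus-receiver-temperature calculation can be sketched with a generic energy balance: useful power is what enters the receiver aperture minus radiative and convective losses from that aperture, normalized by the power incident on the concentrator. This is not CONC/11's actual model, and every number below is an illustrative assumption:

```python
import math

SIGMA = 5.670374419e-8  # Stefan-Boltzmann constant, W/(m^2*K^4)

def collector_efficiency(insolation, concentrator_area, optical_eff,
                         aperture_area, receiver_temp_k, emissivity,
                         ambient_k=300.0, conv_coeff=5.0):
    """Generic dish-collector energy balance: power delivered into the
    receiver aperture minus radiative and convective aperture losses,
    divided by the power incident on the concentrator."""
    p_in = insolation * concentrator_area * optical_eff
    p_rad = emissivity * SIGMA * aperture_area * (receiver_temp_k ** 4 - ambient_k ** 4)
    p_conv = conv_coeff * aperture_area * (receiver_temp_k - ambient_k)
    return (p_in - p_rad - p_conv) / (insolation * concentrator_area)

# Illustrative: an 11 m dish at 1000 W/m^2 with a 0.05 m^2 aperture.
area = math.pi * 5.5 ** 2
eta_1000k = collector_efficiency(1000.0, area, 0.90, 0.05, 1000.0, 0.9)
eta_1400k = collector_efficiency(1000.0, area, 0.90, 0.05, 1400.0, 0.9)
```

    Collector efficiency falls as receiver temperature rises while engine efficiency rises with it; trading these off is what the program's search over aperture and temperature amounts to.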

  7. CONC/11: a computer program for calculating the performance of dish-type solar thermal collectors and power systems

    SciTech Connect

    Jaffe, L. D.

    1984-02-15

    CONC/11 is a computer program designed for calculating the performance of dish-type solar thermal collectors and power systems. It is intended to aid the system or collector designer in evaluating the performance to be expected with possible design alternatives. From design or test data on the characteristics of the various subsystems, CONC/11 calculates the efficiencies of the collector and the overall power system as functions of the receiver temperature for a specified insolation. If desired, CONC/11 will also determine the receiver aperture and the receiver temperature that will provide the highest efficiencies at a given insolation. The program handles both simple and compound concentrators. CONC/11 is written in Athena Extended Fortran (similar to Fortran 77) to operate primarily in an interactive mode on a Sperry 1100/81 computer. It could also be used on many small computers.

  8. Light Water Reactor Sustainability Program: Computer-based procedure for field activities: results from three evaluations at nuclear power plants

    SciTech Connect

    Oxstrand, Johanna; Bly, Aaron; LeBlanc, Katya

    2014-09-01

    Nearly all activities that involve human interaction with the systems of a nuclear power plant are guided by procedures. The paper-based procedures (PBPs) currently used by industry have a demonstrated history of ensuring safety; however, improving procedure use could yield tremendous savings in increased efficiency and safety. One potential way to improve procedure-based activities is through the use of computer-based procedures (CBPs). Computer-based procedures provide the opportunity to incorporate context-driven job aids, such as drawings, photos, and just-in-time training, into the CBP system. One obvious advantage of this capability is reducing the time spent tracking down the applicable documentation. Additionally, human performance tools can be integrated into the CBP system in such a way that they help the worker focus on the task rather than the tools. Some tools can be completely incorporated into the CBP system, such as pre-job briefs, placekeeping, correct component verification, and peer checks. Other tools can be partly integrated in a fashion that reduces the time and labor required, such as concurrent and independent verification. Another benefit of CBPs compared to PBPs is dynamic procedure presentation. PBPs are static documents, which limits the degree to which the information presented can be tailored to the task and conditions when the procedure is executed. The CBP system could be configured to display only the relevant steps based on operating mode, plant status, and the task at hand. A dynamic presentation of the procedure (also known as context-sensitive procedures) will guide the user down the path of relevant steps based on the current conditions. This feature will reduce the user's workload and inherently reduce the risk of incorrectly marking a step as not applicable and the risk of incorrectly performing a step that should be marked as not applicable. As part of the Department of Energy's (DOE) Light Water Reactors Sustainability Program

  9. Assessment of the Annual Additional Effective Doses amongst Minamisoma Children during the Second Year after the Fukushima Daiichi Nuclear Power Plant Disaster

    PubMed Central

    Tsubokura, Masaharu; Kato, Shigeaki; Morita, Tomohiro; Nomura, Shuhei; Kami, Masahiro; Sakaihara, Kikugoro; Hanai, Tatsuo; Oikawa, Tomoyoshi; Kanazawa, Yukio

    2015-01-01

    An assessment of the external and internal radiation exposure levels, which includes calculation of effective doses from chronic radiation exposure and assessment of long-term radiation-related health risks, has become mandatory for residents living near the nuclear power plant in Fukushima, Japan. Data for all primary and secondary school children in Minamisoma who participated in both external and internal screening programs were employed to assess the annual additional effective dose acquired due to the Fukushima Daiichi nuclear power plant disaster. In total, 881 children took part in both internal and external radiation exposure screening programs between 1 April 2012 and 31 March 2013. The level of additional effective dose ranged from 0.025 to 3.49 mSv/year, with a median of 0.70 mSv/year. While 99.7% of the children (n = 878) showed no detectable internal contamination, 90.3% of the additional effective dose resulted from external radiation exposure. This finding is relatively consistent with the doses estimated by the United Nations Scientific Committee on the Effects of Atomic Radiation (UNSCEAR). The present study showed that the level of annual additional effective doses among children in Minamisoma has been low, even after inter-individual differences were taken into account. The dose from internal radiation exposure was negligible, presumably due to the success of contaminated food control. PMID:26053271
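    The dose bookkeeping behind these figures is a simple sum over the two exposure routes. A sketch reproducing the reported pattern (the component values are illustrative, chosen to match the reported ~90% external share of a 0.70 mSv/year total, not per-child data):

```python
def annual_additional_dose(external_msv, internal_msv):
    """Annual additional effective dose (mSv/year): sum of the external
    and internal exposure components."""
    return external_msv + internal_msv

# Illustrative components: external exposure supplies ~90% of a
# 0.70 mSv/year total, internal exposure the small remainder.
total = annual_additional_dose(0.63, 0.07)
external_share = 0.63 / total
```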

  10. Assessment of the Annual Additional Effective Doses amongst Minamisoma Children during the Second Year after the Fukushima Daiichi Nuclear Power Plant Disaster.

    PubMed

    Tsubokura, Masaharu; Kato, Shigeaki; Morita, Tomohiro; Nomura, Shuhei; Kami, Masahiro; Sakaihara, Kikugoro; Hanai, Tatsuo; Oikawa, Tomoyoshi; Kanazawa, Yukio

    2015-01-01

    An assessment of the external and internal radiation exposure levels, which includes calculation of effective doses from chronic radiation exposure and assessment of long-term radiation-related health risks, has become mandatory for residents living near the nuclear power plant in Fukushima, Japan. Data for all primary and secondary school children in Minamisoma who participated in both external and internal screening programs were employed to assess the annual additional effective dose acquired due to the Fukushima Daiichi nuclear power plant disaster. In total, 881 children took part in both internal and external radiation exposure screening programs between 1 April 2012 and 31 March 2013. The level of additional effective dose ranged from 0.025 to 3.49 mSv/year, with a median of 0.70 mSv/year. While 99.7% of the children (n = 878) showed no detectable internal contamination, 90.3% of the additional effective dose resulted from external radiation exposure. This finding is relatively consistent with the doses estimated by the United Nations Scientific Committee on the Effects of Atomic Radiation (UNSCEAR). The present study showed that the level of annual additional effective doses among children in Minamisoma has been low, even after inter-individual differences were taken into account. The dose from internal radiation exposure was negligible, presumably due to the success of contaminated food control. PMID:26053271

  11. Accelerating an Ordered-Subset Low-Dose X-Ray Cone Beam Computed Tomography Image Reconstruction with a Power Factor and Total Variation Minimization

    PubMed Central

    Huang, Hsuan-Ming; Hsiao, Ing-Tsung

    2016-01-01

    In recent years, there has been increased interest in low-dose X-ray cone beam computed tomography (CBCT) in many fields, including dentistry, guided radiotherapy, and small animal imaging. Despite reducing the radiation dose, low-dose CBCT has not gained widespread acceptance in routine clinical practice. In addition to performing more evaluation studies, a fast and high-quality reconstruction algorithm is required. In this work, we propose an iterative reconstruction method that accelerates ordered-subsets (OS) reconstruction using a power factor. Furthermore, we combine it with the total-variation (TV) minimization method. Both simulation and phantom studies were conducted to evaluate the performance of the proposed method. Results show that the proposed method can accelerate conventional OS methods, greatly increasing the convergence speed in early iterations. Moreover, applying TV minimization to the power-acceleration scheme can further improve the image quality while preserving the fast convergence rate. PMID:27073853
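    The overall shape of the algorithm, ordered-subsets updates whose relaxation is boosted by a power factor, followed by a TV-minimization step, can be sketched on a toy least-squares problem. The specific power-factor form, step sizes, and toy system below are illustrative assumptions, not the paper's exact scheme:

```python
def os_power_tv(A, b, n, subsets, iters=50, lam=0.5, power=1.5, tv_step=0.002):
    """Sketch of ordered-subsets (OS) least-squares reconstruction with a
    power-factor-boosted relaxation, followed by a 1-D total-variation (TV)
    subgradient step. A is a list of measurement rows, b the measurements,
    subsets a partition of the row indices."""
    x = [0.0] * n
    for k in range(1, iters + 1):
        relax = lam * (1.0 + 1.0 / k ** power)  # power-factor boost, decays with k
        for sub in subsets:
            for i in sub:  # Kaczmarz-style row update
                row = A[i]
                norm = sum(a * a for a in row) or 1.0
                resid = b[i] - sum(a * xj for a, xj in zip(row, x))
                for j in range(n):
                    x[j] += relax * row[j] * resid / norm
        # TV minimization: one subgradient step on sum_j |x[j+1] - x[j]|
        grads = [0.0] * n
        for j in range(n - 1):
            s = (x[j + 1] > x[j]) - (x[j + 1] < x[j])
            grads[j] -= s
            grads[j + 1] += s
        x = [xj - tv_step * g for xj, g in zip(x, grads)]
    return x

# Toy consistent system with true solution [1, 2, 3]:
A = [[1, 0, 0], [0, 1, 0], [0, 0, 1], [1, 1, 1]]
b = [1.0, 2.0, 3.0, 6.0]
recon = os_power_tv(A, b, 3, [[0, 1], [2, 3]])
```

    The boosted relaxation enlarges the early update steps (fast initial convergence) and decays toward a conventional OS step, while the small TV step regularizes between passes.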

  12. Reactivity effects in VVER-1000 of the third unit of the kalinin nuclear power plant at physical start-up. Computations in ShIPR intellectual code system with library of two-group cross sections generated by UNK code

    NASA Astrophysics Data System (ADS)

    Zizin, M. N.; Zimin, V. G.; Zizina, S. N.; Kryakvin, L. V.; Pitilimov, V. A.; Tereshonok, V. A.

    2010-12-01

    The ShIPR intellectual code system for mathematical simulation of nuclear reactors includes a set of computing modules implementing the preparation of macro cross sections on the basis of the two-group library of neutron-physics cross sections obtained for the SKETCH-N nodal code. This library is created by using the UNK code for 3D diffusion computation of first VVER-1000 fuel loadings. Computation of neutron fields in the ShIPR system is performed using the DP3 code in the two-group diffusion approximation in 3D triangular geometry. The efficiency of all groups of control rods for the first fuel loading of the third unit of the Kalinin Nuclear Power Plant is computed. The temperature, barometric, and density effects of reactivity as well as the reactivity coefficient due to the concentration of boric acid in the reactor were computed additionally. Results of computations are compared with the experiment.

  13. Reactivity effects in VVER-1000 of the third unit of the kalinin nuclear power plant at physical start-up. Computations in ShIPR intellectual code system with library of two-group cross sections generated by UNK code

    SciTech Connect

    Zizin, M. N.; Zimin, V. G.; Zizina, S. N. Kryakvin, L. V.; Pitilimov, V. A.; Tereshonok, V. A.

    2010-12-15

    The ShIPR intellectual code system for mathematical simulation of nuclear reactors includes a set of computing modules implementing the preparation of macro cross sections on the basis of the two-group library of neutron-physics cross sections obtained for the SKETCH-N nodal code. This library is created by using the UNK code for 3D diffusion computation of first VVER-1000 fuel loadings. Computation of neutron fields in the ShIPR system is performed using the DP3 code in the two-group diffusion approximation in 3D triangular geometry. The efficiency of all groups of control rods for the first fuel loading of the third unit of the Kalinin Nuclear Power Plant is computed. The temperature, barometric, and density effects of reactivity as well as the reactivity coefficient due to the concentration of boric acid in the reactor were computed additionally. Results of computations are compared with the experiment.

  14. Information Power Grid: Distributed High-Performance Computing and Large-Scale Data Management for Science and Engineering

    NASA Technical Reports Server (NTRS)

    Johnston, William E.; Gannon, Dennis; Nitzberg, Bill

    2000-01-01

    We use the term "Grid" to refer to distributed, high performance computing and data handling infrastructure that incorporates geographically and organizationally dispersed, heterogeneous resources that are persistent and supported. This infrastructure includes: (1) Tools for constructing collaborative, application oriented Problem Solving Environments / Frameworks (the primary user interfaces for Grids); (2) Programming environments, tools, and services providing various approaches for building applications that use aggregated computing and storage resources, and federated data sources; (3) Comprehensive and consistent set of location independent tools and services for accessing and managing dynamic collections of widely distributed resources: heterogeneous computing systems, storage systems, real-time data sources and instruments, human collaborators, and communications systems; (4) Operational infrastructure including management tools for distributed systems and distributed resources, user services, accounting and auditing, strong and location independent user authentication and authorization, and overall system security services. The vision for NASA's Information Power Grid - a computing and data Grid - is that it will provide significant new capabilities to scientists and engineers by facilitating routine construction of information based problem solving environments / frameworks. Such Grids will knit together widely distributed computing, data, instrument, and human resources into just-in-time systems that can address complex and large-scale computing and data analysis problems. 
Examples of these problems include: (1) Coupled, multidisciplinary simulations too large for single systems (e.g., multi-component NPSS turbomachine simulation); (2) Use of widely distributed, federated data archives (e.g., simultaneous access to meteorological, topological, aircraft performance, and flight path scheduling databases supporting a National Air Space Simulation system); (3

  15. Learning to modulate the partial powers of a single sEMG power spectrum through a novel human-computer interface.

    PubMed

    Skavhaug, Ida-Maria; Lyons, Kenneth R; Nemchuk, Anna; Muroff, Shira D; Joshi, Sanjay S

    2016-06-01

    New human-computer interfaces that use bioelectrical signals as input are allowing study of the flexibility of the human neuromuscular system. We have developed a myoelectric human-computer interface which enables users to navigate a cursor to targets through manipulations of partial powers within a single surface electromyography (sEMG) signal. Users obtain two-dimensional control through simultaneous adjustments of powers in two frequency bands within the sEMG spectrum, creating power profiles corresponding to cursor positions. It is unlikely that these types of bioelectrical manipulations are required during routine muscle contractions. Here, we formally establish the neuromuscular ability to voluntarily modulate single-site sEMG power profiles in a group of naïve subjects under restricted and controlled conditions using a wrist muscle. All subjects used the same pre-selected frequency bands for control and underwent the same training, allowing a description of the average learning progress throughout eight sessions. We show that subjects steadily increased target hit rates from 48% to 71% and exhibited greater control of the cursor's trajectories following practice. Our results point towards an adaptable neuromuscular skill, which may allow humans to utilize single muscle sites as limited general-purpose signal generators. Ultimately, the goal is to translate this neuromuscular ability to practical interfaces for the disabled by using a spared muscle to control external machines. PMID:26874751
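    The interface described above maps power in two frequency bands of a single sEMG signal to a 2-D cursor position. A minimal sketch of such a band-power feature extraction, using a plain periodogram and two hypothetical control bands (the band edges and test frequencies are illustrative, not the ones used in the study):

```python
import numpy as np

def band_powers(x, fs, bands):
    """Power of signal x in the given frequency bands (Hz), via the periodogram."""
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fs)
    psd = np.abs(np.fft.rfft(x)) ** 2 / len(x)
    return [psd[(freqs >= lo) & (freqs < hi)].sum() for lo, hi in bands]

# Synthetic "sEMG" with two tones; each hypothetical control band isolates one.
fs = 1000
t = np.arange(0, 1, 1 / fs)
x = np.sin(2 * np.pi * 60 * t) + 0.3 * np.sin(2 * np.pi * 200 * t)
p_low, p_high = band_powers(x, fs, [(40, 80), (180, 220)])
```

    In the actual interface, the pair `(p_low, p_high)` computed over a sliding window would set the cursor's two coordinates; the study's preprocessing and band selection are not specified here.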

  16. Impact of high microwave power on hydrogen impurity trapping in nanocrystalline diamond films grown with simultaneous nitrogen and oxygen addition into methane/hydrogen plasma

    NASA Astrophysics Data System (ADS)

    Tang, C. J.; Fernandes, A. J. S.; Jiang, X. F.; Pinto, J. L.; Ye, H.

    2016-01-01

    In this work, we study for the first time the influence of microwave power higher than 2.0 kW on bonded hydrogen impurity incorporation (form and content) in nanocrystalline diamond (NCD) films grown in a 5 kW MPCVD reactor. NCD samples of different thicknesses, ranging from 25 to 205 μm, were obtained through simultaneous addition of small amounts of nitrogen and oxygen to a conventional reactant mixture of about 4% methane in hydrogen, keeping the other operating parameters in the same range as typically used for the growth of large-grained polycrystalline diamond films. Specific hydrogen point defects in the NCD films are analyzed using Fourier-transform infrared (FTIR) spectroscopy. With the other operating parameters kept constant (mainly the input gases), increasing the microwave power from 2.0 to 3.2 kW (the pressure was raised slightly to stabilize a plasma ball of the same size) raises the substrate temperature by more than 100 °C and increases the growth rate of the NCD films by an order of magnitude, from 0.3 to 3.0 μm/h, while the content of hydrogen impurity trapped in the NCD films during growth decreases with power. A new H-related infrared absorption peak also appears at 2834 cm-1 in the NCD films grown with a small amount of nitrogen and oxygen addition at power higher than 2.0 kW, and it increases with power above 3.0 kW. Based on these new experimental results, the role of high microwave power in diamond growth and hydrogen impurity incorporation is discussed in terms of the standard growth mechanism of CVD diamond using CH4/H2 gas mixtures. Our findings shed light on the incorporation mechanism of hydrogen impurity in NCD films grown with a small amount of nitrogen and oxygen addition into methane/hydrogen plasma.

  17. XOQDOQ: computer program for the meteorological evaluation of routine effluent releases at nuclear power stations. Final report

    SciTech Connect

    Sagendorf, J.F.; Goll, J.T.; Sandusky, W.F.

    1982-09-01

    Provided is a user's guide for the US Nuclear Regulatory Commission's (NRC) computer program X0QDOQ, which implements Regulatory Guide 1.111. This NUREG supersedes NUREG-0324, which was published as a draft in September 1977. The program is used by the NRC meteorology staff in their independent meteorological evaluation of routine or anticipated intermittent releases at nuclear power stations. It operates in a batch input mode and has various options a user may select. Relative atmospheric dispersion and deposition factors are computed for 22 specific distances out to 50 miles from the site for each directional sector. From these results, values for 10 distance segments are computed. The user may also select other locations for which atmospheric dispersion and deposition factors are computed. Program features, including required input data and output results, are described. A program listing and test case data input and resulting output are provided.
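    The relative atmospheric dispersion factor (χ/Q) that such programs tabulate is, in its simplest form, the straight-line Gaussian plume value at the ground-level plume centerline. The sketch below shows only that textbook form; XOQDOQ's actual models additionally handle sector averaging, building wake, and plume depletion, none of which is reproduced here.

```python
import math

def rel_conc(sigma_y, sigma_z, u, h=0.0):
    """Ground-level centerline chi/Q (s/m^3) for a straight-line Gaussian plume:
    chi/Q = exp(-h^2 / (2 sigma_z^2)) / (pi * sigma_y * sigma_z * u),
    where sigma_y, sigma_z are dispersion coefficients (m) at the given
    downwind distance, u is wind speed (m/s), h is release height (m)."""
    return math.exp(-h * h / (2 * sigma_z ** 2)) / (math.pi * sigma_y * sigma_z * u)
```

    For a ground release (h = 0) the exponential factor is 1, and an elevated release always yields a smaller ground-level χ/Q at the same distance.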

  18. Dataset of calcified plaque condition in the stenotic coronary artery lesion obtained using multidetector computed tomography to indicate the addition of rotational atherectomy during percutaneous coronary intervention.

    PubMed

    Akutsu, Yasushi; Hamazaki, Yuji; Sekimoto, Teruo; Kaneko, Kyouichi; Kodama, Yusuke; Li, Hui-Ling; Suyama, Jumpei; Gokan, Takehiko; Sakai, Koshiro; Kosaki, Ryota; Yokota, Hiroyuki; Tsujita, Hiroaki; Tsukamoto, Shigeto; Sakurai, Masayuki; Sambe, Takehiko; Oguchi, Katsuji; Uchida, Naoki; Kobayashi, Shinichi; Aoki, Atsushi; Kobayashi, Youichi

    2016-06-01

    Our data show the regional coronary artery calcium scores (lesion CAC) on multidetector computed tomography (MDCT) and the cross-sectional imaging on MDCT angiography (CTA) in the target lesions of patients with stable angina pectoris who were scheduled for percutaneous coronary intervention (PCI). CAC and CTA data were acquired using a 128-slice scanner (Somatom Definition AS+; Siemens Medical Solutions, Forchheim, Germany) before PCI. CAC was measured in a non-contrast-enhanced scan, quantified using the Calcium Score module of SYNAPSE VINCENT software (Fujifilm Co., Tokyo, Japan), and expressed in Agatston units. CTA was then continued with a contrast-enhanced ECG-gated scan to measure the severity of the calcified plaque condition. Both the CAC and CTA data can serve as a benchmark when considering the addition of rotational atherectomy during PCI for severely calcified plaque lesions. PMID:26977441
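    The Agatston units mentioned above come from a simple weighted-area rule: pixels at or above 130 HU count as calcified plaque, and the lesion area is multiplied by a weight of 1 to 4 set by the lesion's peak attenuation (130-199, 200-299, 300-399, ≥400 HU). A heavily simplified per-slice sketch, which treats the whole slice as one lesion instead of segmenting connected components as real scoring software does:

```python
import numpy as np

def agatston_score(hu_slice, pixel_area_mm2=0.25, min_area_mm2=1.0):
    """Simplified per-slice Agatston score (illustrative only):
    plaque pixels are those >= 130 HU; the score is lesion area (mm^2)
    times a 1-4 weight from the peak HU value."""
    mask = hu_slice >= 130
    area = mask.sum() * pixel_area_mm2
    if area < min_area_mm2:            # tiny specks are ignored
        return 0.0
    peak = hu_slice[mask].max()
    # 130-199 -> 1, 200-299 -> 2, 300-399 -> 3, >= 400 -> 4
    weight = 1 + min(int(peak // 100) - 1, 3)
    return area * weight
```

    The pixel area and minimum-lesion threshold are assumed example values; clinical scores also sum over all slices and lesions.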

  19. Impact of the flame retardant additive triphenyl phosphate (TPP) on the performance of graphite/LiFePO4 cells in high power applications

    NASA Astrophysics Data System (ADS)

    Ciosek Högström, Katarzyna; Lundgren, Henrik; Wilken, Susanne; Zavalis, Tommy G.; Behm, Mårten; Edström, Kristina; Jacobsson, Per; Johansson, Patrik; Lindbergh, Göran

    2014-06-01

    This study presents an extensive characterization of a standard Li-ion battery (LiB) electrolyte containing different concentrations of the flame retardant triphenyl phosphate (TPP) in the context of high power applications. Electrolyte characterization shows only a minor decrease in the electrolyte flammability for low TPP concentrations. The addition of TPP to the electrolyte leads to increased viscosity and decreased conductivity. The solvation of the lithium ion charge carriers seems to be directly affected by the TPP addition, as evidenced by Raman spectroscopy and increased mass-transport resistivity. Graphite/LiFePO4 full cell tests show the energy efficiency to decrease with the addition of TPP. Specifically, diffusion resistivity is observed to be the main source of increased losses. Furthermore, TPP influences the interface chemistry on both the positive and the negative electrode. Higher concentrations of TPP lead to thicker interface layers on LiFePO4. Even though TPP is not electrochemically reduced on graphite, it does participate in SEI formation. TPP cannot be considered a suitable flame retardant for high power applications, as there is only a minor impact of TPP on the flammability of the electrolyte at low concentrations, and a significant increase in polarization is observed at higher concentrations.

  20. High SO{sub 2} removal efficiency testing: Results of DBA and sodium formate additive tests at Southwestern Electric Power Company's Pirkey Station

    SciTech Connect

    1996-05-30

    Tests were conducted at Southwestern Electric Power Company's (SWEPCo) Henry W. Pirkey Station wet limestone flue gas desulfurization (FGD) system to evaluate options for achieving high sulfur dioxide removal efficiency. The Pirkey FGD system includes four absorber modules, each with dual slurry recirculation loops and with a perforated plate tray in the upper loop. The options tested involved the use of dibasic acid (DBA) or sodium formate as a performance additive. The effectiveness of other potential options was simulated with the Electric Power Research Institute's (EPRI) FGD PRocess Integration and Simulation Model (FGDPRISM) after it was calibrated to the system. An economic analysis was done to determine the cost effectiveness of the high-efficiency options. Results are summarized below.

  1. Experimental and computational mechanistic investigation of chlorocarbene additions to bridgehead carbene-anti-Bredt systems: noradamantylcarbene-adamantene and adamantylcarbene-homoadamantene.

    PubMed

    Hare, Stephanie R; Orman, Marina; Dewan, Faizunnahar; Dalchand, Elizabeth; Buzard, Camilla; Ahmed, Sadia; Tolentino, Julia C; Sethi, Ulweena; Terlizzi, Kelly; Houferak, Camille; Stein, Aliza M; Stedronsky, Alexandra; Thamattoor, Dasan M; Tantillo, Dean J; Merrer, Dina C

    2015-05-15

    Cophotolysis of noradamantyldiazirine with the phenanthrene-based precursor of dichlorocarbene or phenylchlorodiazirine in pentane at room temperature produces noradamantylethylenes in 11% yield with slight diastereoselectivity. Cophotolysis of adamantyldiazirine with phenylchlorodiazirine in pentane at room temperature generates adamantylethylenes in 6% yield with no diastereoselectivity. (1)H NMR showed the reaction of noradamantyldiazirine + phenylchlorodiazirine to be independent of solvent, and the rate of noradamantyldiazirine consumption correlated with the rate of ethylene formation. Laser flash photolysis showed that reaction of phenylchlorocarbene + adamantene was independent of adamantene concentration. The reaction of phenylchlorocarbene + homoadamantene produces the ethylene products with k = 9.6 × 10(5) M(-1) s(-1). Calculations at the UB3LYP/6-31+G(d,p) and UM062X/6-31+G(d,p)//UB3LYP/6-31+G(d,p) levels show the formation of exocyclic ethylenes to proceed (a) on the singlet surface via stepwise addition of phenylchlorocarbene (PhCCl) to bridgehead alkenes adamantene and homoadamantene, respectively, producing an intermediate singlet diradical in each case, or (b) via addition of PhCCl to the diazo analogues of noradamantyl- and adamantyldiazirine. Preliminary direct dynamics calculations on adamantene + PhCCl show a high degree of recrossing (68%), indicative of a flat transition state surface. Overall, 9% of the total trajectories formed noradamantylethylene product, each proceeding via the computed singlet diradical. PMID:25902301

  2. Computer Calculations of Eddy-Current Power Loss in Rotating Titanium Wheels and Rims in Localized Axial Magnetic Fields

    SciTech Connect

    Mayhall, D J; Stein, W; Gronberg, J B

    2006-05-15

    We have performed preliminary computer-based, transient, magnetostatic calculations of the eddy-current power loss in rotating titanium-alloy and aluminum wheels and wheel rims in the predominantly axially-directed, steady magnetic fields of two small, solenoidal coils. These calculations have been undertaken to assess the eddy-current power loss in various possible International Linear Collider (ILC) positron target wheels. They have also been done to validate the simulation code module against known results published in the literature. The commercially available software package used in these calculations is the Maxwell 3D, Version 10, Transient Module from the Ansoft Corporation.

  3. A computer program for estimating the power-density spectrum of advanced continuous simulation language generated time histories

    NASA Technical Reports Server (NTRS)

    Dunn, H. J.

    1981-01-01

    A computer program for performing frequency analysis of time history data is presented. The program uses circular convolution and the fast Fourier transform to calculate the power density spectrum (PDS) of time history data. The program interfaces with the advanced continuous simulation language (ACSL) so that a frequency analysis may be performed on ACSL generated simulation variables. An example of the calculation of the PDS of a van der Pol oscillator is presented.
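    A periodogram-style PDS calculation of the kind described, applied to the van der Pol example, can be sketched as follows. A fixed-step RK4 integrator stands in for the ACSL-generated time history (an assumption; the paper's program consumes ACSL output rather than integrating the ODE itself), and the simple periodogram omits the windowing/averaging a production tool would use.

```python
import numpy as np

def van_der_pol(mu=1.0, dt=0.01, n=8192):
    # Integrate x'' - mu*(1 - x^2)*x' + x = 0 with fixed-step RK4.
    def f(s):
        x, v = s
        return np.array([v, mu * (1 - x * x) * v - x])
    s = np.array([1.0, 0.0])
    xs = np.empty(n)
    for i in range(n):
        k1 = f(s); k2 = f(s + 0.5 * dt * k1)
        k3 = f(s + 0.5 * dt * k2); k4 = f(s + dt * k3)
        s = s + dt / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
        xs[i] = s[0]
    return xs

def psd(x, dt):
    # Periodogram estimate of the power density spectrum (mean removed).
    X = np.fft.rfft(x - x.mean())
    return np.fft.rfftfreq(len(x), dt), np.abs(X) ** 2 * dt / len(x)
```

    For mu = 1 the oscillator's limit-cycle period is about 6.7 s, so the PDS peaks near 0.15 Hz.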

  4. Modeling molecular computing systems by an artificial chemistry - its expressive power and application.

    PubMed

    Tominaga, Kazuto; Watanabe, Tooru; Kobayashi, Keiji; Nakamura, Masaki; Kishi, Koji; Kazuno, Mitsuyoshi

    2007-01-01

    Artificial chemistries are mainly used to construct virtual systems that are expected to show behavior similar to living systems. In this study, we explore possibilities of applying an artificial chemistry to modeling natural biochemical systems (specifically, molecular computing systems) and show that it may be a useful modeling tool for molecular computation. We previously proposed an artificial chemistry based on string pattern matching and recombination. This article first demonstrates that this artificial chemistry is computationally universal if it has only rules with one or two reactants. We think this is a good property for an artificial chemistry that models molecular computing, because the natural elementary chemical reactions on which molecular computing is based are mostly unimolecular or bimolecular. We then give two illustrative example models for DNA computing in our artificial chemistry: one for the type of computation called the Adleman-Lipton paradigm, and the other for a DNA implementation of a finite automaton. Through the construction of these models we observe preferred properties of the artificial chemistry for modeling molecular computing, such as having no spatial structure and being flexible in choosing levels of abstraction. PMID:17567243
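    To make the "string pattern matching and recombination with one- or two-reactant rules" idea concrete, here is a toy chemistry with one unimolecular split rule and one bimolecular join rule acting on a well-stirred multiset ("soup") of strings. The patterns (`#`, `+`, `-`) and rules are invented for illustration and are not the formalism of the paper.

```python
import random

def uni_split(mol):
    # Unimolecular rule: 'x#y' -> ['x', 'y'] (split at the first '#').
    if '#' in mol:
        left, right = mol.split('#', 1)
        return [left, right]
    return None

def bi_join(m1, m2):
    # Bimolecular rule: 'x+' and '-y' -> ['x#y'] (recombination by end patterns).
    if m1.endswith('+') and m2.startswith('-'):
        return [m1[:-1] + '#' + m2[1:]]
    return None

def step(soup, rng):
    # One stochastic step: pick molecules at random (no spatial structure)
    # and apply the first rule that matches.
    rng.shuffle(soup)
    out = uni_split(soup[0])
    if out is not None:
        return soup[1:] + out
    if len(soup) >= 2:
        out = bi_join(soup[0], soup[1])
        if out is not None:
            return soup[2:] + out
    return soup
```

    The random selection with no notion of position mirrors the "no spatial structure" property the abstract highlights.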

  5. Manipulatives and the Computer: A Powerful Partnership for Learners of All Ages.

    ERIC Educational Resources Information Center

    Perl, Teri

    1990-01-01

    Discussed is the concept of mirroring in which computer programs are used to enhance the use of mathematics manipulatives. The strengths and weaknesses of this approach are presented. The uses of the computer in modeling and as a manipulative are also described. Several software packages are suggested. (CW)

  6. Education/Technology/Power: Educational Computing as a Social Practice. SUNY Series, Frontiers in Education.

    ERIC Educational Resources Information Center

    Bromley, Hank, Ed.; Apple, Michael W., Ed.

    This book is organized in three parts that address the following broad topics related to educational computing: discursive practices, i.e., who speaks of educational computing and how (chapters 1-4); classroom practices (chapters 5-6); and democratic possibilities, i.e., the constructive potential of the technology (chapters 7-9). Following an…

  7. Neuro-Fuzzy Computational Technique to Control Load Frequency in Hydro-Thermal Interconnected Power System

    NASA Astrophysics Data System (ADS)

    Prakash, S.; Sinha, S. K.

    2015-09-01

    In this research work, a two-area hydro-thermal power system connected through tie-lines is considered. Frequency perturbations in the areas and the resulting tie-line power flows arise from unpredictable load variations that cause a mismatch between generated and demanded power. As demand rises and falls, the real and reactive power balance is disturbed, so frequency and voltage deviate from their nominal values. This necessitates designing an accurate and fast controller to maintain the system parameters at their nominal values. The main purpose of system generation control is to balance generation against load and losses so that the desired frequency and power interchange between neighboring systems are maintained. Intelligent controllers based on fuzzy logic, artificial neural networks (ANN) and a hybrid fuzzy neural network are used for automatic generation control of the two-area interconnected power system. Area 1 consists of a thermal reheat power plant, whereas area 2 consists of a hydro power plant with an electric governor. Performance evaluation is carried out using intelligent (ANFIS, ANN and fuzzy) control and conventional PI and PID control approaches. To enhance controller performance, a sliding surface, i.e., variable structure control, is included. The model of the interconnected power system has been developed with all five controllers and simulated using the MATLAB/SIMULINK package. The performance of the intelligent controllers has been compared with that of the conventional PI and PID controllers for the interconnected power system. A comparison of the ANFIS, ANN, fuzzy, PI and PID approaches shows the superiority of the proposed ANFIS over the others. The hybrid fuzzy neural network controller thus has a better dynamic response: it is quick in operation, reduces error magnitude and minimizes frequency transients.
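    The PI/PID baseline that the intelligent controllers are compared against can be illustrated with a drastically simplified one-area load-frequency model (the paper uses a full two-area MATLAB/SIMULINK model; the swing-equation form, gains, and parameters below are illustrative assumptions):

```python
def simulate_lfc(kp=20.0, ki=50.0, kd=1.0, m=10.0, d=1.0,
                 dpl=0.1, dt=0.001, t_end=10.0):
    """One-area load-frequency sketch:  M * df/dt = dPm - dPL - D * f,
    with a PID acting on the frequency deviation (all values per-unit,
    illustrative gains). Returns the deviation f at t_end."""
    f, integ, prev_err = 0.0, 0.0, 0.0
    for _ in range(int(t_end / dt)):
        err = -f                          # regulate deviation toward zero
        integ += err * dt
        deriv = (err - prev_err) / dt
        prev_err = err
        dpm = kp * err + ki * integ + kd * deriv   # PID governor command
        f += dt * (dpm - dpl - d * f) / m          # forward-Euler swing eq.
    return f
```

    The integral term drives the steady-state frequency deviation to zero after the load step `dpl`, whereas with no control the deviation settles at -dpl/D.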

  8. Computer-aided designing of automatic process control systems for thermal power stations

    NASA Astrophysics Data System (ADS)

    Trofimov, A. V.

    2009-10-01

    The structure of modern microprocessor systems for automated control of technological processes at cogeneration stations is considered. Methods for computer-aided designing of the lower (sensors and actuators) and upper (cabinets of computerized automation equipment) levels of an automated process control system are proposed. The composition of project documents, the structures of a project database and database of a computer-aided design system, and the way they interact with one another in the course of developing the project of an automated process control system are described. Elements of the interface between a design engineer and computer program are shown.

  9. A computational study of the addition of ReO3L (L = Cl(-), CH3, OCH3 and Cp) to ethenone.

    PubMed

    Aniagyei, Albert; Tia, Richard; Adei, Evans

    2016-01-01

    The periselectivity and chemoselectivity of the addition of transition metal oxides of the type ReO3L (L = Cl(-), CH3, OCH3 and Cp) to ethenone have been explored at the M06 and B3LYP/LACVP* levels of theory. The activation barriers and reaction energies for the stepwise and concerted addition pathways involving multiple spin states have been computed. In the reaction of ReO3L (L = Cl(-), OCH3, CH3 and Cp) with ethenone, the concerted [2 + 2] addition of the metal oxide across the C=C and C=O double bonds to form either metalla-2-oxetane-3-one or metalla-2,4-dioxolane is the most kinetically favored over the formation of metalla-2,5-dioxolane-3-one from the direct [3 + 2] addition pathway. The trends in activation barriers for the formation of metalla-2-oxetane-3-one and metalla-2,4-dioxolane are Cp < Cl(-) < OCH3 < CH3 and Cp < OCH3 < CH3 < Cl(-), and the trends in reaction energies are Cp < OCH3 < Cl(-) < CH3 and Cp < CH3 < OCH3 < Cl(-). The concerted [3 + 2] addition of the metal oxide across the C=C double bond of the ethenone to form metalla-2,5-dioxolane-3-one is thermodynamically the most favored for the ligand L = Cp. The direct [2 + 2] addition pathways leading to the formation of metalla-2-oxetane-3-one and metalla-2,4-dioxolane are thermodynamically the most favored for the ligands L = OCH3 and Cl(-). The differences between the calculated [2 + 2] activation barriers for the addition of the metal oxide LReO3 across the C=C and C=O functionalities of ethenone are small except for L = Cl(-) and OCH3. The rearrangements of metalla-2-oxetane-3-one to metalla-2,5-dioxolane-3-one, even though feasible, are unfavorable owing to the high activation energies of their rate-determining steps. For the rearrangement of metalla-2-oxetane-3-one to metalla-2,5-dioxolane-3-one, the trends in activation barriers are found to follow the order OCH3 < Cl(-) < CH3 < Cp. The trends in the activation energies for

  10. FINITE ELEMENT MODELS FOR COMPUTING SEISMIC INDUCED SOIL PRESSURES ON DEEPLY EMBEDDED NUCLEAR POWER PLANT STRUCTURES.

    SciTech Connect

    XU, J.; COSTANTINO, C.; HOFMAYER, C.

    2006-06-26

    The paper discusses computations of seismic induced soil pressures using finite element models for deeply embedded and/or buried stiff structures such as those appearing in the conceptual designs of structures for advanced reactors.

  11. The power of an ontology-driven developmental toxicity database for data mining and computational modeling

    EPA Science Inventory

    Modeling of developmental toxicology presents a significant challenge to computational toxicology due to endpoint complexity and lack of data coverage. These challenges largely account for the relatively few modeling successes using the structure–activity relationship (SAR) parad...

  12. Additive Manufacturing of Single-Crystal Superalloy CMSX-4 Through Scanning Laser Epitaxy: Computational Modeling, Experimental Process Development, and Process Parameter Optimization

    NASA Astrophysics Data System (ADS)

    Basak, Amrita; Acharya, Ranadip; Das, Suman

    2016-06-01

    This paper focuses on additive manufacturing (AM) of single-crystal (SX) nickel-based superalloy CMSX-4 through scanning laser epitaxy (SLE). SLE, a powder bed fusion-based AM process was explored for the purpose of producing crack-free, dense deposits of CMSX-4 on top of similar chemistry investment-cast substrates. Optical microscopy and scanning electron microscopy (SEM) investigations revealed the presence of dendritic microstructures that consisted of fine γ' precipitates within the γ matrix in the deposit region. Computational fluid dynamics (CFD)-based process modeling, statistical design of experiments (DoE), and microstructural characterization techniques were combined to produce metallurgically bonded single-crystal deposits of more than 500 μm height in a single pass along the entire length of the substrate. A customized quantitative metallography based image analysis technique was employed for automatic extraction of various deposit quality metrics from the digital cross-sectional micrographs. The processing parameters were varied, and optimal processing windows were identified to obtain good quality deposits. The results reported here represent one of the few successes obtained in producing single-crystal epitaxial deposits through a powder bed fusion-based metal AM process and thus demonstrate the potential of SLE to repair and manufacture single-crystal hot section components of gas turbine systems from nickel-based superalloy powders.

  13. ECG-Based Detection of Early Myocardial Ischemia in a Computational Model: Impact of Additional Electrodes, Optimal Placement, and a New Feature for ST Deviation

    PubMed Central

    Loewe, Axel; Schulze, Walther H. W.; Jiang, Yuan; Wilhelms, Mathias; Luik, Armin; Dössel, Olaf; Seemann, Gunnar

    2015-01-01

    In case of chest pain, immediate diagnosis of myocardial ischemia is required to respond with an appropriate treatment. The diagnostic capability of the electrocardiogram (ECG), however, is strongly limited for ischemic events that do not lead to ST elevation. This computational study investigates the potential of different electrode setups in detecting early ischemia at 10 minutes after onset: standard 3-channel and 12-lead ECG as well as body surface potential maps (BSPMs). Further, it was assessed if an additional ECG electrode with optimized position or the right-sided Wilson leads can improve sensitivity of the standard 12-lead ECG. To this end, a simulation study was performed for 765 different locations and sizes of ischemia in the left ventricle. Improvements by adding a single, subject specifically optimized electrode were similar to those of the BSPM: 2–11% increased detection rate depending on the desired specificity. Adding right-sided Wilson leads had negligible effect. Absence of ST deviation could not be related to specific locations of the ischemic region or its transmurality. As alternative to the ST time integral as a feature of ST deviation, the K point deviation was introduced: the baseline deviation at the minimum of the ST-segment envelope signal, which increased 12-lead detection rate by 7% for a reasonable threshold. PMID:26587538
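    The abstract defines the K point deviation as "the baseline deviation at the minimum of the ST-segment envelope signal." One plausible reading of that feature, sketched below with the lower envelope taken as the per-sample minimum across leads (an interpretation, not the paper's exact definition):

```python
import numpy as np

def k_point_deviation(st_segments, baseline=0.0):
    """Illustrative K-point-style feature: take the lower envelope
    (per-sample minimum across leads) of the ST segments, locate its
    minimum (the 'K point'), and return the deviation from baseline
    there along with the sample index."""
    st_segments = np.asarray(st_segments)   # shape: (n_leads, n_samples)
    env = np.min(st_segments, axis=0)       # lower envelope across leads
    k = int(np.argmin(env))                 # K point: minimum of the envelope
    return env[k] - baseline, k
```

    Unlike the ST time integral, this picks out a single extremal point of the envelope rather than averaging over the whole segment, which is what makes it sensitive to localized ST depression.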

  14. Additive Manufacturing of Single-Crystal Superalloy CMSX-4 Through Scanning Laser Epitaxy: Computational Modeling, Experimental Process Development, and Process Parameter Optimization

    NASA Astrophysics Data System (ADS)

    Basak, Amrita; Acharya, Ranadip; Das, Suman

    2016-08-01

    This paper focuses on additive manufacturing (AM) of single-crystal (SX) nickel-based superalloy CMSX-4 through scanning laser epitaxy (SLE). SLE, a powder bed fusion-based AM process was explored for the purpose of producing crack-free, dense deposits of CMSX-4 on top of similar chemistry investment-cast substrates. Optical microscopy and scanning electron microscopy (SEM) investigations revealed the presence of dendritic microstructures that consisted of fine γ' precipitates within the γ matrix in the deposit region. Computational fluid dynamics (CFD)-based process modeling, statistical design of experiments (DoE), and microstructural characterization techniques were combined to produce metallurgically bonded single-crystal deposits of more than 500 μm height in a single pass along the entire length of the substrate. A customized quantitative metallography based image analysis technique was employed for automatic extraction of various deposit quality metrics from the digital cross-sectional micrographs. The processing parameters were varied, and optimal processing windows were identified to obtain good quality deposits. The results reported here represent one of the few successes obtained in producing single-crystal epitaxial deposits through a powder bed fusion-based metal AM process and thus demonstrate the potential of SLE to repair and manufacture single-crystal hot section components of gas turbine systems from nickel-based superalloy powders.

  15. Tandem β-elimination/hetero-michael addition rearrangement of an N-alkylated pyridinium oxime to an O-alkylated pyridine oxime ether: an experimental and computational study.

    PubMed

    Picek, Igor; Vianello, Robert; Šket, Primož; Plavec, Janez; Foretić, Blaženka

    2015-02-20

    A novel OH(-)-promoted tandem reaction involving C(β)-N(+)(pyridinium) cleavage and ether C(β)-O(oxime) bond formation in aqueous media has been presented. The study fully elucidates the fascinating reaction behavior of N-benzoylethylpyridinium-4-oxime chloride in aqueous media under mild reaction conditions. The reaction journey begins with the exclusive β-elimination and formation of pyridine-4-oxime and phenyl vinyl ketone and ends with the formation of O-alkylated pyridine oxime ether. A combination of experimental and computational studies enabled the introduction of a new type of rearrangement process that involves a unique tandem reaction sequence. We showed that (E)-O-benzoylethylpyridine-4-oxime is formed in aqueous solution by a base-induced tandem β-elimination/hetero-Michael addition rearrangement of (E)-N-benzoylethylpyridinium-4-oximate, the novel synthetic route to this engaging target class of compounds. The complete mechanistic picture of this rearrangement process was presented and discussed in terms of the E1cb reaction scheme within the rate-limiting β-elimination step. PMID:25562471

  16. Auditory Power-Law Activation Avalanches Exhibit a Fundamental Computational Ground State.

    PubMed

    Stoop, Ruedi; Gomez, Florian

    2016-07-15

    The cochlea provides a biological information-processing paradigm that we are only beginning to understand in its full complexity. Our work reveals an interacting network of strongly nonlinear dynamical nodes, on which even a simple sound input triggers subnetworks of activated elements that follow power-law size statistics ("avalanches"). From dynamical systems theory, power-law size distributions relate to a fundamental ground state of biological information processing. Learning destroys these power laws. These results strongly modify the models of mammalian sound processing and provide a novel methodological perspective for understanding how the brain processes information. PMID:27472144

  17. Auditory Power-Law Activation Avalanches Exhibit a Fundamental Computational Ground State

    NASA Astrophysics Data System (ADS)

    Stoop, Ruedi; Gomez, Florian

    2016-07-01

    The cochlea provides a biological information-processing paradigm that we are only beginning to understand in its full complexity. Our work reveals an interacting network of strongly nonlinear dynamical nodes, on which even a simple sound input triggers subnetworks of activated elements that follow power-law size statistics ("avalanches"). From dynamical systems theory, power-law size distributions relate to a fundamental ground state of biological information processing. Learning destroys these power laws. These results strongly modify the models of mammalian sound processing and provide a novel methodological perspective for understanding how the brain processes information.

  18. Information Power Grid: Distributed High-Performance Computing and Large-Scale Data Management for Science and Engineering

    NASA Technical Reports Server (NTRS)

    Johnston, William E.; Gannon, Dennis; Nitzberg, Bill; Feiereisen, William (Technical Monitor)

    2000-01-01

    The term "Grid" refers to distributed, high performance computing and data handling infrastructure that incorporates geographically and organizationally dispersed, heterogeneous resources that are persistent and supported. The vision for NASA's Information Power Grid - a computing and data Grid - is that it will provide significant new capabilities to scientists and engineers by facilitating routine construction of information-based problem solving environments / frameworks that will knit together widely distributed computing, data, instrument, and human resources into just-in-time systems that can address complex and large-scale computing and data analysis problems. IPG development and deployment is addressing requirements obtained by analyzing a number of different application areas, in particular from the NASA Aero-Space Technology Enterprise. This analysis has focused primarily on two types of users. The first is the scientist / design engineer whose primary interest is problem solving (e.g., determining wing aerodynamic characteristics in many different operating environments), and whose primary interface to IPG will be through various sorts of problem solving frameworks. The second type of user is the tool designer: the computational scientist who converts physics and mathematics into code that can simulate the physical world. These are the two primary users of IPG, and they have rather different requirements. This paper describes the current state of IPG (the operational testbed), the set of capabilities being put into place for the operational prototype IPG, as well as some of the longer term R&D tasks.

  19. Power and energy computational models for the design and simulation of hybrid-electric combat vehicles

    NASA Astrophysics Data System (ADS)

    Smith, Wilford; Nunez, Patrick

    2005-05-01

    This paper describes the work being performed under the RDECOM Power and Energy (P&E) program (formerly the Combat Hybrid Power System (CHPS) program) developing hybrid power system models and integrating them into larger simulations, such as OneSAF, that can be used to find duty cycles to feed designers of hybrid power systems. This paper also describes efforts underway to link the TARDEC P&E System Integration Lab (SIL) in San Jose, CA, to the TARDEC Ground Vehicle Simulation Lab (GVSL) in Warren, MI. This linkage is being performed to provide a methodology for generating detailed driver profiles for use in the development of vignettes and mission profiles for system design excursions.

  20. Computational fluid dynamics study on mixing mode and power consumption in anaerobic mono- and co-digestion.

    PubMed

    Zhang, Yuan; Yu, Guangren; Yu, Liang; Siddhu, Muhammad Abdul Hanan; Gao, Mengjiao; Abdeltawab, Ahmed A; Al-Deyab, Salem S; Chen, Xiaochun

    2016-03-01

    Computational fluid dynamics (CFD) was applied to investigate mixing mode and power consumption in anaerobic mono- and co-digestion. Cattle manure (CM) and corn stover (CS) were used as feedstock, and a stirred tank reactor (STR) was used as the digester. Power numbers obtained by the CFD simulation were compared with those from the experimental correlation. Results showed that the standard k-ε model was more appropriate than other turbulence models. A new index, net power production instead of gas production, was proposed to optimize the feedstock ratio for anaerobic co-digestion. Results showed that the flow field and power consumption changed significantly in co-digestion of CM and CS compared with mono-digestion of either CM or CS. For different mixing modes, the optimum feedstock ratio for co-digestion changed with net power production. The best CM/CS ratios for continuous mixing, intermittent mixing I, and intermittent mixing II were 1:1, 1:1, and 1:3, respectively. PMID:26722816
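The power numbers compared above follow the standard stirred-tank definition Np = P / (ρ N³ D⁵), so an assumed power number converts directly into mechanical mixing power. A minimal sketch with illustrative values (not the paper's digester geometry or correlation):

```python
def mixing_power(power_number, rho, rev_per_s, impeller_diam):
    """Mechanical mixing power P = Np * rho * N**3 * D**5 (turbulent regime).

    power_number -- dimensionless Np from CFD or an experimental correlation
    rho          -- fluid density [kg/m^3]
    rev_per_s    -- impeller speed N [1/s]
    impeller_diam-- impeller diameter D [m]
    """
    return power_number * rho * rev_per_s**3 * impeller_diam**5

# Illustrative numbers (hypothetical, not from the paper): Np ~ 5 is a
# typical order of magnitude for a Rushton-type turbine in turbulent flow.
P = mixing_power(power_number=5.0, rho=1000.0, rev_per_s=2.0, impeller_diam=0.2)
```

With these numbers P is about 12.8 W; the paper's "net power production" index would subtract such a mixing cost from the energy recovered as gas.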

  1. Improvement of Transient Voltage Responses using an Additional PID-loop on ANFIS-based Composite Controller-SVC (CC-SVC) to Control Chaos and Voltage Collapse in Power Systems

    NASA Astrophysics Data System (ADS)

    Ginarsa, I. Made; Soeprijanto, Adi; Purnomo, Mauridhi Hery; Syafaruddin, Mauridhi Hery; Hiyama, Takashi

    Chaos and voltage collapse are qualitative behaviors in power systems that arise from a lack of reactive power at critical loading. These phenomena are explored in depth using both detailed and approximate models in this paper. An ANFIS-based CC-SVC with an additional PID-loop is proposed to control these problems and to improve the transient response of the detailed model. The main function of the PID-loop is to increase the minimum voltage and to decrease the settling time of the transient response. The ANFIS-based method was chosen because its computational complexity is lower than that of a Mamdani fuzzy logic controller, so convergence of the training process is achieved more rapidly. The load voltage is held at the setting value by properly adjusting the SVC susceptance. In the experimental results, the PID-loop proved an effective controller: for a reactive load of j0.12 pu, the minimum voltage increased to 0.9435 pu and the settling time decreased to 7.01 s.
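The additional PID-loop credited above with raising the minimum voltage and shortening the settling time follows the textbook discrete PID law u = Kp·e + Ki·∫e dt + Kd·de/dt. A minimal sketch with hypothetical gains and a crude first-order plant standing in for the SVC-regulated bus (not the paper's detailed power-system model):

```python
class PID:
    """Textbook discrete PID controller: u = Kp*e + Ki*integral(e) + Kd*de/dt."""

    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, setpoint, measurement):
        error = setpoint - measurement
        self.integral += error * self.dt
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

# Hypothetical gains driving a first-order lag plant toward a 1.0 pu setpoint;
# the integral term removes the steady-state voltage offset.
pid = PID(kp=2.0, ki=1.0, kd=0.05, dt=0.01)
v = 0.0
for _ in range(2000):                # 20 s of simulated time
    u = pid.update(1.0, v)
    v += (u - v) * 0.01              # crude plant: v' = u - v (tau = 1 s)
```

After the run the simulated voltage v has settled at the 1.0 pu setpoint.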

  2. Comparison of circular orbit and Fourier power series ephemeris representations for backup use by the upper atmosphere research satellite onboard computer

    NASA Technical Reports Server (NTRS)

    Kast, J. R.

    1988-01-01

    The Upper Atmosphere Research Satellite (UARS) is a three-axis stabilized Earth-pointing spacecraft in a low-Earth orbit. The UARS onboard computer (OBC) uses a Fourier Power Series (FPS) ephemeris representation that includes 42 position and 42 velocity coefficients per axis, with position residuals at 10-minute intervals. New coefficients and 32 hours of residuals are uploaded daily. This study evaluated two backup methods that permit the OBC to compute an approximate spacecraft ephemeris in the event that new ephemeris data cannot be uplinked for several days: (1) extending the use of the FPS coefficients previously uplinked, and (2) switching to a simple circular orbit approximation designed and tested (but not implemented) for LANDSAT-D. The FPS method provides greater accuracy during the backup period and does not require additional ground operational procedures for generating and uplinking an additional ephemeris table. The tradeoff is that the high accuracy of the FPS will be degraded slightly by adopting the longer fit period necessary to obtain backup accuracy for an extended period of time. The results for UARS show that extended use of the FPS is superior to the circular orbit approximation for short-term ephemeris backup.
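A Fourier power series ephemeris of the kind described stores, per axis, the coefficients of a truncated trigonometric series that the onboard computer evaluates at any epoch. A toy evaluation sketch (coefficient counts and values are hypothetical, not the 42-term UARS tables, and the residual corrections are omitted):

```python
import math

def fps_position(t, a0, cos_coef, sin_coef, period):
    """Evaluate a truncated Fourier series ephemeris for one axis:
    x(t) = a0 + sum_k [a_k*cos(k*w*t) + b_k*sin(k*w*t)], w = 2*pi/period."""
    omega = 2.0 * math.pi / period
    x = a0
    for k, (a, b) in enumerate(zip(cos_coef, sin_coef), start=1):
        x += a * math.cos(k * omega * t) + b * math.sin(k * omega * t)
    return x

# Toy series: a single first harmonic reproduces one coordinate of an
# idealized circular orbit (7000 km amplitude, 96-minute period).
x = fps_position(t=0.0, a0=0.0, cos_coef=[7000.0], sin_coef=[0.0], period=5760.0)
```

Half an orbit later the same series returns the opposite extreme, which is the sense in which extending the fit period trades peak accuracy for longer backup coverage.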

  3. On-shell effective field theory: A systematic tool to compute power corrections to the hard thermal loops

    NASA Astrophysics Data System (ADS)

    Manuel, Cristina; Soto, Joan; Stetina, Stephan

    2016-07-01

    We show that effective field theory techniques can be efficiently used to compute power corrections to the hard thermal loops in a high-temperature T expansion. To this aim, we use the recently proposed on-shell effective field theory, which describes the quantum fluctuations around on-shell degrees of freedom. We provide the on-shell effective field theory Lagrangian up to third order in the energy expansion for QED and use it for the computation of power corrections to the retarded photon polarization tensor for soft external momenta. Here soft denotes a scale of order eT, where e is the gauge coupling constant. We develop the necessary techniques to perform these computations and study the contributions to the polarization tensor proportional to e²T², e²T, and e²T⁰. The first one describes the hard thermal loop contribution, the second one vanishes, while the third one provides corrections of order e² to the soft photon propagation. We check that the results agree with the direct calculation from QED, up to local pieces, as expected in an effective field theory.

  4. Computational Assessment of the Aerodynamic Performance of a Variable-Speed Power Turbine for Large Civil Tilt-Rotor Application

    NASA Technical Reports Server (NTRS)

    Welch, Gerard E.

    2011-01-01

    The main rotors of the NASA Large Civil Tilt-Rotor notional vehicle operate over a wide speed range, from 100% at take-off to 54% at cruise. The variable-speed power turbine offers one approach by which to effect this speed variation. Key aero-challenges include high work factors at cruise and wide (40 to 60 deg.) incidence variations in blade and vane rows over the speed range. The turbine design approach must optimize cruise efficiency and minimize off-design penalties at take-off. The accuracy of the off-design incidence loss model is therefore critical to the turbine design. In this effort, 3-D computational analyses are used to assess the variation of turbine efficiency with speed change. The conceptual design of a 4-stage variable-speed power turbine for the Large Civil Tilt-Rotor application is first established at the meanline level. The design of 2-D airfoil sections and resulting 3-D blade and vane rows is documented. Three-dimensional Reynolds-Averaged Navier-Stokes computations are used to assess the design and off-design performance of an embedded 1.5-stage portion (Rotor 1, Stator 2, and Rotor 2) of the turbine. The 3-D computational results yield the same efficiency versus speed trends predicted by meanline analyses, supporting the design choice to execute the turbine design at the cruise operating speed.

  5. Computer-Aided Modeling and Analysis of Power Processing Systems (CAMAPPS). Phase 1: Users handbook

    NASA Technical Reports Server (NTRS)

    Kim, S.; Lee, J.; Cho, B. H.; Lee, F. C.

    1986-01-01

    The EASY5 macro component models developed for the spacecraft power system simulation are described. A brief explanation about how to use the macro components with the EASY5 Standard Components to build a specific system is given through an example. The macro components are ordered according to the following functional group: converter power stage models, compensator models, current-feedback models, constant frequency control models, load models, solar array models, and shunt regulator models. Major equations, a circuit model, and a program listing are provided for each macro component.

  6. A computer controlled power tool for the servicing of the Hubble Space Telescope

    NASA Technical Reports Server (NTRS)

    Richards, Paul W.; Konkel, Carl; Smith, Chris; Brown, Lee; Wagner, Ken

    1996-01-01

    The Hubble Space Telescope (HST) Pistol Grip Tool (PGT) is a self-contained, microprocessor-controlled, battery-powered, 3/8-inch-drive hand-held tool. The PGT is also a non-powered ratchet wrench. This tool will be used by astronauts during Extravehicular Activity (EVA) to apply torque to the HST and HST Servicing Support Equipment mechanical interfaces and fasteners. Numerous torque, speed, and turn or angle limits are programmed into the PGT for use during various missions. Batteries are replaceable during ground operations, Intravehicular Activities, and EVAs.

  7. The ALL-OUT Library; A Design for Computer-Powered, Multidimensional Services.

    ERIC Educational Resources Information Center

    Sleeth, Jim; LaRue, James

    1983-01-01

    Preliminary description of design of electronic library and home information delivery system highlights potentials of personal computer interface program (applying for service, assuring that users are valid, checking for measures, searching, locating titles) and incorporation of concepts used in other information systems (security checks,…

  8. A Fast Algorithm for Computing Binomial Coefficients Modulo Powers of Two

    PubMed Central

    2013-01-01

    I present a new algorithm for computing binomial coefficients modulo 2^N. The proposed method has an O(N³ · Multiplication(N) + N⁴) preprocessing time, after which a binomial coefficient C(P, Q) with 0 ≤ Q ≤ P ≤ 2^N − 1 can be computed modulo 2^N in O(N² · log(N) · Multiplication(N)) time. Multiplication(N) denotes the time complexity of multiplying two N-bit numbers, which can range from O(N²) to O(N · log(N) · log(log(N))) or better. Thus, the overall time complexity for evaluating M binomial coefficients C(P, Q) modulo 2^N with 0 ≤ Q ≤ P ≤ 2^N − 1 is O((N³ + M · N² · log(N)) · Multiplication(N) + N⁴). After preprocessing, we can actually compute binomial coefficients modulo any 2^R with R ≤ N. For larger values of P and Q, variations of Lucas' theorem must be used first in order to reduce the computation to the evaluation of multiple (O(log(P))) binomial coefficients C(P′, Q′) (or restricted types of factorials P′!) modulo 2^N with 0 ≤ Q′ ≤ P′ ≤ 2^N − 1. PMID:24348186
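The divisibility structure this kind of algorithm exploits can be illustrated with Kummer's theorem: the exponent of 2 dividing C(P, Q) equals the number of carries when adding Q and P − Q in base 2, which immediately tells you when C(P, Q) ≡ 0 (mod 2^N). A minimal reference sketch (not the paper's algorithm, which avoids big-integer arithmetic entirely):

```python
from math import comb

def carries_base2(a: int, b: int) -> int:
    """Count carries when adding a and b in base 2; by Kummer's theorem this
    equals the 2-adic valuation of C(a + b, a)."""
    carries, carry = 0, 0
    while a or b or carry:
        s = (a & 1) + (b & 1) + carry
        carry = s >> 1
        carries += carry
        a >>= 1
        b >>= 1
    return carries

def binom_mod_pow2(p: int, q: int, n: int) -> int:
    """C(p, q) modulo 2**n via Python big integers (reference check only)."""
    return comb(p, q) % (1 << n)

# C(10, 3) = 120 = 2**3 * 15: adding 3 and 7 in base 2 produces 3 carries,
# so C(10, 3) vanishes modulo 2**3 but not modulo 2**4.
v = carries_base2(3, 7)
```

Here v equals 3, binom_mod_pow2(10, 3, 3) is 0, and binom_mod_pow2(10, 3, 4) is 8 (that is, 120 mod 16).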

  9. Design analysis and computer-aided performance evaluation of shuttle orbiter electrical power system. Volume 1: Summary

    NASA Technical Reports Server (NTRS)

    1974-01-01

    Studies were conducted to develop appropriate space shuttle electrical power distribution and control (EPDC) subsystem simulation models and to apply the computer simulations to systems analysis of the EPDC. A previously developed software program (SYSTID) was adapted for this purpose. The following objectives were attained: (1) significant enhancement of the SYSTID time domain simulation software, (2) generation of functionally useful shuttle EPDC element models, and (3) illustrative simulation results in the analysis of EPDC performance, under the conditions of fault, current pulse injection due to lightning, and circuit protection sizing and reaction times.

  10. Selecting an Architecture for a Safety-Critical Distributed Computer System with Power, Weight and Cost Considerations

    NASA Technical Reports Server (NTRS)

    Torres-Pomales, Wilfredo

    2014-01-01

    This report presents an example of the application of multi-criteria decision analysis to the selection of an architecture for a safety-critical distributed computer system. The design problem includes constraints on minimum system availability and integrity, and the decision is based on the optimal balance of power, weight, and cost. The analysis process includes the generation of alternative architectures, evaluation of individual decision criteria, and the selection of an alternative based on overall value. In the example presented here, iterative application of the quantitative evaluation process made it possible to deliberately generate an alternative architecture that is superior to all others regardless of the relative importance of cost.
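The selection process described, hard availability/integrity constraints followed by a weighted balance of power, weight, and cost, can be sketched as a constraint filter plus a weighted sum. All names, thresholds, weights, and scores below are hypothetical, not values from the report:

```python
def best_architecture(alternatives, weights, min_availability=0.999):
    """Weighted-sum multi-criteria selection: alternatives failing the hard
    availability constraint are filtered out first; criteria values are
    assumed pre-normalized to [0, 1] with lower meaning better."""
    feasible = [a for a in alternatives if a["availability"] >= min_availability]

    def score(a):
        return sum(weights[c] * a[c] for c in weights)

    return min(feasible, key=score)

# Hypothetical candidate architectures (normalized power/weight/cost scores):
alts = [
    {"name": "A", "availability": 0.9995, "power": 0.6, "weight": 0.4, "cost": 0.5},
    {"name": "B", "availability": 0.9990, "power": 0.3, "weight": 0.7, "cost": 0.4},
    {"name": "C", "availability": 0.9900, "power": 0.1, "weight": 0.1, "cost": 0.1},
]
choice = best_architecture(alts, {"power": 0.5, "weight": 0.3, "cost": 0.2})
```

Architecture C scores best on every soft criterion but fails the availability constraint, so the weighted sum decides between A (0.52) and B (0.44), selecting B. Sweeping the weights, as the report does iteratively, reveals how sensitive the choice is to their relative importance.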

  11. Low-power hardware implementation of movement decoding for brain computer interface with reduced-resolution discrete cosine transform.

    PubMed

    Minho Won; Albalawi, Hassan; Xin Li; Thomas, Donald E

    2014-01-01

    This paper describes a low-power hardware implementation for movement decoding in a brain-computer interface. Our proposed hardware design is facilitated by two novel ideas: (i) an efficient feature extraction method based on a reduced-resolution discrete cosine transform (DCT), and (ii) a new hardware architecture of dual look-up tables to perform the discrete cosine transform without explicit multiplication. The proposed hardware implementation has been validated for movement decoding of electrocorticography (ECoG) signals by using a Xilinx FPGA Zynq-7000 board. It achieves more than 56× energy reduction over a reference design using band-pass filters for feature extraction. PMID:25570284
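Reduced-resolution DCT feature extraction amounts to keeping only the lowest-frequency DCT coefficients of each signal window and discarding the rest. A minimal software sketch of that idea (window values and the `keep` count are hypothetical; the paper's contribution is the multiplier-free dual look-up-table hardware, not this reference computation):

```python
import math

def dct2(signal):
    """Type-II DCT of a real signal (orthonormal scaling omitted for brevity)."""
    n = len(signal)
    return [sum(x * math.cos(math.pi * k * (2 * i + 1) / (2 * n))
                for i, x in enumerate(signal))
            for k in range(n)]

def reduced_resolution_features(window, keep=4):
    """Keep only the `keep` lowest-frequency DCT coefficients as the feature
    vector -- the 'reduced resolution' idea in the abstract."""
    return dct2(window)[:keep]

# One hypothetical 8-sample ECoG window:
window = [0.0, 1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0]
feats = reduced_resolution_features(window, keep=4)
```

The k = 0 coefficient is simply the window sum (28.0 here), and truncating to a few coefficients is what shrinks both the feature dimension and the hardware cost.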

  12. Computational assessment of the influence of the overlap ratio on the power characteristics of a Classical Savonius wind turbine

    NASA Astrophysics Data System (ADS)

    Kacprzak, Konrad; Sobczak, Krzysztof

    2015-09-01

    The influence of the overlap ratio on the performance of the Classical Savonius wind turbine was investigated. Unsteady two-dimensional numerical simulations were carried out for a wide range of overlap ratios. For selected configurations, computation quality was verified by comparison with three-dimensional simulations and the wind tunnel experimental data available in the literature. A satisfactory agreement was achieved. Power characteristics were determined for all the investigated overlap ratios at selected tip speed ratios. The obtained results indicate that the maximum device performance is achieved for a bucket overlap ratio close to 0.

  13. WE-D-BRF-05: Quantitative Dual-Energy CT Imaging for Proton Stopping Power Computation

    SciTech Connect

    Han, D; Williamson, J; Siebers, J

    2014-06-15

    Purpose: To extend the two-parameter separable basis-vector model (BVM) to estimation of proton stopping power from dual-energy CT (DECT) imaging. Methods: BVM assumes that the photon cross sections of any unknown material can be represented as a linear combination of the corresponding quantities for two bracketing basis materials. We show that both the electron density (ρe) and mean excitation energy (Iex) can be modeled by BVM, enabling stopping power to be estimated from the Bethe-Bloch equation. We have implemented an idealized post-processing dual energy imaging (pDECT) simulation consisting of monoenergetic 45 keV and 80 keV scanning beams with polystyrene-water and water-CaCl2 solution basis pairs for soft tissues and bony tissues, respectively. The coefficients of 24 standard ICRU tissue compositions were estimated by pDECT. The corresponding ρe, Iex, and stopping power tables were evaluated via BVM and compared to tabulated ICRU 44 reference values. Results: BVM-based pDECT was found to estimate ρe and Iex with average and maximum errors of 0.5% and 2%, respectively, for the 24 tissues. Proton stopping power values at 175 MeV show average/maximum errors of 0.8%/1.4%. For adipose, muscle, and bone, these errors result in range prediction errors of less than 1%. Conclusion: A new two-parameter separable DECT model (BVM) for estimating proton stopping power was developed. Compared to competing parametric-fit DECT models, BVM has comparable prediction accuracy without necessitating iterative solution of nonlinear equations or a sample-dependent empirical relationship between effective atomic number and Iex. Based on the proton BVM, an efficient iterative statistical DECT reconstruction model is under development.
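The Bethe-Bloch step referred to above maps the DECT-estimated (ρe, Iex) pair to a stopping power. A sketch of the stopping-power ratio to water from the uncorrected Bethe formula (no shell or density corrections; the water I-value of 75 eV and the tissue inputs are illustrative, not the paper's fitted values):

```python
import math

MEC2 = 0.511e6    # electron rest energy [eV]
MPC2 = 938.272e6  # proton rest energy [eV]

def beta_sq(kinetic_ev):
    """Squared proton velocity beta**2 from kinetic energy."""
    gamma = 1.0 + kinetic_ev / MPC2
    return 1.0 - 1.0 / gamma**2

def stopping_power_ratio(rho_e_rel, i_ev, kinetic_ev, i_water_ev=75.0):
    """Proton stopping-power ratio to water from the Bethe equation:
    SPR = rho_e * [ln(2*me*c^2*beta^2/((1-beta^2)*I)) - beta^2] / (same, I_water).
    rho_e_rel is electron density relative to water; illustrative only."""
    b2 = beta_sq(kinetic_ev)
    num = math.log(2.0 * MEC2 * b2 / (1.0 - b2) / i_ev) - b2
    den = math.log(2.0 * MEC2 * b2 / (1.0 - b2) / i_water_ev) - b2
    return rho_e_rel * num / den

# Sanity check at the abstract's 175 MeV: water against itself gives exactly 1.
spr = stopping_power_ratio(1.0, 75.0, 175e6)
```

A tissue with the same electron density but a lower mean excitation energy stops protons slightly faster than water, which is why the Iex estimate matters alongside ρe.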

  14. Definitions of non-stationary vibration power for time-frequency analysis and computational algorithms based upon harmonic wavelet transform

    NASA Astrophysics Data System (ADS)

    Heo, YongHwa; Kim, Kwang-joon

    2015-02-01

    While the vibration power for a set of harmonic force and velocity signals is well defined and known, it is not yet as well established for a set of stationary random force and velocity processes, although it can be found in the literature. In this paper, the definition of the vibration power for a set of non-stationary random force and velocity signals is derived for the purpose of a time-frequency analysis, based on the definitions of the vibration power for harmonic and stationary random signals. The non-stationary vibration power, defined as the short-time average of the product of the force and velocity over a given frequency range of interest, can be calculated by three methods: the Wigner-Ville distribution, the short-time Fourier transform, and the harmonic wavelet transform. The latter method is selected in this paper because band-pass filtering can be done without phase distortion, and the frequency ranges can be chosen very flexibly for the time-frequency analysis. Three algorithms for the time-frequency analysis of the non-stationary vibration power using the harmonic wavelet transform are discussed. The first is an algorithm for computation according to the full definition, while the others are approximate. Noting that the force and velocity decomposed into frequency ranges of interest by the harmonic wavelet transform are constructed with coefficients and basis functions, for the second algorithm it is suggested to prepare a table of time integrals of the product of the basis functions in advance, which are independent of the signals under analysis. How to prepare and utilize the integral table is presented. The third algorithm is based on an evolutionary spectrum. Applications of the algorithms to the time-frequency analysis of the vibration power transmitted from an excitation source to a receiver structure in a simple mechanical system consisting of a cantilever beam and a reaction wheel are presented for illustration.
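The core definition above, the short-time average of the force-velocity product, can be sketched directly with a moving average. The window length and signal amplitudes below are arbitrary, and the harmonic-wavelet band-pass decomposition that the paper builds on is omitted:

```python
import math

def short_time_power(force, velocity, window):
    """Non-stationary vibration power as the short-time (moving) average of
    the instantaneous product f(t) * v(t), over `window` samples."""
    prod = [f * v for f, v in zip(force, velocity)]
    half = window // 2
    out = []
    for i in range(len(prod)):
        seg = prod[max(0, i - half):i + half + 1]
        out.append(sum(seg) / len(seg))
    return out

# Harmonic sanity check: for f = F*cos(wt) and v = V*cos(wt) (in phase),
# the classical average power is F*V/2.
n = 1000
w = 2.0 * math.pi * 5.0                      # 5 Hz over a 1 s record
t = [i / n for i in range(n)]
f = [2.0 * math.cos(w * ti) for ti in t]     # F = 2
v = [3.0 * math.cos(w * ti) for ti in t]     # V = 3
p = short_time_power(f, v, window=200)       # window spans one full period
```

Away from the record edges the moving average recovers F·V/2 = 3, matching the harmonic definition the non-stationary one reduces to.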

  15. Systematic Computation of Nonlinear Cellular and Molecular Dynamics with Low-Power CytoMimetic Circuits: A Simulation Study

    PubMed Central

    Papadimitriou, Konstantinos I.; Stan, Guy-Bart V.; Drakakis, Emmanuel M.

    2013-01-01

    This paper presents a novel method for the systematic implementation of low-power microelectronic circuits aimed at computing nonlinear cellular and molecular dynamics. The method proposed is based on the Nonlinear Bernoulli Cell Formalism (NBCF), an advanced mathematical framework stemming from the Bernoulli Cell Formalism (BCF) originally exploited for the modular synthesis and analysis of linear, time-invariant, high dynamic range, logarithmic filters. Our approach identifies and exploits the striking similarities existing between the NBCF and coupled nonlinear ordinary differential equations (ODEs) typically appearing in models of naturally encountered biochemical systems. The resulting continuous-time, continuous-value, low-power CytoMimetic electronic circuits succeed in simulating fast and with good accuracy cellular and molecular dynamics. The application of the method is illustrated by synthesising for the first time microelectronic CytoMimetic topologies which simulate successfully: 1) a nonlinear intracellular calcium oscillations model for several Hill coefficient values and 2) a gene-protein regulatory system model. The dynamic behaviours generated by the proposed CytoMimetic circuits are compared and found to be in very good agreement with their biological counterparts. The circuits exploit the exponential law codifying the low-power subthreshold operation regime and have been simulated with realistic parameters from a commercially available CMOS process. They occupy an area of a fraction of a square-millimetre, while consuming between 1 and 12 microwatts of power. Simulations of fabrication-related variability results are also presented. PMID:23393550

  16. A Computational Method for Compressible Flows with Condensation in Power Plant Condensers

    NASA Astrophysics Data System (ADS)

    Takahashi, Fumio; Harada, Iwao

    A computational method for compressible flows with condensation was developed. Condensation was formulated by two thermodynamic equations of state for pressure and energy. These equations of state were simultaneously solved with the Euler equation and heat transfer equations. A finite volume method based on an approximate Riemann solver was adopted to solve the Euler equation. The computational method was applied to compressible flows in a condenser and a turbine exhaust hood. The flow regime changed widely from subsonic flow to transonic flow during a small decrease of cooling water temperature. Subcooling temperature from the annulus of the turbine blades to the condensate in the hot well was investigated. Results showed the subcooling temperature could be reduced by using an advanced steam guide which was designed to improve diffuser performance under widely changing conditions.

  17. A linear, separable two-parameter model for dual energy CT imaging of proton stopping power computation

    PubMed Central

    Han, Dong; Siebers, Jeffrey V.; Williamson, Jeffrey F.

    2016-01-01

    Purpose: To evaluate the accuracy and robustness of a simple, linear, separable, two-parameter model (basis vector model, BVM) in mapping proton stopping powers via dual energy computed tomography (DECT) imaging. Methods: The BVM assumes that photon cross sections (attenuation coefficients) of unknown materials are linear combinations of the corresponding radiological quantities of dissimilar basis substances (i.e., polystyrene, CaCl2 aqueous solution, and water). The authors have extended this approach to the estimation of electron density and mean excitation energy, which are required parameters for computing proton stopping powers via the Bethe–Bloch equation. The authors compared the stopping power estimation accuracy of the BVM with that of a nonlinear, nonseparable photon cross section Torikoshi parametric fit model (VCU tPFM) as implemented by the authors and by Yang et al. ["Theoretical variance analysis of single- and dual-energy computed tomography methods for calculating proton stopping power ratios of biological tissues," Phys. Med. Biol. 55, 1343–1362 (2010)]. Using an idealized monoenergetic DECT imaging model, proton ranges estimated by the BVM, VCU tPFM, and Yang tPFM were compared to International Commission on Radiation Units and Measurements (ICRU) published reference values. The robustness of the stopping power prediction accuracy to tissue composition variations was assessed for both the BVM and the VCU tPFM. The sensitivity of accuracy to CT image uncertainty was also evaluated. Results: Based on the authors' idealized, error-free DECT imaging model, the root-mean-square error of BVM proton stopping power estimation for 175 MeV protons relative to ICRU reference values for 34 ICRU standard tissues is 0.20%, compared to 0.23% and 0.68% for the Yang and VCU tPFM models, respectively. The range estimation errors were less than 1 mm for both the BVM and Yang tPFM models. The BVM estimation accuracy is not dependent on tissue type

  18. Research on computer-aided design of modern marine power systems

    NASA Astrophysics Data System (ADS)

    Ding, Dongdong; Zeng, Fanming; Chen, Guojun

    2004-03-01

    To make the MPS (Marine Power System) design process more economical and easier, a new CAD scheme is brought forward which takes advantage of VR (Virtual Reality) and AI (Artificial Intelligence) technologies. This CAD system can shorten the design period and greatly reduce the reliance on designers' experience. Some key issues, such as the selection of hardware and software for such a system, are also discussed.

  19. Logarithmic divergences in the k-inflationary power spectra computed through the uniform approximation

    NASA Astrophysics Data System (ADS)

    Alinea, Allan L.; Kubota, Takahiro; Naylor, Wade

    2016-02-01

    We investigate a calculation method for solving the Mukhanov-Sasaki equation in slow-roll k-inflation based on the uniform approximation (UA) in conjunction with an expansion scheme for slow-roll parameters with respect to the number of e-folds about the so-called turning point. Earlier works on this method have so far gained some promising results derived from the approximating expressions for the power spectra among others, up to second order with respect to the Hubble and sound flow parameters, when compared to other semi-analytical approaches (e.g., Green's function and WKB methods). However, a closer inspection suggests a problem when higher-order parts of the power spectra are considered: residual logarithmic divergences may appear that can render the prediction physically inconsistent. Looking at this possibility, we map out up to what order with respect to the mentioned parameters several physical quantities can be calculated before hitting a logarithmically divergent result. It turns out that the power spectra are limited up to second order, the tensor-to-scalar ratio up to third order, and the spectral indices and running converge to all orders. This indicates that the expansion scheme is incompatible with the working equations derived from UA for the power spectra but compatible with that of the spectral indices. For those quantities that involve logarithmically divergent terms in the higher-order parts, existing results in the literature for the convergent lower-order parts calculated in the equivalent fashion should be viewed with some caution; they do not rest on solid mathematical ground.

  20. The Meaning and Computation of Causal Power: Comment on Cheng (1997) and Novick and Cheng (2004)

    ERIC Educational Resources Information Center

    Luhmann, Christian C.; Ahn, Woo-kyoung

    2005-01-01

    D. Hume (1739/1987) argued that causality is not observable. P. W. Cheng claimed to present "a theoretical solution to the problem of causal induction first posed by Hume more than two and a half centuries ago" (p. 398) in the form of the power PC theory (L. R. Novick & P. W. Cheng). This theory claims that people's goal in causal induction is to…
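Cheng's power PC theory, which the comment above debates, estimates generative causal power as the contingency ΔP normalized by the headroom the base rate leaves for the cause to act. A one-line sketch (the probabilities are illustrative, not data from either paper):

```python
def causal_power(p_e_given_c, p_e_given_not_c):
    """Cheng's (1997) generative causal power:
    (P(e|c) - P(e|not c)) / (1 - P(e|not c))."""
    return (p_e_given_c - p_e_given_not_c) / (1.0 - p_e_given_not_c)

# If the effect occurs 75% of the time with the candidate cause present and
# 50% of the time without it, the estimated generative power is 0.5:
w = causal_power(0.75, 0.50)
```

The same ΔP of 0.25 yields a larger power estimate when the base rate is higher, which is precisely the normalization whose interpretation Luhmann and Ahn question.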

  1. New approach for precise computation of Lyman-α forest power spectrum with hydrodynamical simulations

    NASA Astrophysics Data System (ADS)

    Borde, Arnaud; Palanque-Delabrouille, Nathalie; Rossi, Graziano; Viel, Matteo; Bolton, James S.; Yèche, Christophe; LeGoff, Jean-Marc; Rich, Jim

    2014-07-01

    Current experiments are providing measurements of the flux power spectrum from the Lyman-α forests observed in quasar spectra with unprecedented accuracy. Their interpretation in terms of cosmological constraints requires specific simulations of at least equivalent precision. In this paper, we present a suite of cosmological N-body simulations with cold dark matter and baryons, specifically aiming at modeling the low-density regions of the inter-galactic medium as probed by the Lyman-α forests at high redshift. The simulations were run using the GADGET-3 code and were designed to match the requirements imposed by the quality of the current SDSS-III/BOSS or forthcoming SDSS-IV/eBOSS data. They are made using either 2 × 768³ ≃ 1 billion or 2 × 192³ ≃ 14 million particles, spanning volumes ranging from (25 Mpc h⁻¹)³ for high-resolution simulations to (100 Mpc h⁻¹)³ for large-volume ones. Using a splicing technique, the resolution is further enhanced to reach the equivalent of simulations with 2 × 3072³ ≃ 58 billion particles in a (100 Mpc h⁻¹)³ box size, i.e. a mean mass per gas particle of 1.2 × 10⁵ M☉ h⁻¹. We show that the resulting power spectrum is accurate at the 2% level over the full range from a few Mpc to several tens of Mpc. We explore the effect on the one-dimensional transmitted-flux power spectrum of four cosmological parameters (nₛ, σ₈, Ωₘ and H₀) and two astrophysical parameters (T₀ and γ) that are related to the heating rate of the intergalactic medium. By varying the input parameters around a central model chosen to be in agreement with the latest Planck results, we built a grid of simulations that allows the study of the impact on the flux power spectrum of these six relevant parameters. We improve upon previous studies by not only measuring the effect of each parameter individually, but also probing the impact of the simultaneous variation of each pair of parameters. We thus provide a full second-order expansion, including
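A one-dimensional flux power spectrum of the kind measured here is, at its core, the squared modulus of the Fourier modes of the fluctuation field. A minimal periodogram sketch (normalization conventions vary between analyses; the box size, mode count, and amplitude below are illustrative, not survey values):

```python
import cmath
import math

def power_spectrum_1d(delta, box_size):
    """Periodogram P(k) = L * |delta_k|**2 of a periodic fluctuation field
    sampled on N points in a box of length L, where
    delta_k = (1/N) * sum_i delta_i * exp(-2j*pi*m*i/N)."""
    n = len(delta)
    spectrum = []
    for m in range(1, n // 2):  # skip the k = 0 (mean) mode
        dk = sum(d * cmath.exp(-2j * math.pi * m * i / n)
                 for i, d in enumerate(delta)) / n
        k = 2.0 * math.pi * m / box_size
        spectrum.append((k, box_size * abs(dk) ** 2))
    return spectrum

# A single sine mode of amplitude A puts power L * A**2 / 4 into its bin:
n, box, amp = 64, 100.0, 0.1
delta = [amp * math.sin(2.0 * math.pi * 3 * i / n) for i in range(n)]
spec = power_spectrum_1d(delta, box)  # spec[2] is the m = 3 bin
```

With L = 100 and A = 0.1 the m = 3 bin carries power 0.25 and every other bin is numerically zero; production pipelines do the same operation with FFTs on the spliced simulation grids.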

  2. COMMIX-PPC: A three-dimensional transient multicomponent computer program for analyzing performance of power plant condensers

    SciTech Connect

    Chien, T.H.; Domanus, H.M.; Sha, W.T.

    1993-02-01

    The COMMIX-PPC computer program is an extended and improved version of earlier COMMIX codes and is specifically designed for evaluating the thermal performance of power plant condensers. The COMMIX codes are general-purpose computer programs for the analysis of fluid flow and heat transfer in complex industrial systems. In COMMIX-PPC, two major features have been added to previously published COMMIX codes. One feature is the incorporation of one-dimensional conservation of mass, momentum, and energy equations on the tube side, and the proper accounting for the thermal interaction between shell and tube side through the porous medium approach. The other added feature is the extension of the three-dimensional conservation equations for shell-side flow to treat the flow of a multicomponent medium. COMMIX-PPC is designed to perform steady-state and transient three-dimensional analysis of fluid flow with heat transfer in a power plant condenser. However, the code is designed in a generalized fashion so that, with some modification, it can be used to analyze processes in any heat exchanger or other single-phase engineering applications.

  3. COMMIX-PPC: A three-dimensional transient multicomponent computer program for analyzing performance of power plant condensers

    SciTech Connect

    Chien, T.H.; Domanus, H.M.; Sha, W.T.

    1993-02-01

    The COMMIX-PPC computer program is an extended and improved version of earlier COMMIX codes and is specifically designed for evaluating the thermal performance of power plant condensers. The COMMIX codes are general-purpose computer programs for the analysis of fluid flow and heat transfer in complex industrial systems. In COMMIX-PPC, two major features have been added to previously published COMMIX codes. One feature is the incorporation of one-dimensional equations of conservation of mass, momentum, and energy on the tube side and the proper accounting for the thermal interaction between shell and tube side through the porous-medium approach. The other added feature is the extension of the three-dimensional conservation equations for shell-side flow to treat the flow of a multicomponent medium. COMMIX-PPC is designed to perform steady-state and transient, three-dimensional analysis of fluid flow with heat transfer in a power plant condenser. However, the code is designed in a generalized fashion so that, with some modification, it can be used to analyze processes in any heat exchanger or other single-phase engineering applications. Volume I (Equations and Numerics) of this report describes in detail the basic equations, formulation, solution procedures, and models for the phenomena. Volume II (User's Guide and Manual) contains the input instructions, flow charts, sample problems, and descriptions of available options and boundary conditions.

  4. Confidence Intervals, Power Calculation, and Sample Size Estimation for the Squared Multiple Correlation Coefficient under the Fixed and Random Regression Models: A Computer Program and Useful Standard Tables.

    ERIC Educational Resources Information Center

    Mendoza, Jorge L.; Stafford, Karen L.

    2001-01-01

    Introduces a computer package written for Mathematica, the purpose of which is to perform a number of difficult iterative functions with respect to the squared multiple correlation coefficient under the fixed and random models. These functions include computation of the confidence interval upper and lower bounds, power calculation, calculation of…

  5. High-power graphic computers for visual simulation: a real-time--rendering revolution

    NASA Technical Reports Server (NTRS)

    Kaiser, M. K.

    1996-01-01

    Advances in high-end graphics computers in the past decade have made it possible to render visual scenes of incredible complexity and realism in real time. These new capabilities make it possible to manipulate and investigate the interactions of observers with their visual world in ways once only dreamed of. This paper reviews how these developments have affected two preexisting domains of behavioral research (flight simulation and motion perception) and have created a new domain (virtual environment research) which provides tools and challenges for the perceptual psychologist. Finally, the current limitations of these technologies are considered, with an eye toward how perceptual psychologists might shape future developments.

  6. Emergent Power-Law Phase in the 2D Heisenberg Windmill Antiferromagnet: A Computational Experiment.

    PubMed

    Jeevanesan, Bhilahari; Chandra, Premala; Coleman, Piers; Orth, Peter P

    2015-10-23

    In an extensive computational experiment, we test Polyakov's conjecture that under certain circumstances an isotropic Heisenberg model can develop algebraic spin correlations. We demonstrate the emergence of a multispin U(1) order parameter in a Heisenberg antiferromagnet on interpenetrating honeycomb and triangular lattices. The correlations of this relative phase angle are observed to decay algebraically at intermediate temperatures in an extended critical phase. Using finite-size scaling we show that both phase transitions are of the Berezinskii-Kosterlitz-Thouless type, and at lower temperatures we find long-range Z(6) order. PMID:26551137

  7. Food additives

    MedlinePlus

    Food additives are substances that become part of a food product when they are added during the processing or making of that food. "Direct" food additives are often added during processing to: Add nutrients ...

  8. Four-state straintronics: Ultra low-power collective nanomagnetic computing using multiferroics with biaxial anisotropy

    NASA Astrophysics Data System (ADS)

    D'Souza, Noel; Atulasimha, Jayasimha; Bandyopadhyay, Supriyo

    2012-02-01

    Two-phase multiferroic nanomagnets, consisting of elastically coupled magnetostrictive/piezoelectric layers, can be endowed with four stable magnetization states by introducing biaxial magnetocrystalline anisotropy in the magnetostrictive layer. These states can encode four logic bits. We show through extensive modeling that dipole coupling between such 4-state magnets, combined with stress sequences that appropriately modulate the energy barriers between the stable states through magnetoelastic coupling, can be used to realize 4-state NOR logic (J. Phys. D: Appl. Phys. 44, 265001 (2011)) as well as unidirectional propagation of logic bits along a "wire" of nanomagnets (arXiv:1105.1818). As very little energy is consumed to "compute" in such a system, this could emerge as an ultra-efficient computing paradigm with high logic density. We show, by solving the Landau-Lifshitz-Gilbert (LLG) equation, that such nanomagnet arrays can be used for ultrafast image reconstruction and pattern recognition that go beyond simple Boolean logic. The image processing attribute is derived from the thermodynamic evolution in time, without involving any software. This work is supported by the NSF under grant ECCS-1124714 and VCU under PRIP.
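The Landau-Lifshitz-Gilbert dynamics mentioned in the abstract can be sketched for a single macrospin as below. The damping value, field, and explicit Euler scheme are illustrative assumptions; the cited work couples many such spins through dipole fields and stress-dependent anisotropy.

```python
import numpy as np

GAMMA = 1.76e11  # electron gyromagnetic ratio, rad/(s*T)
ALPHA = 0.1      # Gilbert damping parameter (assumed value)

def llg_step(m, h_eff, dt):
    """One explicit Euler step of the LLG equation for a unit magnetization m:
    dm/dt = -gamma/(1+alpha^2) * [m x H + alpha * m x (m x H)]."""
    pre = -GAMMA / (1.0 + ALPHA ** 2)
    m_x_h = np.cross(m, h_eff)
    dmdt = pre * (m_x_h + ALPHA * np.cross(m, m_x_h))
    m_new = m + dt * dmdt
    return m_new / np.linalg.norm(m_new)  # renormalize to keep |m| = 1
```

In an array simulation, `h_eff` for each magnet would sum the anisotropy, dipole, and stress-induced contributions before each step.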

  9. Kerman Photovoltaic Power Plant R&D data collection computer system operations and maintenance

    SciTech Connect

    Rosen, P.B.

    1994-06-01

    The Supervisory Control and Data Acquisition (SCADA) system at the Kerman PV Plant monitors 52 analog, 44 status, 13 control, and 4 accumulator data points in real-time. A Remote Terminal Unit (RTU) polls 7 peripheral data acquisition units that are distributed throughout the plant once every second, and stores all analog, status, and accumulator points that have changed since the last scan. The R&D Computer, which is connected to the SCADA RTU via a RS-232 serial link, polls the RTU once every 5-7 seconds and records any values that have changed since the last scan. A SCADA software package called RealFlex runs on the R&D computer and stores all updated data values taken from the RTU, along with a time-stamp for each, in a historical real-time database. From this database, averages of all analog data points and snapshots of all status points are generated every 10 minutes and appended to a daily file. These files are downloaded via modem by PVUSA/Davis staff every day, and the data is placed into the PVUSA database.
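The 10-minute aggregation step described above (averages of analog points appended to a daily file) can be sketched as follows. The record layout and field names are illustrative assumptions, not the RealFlex format.

```python
from collections import defaultdict

def aggregate(samples, window_s=600):
    """Average analog samples into fixed windows (600 s = 10 minutes).

    samples: iterable of (timestamp_s, point_name, value) tuples.
    Returns {window_start_s: {point_name: mean_value}}.
    """
    buckets = defaultdict(lambda: defaultdict(list))
    for t, name, v in samples:
        buckets[int(t // window_s) * window_s][name].append(v)
    return {w: {n: sum(vs) / len(vs) for n, vs in pts.items()}
            for w, pts in buckets.items()}
```

Status points would instead take a snapshot (the last value in each window) rather than a mean.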

  10. The power of virtual integration: an interview with Dell Computer's Michael Dell. Interview by Joan Magretta.

    PubMed

    Dell, M

    1998-01-01

    Michael Dell started his computer company in 1984 with a simple business insight. He could bypass the dealer channel through which personal computers were then being sold and sell directly to customers, building products to order. Dell's direct model eliminated the dealer's markup and the risks associated with carrying large inventories of finished goods. In this interview, Michael Dell provides a detailed description of how his company is pushing that business model one step further, toward what he calls virtual integration. Dell is using technology and information to blur the traditional boundaries in the value chain between suppliers, manufacturers, and customers. The individual pieces of Dell's strategy--customer focus, supplier partnerships, mass customization, just-in-time manufacturing--may all be familiar. But Michael Dell's business insight into how to combine them is highly innovative. Direct relationships with customers create valuable information, which in turn allows the company to coordinate its entire value chain back through manufacturing to product design. Dell describes how his company has come to achieve this tight coordination without the "drag effect" of ownership. Dell reaps the advantages of being vertically integrated without incurring the costs, all the while achieving the focus, agility, and speed of a virtual organization. As envisioned by Michael Dell, virtual integration may well become a new organizational model for the information age. PMID:10177868

  11. Piezoelectronics: a novel, high-performance, low-power computer switching technology

    NASA Astrophysics Data System (ADS)

    Newns, D. M.; Martyna, G. J.; Elmegreen, B. G.; Liu, X.-H.; Theis, T. N.; Trolier-McKinstry, S.

    2012-06-01

    Current switching speeds in CMOS technology have saturated since 2003 due to power constraints arising from the inability of line voltage to be further lowered in CMOS below about 1 V. We are developing a novel switching technology based on piezoelectrically transducing the input or gate voltage into an acoustic wave which compresses a piezoresistive (PR) material forming the device channel. Under pressure the PR undergoes an insulator-to-metal transition which makes the channel conducting, turning on the device. A piezoelectric (PE) transducer material with a high piezoelectric coefficient, e.g. a domain-engineered relaxor piezoelectric, is needed to achieve low voltage operation. Suitable channel materials manifesting a pressure-induced metal-insulator transition can be found amongst rare earth chalcogenides, transition metal oxides, etc. Mechanical requirements include a high PE/PR area ratio to step up pressure, a rigid surround material to constrain the PE and PR external boundaries normal to the strain axis, and a void space to enable free motion of the component side walls. Using static mechanical modeling and dynamic electroacoustic simulations, we optimize device structure and materials and predict performance. The device, termed a PiezoElectronic Transistor (PET), can be used to build complete logic circuits including inverters, flip-flops, and gates. This "Piezotronic" logic is predicted to have a combination of low power and high speed operation.

  12. SUNBURN: A computer code for evaluating the economic viability of hybrid solar central receiver electric power plants

    SciTech Connect

    Chiang, C.J.

    1987-06-01

    The computer program SUNBURN simulates the annual performance of solar-only, solar-hybrid, and fuel-only electric power plants. SUNBURN calculates the levelized value of electricity generated by, and the levelized cost of, these plants. Central receiver solar technology is represented, with molten salt as the receiver coolant and thermal storage medium. For each hour of a year, the thermal energy use, or dispatch, strategy of SUNBURN maximizes the value of electricity by operating the turbine when the demand for electricity is greatest and by minimizing overflow of thermal storage. Fuel is burned to augment solar energy if the value of electricity generated by using fuel is greater than the cost of the fuel consumed. SUNBURN was used to determine the optimal power plant configuration, based on value-to-cost ratio, for dates of initial plant operation from 1990 to 1998. The turbine size for all plants was 80 MWe net. Before 1994, fuel-only was found to be the preferred plant configuration. After 1994, a solar-only plant was found to have the greatest value-to-cost ratio. A hybrid configuration was never found to be better than both fuel-only and solar-only configurations. The value of electricity was calculated as the Southern California Edison Company's avoided generation costs of electricity. These costs vary with time of day. Utility ownership of the power plants was assumed. The simulation was performed using weather data recorded in Barstow, California, in 1984.
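The hourly dispatch strategy described above reduces to two simple rules: burn fuel only when the electricity it produces is worth more than the fuel, and run the turbine in high-value hours or to avoid storage overflow. A minimal sketch, with all function names and numbers as illustrative assumptions rather than SUNBURN's actual logic:

```python
def burn_fuel(value_per_mwh, fuel_cost_per_mwh):
    """Fuel augments solar only when it pays for itself."""
    return value_per_mwh > fuel_cost_per_mwh

def dispatch_hour(storage_mwh, inflow_mwh, capacity_mwh, turbine_mwh, high_demand):
    """One hour of thermal dispatch; returns (generation_mwh, new_storage_mwh)."""
    storage = storage_mwh + inflow_mwh
    gen = 0.0
    # operate the turbine when electricity value is high, or to avoid overflow
    if high_demand or storage > capacity_mwh:
        gen = min(turbine_mwh, storage)
        storage -= gen
    # any thermal energy beyond storage capacity is lost (overflow)
    return gen, min(storage, capacity_mwh)
```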

  13. Food additives

    PubMed Central

    Spencer, Michael

    1974-01-01

    Food additives are discussed from the food technology point of view. The reasons for their use are summarized: (1) to protect food from chemical and microbiological attack; (2) to even out seasonal supplies; (3) to improve their eating quality; (4) to improve their nutritional value. The various types of food additives are considered, e.g. colours, flavours, emulsifiers, bread and flour additives, preservatives, and nutritional additives. The paper concludes with consideration of those circumstances in which the use of additives is (a) justified and (b) unjustified. PMID:4467857

  14. Computational chemistry

    NASA Technical Reports Server (NTRS)

    Arnold, J. O.

    1987-01-01

    With the advent of supercomputers and modern computational chemistry algorithms and codes, a powerful tool was created to help fill NASA's continuing need for information on the properties of matter in hostile or unusual environments. Computational resources provided under the National Aerodynamics Simulator (NAS) program were a cornerstone for recent advancements in this field. Properties of gases, materials, and their interactions can be determined from solutions of the governing equations. In the case of gases, for example, radiative transition probabilities per particle, bond-dissociation energies, and rates of simple chemical reactions can be determined computationally as reliably as from experiment. The data are proving to be quite valuable in providing inputs to real-gas flow simulation codes used to compute aerothermodynamic loads on NASA's aeroassist orbital transfer vehicles and a host of problems related to the National Aerospace Plane Program. Although more approximate, similar solutions can be obtained for ensembles of atoms simulating small particles of materials with and without the presence of gases. Computational chemistry has application in studying catalysis and the properties of polymers, all of interest to various NASA missions, including those previously mentioned. In addition to discussing these applications of computational chemistry within NASA, the governing equations and the need for supercomputers for their solution are outlined.

  15. Electronic stopping power calculation for water under the Lindhard formalism for application in proton computed tomography

    NASA Astrophysics Data System (ADS)

    Guerrero, A. F.; Mesa, J.

    2016-07-01

    Because of the behavior that charged particles exhibit when they interact with biological material, proton therapy is shaping the future of radiation therapy in cancer treatment. The planning of radiation therapy is made up of several stages. The first is the diagnostic image, which gives an idea of the density, size and type of tumor being treated; to interpret this it is important to know how the particle beam interacts with the tissue. In this work, by using the Lindhard formalism and the Y.R. Waghmare model for the charge distribution of the proton, the electronic stopping power (SP) for a proton beam interacting with a liquid water target in the proton energy range 10¹ eV - 10¹⁰ eV, taking into account all the charge states, is calculated.

  16. Information Power Grid: Distributed High-Performance Computing and Large-Scale Data Management for Science and Engineering

    NASA Technical Reports Server (NTRS)

    Johnston, William E.; Gannon, Dennis; Nitzberg, Bill

    2000-01-01

    We use the term "Grid" to refer to distributed, high performance computing and data handling infrastructure that incorporates geographically and organizationally dispersed, heterogeneous resources that are persistent and supported. This infrastructure includes: (1) Tools for constructing collaborative, application oriented Problem Solving Environments / Frameworks (the primary user interfaces for Grids); (2) Programming environments, tools, and services providing various approaches for building applications that use aggregated computing and storage resources, and federated data sources; (3) Comprehensive and consistent set of location independent tools and services for accessing and managing dynamic collections of widely distributed resources: heterogeneous computing systems, storage systems, real-time data sources and instruments, human collaborators, and communications systems; (4) Operational infrastructure including management tools for distributed systems and distributed resources, user services, accounting and auditing, strong and location independent user authentication and authorization, and overall system security services. The vision for NASA's Information Power Grid - a computing and data Grid - is that it will provide significant new capabilities to scientists and engineers by facilitating routine construction of information based problem solving environments / frameworks. Such Grids will knit together widely distributed computing, data, instrument, and human resources into just-in-time systems that can address complex and large-scale computing and data analysis problems.
Examples of these problems include: (1) Coupled, multidisciplinary simulations too large for single systems (e.g., multi-component NPSS turbomachine simulation); (2) Use of widely distributed, federated data archives (e.g., simultaneous access to meteorological, topological, aircraft performance, and flight path scheduling databases supporting a National Air Space Simulation system); (3

  17. Optical computing for application to reducing the thickness of high-power-composite lenses.

    PubMed

    Wu, Bo-Wen

    2014-10-10

    With the adoption of polycarbonate lens material for injection molding of greater accuracy and at lower costs, polycarbonate has become very suitable for mass production of more economical products, such as diving goggles. However, with increasing requirements for visual quality, not only the refractive function of a lens but also its thickness and spherical aberration are gradually being taken more seriously. For a high-power-composite lens, meanwhile, the thickness cannot be substantially reduced, and there is also the issue of severe spherical aberration at the lens edges. In order to increase the added value of the product without changing the material, the present research applied the eye model and the Taguchi experiment method, combined with design optimization for a hyperbolic-aspherical lens, to significantly reduce the lens thickness by more than 30%, outperforming the average thickness reduction of general aspherical lenses. The spherical aberration at the lens edges was also reduced effectively during the optimization process for the nonspherical lens. Prototypes made by super-finishing machines were among the results of the experiment. This new application can be used in making a large number of injection molds to substantially increase the economic value of the product. PMID:25322434

  18. Computational Work to Support FAP/SRW Variable-Speed Power-Turbine Development

    NASA Technical Reports Server (NTRS)

    Ameri, Ali A.

    2012-01-01

    The purpose of this report is to document the work done to enable a NASA CFD code to model transition on a blade. The present work aims to down-select a transition model that allows the flow through a Variable-Speed Power-Turbine (VSPT) to be simulated accurately. The modeling is ultimately to account for blade-row interactions and their effect on transition, and therefore to account accurately for losses. The present work is limited to steady flows. The low Reynolds number k-omega model of Wilcox and a modified version of the same are used to model transition against experimentally measured blade pressure and heat transfer. It is shown that the k-omega model and its modified variant fail to simulate the transition with any degree of accuracy. A case is therefore made for more accurate transition models. Three-equation models based on the work of Mayle on Laminar Kinetic Energy were explored, and the Walters and Leylek model, which was thought to be in a more mature state of development, is introduced and implemented in the Glenn-HT code. Two-dimensional flat plate results and three-dimensional results for flow over turbine blades and the resulting heat transfer and its transitional behavior are reported. It is shown that the transition simulation is much improved over the baseline k-omega model.

  19. Nuclear power plant human computer interface design incorporating console simulation, operations personnel, and formal evaluation techniques

    SciTech Connect

    Chavez, C.; Edwards, R.M.; Goldberg, J.H.

    1993-12-31

    New CRT-based information displays which enhance the human machine interface are playing a very important role and are being increasingly used in control rooms since they present a higher degree of flexibility compared to conventional hardwired instrumentation. To prototype a new console configuration and information display system at the Experimental Breeder Reactor II (EBR-II), an iterative process of console simulation and evaluation involving operations personnel is being pursued. Entire panels including selector switches and information displays are simulated and driven by plant dynamical simulations with realistic responses that reproduce the actual cognitive and physical environment. Careful analysis and formal evaluation of operator interaction while using the simulated console will be conducted to determine underlying principles for effective control console design for this particular group of operation personnel. Additional iterations of design, simulation, and evaluation will then be conducted as necessary.

  20. 18 CFR 33.3 - Additional information requirements for applications involving horizontal competitive impacts.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... the horizontal Competitive Analysis Screen. (3) The applicant may use a computer model to complete one... requirements for applications involving horizontal competitive impacts. 33.3 Section 33.3 Conservation of Power... FEDERAL POWER ACT APPLICATIONS UNDER FEDERAL POWER ACT SECTION 203 § 33.3 Additional...

  1. Computational mechanics

    SciTech Connect

    Goudreau, G.L.

    1993-03-01

    The Computational Mechanics thrust area sponsors research into the underlying solid, structural and fluid mechanics and heat transfer necessary for the development of state-of-the-art general purpose computational software. The scale of computational capability spans office workstations, departmental computer servers, and Cray-class supercomputers. The DYNA, NIKE, and TOPAZ codes have achieved world fame through our broad collaborators program, in addition to their strong support of on-going Lawrence Livermore National Laboratory (LLNL) programs. Several technology transfer initiatives have been based on these established codes, teaming LLNL analysts and researchers with counterparts in industry, extending code capability to specific industrial interests of casting, metalforming, and automobile crash dynamics. The next-generation solid/structural mechanics code, ParaDyn, is targeted toward massively parallel computers, which will extend performance from gigaflop to teraflop power. Our work for FY-92 is described in the following eight articles: (1) Solution Strategies: New Approaches for Strongly Nonlinear Quasistatic Problems Using DYNA3D; (2) Enhanced Enforcement of Mechanical Contact: The Method of Augmented Lagrangians; (3) ParaDyn: New Generation Solid/Structural Mechanics Codes for Massively Parallel Processors; (4) Composite Damage Modeling; (5) HYDRA: A Parallel/Vector Flow Solver for Three-Dimensional, Transient, Incompressible Viscous Flow; (6) Development and Testing of the TRIM3D Radiation Heat Transfer Code; (7) A Methodology for Calculating the Seismic Response of Critical Structures; and (8) Reinforced Concrete Damage Modeling.

  2. QXP: powerful, rapid computer algorithms for structure-based drug design.

    PubMed

    McMartin, C; Bohacek, R S

    1997-07-01

    New methods for docking, template fitting and building pseudo-receptors are described. Full conformational searches are carried out for flexible cyclic and acyclic molecules. QXP (quick explore) search algorithms are derived from the method of Monte Carlo perturbation with energy minimization in Cartesian space. An additional fast search step is introduced between the initial perturbation and energy minimization. The fast search produces approximate low-energy structures, which are likely to minimize to a low energy. For template fitting, QXP uses a superposition force field which automatically assigns short-range attractive forces to similar atoms in different molecules. The docking algorithms were evaluated using X-ray data for 12 protein-ligand complexes. The ligands had up to 24 rotatable bonds and ranged from highly polar to mostly nonpolar. Docking searches of the randomly disordered ligands gave rms differences between the lowest energy docked structure and the energy-minimized X-ray structure, of less than 0.76 Å for 10 of the ligands. For all the ligands, the rms difference between the energy-minimized X-ray structure and the closest docked structure was less than 0.4 Å, when parts of one of the molecules which are in the solvent were excluded from the rms calculation. Template fitting was tested using four ACE inhibitors. Three ACE templates have been previously published. A single run using QXP generated a series of templates which contained examples of each of the three. A pseudo-receptor, complementary to an ACE template, was built out of small molecules, such as pyrrole, cyclopentanone and propane. When individually energy minimized in the pseudo-receptor, each of the four ACE inhibitors moved with an rms of less than 0.25 Å. After random perturbation, the inhibitors were docked into the pseudo-receptor. Each lowest energy docked structure matched the energy-minimized geometry with an rms of less than 0.08 Å. 
Thus, the pseudo-receptor shows steric and
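The rms differences quoted throughout this abstract are root-mean-square deviations between matched atom positions of two structures. A minimal sketch (no superposition/alignment step, which the full evaluation would include; the function name is an assumption):

```python
import math

def rmsd(coords_a, coords_b):
    """RMS deviation between corresponding atoms of two structures.

    coords_a, coords_b: equal-length lists of (x, y, z) positions in angstroms.
    """
    assert len(coords_a) == len(coords_b), "structures must have matched atoms"
    sq = sum((ax - bx) ** 2 + (ay - by) ** 2 + (az - bz) ** 2
             for (ax, ay, az), (bx, by, bz) in zip(coords_a, coords_b))
    return math.sqrt(sq / len(coords_a))
```

Excluding solvent-exposed atoms, as the authors do, simply means filtering both coordinate lists to the retained atoms before calling this.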

  3. Food additives.

    PubMed

    Berglund, F

    1978-01-01

    The use of additives to food fulfils many purposes, as shown by the index issued by the Codex Committee on Food Additives: Acids, bases and salts; Preservatives; Antioxidants and antioxidant synergists; Anticaking agents; Colours; Emulsifiers; Thickening agents; Flour-treatment agents; Extraction solvents; Carrier solvents; Flavours (synthetic); Flavour enhancers; Non-nutritive sweeteners; Processing aids; Enzyme preparations. Many additives occur naturally in foods, but this does not exclude toxicity at higher levels. Some food additives are nutrients, or even essential nutrients, e.g. NaCl. Examples are known of food additives causing toxicity in man even when used according to regulations, e.g. cobalt in beer. In other instances, poisoning has been due to carry-over, e.g. by nitrate in cheese whey when used for artificial feed for infants. Poisonings also occur as the result of the permitted substance being added at too high levels, by accident or carelessness, e.g. nitrite in fish. Finally, there are examples of hypersensitivity to food additives, e.g. to tartrazine and other food colours. The toxicological evaluation, based on animal feeding studies, may be complicated by impurities, e.g. orthotoluene-sulfonamide in saccharin; by transformation or disappearance of the additive in food processing or storage, e.g. bisulfite in raisins; by reaction products with food constituents, e.g. formation of ethylurethane from diethyl pyrocarbonate; by metabolic transformation products, e.g. formation in the gut of cyclohexylamine from cyclamate. Metabolic end products may differ in experimental animals and in man: guanylic acid and inosinic acid are metabolized to allantoin in the rat but to uric acid in man. The magnitude of the safety margin in man of the Acceptable Daily Intake (ADI) is not identical to the "safety factor" used when calculating the ADI. The symptoms of Chinese Restaurant Syndrome, although not hazardous, furthermore illustrate that the whole ADI

  4. Additivity of Factor Effects in Reading Tasks Is Still a Challenge for Computational Models: Reply to Ziegler, Perry, and Zorzi (2009)

    ERIC Educational Resources Information Center

    Besner, Derek; O'Malley, Shannon

    2009-01-01

    J. C. Ziegler, C. Perry, and M. Zorzi (2009) have claimed that their connectionist dual process model (CDP+) can simulate the data reported by S. O'Malley and D. Besner. Most centrally, they have claimed that the model simulates additive effects of stimulus quality and word frequency on the time to read aloud when words and nonwords are randomly…

  5. Computer modelling integrated with micro-CT and material testing provides additional insight to evaluate bone treatments: Application to a beta-glycan derived whey protein mice model.

    PubMed

    Sreenivasan, D; Tu, P T; Dickinson, M; Watson, M; Blais, A; Das, R; Cornish, J; Fernandez, J

    2016-01-01

    The primary aim of this study was to evaluate the influence of a whey protein diet on computationally predicted mechanical strength of murine bones in both trabecular and cortical regions of the femur. There was no significant influence on mechanical strength in cortical bone observed with increasing whey protein treatment, consistent with cortical tissue mineral density (TMD) and bone volume changes observed. Trabecular bone showed a significant decline in strength with increasing whey protein treatment when nanoindentation derived Young's moduli were used in the model. When microindentation, micro-CT phantom density or normalised Young's moduli were included in the model a non-significant decline in strength was exhibited. These results for trabecular bone were consistent with both trabecular bone mineral density (BMD) and micro-CT indices obtained independently. The secondary aim of this study was to characterise the influence of different sources of Young's moduli on computational prediction. This study aimed to quantify the predicted mechanical strength in 3D from these sources and evaluate if trends and conclusions remained consistent. For cortical bone, predicted mechanical strength behaviour was consistent across all sources of Young's moduli. There was no difference in treatment trend observed when Young's moduli were normalised. In contrast, trabecular strength due to whey protein treatment significantly reduced when material properties from nanoindentation were introduced. Other material property sources were not significant but emphasised the strength trend over normalised material properties. This shows strength at the trabecular level was attributed to both changes in bone architecture and material properties. PMID:26599826

  6. Potlining Additives

    SciTech Connect

    Rudolf Keller

    2004-08-10

    In this project, a concept to improve the performance of aluminum production cells by introducing potlining additives was examined and tested. Boron oxide was added to cathode blocks, and titanium was dissolved in the metal pool; this resulted in the formation of titanium diboride and caused the molten aluminum to wet the carbonaceous cathode surface. Such wetting reportedly leads to operational improvements and extended cell life. In addition, boron oxide suppresses cyanide formation. This final report presents and discusses the results of this project. Substantial economic benefits for the practical implementation of the technology are projected, especially for modern cells with graphitized blocks. For example, with an energy savings of about 5% and an increase in pot life from 1500 to 2500 days, a cost savings of $ 0.023 per pound of aluminum produced is projected for a 200 kA pot.
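    As a rough sanity check on the scale of a 200 kA pot, Faraday's law gives the cell's daily aluminum output; the 90% current efficiency below is an assumed typical value for modern cells, not a figure from the report:

```python
# Back-of-envelope aluminum output of one 200 kA cell via Faraday's law.
# The 90% current efficiency is an assumed typical value, not from the report.
F = 96485.0      # Faraday constant, C/mol
M_AL = 26.98     # molar mass of aluminum, g/mol
Z = 3            # electrons per Al atom (Al3+ + 3e- -> Al)

def daily_al_output_kg(current_a, current_efficiency=0.90):
    """Aluminum produced per day (kg) by one cell at the given line current."""
    coulombs_per_day = current_a * 86400
    mol_al = coulombs_per_day / F / Z
    return mol_al * M_AL * current_efficiency / 1000.0

kg = daily_al_output_kg(200e3)
print(f"{kg:.0f} kg/day (~{kg / 0.453592:.0f} lb/day)")
```

    At roughly 1450 kg/day per pot, even a $0.023/lb saving compounds quickly across a potline, which is the economic argument the report is making.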

  7. Phosphazene additives

    SciTech Connect

    Harrup, Mason K; Rollins, Harry W

    2013-11-26

    An additive comprising a phosphazene compound that has at least two reactive functional groups and at least one capping functional group bonded to phosphorus atoms of the phosphazene compound. One of the at least two reactive functional groups is configured to react with cellulose and the other of the at least two reactive functional groups is configured to react with a resin, such as an amine resin or a polycarboxylic acid resin. The at least one capping functional group is selected from the group consisting of a short chain ether group, an alkoxy group, and an aryloxy group. Also disclosed are an additive-resin admixture, a method of treating a wood product, and a wood product.

  8. A Computational Fluid Dynamic and Heat Transfer Model for Gaseous Core and Gas Cooled Space Power and Propulsion Reactors

    NASA Technical Reports Server (NTRS)

    Anghaie, S.; Chen, G.

    1996-01-01

    A computational model based on the axisymmetric, thin-layer Navier-Stokes equations is developed to predict the convective, radiative and conductive heat transfer in high temperature space nuclear reactors. An implicit-explicit, finite volume, MacCormack method in conjunction with the Gauss-Seidel line iteration procedure is utilized to solve the thermal and fluid governing equations. Simulation of coolant and propellant flows in these reactors involves the subsonic and supersonic flows of hydrogen, helium and uranium tetrafluoride under variable boundary conditions. An enthalpy-rebalancing scheme is developed and implemented to enhance and accelerate the rate of convergence when a wall heat flux boundary condition is used. The model also incorporates the Baldwin-Lomax two-layer algebraic turbulence scheme for the calculation of the turbulent kinetic energy and eddy diffusivity of energy. The Rosseland diffusion approximation is used to simulate the radiative energy transfer in the optically thick environment of gas core reactors. The computational model is benchmarked with experimental data on flow separation angle and drag force acting on a suspended sphere in a cylindrical tube. The heat transfer model is validated by comparing the computed results with predictions of standard heat transfer correlations. The model is used to simulate flow and heat transfer under a variety of design conditions. The effect of internal heat generation on the heat transfer in the gas core reactors is examined for a variety of power densities: 100 W/cc, 500 W/cc and 1000 W/cc. The maximum temperatures corresponding to these heat generation rates are 2150 K, 2750 K and 3550 K, respectively. This analysis shows that the maximum temperature is strongly dependent on the value of heat generation rate. It also indicates that a heat generation rate higher than 1000 W/cc is necessary to maintain the gas temperature at about 3500 K, which is the typical design temperature required to achieve high

  9. An Evaluation of the Additional Acoustic Power Needed to Overcome the Effects of a Test-Article's Absorption During Reverberant Chamber Acoustic Testing of Spaceflight Hardware

    NASA Technical Reports Server (NTRS)

    Hozman, Aron D.; Hughes, William O.

    2014-01-01

    It is important to realize that some test-articles may have significant sound absorption that may challenge the acoustic power capabilities of a test facility. Therefore, to mitigate the risk of not being able to meet the customer's target spectrum, it is prudent to demonstrate early on an increased acoustic power capability which compensates for this test-article absorption. This paper describes a concise method to reduce this risk when testing aerospace test-articles which have significant absorption. This method was successfully applied during the SpaceX Falcon 9 Payload Fairing acoustic test program at the NASA Glenn Research Center Plum Brook Station's RATF.

  10. Do We Really Need Additional Contrast-Enhanced Abdominal Computed Tomography for Differential Diagnosis in Triage of Middle-Aged Subjects With Suspected Biliary Pain?

    PubMed Central

    Hwang, In Kyeom; Lee, Yoon Suk; Kim, Jaihwan; Lee, Yoon Jin; Park, Ji Hoon; Hwang, Jin-Hyeok

    2015-01-01

    Abstract Enhanced computed tomography (CT) is widely used for evaluating acute biliary pain in the emergency department (ED). However, concern about radiation exposure from CT has also increased. We investigated the usefulness of pre-contrast CT for differential diagnosis in middle-aged subjects with suspected biliary pain. A total of 183 subjects, who visited the ED for suspected biliary pain from January 2011 to December 2012, were included. Retrospectively, pre-contrast phase and multiphase CT findings were reviewed and the detection rate of findings suggesting disease requiring significant treatment by noncontrast CT (NCCT) was compared with cases detected by multiphase CT. Approximately 70% of total subjects had a significant condition, including 1 case of gallbladder cancer and 126 (68.8%) cases requiring intervention (122 biliary stone-related diseases, 3 liver abscesses, and 1 liver hemangioma). The rate of overlooking malignancy without contrast enhancement was calculated to be 0% to 1.5%. Biliary stones and liver space-occupying lesions were found equally on NCCT and multiphase CT. Calculated probable rates of overlooking acute cholecystitis and biliary obstruction were maximally 6.8% and 4.2%, respectively. Incidental significant findings unrelated to pain consisted of 1 case of adrenal incidentaloma, which was also observed on NCCT. NCCT might be sufficient to detect life-threatening or significant disease requiring early treatment in young adults with biliary pain. PMID:25700321

  11. Thermal noise informatics: totally secure communication via a wire, zero-power communication, and thermal noise driven computing

    NASA Astrophysics Data System (ADS)

    Kish, Laszlo B.; Mingesz, Robert; Gingl, Zoltan

    2007-06-01

    Very recently, it has been shown that Gaussian thermal noise and its artificial versions (Johnson-like noises) can be utilized as an information carrier with peculiar properties; therefore, it may be proper to call this topic Thermal Noise Informatics. Zero Power (Stealth) Communication, Thermal Noise Driven Computing, and Totally Secure Classical Communication are relevant examples. In this paper, while we briefly describe the first and second subjects, we focus on the third, secure classical communication via wire. This way of secure telecommunication utilizes the properties of Johnson(-like) noise and those of a simple Kirchhoff loop. The communicator is unconditionally secure at the conceptual (circuit theoretical) level and this property is (so far) unique among communication systems based on classical physics. The communicator is superior to quantum alternatives in all known aspects, except the need of using a wire. In the idealized system, the eavesdropper can extract zero bits of information without being uncovered. The scheme is naturally protected against the man-in-the-middle attack. The communication can also take place via currently used power lines or phone (wire) lines, and it is not only point-to-point communication like quantum channels but network-ready. We report that a pair of Kirchhoff-Loop-Johnson(-like)-Noise communicators, able to work over variable ranges, was designed and built. Tests have been carried out on a model line with ranges beyond those of any known direct quantum communication channel and they indicate unrivalled signal fidelity and security performance. This simple device has single-wire secure key generation/sharing rates of 0.1, 1, 10, and 100 bit/second for copper wires with diameters/ranges of 21 mm / 2000 km, 7 mm / 200 km, 2.3 mm / 20 km, and 0.7 mm / 2 km, respectively, and it performs with a 0.02% raw-bit error rate (99.98% fidelity). The raw-bit security of this practical system

  12. Computer proposals

    NASA Astrophysics Data System (ADS)

    Richman, Barbara T.

    To expand the research community's access to supercomputers, the National Science Foundation (NSF) has begun a program to match researchers who require the capabilities of a supercomputer with those facilities that have such computer resources available. Recent studies on computer needs in scientific and engineering research underscore the need for greater access to supercomputers (Eos, July 6, 1982, p. 562), especially those categorized as “Class VI” machines. Complex computer models for research on astronomy, the oceans, and the atmosphere often require such capabilities. In addition, similar needs are emerging in the earth sciences: A Union session at the AGU Fall Meeting in San Francisco this week will focus on the research computing needs of the geosciences. A Class VI supercomputer has a memory capacity of at least 1 megaword, a speed of upwards of 100 MFLOPS (million floating point operations per second), and both scalar and vector registers in the CPU (central processing unit). Examples of Class VI machines are the CRAY-1 and the CYBER 205. The high costs of these machines, the most powerful ones available, preclude most research facilities from owning one.

  13. A Hierarchical Examination of the Immigrant Achievement Gap: The Additional Explanatory Power of Nationality and Educational Selectivity over Traditional Explorations of Race and Socioeconomic Status

    ERIC Educational Resources Information Center

    Simms, Kathryn

    2012-01-01

    This study compared immigrant and nonimmigrant educational achievement (i.e., the immigrant gap) in math by reexamining the explanatory power of race and socioeconomic status (SES)--two variables, perhaps, most commonly considered in educational research. Four research questions were explored through growth curve modeling, factor analysis, and…

  14. Computer simulation for the growing probability of additional offspring with an advantageous reversal allele in the decoupled continuous-time mutation-selection model

    NASA Astrophysics Data System (ADS)

    Gill, Wonpyong

    2016-01-01

    This study calculated the growing probability of additional offspring with the advantageous reversal allele in an asymmetric sharply-peaked landscape using the decoupled continuous-time mutation-selection model. The growing probability was calculated for various population sizes, N, sequence lengths, L, selective advantages, s, fitness parameters, k, and measuring parameters, C. The saturated growing probability in the stochastic region was approximately the effective selective advantage, s*, when C≫1/Ns* and s*≪1. The present study suggests that the growing probability in the stochastic region in the decoupled continuous-time mutation-selection model can be described using the theoretical formula for the growing probability in the Moran two-allele model. The selective advantage ratio, which represents the ratio of the effective selective advantage to the selective advantage, does not depend on the population size, selective advantage, measuring parameter or fitness parameter; instead, the selective advantage ratio decreases with increasing sequence length.
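    The Moran two-allele formula the abstract compares against can be sketched as follows; here s stands in for the effective selective advantage s*, and the closed form is the standard fixation probability of a single mutant of relative fitness 1+s in a population of size n:

```python
# Standard Moran two-allele fixation probability (the benchmark formula the
# abstract refers to); s plays the role of the effective advantage s*.
def moran_fixation_prob(s, n):
    """Fixation probability of one mutant with fitness 1+s among n individuals."""
    if s == 0:
        return 1.0 / n          # neutral case: uniform chance of taking over
    r = 1.0 + s
    return (1.0 - 1.0 / r) / (1.0 - r ** (-n))

# For N*s >> 1 and s << 1 the probability saturates near s itself:
print(moran_fixation_prob(0.01, 10_000))   # ~0.0099, close to s = 0.01
```

    The saturation at roughly s for large Ns is exactly the "saturated growing probability ≈ s*" behaviour reported above.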

  15. An Evaluation of the Additional Acoustic Power Needed to Overcome the Effects of a Test-Article's Absorption During Reverberant Chamber Acoustic Testing of Spaceflight Hardware

    NASA Technical Reports Server (NTRS)

    Hozman, Aron D.; Hughes, William O.

    2014-01-01

    The exposure of a customer's aerospace test-article to a simulated acoustic launch environment is typically performed in a reverberant acoustic test chamber. The acoustic pre-test runs that will ensure that the sound pressure levels of this environment can indeed be met by a test facility are normally performed without a test-article dynamic simulator of representative acoustic absorption and size. If an acoustic test facility's available acoustic power capability becomes maximized with the test-article installed during the actual test then the customer's environment requirement may become compromised. In order to understand the risk of not achieving the customer's in-tolerance spectrum requirement with the test-article installed, an acoustic power margin evaluation as a function of frequency may be performed by the test facility. The method for this evaluation of acoustic power will be discussed in this paper. This method was recently applied at the NASA Glenn Research Center Plum Brook Station's Reverberant Acoustic Test Facility for the SpaceX Falcon 9 Payload Fairing acoustic test program.
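    A back-of-envelope version of such a margin evaluation, under the simplifying diffuse-field assumption that reverberant sound pressure scales with source power divided by total absorption (the paper's actual frequency-dependent procedure may differ), might look like:

```python
import math

# Simplified diffuse-field estimate: holding the target SPL after an
# absorptive test-article is installed requires extra source power in
# proportion to the increase in total absorption. Numbers are hypothetical.
def extra_power_db(chamber_absorption_m2, article_absorption_m2):
    """Extra acoustic power (dB) needed to hold SPL after adding absorption."""
    ratio = (chamber_absorption_m2 + article_absorption_m2) / chamber_absorption_m2
    return 10.0 * math.log10(ratio)

# Hypothetical values: 20 m^2 Sabine absorption empty, article adds 10 m^2.
print(f"{extra_power_db(20.0, 10.0):.2f} dB")   # 10*log10(1.5) ~= 1.76 dB
```

    Evaluating this margin per frequency band, as the paper describes, shows whether the facility's sources can still reach the customer's spectrum with the article installed.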

  16. An Evaluation of the Additional Acoustic Power Needed to Overcome the Effects of a Test-Article's Absorption during Reverberant Chamber Acoustic Testing of Spaceflight Hardware

    NASA Technical Reports Server (NTRS)

    Hozman, Aron D.; Hughes, William O.

    2014-01-01

    The exposure of a customer's aerospace test-article to a simulated acoustic launch environment is typically performed in a reverberant acoustic test chamber. The acoustic pre-test runs that will ensure that the sound pressure levels of this environment can indeed be met by a test facility are normally performed without a test-article dynamic simulator of representative acoustic absorption and size. If an acoustic test facility's available acoustic power capability becomes maximized with the test-article installed during the actual test then the customer's environment requirement may become compromised. In order to understand the risk of not achieving the customer's in-tolerance spectrum requirement with the test-article installed, an acoustic power margin evaluation as a function of frequency may be performed by the test facility. The method for this evaluation of acoustic power will be discussed in this paper. This method was recently applied at the NASA Glenn Research Center Plum Brook Station's Reverberant Acoustic Test Facility for the SpaceX Falcon 9 Payload Fairing acoustic test program.

  17. Computer Recreations.

    ERIC Educational Resources Information Center

    Dewdney, A. K.

    1989-01-01

    Reviews the performance of computer programs for writing poetry and prose, including MARK V. SHANEY, MELL, POETRY GENERATOR, THUNDER THOUGHT, and ORPHEUS. Discusses the writing principles of the programs. Provides additional information on computer magnification techniques. (YP)

  18. Computer model for electrochemical cell performance loss over time in terms of capacity, power, and conductance (CPC)

    2015-09-01

    Available capacity, power, and cell conductance figure centrally into performance characterization of electrochemical cells (such as Li-ion cells) over their service life. For example, capacity loss in Li-ion cells is due to a combination of mechanisms, including loss of free available lithium, loss of active host sites, shifts in the potential-capacity curve, etc. Further distinctions can be made regarding irreversible and reversible capacity loss mechanisms. There are tandem needs for accurate interpretation of capacity at characterization conditions (cycling rate, temperature, etc.) and for robust self-consistent modeling techniques that can be used for diagnostic analysis of cell data as well as forecasting of future performance. Analogous issues exist for aging effects on cell conductance and available power. To address these needs, a modeling capability was developed that provides a systematic analysis of the contributing factors to battery performance loss over aging and to act as a regression/prediction platform for cell performance. The modeling basis is a summation of self-consistent chemical kinetics rate expressions, which as individual expressions each covers a distinct mechanism (e.g., loss of active host sites, lithium loss), but collectively account for the net loss of premier metrics (e.g., capacity) over time for a particular characterization condition. Specifically, sigmoid-based rate expressions are utilized to describe each contribution to performance loss. Through additional mathematical development another tier of expressions is derived and used to perform differential analyses and segregate irreversible versus reversible contributions, as well as to determine concentration profiles over cell aging for affected Li+ ion inventory and fraction of active sites that remain at each time step. Reversible fade components are surmised by comparing fade rates at fast versus slow cycling conditions. The model is easily utilized for predictive

  19. Computer model for electrochemical cell performance loss over time in terms of capacity, power, and conductance (CPC)

    SciTech Connect

    Gering, Kevin L.

    2015-09-01

    Available capacity, power, and cell conductance figure centrally into performance characterization of electrochemical cells (such as Li-ion cells) over their service life. For example, capacity loss in Li-ion cells is due to a combination of mechanisms, including loss of free available lithium, loss of active host sites, shifts in the potential-capacity curve, etc. Further distinctions can be made regarding irreversible and reversible capacity loss mechanisms. There are tandem needs for accurate interpretation of capacity at characterization conditions (cycling rate, temperature, etc.) and for robust self-consistent modeling techniques that can be used for diagnostic analysis of cell data as well as forecasting of future performance. Analogous issues exist for aging effects on cell conductance and available power. To address these needs, a modeling capability was developed that provides a systematic analysis of the contributing factors to battery performance loss over aging and to act as a regression/prediction platform for cell performance. The modeling basis is a summation of self-consistent chemical kinetics rate expressions, which as individual expressions each covers a distinct mechanism (e.g., loss of active host sites, lithium loss), but collectively account for the net loss of premier metrics (e.g., capacity) over time for a particular characterization condition. Specifically, sigmoid-based rate expressions are utilized to describe each contribution to performance loss. Through additional mathematical development another tier of expressions is derived and used to perform differential analyses and segregate irreversible versus reversible contributions, as well as to determine concentration profiles over cell aging for affected Li+ ion inventory and fraction of active sites that remain at each time step. Reversible fade components are surmised by comparing fade rates at fast versus slow cycling conditions. The model is easily utilized for predictive
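    A minimal sketch of a sigmoid-sum fade model in this spirit (the functional form, mechanism names, and parameter values below are illustrative assumptions, not the actual INL rate expressions):

```python
import math

# Generic sigmoid-sum capacity-fade sketch: each aging mechanism i
# contributes up to m_i of fractional capacity loss, with rate b_i and
# shape c_i. Each term is 0 at t = 0 and saturates at m_i as t -> infinity.
def capacity_fraction(t_weeks, mechanisms):
    """Remaining capacity fraction after summing sigmoid loss terms."""
    loss = 0.0
    for m_i, b_i, c_i in mechanisms:
        loss += 2.0 * m_i * (0.5 - 1.0 / (1.0 + math.exp((b_i * t_weeks) ** c_i)))
    return 1.0 - loss

# Hypothetical mechanisms: lithium-inventory loss and active-site loss.
mechs = [(0.10, 0.02, 1.0), (0.05, 0.01, 1.2)]
print(capacity_fraction(0, mechs))     # 1.0 at beginning of life
print(capacity_fraction(100, mechs))   # monotonically fading thereafter
```

    Fitting one such term per mechanism, then differentiating the sum, is what lets a model of this family attribute total fade to its individual contributions.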

  20. Computer simulation of noncondensible gas behavior in geothermal power plants utilizing direct contact heat exchange. Report of work, February 1, 1980-February 28, 1981

    SciTech Connect

    Perona, J.J.

    1981-01-01

    A computer model was developed to simulate the behavior of carbon dioxide and hydrogen sulfide in a geothermal power plant using direct contact heat exchange with isobutane as a working fluid. This computer program was modified to simulate the particular equipment characteristics of the 500 kW direct contact pilot plant at East Mesa. Vapor and liquid compositions and temperatures can be calculated throughout the heat exchangers in the pilot plant. The program is now available for analysis of the pilot plant operation and for design of similar plants.

  1. Computational electronics and electromagnetics

    SciTech Connect

    Shang, C. C.

    1997-02-01

    The Computational Electronics and Electromagnetics thrust area at Lawrence Livermore National Laboratory serves as the focal point for engineering R&D activities for developing computer-based design, analysis, and theory tools. Key representative applications include design of particle accelerator cells and beamline components; engineering analysis and design of high-power components; photonics and optoelectronics circuit design; EMI susceptibility analysis; and antenna synthesis. The FY-96 technology-base effort focused code development on (1) accelerator design codes; (2) 3-D massively parallel, object-oriented time-domain EM codes; (3) material models; (4) coupling and application of engineering tools for analysis and design of high-power components; (5) 3-D spectral-domain CEM tools; and (6) enhancement of laser drilling codes. Joint efforts with the Power Conversion Technologies thrust area include development of antenna systems for compact, high-performance radar, in addition to novel, compact Marx generators. 18 refs., 25 figs., 1 tab.

  2. Comparison of x ray computed tomography number to proton relative linear stopping power conversion functions using a standard phantom

    SciTech Connect

    Moyers, M. F.

    2014-06-15

    Purpose: Adequate evaluation of the results from multi-institutional trials involving light ion beam treatments requires consideration of the planning margins applied to both targets and organs at risk. A major uncertainty that affects the size of these margins is the conversion of x ray computed tomography numbers (XCTNs) to relative linear stopping powers (RLSPs). Various facilities engaged in multi-institutional clinical trials involving proton beams have been applying significantly different margins in their patient planning. This study was performed to determine the variance in the conversion functions used at proton facilities in the U.S.A. wishing to participate in National Cancer Institute sponsored clinical trials. Methods: A simplified method of determining the conversion function was developed using a standard phantom containing only water and aluminum. The new method was based on the premise that all scanners have their XCTNs for air and water calibrated daily to constant values but that the XCTNs for high density/high atomic number materials are variable with different scanning conditions. The standard phantom was taken to 10 different proton facilities and scanned with the local protocols resulting in 14 derived conversion functions which were compared to the conversion functions used at the local facilities. Results: For tissues within ±300 XCTN of water, all facility functions produced converted RLSP values within ±6% of the values produced by the standard function and within 8% of the values from any other facility's function. For XCTNs corresponding to lung tissue, converted RLSP values differed by as great as ±8% from the standard and up to 16% from the values of other facilities. For XCTNs corresponding to low-density immobilization foam, the maximum to minimum values differed by as much as 40%. Conclusions: The new method greatly simplifies determination of the conversion function, reduces ambiguity, and in the future could promote
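    A toy version of such a conversion function, linearly interpolating between air, water, and aluminum anchor points as the two-material phantom suggests (the anchor values below are illustrative assumptions, not the study's data):

```python
# Toy piecewise-linear XCTN -> RLSP conversion anchored at air, water, and
# aluminum. Anchor values are illustrative, not measured; per the abstract,
# air and water XCTNs are calibrated daily while aluminum varies by scanner.
ANCHORS = [(-1000.0, 0.001),   # air
           (0.0, 1.000),       # water (XCTN calibrated to 0)
           (2200.0, 2.10)]     # aluminum (assumed scanner-measured XCTN)

def xctn_to_rlsp(xctn):
    """Linearly interpolate (or extrapolate at the ends) RLSP from anchors."""
    pts = ANCHORS
    for (x0, y0), (x1, y1) in zip(pts, pts[1:]):
        if xctn <= x1 or (x1, y1) == pts[-1]:
            return y0 + (y1 - y0) * (xctn - x0) / (x1 - x0)

print(xctn_to_rlsp(0.0))   # water maps to RLSP 1.0 by construction
```

    The point of the standard phantom is that only the aluminum anchor needs to be re-measured per scanner protocol; the air and water anchors are fixed by daily calibration.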

  3. Comparison of x ray computed tomography number to proton relative linear stopping power conversion functions using a standard phantom1

    PubMed Central

    Moyers, M. F.

    2014-01-01

    Purpose: Adequate evaluation of the results from multi-institutional trials involving light ion beam treatments requires consideration of the planning margins applied to both targets and organs at risk. A major uncertainty that affects the size of these margins is the conversion of x ray computed tomography numbers (XCTNs) to relative linear stopping powers (RLSPs). Various facilities engaged in multi-institutional clinical trials involving proton beams have been applying significantly different margins in their patient planning. This study was performed to determine the variance in the conversion functions used at proton facilities in the U.S.A. wishing to participate in National Cancer Institute sponsored clinical trials. Methods: A simplified method of determining the conversion function was developed using a standard phantom containing only water and aluminum. The new method was based on the premise that all scanners have their XCTNs for air and water calibrated daily to constant values but that the XCTNs for high density/high atomic number materials are variable with different scanning conditions. The standard phantom was taken to 10 different proton facilities and scanned with the local protocols resulting in 14 derived conversion functions which were compared to the conversion functions used at the local facilities. Results: For tissues within ±300 XCTN of water, all facility functions produced converted RLSP values within ±6% of the values produced by the standard function and within 8% of the values from any other facility's function. For XCTNs corresponding to lung tissue, converted RLSP values differed by as great as ±8% from the standard and up to 16% from the values of other facilities. For XCTNs corresponding to low-density immobilization foam, the maximum to minimum values differed by as much as 40%. Conclusions: The new method greatly simplifies determination of the conversion function, reduces ambiguity, and in the future could promote

  4. Thread Group Multithreading: Accelerating the Computation of an Agent-Based Power System Modeling and Simulation Tool -- GridLAB-D

    SciTech Connect

    Jin, Shuangshuang; Chassin, David P.

    2014-01-06

    GridLAB-D™ is an open-source, next-generation, agent-based smart-grid simulator that provides unprecedented capability to model the performance of smart grid technologies. Over the past few years, GridLAB-D has been used to conduct important analyses of smart grid concepts, but it is still quite limited by its computational performance. In order to break through the performance bottleneck to meet the need for large-scale power grid simulations, we develop a thread group mechanism to implement highly granular multithreaded computation in GridLAB-D. We achieve close to linear speedups with the multithreaded version compared against the single-threaded version of the same code running on general-purpose multi-core commodity hardware for a benchmark simple house model. The performance of the multithreaded code shows favorable scalability properties and resource utilization, and much shorter execution time for large-scale power grid simulations.
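    The thread-group idea, statically partitioning model objects into per-thread groups so each worker updates its own slice, can be illustrated with a minimal sketch. GridLAB-D itself is C/C++; the toy "house update" and CPython threads below only illustrate the partitioning, not the measured speedup:

```python
from concurrent.futures import ThreadPoolExecutor

# Minimal thread-group sketch (not GridLAB-D's actual implementation):
# partition N house models into per-thread groups and update each group
# in its own worker thread.
def update_house(state):
    """Toy per-house update: relax indoor temperature toward a setpoint."""
    temp, setpoint = state
    return (temp + 0.1 * (setpoint - temp), setpoint)

def step_all(houses, n_threads=4):
    """One simulation step over all houses using a fixed thread group.

    Note: the striped partition means output order differs from input order,
    which is fine for independent agents updated once per step.
    """
    groups = [houses[i::n_threads] for i in range(n_threads)]
    with ThreadPoolExecutor(max_workers=n_threads) as pool:
        results = pool.map(lambda g: [update_house(h) for h in g], groups)
    out = []
    for g in results:
        out.extend(g)
    return out

houses = [(15.0, 20.0)] * 100
print(step_all(houses)[0])   # (15.5, 20.0)
```

    The key design point mirrored from the paper is granularity: groups are fixed up front, so there is no per-object scheduling overhead inside a step.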

  5. Additive attacks on speaker recognition

    NASA Astrophysics Data System (ADS)

    Farrokh Baroughi, Alireza; Craver, Scott

    2014-02-01

    Speaker recognition is used to identify a speaker's voice from among a group of known speakers. A common method of speaker recognition is a classification based on cepstral coefficients of the speaker's voice, using a Gaussian mixture model (GMM) to model each speaker. In this paper we try to fool a speaker recognition system using additive noise such that an intruder is recognized as a target user. Our attack uses a mixture selected from a target user's GMM model, inverting the cepstral transformation to produce noise samples. In our 5-speaker database, we achieve an attack success rate of 50% with a noise signal at 10 dB SNR, and 95% by increasing the noise power to 0 dB SNR. The importance of this attack is its simplicity and flexibility: it can be employed in real time with no processing of the attacker's voice, and little computation is needed at the moment of detection, allowing the attack to be performed by a small portable device. For any target user, knowing that user's model or a voice sample is sufficient to compute the attack signal, and it is enough that the intruder plays it while speaking to be classified as the victim.
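    The first step of such an attack, drawing cepstral frames from the target's GMM, can be sketched as below. This is a hypothetical toy model (component count, dimensions, and values are invented); a real attack would then invert the cepstral transform to synthesize the additive noise waveform:

```python
import random

# Toy sketch of the attack's sampling step (not the paper's full pipeline):
# pick a Gaussian component of the target speaker's GMM per frame and draw
# a cepstral-coefficient vector from it.
def sample_frames(weights, means, stdevs, n_frames, seed=0):
    """Draw cepstral-coefficient frames from a diagonal-covariance GMM."""
    rng = random.Random(seed)
    frames = []
    for _ in range(n_frames):
        k = rng.choices(range(len(weights)), weights=weights)[0]
        frames.append([rng.gauss(m, s) for m, s in zip(means[k], stdevs[k])])
    return frames

# Hypothetical 2-component, 3-coefficient target model.
w = [0.7, 0.3]
mu = [[1.0, -0.5, 0.2], [0.4, 0.1, -0.3]]
sd = [[0.1, 0.1, 0.1], [0.2, 0.2, 0.2]]
print(len(sample_frames(w, mu, sd, 5)))   # 5 frames
```

    Because the frames come straight from the victim's model, no processing of the attacker's own voice is needed, which is the flexibility the abstract emphasizes.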

  6. User's manual: Computer-aided design programs for inductor-energy-storage dc-to-dc electronic power converters

    NASA Technical Reports Server (NTRS)

    Huffman, S.

    1977-01-01

    Detailed instructions on the use of two computer-aided-design programs for designing the energy storage inductor for single winding and two winding dc to dc converters are provided. Step by step procedures are given to illustrate the formatting of user input data. The procedures are illustrated by eight sample design problems which include the user input and the computer program output.
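    The kind of calculation such a design program automates can be sketched with generic textbook relations for a buck converter's energy-storage inductor (the formulas and operating-point numbers below are illustrative assumptions, not the manual's actual algorithm):

```python
# Generic buck-converter inductor sizing: L = Vout*(1 - D)/(f_sw * dI),
# with duty cycle D = Vout/Vin, plus the peak stored energy E = L*I^2/2.
# Numbers are illustrative, not from the manual.
def buck_inductor(v_in, v_out, f_sw, i_ripple):
    """Inductance (H) for a buck converter given the allowed ripple current."""
    duty = v_out / v_in
    return v_out * (1.0 - duty) / (f_sw * i_ripple)

def stored_energy(l_h, i_peak):
    """Peak energy (J) the inductor must store: E = 1/2 * L * I^2."""
    return 0.5 * l_h * i_peak ** 2

# Assumed operating point: 28 V in, 5 V out, 50 kHz, 0.5 A ripple.
L = buck_inductor(28.0, 5.0, 50e3, 0.5)
print(f"L = {L * 1e6:.0f} uH")
```

    The programs described in the manual go further, also sizing the core and winding for the computed energy, but the ripple-based inductance is the starting point.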

  7. Final report for "High performance computing for advanced national electric power grid modeling and integration of solar generation resources", LDRD Project No. 149016.

    SciTech Connect

    Reno, Matthew J.; Riehm, Andrew Charles; Hoekstra, Robert John; Munoz-Ramirez, Karina; Stamp, Jason Edwin; Phillips, Laurence R.; Adams, Brian M.; Russo, Thomas V.; Oldfield, Ron A.; McLendon, William Clarence, III; Nelson, Jeffrey Scott; Hansen, Clifford W.; Richardson, Bryan T.; Stein, Joshua S.; Schoenwald, David Alan; Wolfenbarger, Paul R.

    2011-02-01

    Design and operation of the electric power grid (EPG) relies heavily on computational models. High-fidelity, full-order models are used to study transient phenomena on only a small part of the network. Reduced-order dynamic and power flow models are used when analyses involving thousands of nodes are required, due to the computational demands of simulating large numbers of nodes. The level of complexity of the future EPG will dramatically increase due to large-scale deployment of variable renewable generation, active load and distributed generation resources, adaptive protection and control systems, and price-responsive demand. High-fidelity modeling of this future grid will require significant advances in coupled, multi-scale tools and their use on high performance computing (HPC) platforms. This LDRD report demonstrates SNL's capability to apply HPC resources to these three tasks: (1) high-fidelity, large-scale modeling of power system dynamics; (2) statistical assessment of grid security via Monte-Carlo simulations of cyber attacks; and (3) development of models to predict variability of solar resources at locations where little or no ground-based measurements are available.

  8. An improved computational technique for calculating electromagnetic forces and power absorptions generated in spherical and deformed body in levitation melting devices

    NASA Technical Reports Server (NTRS)

    Zong, Jin-Ho; Szekely, Julian; Schwartz, Elliot

    1992-01-01

    An improved computational technique for calculating the electromagnetic force field, the power absorption and the deformation of an electromagnetically levitated metal sample is described. The technique is based on the volume integral method, but represents a substantial refinement; the coordinate transformation employed allows the efficient treatment of a broad class of rotationally symmetrical bodies. Computed results are presented to represent the behavior of levitation melted metal samples in a multi-coil, multi-frequency levitation unit to be used in microgravity experiments. The theoretical predictions are compared with both analytical solutions and with the results of previous computational efforts for the spherical samples and the agreement has been very good. The treatment of problems involving deformed surfaces and actually predicting the deformed shape of the specimens breaks new ground and should be the major usefulness of the proposed method.

  10. The Next Step in Deployment of Computer Based Procedures For Field Workers: Insights And Results From Field Evaluations at Nuclear Power Plants

    SciTech Connect

    Oxstrand, Johanna; Le Blanc, Katya L.; Bly, Aaron

    2015-02-01

    The paper-based procedures currently used for nearly all activities in the commercial nuclear power industry have a long history of ensuring safe operation of the plants. However, there is potential to greatly increase efficiency and safety by improving how the human operator interacts with the procedures. One way to achieve these improvements is through the use of computer-based procedures (CBPs). A CBP system offers a wide variety of improvements, such as context-driven job aids, integrated human performance tools (e.g., placekeeping, correct component verification, etc.), and dynamic step presentation. The latter means that the CBP system can display only the steps relevant to the operating mode, plant status, and the task at hand. A dynamic presentation of the procedure (also known as context-sensitive procedures) will guide the operator down the path of relevant steps based on the current conditions. This feature will reduce the operator’s workload and inherently reduce the risk of incorrectly marking a step as not applicable and the risk of incorrectly performing a step that should be marked as not applicable. The research team at the Idaho National Laboratory has developed a prototype CBP system for field workers, which has been evaluated from a human factors and usability perspective in four laboratory studies. Based on the results from each study, revisions were made to the CBP system. However, a crucial step to get the end users' (e.g., auxiliary operators, maintenance technicians, etc.) acceptance is to put the system in their hands and let them use it as a part of their everyday work activities. In the spring of 2014 the first field evaluation of the INL CBP system was conducted at a nuclear power plant. Auxiliary operators conduct a functional test of one out of three backup air compressors each week. During the field evaluation activity, one auxiliary operator conducted the test with the paper-based procedure while a second auxiliary operator followed
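    The context-sensitive step presentation described above amounts to filtering a procedure's steps against current plant conditions. A minimal sketch of the idea (the step schema and field names are hypothetical illustrations, not the INL prototype's data model):

```python
def applicable_steps(steps, plant_status):
    """Return only the steps whose preconditions match the current plant
    status -- the essence of context-sensitive procedure presentation.

    Each step carries a dict of required conditions under "when"; an empty
    dict means the step always applies. This schema is a made-up
    illustration, not the INL prototype's actual data model.
    """
    return [s for s in steps
            if all(plant_status.get(k) == v for k, v in s["when"].items())]

procedure = [
    {"id": 1, "text": "Open valve V-101", "when": {"mode": "startup"}},
    {"id": 2, "text": "Verify pump P-2 running", "when": {}},
    {"id": 3, "text": "Close breaker B-7", "when": {"mode": "shutdown"}},
]

# With the plant in startup mode, only steps 1 and 2 are shown
shown = applicable_steps(procedure, {"mode": "startup"})
```

    Presenting only `shown` to the operator removes the opportunity to mark an applicable step as "N/A" by mistake, which is the failure mode the abstract highlights.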

  11. Computational Design and Prototype Evaluation of Aluminide-Strengthened Ferritic Superalloys for Power-Generating Turbine Applications up to 1,033 K

    SciTech Connect

    Peter Liaw; Gautam Ghosh; Mark Asta; Morris Fine; Chain Liu

    2010-04-30

    prototype Fe-Ni-Cr-Al-Mo alloys. Three-point-bending experiments show that alloys containing more than 5 wt.% Al exhibit poor ductility (< 2%) at room temperature, and their fracture mode is predominantly of a cleavage type. Two major factors governing the poor ductility are (1) the volume fraction of NiAl-type precipitates, and (2) the Al content in the α-Fe matrix. A bend ductility of more than 5% can be achieved by lowering the Al concentration to 3 wt.% in the alloy. The alloy containing about 6.5 wt.% Al is found to have an optimal combination of hardness, ductility, and minimal creep rate at 973 K. A high volume fraction of precipitates is responsible for the good creep resistance by effectively resisting the dislocation motion through Orowan-bowing and dislocation-climb mechanisms. The effects of stress on the creep rate have been studied. With the threshold-stress compensation, the stress exponent is determined to be 4, indicating power-law dislocation creep. The threshold stress is in the range of 40-53 MPa. The addition of W can significantly reduce the secondary creep rates. Compared to other candidates for steam-turbine applications, FBB-8 does not show superior creep resistance at high stresses (> 100 MPa), but exhibits superior creep resistance at low stresses (< 60 MPa).

  12. Theoretical effect of modifications to the upper surface of two NACA airfoils using smooth polynomial additional thickness distributions which emphasize leading edge profile and which vary quadratically at the trailing edge. [using flow equations and a CDC 7600 computer

    NASA Technical Reports Server (NTRS)

    Merz, A. W.; Hague, D. S.

    1975-01-01

    An investigation was conducted on a CDC 7600 digital computer to determine the effects of additional thickness distributions to the upper surface of the NACA 64-206 and 64 sub 1 - 212 airfoils. The additional thickness distribution had the form of a continuous mathematical function which disappears at both the leading edge and the trailing edge. The function behaves as a polynomial of order epsilon sub 1 at the leading edge, and a polynomial of order epsilon sub 2 at the trailing edge. Epsilon sub 2 is a constant and epsilon sub 1 is varied over a range of practical interest. The magnitude of the additional thickness, y, is a second input parameter, and the effect of varying epsilon sub 1 and y on the aerodynamic performance of the airfoil was investigated. Results were obtained at a Mach number of 0.2 with an angle-of-attack of 6 degrees on the basic airfoils, and all calculations employ the full potential flow equations for two dimensional flow. The relaxation method of Jameson was employed for solution of the potential flow equations.
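    The additional thickness distribution described above is a smooth bump that vanishes at both edges, behaving like a polynomial of order epsilon sub 1 at the leading edge and of order epsilon sub 2 (quadratic) at the trailing edge. A minimal sketch of such a shape function (the normalization and functional form here are an illustrative assumption; the report's exact polynomial is not reproduced in the abstract):

```python
import numpy as np

def additional_thickness(x, y_max, eps1, eps2):
    """Smooth bump on [0, 1] vanishing at x=0 (leading edge) and x=1
    (trailing edge).

    Behaves like x**eps1 near the leading edge and (1-x)**eps2 near the
    trailing edge; y_max scales the peak height. Illustrative only -- the
    NASA report's exact formulation is assumed, not quoted.
    """
    bump = x**eps1 * (1.0 - x)**eps2
    return y_max * bump / bump.max()  # normalize so the peak equals y_max

# Chordwise stations and a sample bump: eps1 = 0.5, quadratic trailing edge
x = np.linspace(0.0, 1.0, 101)
dy = additional_thickness(x, y_max=0.02, eps1=0.5, eps2=2.0)
```

    Varying `eps1` reshapes the leading-edge profile while the quadratic trailing-edge behavior is preserved, mirroring the two input parameters studied in the report.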

  13. Computation of full energy peak efficiency for nuclear power plant radioactive plume using remote scintillation gamma-ray spectrometry.

    PubMed

    Grozdov, D S; Kolotov, V P; Lavrukhin, Yu E

    2016-04-01

    A method for estimating the full energy peak efficiency in the space around a scintillation detector, including the presence of a collimator, has been developed. It is based on a mathematical convolution of the experimental results followed by data extrapolation. The efficiency data showed an average uncertainty of less than 10%. Software to calculate the integral efficiency for a nuclear power plant plume was developed. The paper also provides results of nuclear power plant plume height estimation by analysis of the spectral data. PMID:26774388

  14. Evaluation of computer-aided foundation design techniques for fossil fuel power plants. Final report. [Includes list of firms involved, equipment, software, etc

    SciTech Connect

    Kulhawy, F.H.; Dill, J.C.; Trautmann, C.H.

    1984-11-01

    The use of an integrated computer-aided drafting and design system for fossil fuel power plant foundations would offer utilities considerable savings in engineering costs and design time. The technology is available, but research is needed to develop software, a common data base, and data management procedures. An integrated CADD system suitable for designing power plant foundations should include the ability to input, display, and evaluate geologic, geophysical, geotechnical, and survey field data; methods for designing piles, mats, footings, drilled shafts, and other foundation types; and the capability of evaluating various load configurations, soil-structure interactions, and other construction factors that influence design. Although no such integrated system exists, the survey of CADD techniques showed that the technology is available to computerize the whole foundation design process, from single-foundation analysis under single loads to three-dimensional analysis under earthquake loads. The practices of design firms using CADD technology in nonutility applications vary widely. Although all the firms surveyed used computer-aided drafting, only two used computer graphics in routine design procedures, and none had an integrated approach to using CADD for geotechnical engineering. All the firms had developed corporate policies related to system security, supervision, overhead allocation, training, and personnel compensation. A related EPRI project, RP2514, is developing guidelines for applying CADD systems to entire generating-plant construction projects. 4 references, 6 figures, 6 tables.

  15. ADVANCED COMPUTATIONAL METHODS IN DOSE MODELING: APPLICATION OF COMPUTATIONAL BIOPHYSICAL TRANSPORT, COMPUTATIONAL CHEMISTRY, AND COMPUTATIONAL BIOLOGY

    EPA Science Inventory

    Computational toxicology (CompTox) leverages the significant gains in computing power and computational techniques (e.g., numerical approaches, structure-activity relationships, bioinformatics) realized over the last few years, thereby reducing costs and increasing efficiency i...

  16. Assessment of solar options for small power systems applications. Volume V. SOLSTEP: a computer model for solar plant system simulations

    SciTech Connect

    Bird, S.P.

    1980-09-01

    The simulation code, SOLSTEP, was developed at the Pacific Northwest Laboratory to facilitate the evaluation of proposed designs for solar thermal power plants. It allows the user to analyze the thermodynamic and economic performance of a conceptual design for several field size-storage capacity configurations. This feature makes it possible to study the levelized energy cost of a proposed concept over a range of plant capacity factors. The thermodynamic performance is analyzed on a time step basis using actual recorded meteorological and insolation data for specific geographic locations. The flexibility of the model enables the user to analyze both central and distributed generation concepts using either thermal or electric storage systems. The thermodynamic and economic analyses view the plant in a macroscopic manner as a combination of component subsystems. In the thermodynamic simulation, concentrator optical performance is modeled as a function of solar position; other aspects of collector performance can optionally be treated as functions of ambient air temperature, wind speed, and component power level. The power conversion model accounts for the effects of ambient air temperature, partial load operation, auxiliary power demands, and plant standby and startup energy requirements. The code was designed in a modular fashion to provide efficient evaluations of the collector system, total plant, and system economics. SOLSTEP has been used to analyze a variety of solar thermal generic concepts involving several collector types and energy conversion and storage subsystems. The code's straightforward models and modular nature facilitated simple and inexpensive parametric studies of solar thermal power plant performance.
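    The levelized energy cost screening that SOLSTEP enables across field-size and storage-capacity configurations can be illustrated with the standard levelized-cost calculation: annualized cost divided by annual energy delivered. The function below is a generic sketch under that assumption; SOLSTEP's actual economic model is more detailed and is not reproduced in the abstract:

```python
HOURS_PER_YEAR = 8760.0

def levelized_energy_cost(capital_cost, fixed_charge_rate, annual_om_cost,
                          plant_capacity_kw, capacity_factor):
    """Generic levelized energy cost in $/kWh.

    Annualized cost = capital * fixed charge rate + annual O&M;
    annual energy = capacity * capacity factor * hours per year.
    Illustrative only; SOLSTEP's economics are not reproduced here.
    """
    annual_cost = capital_cost * fixed_charge_rate + annual_om_cost
    annual_energy_kwh = plant_capacity_kw * capacity_factor * HOURS_PER_YEAR
    return annual_cost / annual_energy_kwh

# Sweep plant capacity factor, as in the field-size/storage studies
# (all dollar figures and rates below are hypothetical placeholders)
costs = {cf: levelized_energy_cost(50e6, 0.10, 1e6, 10_000, cf)
         for cf in (0.2, 0.4, 0.6)}
```

    As the capacity factor rises, the fixed annualized cost is spread over more kilowatt-hours, so the levelized cost falls; this is the trade-off the field size versus storage capacity studies explore.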

  17. Computer-aided modeling and prediction of performance of the modified Lundell class of alternators in space station solar dynamic power systems

    NASA Technical Reports Server (NTRS)

    Demerdash, Nabeel A. O.; Wang, Ren-Hong

    1988-01-01

    The main purpose of this project is the development of computer-aided models for purposes of studying the effects of various design changes on the parameters and performance characteristics of the modified Lundell class of alternators (MLA) as components of a solar dynamic power system supplying electric energy needs in the forthcoming space station. Key to this modeling effort is the computation of the magnetic field distribution in MLAs. Since the nature of the magnetic field is three-dimensional, the first step in the investigation was to apply the finite element method to discretize the volume, using the tetrahedron as the basic 3-D element. Details of the stator 3-D finite element grid are given. A preliminary look at the early stage of a 3-D rotor grid is presented.

  18. Comparison of Computational and Experimental Results for a Transonic Variable-Speed Power-Turbine Blade Operating with Low Inlet Turbulence Levels

    NASA Technical Reports Server (NTRS)

    Booth, David; Flegel, Ashlie

    2015-01-01

    A computational assessment of the aerodynamic performance of the midspan section of a variable-speed power-turbine blade is described. The computation comprises a periodic single blade that represents the 2-D Midspan section VSPT blade that was tested in the NASA Glenn Research Center Transonic Turbine Blade Cascade Facility. A commercial, off-the-shelf (COTS) software package, Pointwise and CFD++, was used for the grid generation and RANS and URANS computations. The CFD code, which offers flexibility in terms of turbulence and transition modeling options, was assessed in terms of blade loading, loss, and turning against test data from the transonic tunnel. Simulations were assessed at positive and negative incidence angles that represent the turbine cruise and take-off design conditions. The results indicate that the secondary flow induced at the positive incidence cruise condition results in a highly loaded case and transitional flow on the blade is observed. The negative incidence take-off condition is unloaded and the flow is very two-dimensional. The computational results demonstrate the predictive capability of the gridding technique and COTS software for a linear transonic turbine blade cascade with large incidence angle variation.

  20. Applications of the computer codes FLUX2D and PHI3D for the electromagnetic analysis of compressed magnetic field generators and power flow channels

    SciTech Connect

    Hodgdon, M.L.; Oona, H.; Martinez, A.R.; Salon, S.; Wendling, P.; Krahenbuhl, L.; Nicolas, A.; Nicolas, L.

    1989-01-01

    We present herein the results of three electromagnetic field problems for compressed magnetic field generators and their associated power flow channels. The first problem is the computation of the transient magnetic field in a two-dimensional model of a helical generator during loading. The second problem is the three-dimensional eddy current patterns in a section of an armature beneath a bifurcation point of a helical winding. Our third problem is the calculation of the three-dimensional electrostatic fields in a region known as the post-hole convolute, in which a rod connects the inner and outer walls of a system of three concentric cylinders through a hole in the middle cylinder. While analytic solutions exist for many electromagnetic field problems in cases of special and ideal geometries, the solutions of these and similar problems for the proper analysis and design of compressed magnetic field generators and their related hardware require computer simulations. In earlier studies, computer models have been proposed, several based on research-oriented hydrocodes to which uncoupled or partially coupled Maxwell's equations solvers are added. Although the hydrocode models address the problem of moving, deformable conductors, they are not useful for electromagnetic analysis, nor can they be considered design tools. For our studies, we take advantage of the commercial, electromagnetic computer-aided design software packages FLUX2D and PHI3D that were developed for the motor manufacturing and utility industries. 4 refs., 6 figs.

  1. A simple algorithm to compute the peak power output of GaAs/Ge solar cells on the Martian surface

    SciTech Connect

    Glueck, P.R.; Bahrami, K.A.

    1995-12-31

    The Jet Propulsion Laboratory's (JPL's) Mars Pathfinder Project will deploy a robotic "microrover" on the surface of Mars in the summer of 1997. This vehicle will derive primary power from a GaAs/Ge solar array during the day and will "sleep" at night. This strategy requires that the rover be able to (1) determine when it is necessary to save the contents of volatile memory late in the afternoon and (2) determine when sufficient power is available to resume operations in the morning. An algorithm was developed that estimates the peak power point of the solar array from the solar array short-circuit current and temperature telemetry, and provides functional redundancy for both measurements using the open-circuit voltage telemetry. The algorithm minimizes vehicle processing and memory utilization by using linear equations instead of look-up tables to estimate peak power with very little loss in accuracy. This paper describes the method used to obtain the algorithm and presents the detailed algorithm design.
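    The abstract's key idea is replacing look-up tables with linear fits in the measured short-circuit current and temperature. A hedged sketch of such an estimator (the functional form and every coefficient below are hypothetical placeholders, not the flight algorithm's actual fit):

```python
def estimate_peak_power(i_sc, temp_c, k0=0.0, k1=14.5, k2=-0.05):
    """Estimate array peak power (W) from short-circuit current (A) and
    cell temperature (deg C) using a single linear model:

        P_mp ~= k0 + k1 * i_sc + k2 * temp_c

    Peak power rises with illumination (tracked by i_sc) and falls as the
    GaAs/Ge cells heat up (negative temperature coefficient). All
    coefficients here are made-up placeholders for illustration; the
    Pathfinder algorithm's fitted values are not given in the abstract.
    """
    return k0 + k1 * i_sc + k2 * temp_c

# Morning wake-up check: resume operations once estimated peak power
# clears a (hypothetical) operating threshold.
p_est = estimate_peak_power(i_sc=2.0, temp_c=20.0)
can_wake = p_est > 25.0
```

    A linear evaluation like this costs a few multiply-adds and no table storage, which is why the abstract notes it minimizes vehicle processing and memory utilization.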

  2. PLANETSYS, a Computer Program for the Steady State and Transient Thermal Analysis of a Planetary Power Transmission System: User's Manual

    NASA Technical Reports Server (NTRS)

    Hadden, G. B.; Kleckner, R. J.; Ragen, M. A.; Dyba, G. J.; Sheynin, L.

    1981-01-01

    The material presented is structured to guide the user in the practical and correct implementation of PLANETSYS which is capable of simulating the thermomechanical performance of a multistage planetary power transmission. In this version of PLANETSYS, the user can select either SKF or NASA models in calculating lubricant film thickness and traction forces.

  3. Comparison of Analytical Predictions and Experimental Results for a Dual Brayton Power System (Discussion on Test Hardware and Computer Model for a Dual Brayton System)

    NASA Technical Reports Server (NTRS)

    Johnson, Paul K.

    2007-01-01

    NASA Glenn Research Center (GRC) contracted Barber-Nichols, Arvada, CO to construct a dual Brayton power conversion system for use as a hardware proof of concept and to validate results from a computational code known as the Closed Cycle System Simulation (CCSS). Initial checkout tests were performed at Barber-Nichols to ready the system for delivery to GRC. This presentation describes the system hardware components and lists the types of checkout tests performed, along with a couple of issues encountered while conducting the tests. A description of the CCSS model is also presented. The checkout tests did not focus on generating data; therefore, no test data or model analyses are presented.

  4. Computer sciences

    NASA Technical Reports Server (NTRS)

    Smith, Paul H.

    1988-01-01

    The Computer Science Program provides advanced concepts, techniques, system architectures, algorithms, and software for both space and aeronautics information sciences and computer systems. The overall goal is to provide the technical foundation within NASA for the advancement of computing technology in aerospace applications. The research program is improving the state of knowledge of fundamental aerospace computing principles and advancing computing technology in space applications such as software engineering and information extraction from data collected by scientific instruments in space. The program includes the development of special algorithms and techniques to exploit the computing power provided by high performance parallel processors and special purpose architectures. Research is being conducted in the fundamentals of data base logic and improvement techniques for producing reliable computing systems.

  5. Computational Study of the Structure, the Flexibility, and the Electronic Circular Dichroism of Staurosporine - a Powerful Protein Kinase Inhibitor

    NASA Astrophysics Data System (ADS)

    Karabencheva-Christova, Tatyana G.; Singh, Warispreet; Christov, Christo Z.

    2014-07-01

    Staurosporine (STU) is a microbial alkaloid which is a universal kinase inhibitor. In order to understand its mechanism of action it is important to explore its structure-property relationships. In this paper we provide the results of a computational study of the structure, the chiroptical properties, and the conformational flexibility of STU, as well as the correlation between the electronic circular dichroism (ECD) spectra and the structure of its complex with anaplastic lymphoma kinase.

  6. Approach to reduce the computational image processing requirements for a computer vision system using sensor preprocessing and the Hotelling transform

    NASA Astrophysics Data System (ADS)

    Schei, Thomas R.; Wright, Cameron H. G.; Pack, Daniel J.

    2005-03-01

    We describe a new development approach to computer vision for a compact, low-power, real-time system such as mobile robots. We take advantage of preprocessing in a biomimetic vision sensor and employ a computational strategy using subspace methods and the Hotelling transform in an effort to reduce the computational imaging load. The combination provides an overall reduction in the computational imaging requirements, although the two stages are not yet optimized for each other and require additional investigation.

  7. Coping with distributed computing

    SciTech Connect

    Cormell, L.

    1992-09-01

    The rapid increase in the availability of high performance, cost-effective RISC/UNIX workstations has been both a blessing and a curse. The blessing of having extremely powerful computing engines available on the desk top is well-known to many users. The user has tremendous freedom, flexibility, and control of his environment. That freedom can, however, become the curse of distributed computing. The user must become a system manager to some extent, he must worry about backups, maintenance, upgrades, etc. Traditionally these activities have been the responsibility of a central computing group. The central computing group, however, may find that it can no longer provide all of the traditional services. With the plethora of workstations now found on so many desktops throughout the entire campus or lab, the central computing group may be swamped by support requests. This talk will address several of these computer support and management issues by providing some examples of the approaches taken at various HEP institutions. In addition, a brief review of commercial directions or products for distributed computing and management will be given.

  8. A Summary Description of a Computer Program Concept for the Design and Simulation of Solar Pond Electric Power Generation Systems

    NASA Technical Reports Server (NTRS)

    1984-01-01

    The plant concept comprises a solar pond electric power generation subsystem, an electric power transformer and switch yard, a large solar pond, a water treatment plant, and numerous storage and evaporation ponds. Because a solar pond stores thermal energy over a long period of time, plant operation at any point in time is dependent upon past operation and future perceived generation plans. This time or past history factor introduces a new dimension in the design process. The design optimization of a plant must go beyond examination of operational state points and consider the seasonal variations in solar input, solar pond energy storage, and desired plant annual duty-cycle profile. Models or design tools will be required to optimize a plant design. These models should be developed in order to include a proper but not excessive level of detail. The model should be targeted to a specific objective and not conceived as a do-everything analysis tool, i.e., system design and not gradient-zone stability.

  9. Health effects models for nuclear power plant accident consequence analysis. Modification of models resulting from addition of effects of exposure to alpha-emitting radionuclides: Revision 1, Part 2, Scientific bases for health effects models, Addendum 2

    SciTech Connect

    Abrahamson, S.; Bender, M.A.; Boecker, B.B.; Scott, B.R.; Gilbert, E.S.

    1993-05-01

    The Nuclear Regulatory Commission (NRC) has sponsored several studies to identify and quantify, through the use of models, the potential health effects of accidental releases of radionuclides from nuclear power plants. The Reactor Safety Study provided the basis for most of the earlier estimates related to these health effects. Subsequent efforts by NRC-supported groups resulted in improved health effects models that were published in the report entitled "Health Effects Models for Nuclear Power Plant Consequence Analysis", NUREG/CR-4214, 1985 and revised further in the 1989 report NUREG/CR-4214, Rev. 1, Part 2. The health effects models presented in the 1989 NUREG/CR-4214 report were developed for exposure to low-linear energy transfer (LET) (beta and gamma) radiation based on the best scientific information available at that time. Since the 1989 report was published, two addenda to that report have been prepared to (1) incorporate other scientific information related to low-LET health effects models and (2) extend the models to consider the possible health consequences of the addition of alpha-emitting radionuclides to the exposure source term. The first addendum report, entitled "Health Effects Models for Nuclear Power Plant Accident Consequence Analysis, Modifications of Models Resulting from Recent Reports on Health Effects of Ionizing Radiation, Low LET Radiation, Part 2: Scientific Bases for Health Effects Models," was published in 1991 as NUREG/CR-4214, Rev. 1, Part 2, Addendum 1. This second addendum addresses the possibility that some fraction of the accident source term from an operating nuclear power plant comprises alpha-emitting radionuclides. Consideration of chronic high-LET exposure from alpha radiation as well as acute and chronic exposure to low-LET beta and gamma radiations is a reasonable extension of the health effects model.

  10. COMMIX-PPC: A three-dimensional transient multicomponent computer program for analyzing performance of power plant condensers. Volume 2, User`s guide and manual

    SciTech Connect

    Chien, T.H.; Domanus, H.M.; Sha, W.T.

    1993-02-01

    The COMMIX-PPC computer program is an extended and improved version of earlier COMMIX codes and is specifically designed for evaluating the thermal performance of power plant condensers. The COMMIX codes are general-purpose computer programs for the analysis of fluid flow and heat transfer in complex industrial systems. In COMMIX-PPC, two major features have been added to previously published COMMIX codes. One feature is the incorporation of one-dimensional conservation of mass, momentum, and energy equations on the tube side, and the proper accounting for the thermal interaction between shell and tube side through the porous medium approach. The other added feature is the extension of the three-dimensional conservation equations for shell-side flow to treat the flow of a multicomponent medium. COMMIX-PPC is designed to perform steady-state and transient three-dimensional analysis of fluid flow with heat transfer in a power plant condenser. However, the code is designed in a generalized fashion so that, with some modification, it can be used to analyze processes in any heat exchanger or other single-phase engineering applications.

  11. COMMIX-PPC: A three-dimensional transient multicomponent computer program for analyzing performance of power plant condensers. Volume 1, Equations and numerics

    SciTech Connect

    Chien, T.H.; Domanus, H.M.; Sha, W.T.

    1993-02-01

    The COMMIX-PPC computer program is an extended and improved version of earlier COMMIX codes and is specifically designed for evaluating the thermal performance of power plant condensers. The COMMIX codes are general-purpose computer programs for the analysis of fluid flow and heat transfer in complex industrial systems. In COMMIX-PPC, two major features have been added to previously published COMMIX codes. One feature is the incorporation of one-dimensional equations of conservation of mass, momentum, and energy on the tube side and the proper accounting for the thermal interaction between shell and tube side through the porous-medium approach. The other added feature is the extension of the three-dimensional conservation equations for shell-side flow to treat the flow of a multicomponent medium. COMMIX-PPC is designed to perform steady-state and transient, three-dimensional analysis of fluid flow with heat transfer in a power plant condenser. However, the code is designed in a generalized fashion so that, with some modification, it can be used to analyze processes in any heat exchanger or other single-phase engineering applications. Volume I (Equations and Numerics) of this report describes in detail the basic equations, formulation, solution procedures, and models for the phenomena. Volume II (User's Guide and Manual) contains the input instructions, flow charts, sample problems, and descriptions of available options and boundary conditions.

  12. Transition Metal Diborides as Electrode Material for MHD Direct Power Extraction: High-temperature Oxidation of ZrB2-HfB2 Solid Solution with LaB6 Addition

    NASA Astrophysics Data System (ADS)

    Sitler, Steven; Hill, Cody; Raja, Krishnan S.; Charit, Indrajit

    2016-04-01

    Transition metal borides are being considered for use as potential electrode coating materials in magnetohydrodynamic direct power extraction plants from coal-fired plasma. These electrode materials will be exposed to aggressive service conditions at high temperatures. Therefore, high-temperature oxidation resistance is an important property. Consolidated samples containing an equimolar solid solution of ZrB2-HfB2 with and without the addition of 1.8 mol pct LaB6 were prepared by ball milling of commercial boride material followed by spark plasma sintering. These samples were oxidized at 1773 K (1500 °C) in two different conditions: (1) as-sintered and (2) anodized (10 V in 0.1 M KOH electrolyte). Oxidation studies were carried out in 0.3 × 10⁵ Pa and 0.1 Pa oxygen partial pressures. The anodic oxide layers showed hafnium enrichment on the surface of the samples, whereas the high-temperature oxides showed zirconium enrichment. The anodized samples without LaB6 addition showed about 2.5 times higher oxidation resistance in high-oxygen partial pressures than the as-sintered samples. Addition of LaB6 improved the oxidation resistance in the as-sintered condition by about 30 pct in the high-oxygen partial pressure tests.

  13. Influence of the additional p+ doped layers on the properties of AlGaAs/InGaAs/AlGaAs heterostructures for high power SHF transistors

    NASA Astrophysics Data System (ADS)

    Gulyaev, D. V.; Zhuravlev, K. S.; Bakarov, A. K.; Toropov, A. I.; Protasov, D. Yu; Gutakovskii, A. K.; Ber, B. Ya; Kazantsev, D. Yu

    2016-03-01

    The peculiarities of a new type of pseudomorphic AlGaAs/InGaAs/AlGaAs heterostructures with additional acceptor doping of the barriers, used for the creation of power SHF pseudomorphic high electron mobility transistors (pHEMTs), have been studied. A comparison of the transport characteristics of the new and typical pHEMT heterostructures was carried out. The influence of the acceptor impurities doped into the AlGaAs barriers of the new pHEMT heterostructure on the transport properties was studied. It was shown that the application of the additional p+ doped barrier layers allows a twofold increase in the two-dimensional electron gas (2DEG) concentration in the InGaAs quantum well without parasitic parallel conductivity in the AlGaAs barrier layers. An estimation of the concentration of donors and acceptors penetrating into the deliberately undoped InGaAs quantum well from the AlGaAs barriers was performed by secondary ion mass spectrometry and photoluminescence spectroscopy. Taking into account electron scattering by ionized impurity atoms, calculation of the electron mobility in the InGaAs channel showed that some reduction of the electron mobility results from scattering by ionized Si donors due to an increase in the Si concentration and, therefore, is not caused by the application of additional p+ doped layers in the construction of pHEMT heterostructures.

  15. Leveraging the Power of High Performance Computing for Next Generation Sequencing Data Analysis: Tricks and Twists from a High Throughput Exome Workflow

    PubMed Central

    Wonczak, Stephan; Thiele, Holger; Nieroda, Lech; Jabbari, Kamel; Borowski, Stefan; Sinha, Vishal; Gunia, Wilfried; Lang, Ulrich; Achter, Viktor; Nürnberg, Peter

    2015-01-01

    Next generation sequencing (NGS) has been a great success and is now a standard method of research in the life sciences. With this technology, dozens of whole genomes or hundreds of exomes can be sequenced in a rather short time, producing huge amounts of data. Complex bioinformatics analyses are required to turn these data into scientific findings. In order to run these analyses fast, automated workflows implemented on high performance computers are state of the art. While providing sufficient compute power and storage to meet the NGS data challenge, high performance computing (HPC) systems require special care when utilized for high throughput processing. This is especially true if the HPC system is shared by different users. Here, stability, robustness and maintainability are as important for automated workflows as speed and throughput. To achieve all of these aims, dedicated solutions have to be developed. In this paper, we present the tricks and twists that we utilized in the implementation of our exome data processing workflow. It may serve as a guideline for other high throughput data analysis projects using a similar infrastructure. The code implementing our solutions is provided in the supporting information files. PMID:25942438

  16. A modeling and computer simulation approach to determine optimal lower extremity joint angular velocities based on a criterion to maximize individual muscle power.

    PubMed

    Hawkins, D

    1994-03-01

    A computer program was developed in conjunction with a musculoskeletal modeling scheme to determine lower extremity joint angular velocity profiles which allow specific muscles, if activated tetanically, to generate their greatest power. As input the program requires subject anthropometric and joint configuration data. Muscle-tendon (MT) attachment location data and a straight-line MT model are used to calculate MT lengths for each joint configuration. The shortening velocity which allows an active muscle to generate its greatest power is calculated based on muscle architecture and a relationship between power and shortening velocity. A finite difference technique is used to calculate the time between sequential joint configurations which will produce the optimal muscle shortening velocity. This time is then used to calculate optimal joint angular velocities for each muscle and for each joint configuration. The utility of this program is demonstrated by calculating optimal joint angular velocities for fifteen muscles and comparing calculated knee extension velocities with experimental results cited in the literature. PMID:8062553
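The abstract's core computation, finding the shortening velocity at which an active muscle generates its greatest power, can be sketched from Hill's classic force-velocity relation. The parameter values below (F0, vmax, a/F0 = 0.25) are illustrative assumptions, not values from the paper:

```python
# Sketch: shortening velocity that maximizes muscle power, assuming Hill's
# hyperbolic force-velocity relation. All parameter values are illustrative.
F0 = 1000.0      # maximum isometric force, N (assumed)
vmax = 1.0       # maximum shortening velocity, m/s (assumed)
a = 0.25 * F0    # Hill constant; a/F0 = 0.25 is a typical textbook value
b = a * vmax / F0

def force(v):
    # Hill's relation: (F + a)(v + b) = (F0 + a) b, solved for F
    return (F0 + a) * b / (v + b) - a

# coarse grid search for the velocity maximizing power P(v) = F(v) * v
vs = [i * vmax / 10000 for i in range(1, 10000)]
v_opt = max(vs, key=lambda v: force(v) * v)
print(round(v_opt / vmax, 2))  # ~0.31 vmax, the classic result for a/F0 = 0.25
```

Setting dP/dv = 0 analytically gives v_opt = b(√((F0+a)/a) − 1) ≈ 0.309 vmax for these constants, which the grid search reproduces.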

  17. Computer-assisted assignment of functional domains in the nonstructural polyprotein of hepatitis E virus: delineation of an additional group of positive-strand RNA plant and animal viruses.

    PubMed

    Koonin, E V; Gorbalenya, A E; Purdy, M A; Rozanov, M N; Reyes, G R; Bradley, D W

    1992-09-01

    Computer-assisted comparison of the nonstructural polyprotein of hepatitis E virus (HEV) with proteins of other positive-strand RNA viruses allowed the identification of the following putative functional domains: (i) RNA-dependent RNA polymerase, (ii) RNA helicase, (iii) methyltransferase, (iv) a domain of unknown function ("X" domain) flanking the papain-like protease domains in the polyproteins of animal positive-strand RNA viruses, and (v) papain-like cysteine protease domain distantly related to the putative papain-like protease of rubella virus (RubV). Comparative analysis of the polymerase and helicase sequences of positive-strand RNA viruses belonging to the so-called "alpha-like" supergroup revealed grouping between HEV, RubV, and beet necrotic yellow vein virus (BNYVV), a plant furovirus. Two additional domains have been identified: one showed significant conservation between HEV, RubV, and BNYVV, and the other showed conservation specifically between HEV and RubV. The large nonstructural proteins of HEV, RubV, and BNYVV retained similar domain organization, with the exceptions of relocation of the putative protease domain in HEV as compared to RubV and the absence of the protease and X domains in BNYVV. These observations show that HEV, RubV, and BNYVV encompass partially conserved arrays of distinctive putative functional domains, suggesting that these viruses constitute a distinct monophyletic group within the alpha-like supergroup of positive-strand RNA viruses. PMID:1518855

  18. Application of computational neural networks in predicting atmospheric pollutant concentrations due to fossil-fired electric power generation

    SciTech Connect

    El-Hawary, F.

    1995-12-31

    The ability to accurately predict the behavior of a dynamic system is of essential importance in monitoring and control of complex processes. In this regard, recent advances in neural-net based system identification represent a significant step toward development and design of a new generation of control tools for increased system performance and reliability. The enabling functionality is the accurate representation of a model of a nonlinear and nonstationary dynamic system. This functionality provides valuable new opportunities including: (1) The ability to predict future system behavior on the basis of actual system observations, (2) On-line evaluation and display of system performance and design of early warning systems, and (3) Controller optimization for improved system performance. In this presentation, we discuss the issues involved in definition and design of learning control systems and their impact on power system control. Several numerical examples are provided for illustrative purposes.

  19. FERMI OBSERVATIONS OF GRB 090510: A SHORT-HARD GAMMA-RAY BURST WITH AN ADDITIONAL, HARD POWER-LAW COMPONENT FROM 10 keV TO GeV ENERGIES

    SciTech Connect

    Ackermann, M.; Bechtol, K.; Berenji, B.; Blandford, R. D.; Bloom, E. D.; Borgland, A. W.; Bouvier, A.; Asano, K.; Atwood, W. B.; Axelsson, M.; Baldini, L.; Bellazzini, R.; Bregeon, J.; Ballet, J.; Baring, M. G.; Bastieri, D.; Bhat, P. N.; Bissaldi, E.; Bonamente, E.

    2010-06-20

    We present detailed observations of the bright short-hard gamma-ray burst GRB 090510 made with the Gamma-ray Burst Monitor (GBM) and Large Area Telescope (LAT) on board the Fermi observatory. GRB 090510 is the first burst detected by the LAT that shows strong evidence for a deviation from a Band spectral fitting function during the prompt emission phase. The time-integrated spectrum is fit by the sum of a Band function with E_peak = 3.9 ± 0.3 MeV, which is the highest yet measured, and a hard power-law component with photon index -1.62 ± 0.03 that dominates the emission below ≈20 keV and above ≈100 MeV. The onset of the high-energy spectral component appears to be delayed by ≈0.1 s with respect to the onset of a component well fit with a single Band function. A faint GBM pulse and a LAT photon are detected 0.5 s before the main pulse. During the prompt phase, the LAT detected a photon with energy 30.5 +5.8/-2.6 GeV, the highest ever measured from a short GRB. Observation of this photon sets a minimum bulk outflow Lorentz factor, Γ ≳ 1200, using simple γγ opacity arguments for this GRB at redshift z = 0.903 and a variability timescale on the order of tens of ms for the ≈100 keV to few-MeV flux. Stricter high-confidence estimates imply Γ ≳ 1000 and still require that the outflows powering short GRBs are at least as highly relativistic as those of long-duration GRBs. Implications of the temporal behavior and power-law shape of the additional component on synchrotron/synchrotron self-Compton, external-shock synchrotron, and hadronic models are considered.

  20. Fermi Observations of GRB 090510: A Short-Hard Gamma-ray Burst with an Additional, Hard Power-law Component from 10 keV TO GeV Energies

    NASA Astrophysics Data System (ADS)

    Ackermann, M.; Asano, K.; Atwood, W. B.; Axelsson, M.; Baldini, L.; Ballet, J.; Barbiellini, G.; Baring, M. G.; Bastieri, D.; Bechtol, K.; Bellazzini, R.; Berenji, B.; Bhat, P. N.; Bissaldi, E.; Blandford, R. D.; Bloom, E. D.; Bonamente, E.; Borgland, A. W.; Bouvier, A.; Bregeon, J.; Brez, A.; Briggs, M. S.; Brigida, M.; Bruel, P.; Buson, S.; Caliandro, G. A.; Cameron, R. A.; Caraveo, P. A.; Carrigan, S.; Casandjian, J. M.; Cecchi, C.; Çelik, Ö.; Charles, E.; Chiang, J.; Ciprini, S.; Claus, R.; Cohen-Tanugi, J.; Connaughton, V.; Conrad, J.; Dermer, C. D.; de Palma, F.; Dingus, B. L.; Silva, E. do Couto e.; Drell, P. S.; Dubois, R.; Dumora, D.; Farnier, C.; Favuzzi, C.; Fegan, S. J.; Finke, J.; Focke, W. B.; Frailis, M.; Fukazawa, Y.; Fusco, P.; Gargano, F.; Gasparrini, D.; Gehrels, N.; Germani, S.; Giglietto, N.; Giordano, F.; Glanzman, T.; Godfrey, G.; Granot, J.; Grenier, I. A.; Grondin, M.-H.; Grove, J. E.; Guiriec, S.; Hadasch, D.; Harding, A. K.; Hays, E.; Horan, D.; Hughes, R. E.; Jóhannesson, G.; Johnson, W. N.; Kamae, T.; Katagiri, H.; Kataoka, J.; Kawai, N.; Kippen, R. M.; Knödlseder, J.; Kocevski, D.; Kouveliotou, C.; Kuss, M.; Lande, J.; Latronico, L.; Lemoine-Goumard, M.; Llena Garde, M.; Longo, F.; Loparco, F.; Lott, B.; Lovellette, M. N.; Lubrano, P.; Makeev, A.; Mazziotta, M. N.; McEnery, J. E.; McGlynn, S.; Meegan, C.; Mészáros, P.; Michelson, P. F.; Mitthumsiri, W.; Mizuno, T.; Moiseev, A. A.; Monte, C.; Monzani, M. E.; Moretti, E.; Morselli, A.; Moskalenko, I. V.; Murgia, S.; Nakajima, H.; Nakamori, T.; Nolan, P. L.; Norris, J. P.; Nuss, E.; Ohno, M.; Ohsugi, T.; Omodei, N.; Orlando, E.; Ormes, J. F.; Ozaki, M.; Paciesas, W. S.; Paneque, D.; Panetta, J. H.; Parent, D.; Pelassa, V.; Pepe, M.; Pesce-Rollins, M.; Piron, F.; Preece, R.; Rainò, S.; Rando, R.; Razzano, M.; Razzaque, S.; Reimer, A.; Ritz, S.; Rodriguez, A. Y.; Roth, M.; Ryde, F.; Sadrozinski, H. F.-W.; Sander, A.; Scargle, J. D.; Schalk, T. L.; Sgrò, C.; Siskind, E. J.; Smith, P. D.; Spandre, G.; Spinelli, P.; Stamatikos, M.; Stecker, F. W.; Strickman, M. S.; Suson, D. J.; Tajima, H.; Takahashi, H.; Takahashi, T.; Tanaka, T.; Thayer, J. B.; Thayer, J. G.; Thompson, D. J.; Tibaldo, L.; Toma, K.; Torres, D. F.; Tosti, G.; Tramacere, A.; Uchiyama, Y.; Uehara, T.; Usher, T. L.; van der Horst, A. J.; Vasileiou, V.; Vilchez, N.; Vitale, V.; von Kienlin, A.; Waite, A. P.; Wang, P.; Wilson-Hodge, C.; Winer, B. L.; Wu, X. F.; Yamazaki, R.; Yang, Z.; Ylinen, T.; Ziegler, M.

    2010-06-01

    We present detailed observations of the bright short-hard gamma-ray burst GRB 090510 made with the Gamma-ray Burst Monitor (GBM) and Large Area Telescope (LAT) on board the Fermi observatory. GRB 090510 is the first burst detected by the LAT that shows strong evidence for a deviation from a Band spectral fitting function during the prompt emission phase. The time-integrated spectrum is fit by the sum of a Band function with E_peak = 3.9 ± 0.3 MeV, which is the highest yet measured, and a hard power-law component with photon index -1.62 ± 0.03 that dominates the emission below ≈20 keV and above ≈100 MeV. The onset of the high-energy spectral component appears to be delayed by ~0.1 s with respect to the onset of a component well fit with a single Band function. A faint GBM pulse and a LAT photon are detected 0.5 s before the main pulse. During the prompt phase, the LAT detected a photon with energy 30.5 +5.8/-2.6 GeV, the highest ever measured from a short GRB. Observation of this photon sets a minimum bulk outflow Lorentz factor, Γ ≳ 1200, using simple γγ opacity arguments for this GRB at redshift z = 0.903 and a variability timescale on the order of tens of ms for the ≈100 keV-few MeV flux. Stricter high-confidence estimates imply Γ ≳ 1000 and still require that the outflows powering short GRBs are at least as highly relativistic as those of long-duration GRBs. Implications of the temporal behavior and power-law shape of the additional component on synchrotron/synchrotron self-Compton, external-shock synchrotron, and hadronic models are considered.

  1. Requirements for Computer Based-Procedures for Nuclear Power Plant Field Operators Results from a Qualitative Study

    SciTech Connect

    Katya Le Blanc; Johanna Oxstrand

    2012-05-01

    Although computer-based procedures (CBPs) have been investigated as a way to enhance operator performance on procedural tasks in the nuclear industry for almost thirty years, they are not currently widely deployed at United States utilities. One of the barriers to the wide-scale deployment of CBPs is the lack of operational experience with CBPs that could serve as a sound basis for justifying their use at nuclear utilities. Utilities are hesitant to adopt CBPs because of concern over potential costs of implementation and concern over regulatory approval. Regulators require a sound technical basis for the use of any procedure at the utilities; without operating experience to support the use of CBPs, it is difficult to establish such a technical basis. In an effort to begin the process of developing a technical basis for CBPs, researchers at Idaho National Laboratory are partnering with industry to explore CBPs with the objective of defining requirements for CBPs and developing an industry-wide vision and path forward for the use of CBPs. This paper describes the results from a qualitative study aimed at defining requirements for CBPs to be used by field operators and maintenance technicians.

  2. Computational Study of the Impact of Unsteadiness on the Aerodynamic Performance of a Variable- Speed Power Turbine

    NASA Technical Reports Server (NTRS)

    Welch, Gerard E.

    2012-01-01

    The design-point and off-design performance of an embedded 1.5-stage portion of a variable-speed power turbine (VSPT) was assessed using Reynolds-Averaged Navier-Stokes (RANS) analyses with mixing-planes and sector-periodic, unsteady RANS analyses. The VSPT provides one means by which to effect the nearly 50 percent main-rotor speed change required for the NASA Large Civil Tilt-Rotor (LCTR) application. The change in VSPT shaft-speed during the LCTR mission results in blade-row incidence angle changes as high as 55°. Negative incidence levels of this magnitude at takeoff operation give rise to a vortical flow structure in the pressure-side cove of a high-turn rotor that transports low-momentum flow toward the casing endwall. The intent of the effort was to assess the impact of the unsteadiness of blade-row interaction on the time-mean flow and, specifically, to identify potential departure from the predicted trend of efficiency with shaft-speed change of meanline and 3-D RANS/mixing-plane analyses used for design.

  3. A fast technique for computing syndromes of BCH and RS codes. [deep space network

    NASA Technical Reports Server (NTRS)

    Reed, I. S.; Truong, T. K.; Miller, R. L.

    1979-01-01

    A combination of the Chinese Remainder Theorem and Winograd's algorithm is used to compute transforms of odd length over GF(2^m). Such transforms are used to compute the syndromes needed for decoding BCH and RS codes. The present scheme requires substantially fewer multiplications and additions than the conventional method of computing the syndromes directly.
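For context, the conventional direct method that the paper's CRT/Winograd scheme improves upon evaluates the received polynomial at successive powers of a primitive element. A minimal sketch over GF(2^4) follows; the field size, generator roots, and message are arbitrary illustrations, not the deep-space-network code parameters:

```python
# Direct syndrome computation S_j = r(alpha^j) over GF(2^4), the "conventional
# method" the abstract refers to. Field and code parameters are illustrative.
PRIM = 0b10011  # primitive polynomial x^4 + x + 1 for GF(16)

def build_tables():
    exp, log = [0] * 30, [0] * 16
    x = 1
    for i in range(15):
        exp[i], log[x] = x, i
        x <<= 1
        if x & 0b10000:
            x ^= PRIM
    for i in range(15, 30):          # duplicate so gf_mul needs no modulo
        exp[i] = exp[i - 15]
    return exp, log

EXP, LOG = build_tables()

def gf_mul(a, b):
    return 0 if a == 0 or b == 0 else EXP[LOG[a] + LOG[b]]

def poly_mul(p, q):                  # coefficient index = power of x
    out = [0] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, c in enumerate(q):
            out[i + j] ^= gf_mul(a, c)   # addition in GF(2^m) is XOR
    return out

def syndromes(received, num_syn):
    """Evaluate r(x) at alpha^1 .. alpha^num_syn directly."""
    syn = []
    for j in range(1, num_syn + 1):
        s = 0
        for k, c in enumerate(received):
            if c:
                s ^= EXP[(LOG[c] + j * k) % 15]   # c_k * alpha^(j*k)
        syn.append(s)
    return syn

# generator with roots alpha^1..alpha^4; any multiple is a valid codeword
g = [1]
for j in range(1, 5):
    g = poly_mul(g, [EXP[j], 1])     # factor (x + alpha^j)
codeword = poly_mul([3, 0, 7, 11], g)
print(syndromes(codeword, 4))        # [0, 0, 0, 0] for an error-free codeword
corrupted = codeword[:]
corrupted[2] ^= 5                    # inject a single symbol error
print(syndromes(corrupted, 4))       # nonzero syndromes flag the error
```

The fast scheme in the paper obtains the same syndrome values with far fewer field multiplications by restructuring this evaluation as short transforms.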

  4. Optical computing.

    NASA Technical Reports Server (NTRS)

    Stroke, G. W.

    1972-01-01

    Applications of the optical computer include an approach for increasing the sharpness of images obtained from the most powerful electron microscopes and fingerprint/credit card identification. The information-handling capability of the various optical computing processes is very great. Modern synthetic-aperture radars scan upward of 100,000 resolvable elements per second. Fields which have assumed major importance on the basis of optical computing principles are optical image deblurring, coherent side-looking synthetic-aperture radar, and correlative pattern recognition. Some examples of the most dramatic image deblurring results are shown.

  5. Influence of signals length and noise in power spectral densities computation using Hilbert-Huang Transform in synthetic HRV

    NASA Astrophysics Data System (ADS)

    Rodríguez, María. G.; Altuve, Miguel; Lollett, Carlos; Wong, Sara

    2013-11-01

    Among non-invasive techniques, heart rate variability (HRV) analysis has become widely used for assessing the balance of the autonomic nervous system. Research in this area has not stopped, and alternative tools for the study and interpretation of HRV are still being proposed. Nevertheless, frequency-domain analysis of HRV is controversial when the heartbeat sequence is non-stationary. The Hilbert-Huang Transform (HHT) is a relatively new technique for time-frequency analysis of non-linear and non-stationary signals. The main purpose of this work is to investigate the influence of time series length and noise on HRV estimates from synthetic signals using HHT, and to compare it with the Welch method. Synthetic heartbeat time series with different lengths and levels of signal-to-noise ratio (SNR) were investigated. Results show that (i) sequence length did not affect the estimation of HRV spectral parameters, and (ii) HHT performed favorably at different SNRs. Additionally, HHT can be applied to non-stationary signals from nonlinear systems, and it will be useful in HRV analysis for interpreting autonomic activity when acute and transient phenomena are assessed.
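As a point of reference for the Welch method the study compares against, here is a minimal NumPy sketch of Welch's averaged-periodogram PSD applied to a synthetic tachogram with LF (0.1 Hz) and HF (0.25 Hz) components; the sampling rate, amplitudes, and HRV band limits are illustrative assumptions, not the study's settings:

```python
import numpy as np

fs = 4.0  # evenly resampled RR series at 4 Hz (a common HRV choice; assumption)
t = np.arange(0, 300, 1 / fs)
rng = np.random.default_rng(0)
# synthetic tachogram: LF (0.1 Hz) + HF (0.25 Hz) oscillations + white noise
x = (0.05 * np.sin(2 * np.pi * 0.1 * t)
     + 0.03 * np.sin(2 * np.pi * 0.25 * t)
     + 0.01 * rng.standard_normal(t.size))

def welch_psd(x, fs, nperseg=256):
    """Welch PSD: Hann-windowed, mean-detrended segments, 50% overlap."""
    step = nperseg // 2
    win = np.hanning(nperseg)
    scale = fs * (win ** 2).sum()        # density scaling
    segs = []
    for start in range(0, len(x) - nperseg + 1, step):
        seg = x[start:start + nperseg]
        segs.append(np.abs(np.fft.rfft((seg - seg.mean()) * win)) ** 2 / scale)
    return np.fft.rfftfreq(nperseg, 1 / fs), np.mean(segs, axis=0)

f, pxx = welch_psd(x, fs)
lf = pxx[(f >= 0.04) & (f < 0.15)].sum()   # standard LF band
hf = pxx[(f >= 0.15) & (f < 0.4)].sum()    # standard HF band
print(f"LF/HF ratio: {lf / hf:.2f}")
```

The PSD peaks land at the injected 0.1 Hz and 0.25 Hz components, and the LF/HF ratio summarizes sympathovagal balance as in conventional HRV analysis.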

  6. The Glass Computer

    ERIC Educational Resources Information Center

    Paesler, M. A.

    2009-01-01

    Digital computers use different kinds of memory, each of which is either volatile or nonvolatile. On most computers only the hard drive memory is nonvolatile, i.e., it retains all information stored on it when the power is off. When a computer is turned on, an operating system stored on the hard drive is loaded into the computer's memory cache and…

  7. Power management system

    DOEpatents

    Algrain, Marcelo C.; Johnson, Kris W.; Akasam, Sivaprasad; Hoff, Brian D.

    2007-10-02

    A method of managing power resources for an electrical system of a vehicle may include identifying enabled power sources from among a plurality of power sources in electrical communication with the electrical system and calculating a threshold power value for the enabled power sources. A total power load placed on the electrical system by one or more power consumers may be measured. If the total power load exceeds the threshold power value, then a determination may be made as to whether one or more additional power sources is available from among the plurality of power sources. At least one of the one or more additional power sources may be enabled, if available.
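The patent's method, computing a threshold from the enabled sources and enabling additional sources when the measured load exceeds it, can be sketched as follows; the class and field names are hypothetical illustrations, not taken from the patent:

```python
# Hedged sketch of the described power-management method: the threshold is the
# combined rating of enabled sources, and additional sources are enabled only
# when the measured total load exceeds it. Names and ratings are illustrative.
from dataclasses import dataclass

@dataclass
class PowerSource:
    name: str
    rated_watts: float
    enabled: bool = False

def threshold_power(sources):
    # threshold power value for the currently enabled sources
    return sum(s.rated_watts for s in sources if s.enabled)

def manage(sources, total_load_watts):
    """Enable additional sources, one at a time, until the threshold covers
    the measured load (or no more sources are available)."""
    for s in sources:
        if total_load_watts <= threshold_power(sources):
            break
        if not s.enabled:
            s.enabled = True
    return threshold_power(sources)

fleet = [PowerSource("alternator", 1500.0, enabled=True),
         PowerSource("aux battery", 800.0),
         PowerSource("generator", 2000.0)]
print(manage(fleet, 2000.0))  # enables the aux battery -> 2300.0 W threshold
```

Only the battery is brought online here, since its addition already raises the threshold above the 2000 W load; the generator stays disabled.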

  8. Teaching Physics with Computers

    NASA Astrophysics Data System (ADS)

    Botet, R.; Trizac, E.

    2005-09-01

    Computers are now so common in our everyday life that it is difficult to imagine the computer-free scientific life of the years before the 1980s. And yet, in spite of an unquestionable rise, the use of computers in the realm of education is still in its infancy. This is not a problem with students: for the new generation, the pre-computer age seems as far in the past as the age of the dinosaurs. It may instead be more a question of teacher attitude. Traditional education is based on centuries of polished concepts and equations, while computers require us to think differently about our method of teaching, and to revise the content accordingly. Our brains do not work in terms of numbers, but use abstract and visual concepts; hence, communication between computer and man boomed when computers escaped the world of numbers to reach a visual interface. From this time on, computers have generated new knowledge and, more importantly for teaching, new ways to grasp concepts. Therefore, just as real experiments were the starting point for theory, virtual experiments can be used to understand theoretical concepts. But there are important differences. Some of them are fundamental: a virtual experiment may allow for the exploration of length and time scales together with a level of microscopic complexity not directly accessible to conventional experiments. Others are practical: numerical experiments are completely safe, unlike some dangerous but essential laboratory experiments, and are often less expensive. Finally, some numerical approaches are suited only to teaching, as the concept necessary for the physical problem, or its solution, lies beyond the scope of traditional methods. For all these reasons, computers open physics courses to novel concepts, bringing education and research closer. In addition, and this is not a minor point, they respond naturally to the basic pedagogical needs of interactivity, feedback, and individualization of instruction. 
This is why one can

  9. Parallel Analysis and Visualization on Cray Compute Node Linux

    SciTech Connect

    Pugmire, Dave; Ahern, Sean

    2008-01-01

    Capability computer systems are deployed to give researchers the computational power required to investigate and solve key challenges facing the scientific community. As the power of these computer systems increases, the computational problem domain typically increases in size, complexity and scope. These increases strain the ability of commodity analysis and visualization clusters to effectively perform post-processing tasks and provide critical insight and understanding to the computed results. An alternative to purchasing increasingly larger, separate analysis and visualization commodity clusters is to use the computational system itself to perform post-processing tasks. In this paper, the recent successful port of VisIt, a parallel, open source analysis and visualization tool, to compute node linux running on the Cray is detailed. Additionally, the unprecedented ability of this resource for analysis and visualization is discussed and a report on obtained results is presented.

  10. Computation Directorate 2008 Annual Report

    SciTech Connect

    Crawford, D L

    2009-03-25

    Whether a computer is simulating the aging and performance of a nuclear weapon, the folding of a protein, or the probability of rainfall over a particular mountain range, the necessary calculations can be enormous. Our computers help researchers answer these and other complex problems, and each new generation of system hardware and software widens the realm of possibilities. Building on Livermore's historical excellence and leadership in high-performance computing, Computation added more than 331 trillion floating-point operations per second (teraFLOPS) of power to LLNL's computer room floors in 2008. In addition, Livermore's next big supercomputer, Sequoia, advanced ever closer to its 2011-2012 delivery date, as architecture plans and the procurement contract were finalized. Hyperion, an advanced technology cluster test bed that teams Livermore with 10 industry leaders, made a big splash when it was announced during Michael Dell's keynote speech at the 2008 Supercomputing Conference. The Wall Street Journal touted Hyperion as a 'bright spot amid turmoil' in the computer industry. Computation continues to measure and improve the costs of operating LLNL's high-performance computing systems by moving hardware support in-house, by measuring causes of outages to apply resources asymmetrically, and by automating most of the account and access authorization and management processes. These improvements enable more dollars to go toward fielding the best supercomputers for science, while operating them at less cost and greater responsiveness to the customers.

  11. Performance and stability analysis of a photovoltaic power system

    NASA Technical Reports Server (NTRS)

    Merrill, W. C.; Blaha, R. J.; Pickrell, R. L.

    1978-01-01

    The performance and stability characteristics of a 10 kVA photovoltaic power system are studied using linear Bode analysis and a nonlinear analog simulation. Power conversion efficiencies, system stability, and system transient performance results are given for system operation at various levels of solar insolation. Additionally, system operation and the modeling of system components for the purpose of computer simulation are described.

  12. Demographic inferences using short-read genomic data in an approximate Bayesian computation framework: in silico evaluation of power, biases and proof of concept in Atlantic walrus.

    PubMed

    Shafer, Aaron B A; Gattepaille, Lucie M; Stewart, Robert E A; Wolf, Jochen B W

    2015-01-01

    Approximate Bayesian computation (ABC) is a powerful tool for model-based inference of demographic histories from large genetic data sets. For most organisms, its implementation has been hampered by the lack of sufficient genetic data. Genotyping-by-sequencing (GBS) provides cheap genome-scale data to fill this gap, but its potential has not fully been exploited. Here, we explored power, precision and biases of a coalescent-based ABC approach where GBS data were modelled with either a population mutation parameter (θ) or a fixed site (FS) approach, allowing single or several segregating sites per locus. With simulated data ranging from 500 to 50 000 loci, a variety of demographic models could be reliably inferred across a range of timescales and migration scenarios. Posterior estimates were informative with 1000 loci for migration and split time in simple population divergence models. In more complex models, posterior distributions were wide and almost reverted to the uninformative prior even with 50 000 loci. ABC parameter estimates, however, were generally more accurate than an alternative composite-likelihood method. Bottleneck scenarios proved particularly difficult, and only recent bottlenecks without recovery could be reliably detected and dated. Notably, minor-allele-frequency filters - usual practice for GBS data - negatively affected nearly all estimates. With this in mind, we used a combination of FS and θ approaches on empirical GBS data generated from the Atlantic walrus (Odobenus rosmarus rosmarus), collectively providing support for a population split before the last glacial maximum followed by asymmetrical migration and a high Arctic bottleneck. Overall, this study evaluates the potential and limitations of GBS data in an ABC-coalescence framework and proposes a best-practice approach. PMID:25482153
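The inference framework used here, approximate Bayesian computation by rejection, can be illustrated with a toy example far simpler than the study's coalescent models: estimating a Normal mean by accepting prior draws whose simulated summary statistic lies close to the observed one. All numbers below are illustrative:

```python
# Minimal ABC rejection sampler (toy example, not the study's coalescent model):
# accept parameter draws whose simulated summary statistic matches the data.
import random

random.seed(42)
observed = [random.gauss(3.0, 1.0) for _ in range(200)]  # "data", true mean 3.0
obs_mean = sum(observed) / len(observed)

def simulate(mu, n=200):
    # forward model: n draws from Normal(mu, 1)
    return [random.gauss(mu, 1.0) for _ in range(n)]

accepted = []
for _ in range(5000):
    mu = random.uniform(-10, 10)          # draw from a flat prior
    sim = simulate(mu)
    summary = sum(sim) / len(sim)         # summary statistic: sample mean
    if abs(summary - obs_mean) < 0.1:     # tolerance-based acceptance
        accepted.append(mu)

posterior_mean = sum(accepted) / len(accepted)
print(round(posterior_mean, 1))  # close to the true mean of 3.0
```

The accepted draws approximate the posterior; the study applies the same accept/reject logic with coalescent simulations and GBS-derived summary statistics in place of this toy Normal model.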

  13. 18 CFR 33.10 - Additional information.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    .... 33.10 Section 33.10 Conservation of Power and Water Resources FEDERAL ENERGY REGULATORY COMMISSION, DEPARTMENT OF ENERGY REGULATIONS UNDER THE FEDERAL POWER ACT APPLICATIONS UNDER FEDERAL POWER ACT SECTION 203 § 33.10 Additional information. The Director of the Office of Energy Market Regulation, or his...

  14. Computer Language For Optimization Of Design

    NASA Technical Reports Server (NTRS)

    Scotti, Stephen J.; Lucas, Stephen H.

    1991-01-01

    SOL is a computer language geared to the solution of design problems. It includes the mathematical modeling and logical capabilities of a computer language like FORTRAN, and also includes the additional power of nonlinear mathematical programming methods at the language level. The SOL compiler takes SOL-language statements and generates equivalent FORTRAN code and system calls. It provides syntactic and semantic checking for recovery from errors and provides detailed reports containing cross-references to show where each variable is used. SOL is implemented on VAX/VMS computer systems and requires the VAX FORTRAN compiler to produce an executable program.

  15. Design of microstrip components by computer

    NASA Technical Reports Server (NTRS)

    Cisco, T. C.

    1972-01-01

    A number of computer programs are presented for use in the synthesis of microwave components in microstrip geometries. The programs compute the electrical and dimensional parameters required to synthesize couplers, filters, circulators, transformers, power splitters, diode switches, multipliers, diode attenuators and phase shifters. Additional programs are included to analyze and optimize cascaded transmission lines and lumped element networks, to analyze and synthesize Chebyshev and Butterworth filter prototypes, and to compute mixer intermodulation products. The programs are written in FORTRAN and the emphasis of the study is placed on the use of these programs and not on the theoretical aspects of the structures.

  16. The Ames Power Monitoring System

    NASA Technical Reports Server (NTRS)

    Osetinsky, Leonid; Wang, David

    2003-01-01

    The Ames Power Monitoring System (APMS) is a centralized system of power meters, computer hardware, and special-purpose software that collects and stores data on the electrical power consumed by various facilities at Ames Research Center (ARC). This system is needed because of the large and varying nature of the overall ARC power demand, which has been observed to range from 20 to 200 MW. Large portions of peak demand can be attributed to only three wind tunnels (60, 180, and 100 MW, respectively). The APMS helps ARC avoid or minimize costly demand charges by enabling wind-tunnel operators, test engineers, and the power manager to monitor total demand for the center in real time. These persons receive the information they need to manage and schedule energy-intensive research in advance and to adjust loads in real time to ensure that the overall maximum allowable demand is not exceeded. The APMS (see figure) includes a server computer running the Windows NT operating system and can, in principle, include an unlimited number of power meters and client computers. As configured at the time of reporting the information for this article, the APMS included more than 40 power meters monitoring all the major research facilities, plus 15 Windows-based client personal computers that display real-time and historical data to users via graphical user interfaces (GUIs). The power meters and client computers communicate with the server using Transmission Control Protocol/Internet Protocol (TCP/IP) on Ethernet networks, variously through dedicated fiber-optic cables or through the pre-existing ARC local-area network (ARCLAN). The APMS has enabled ARC to achieve significant savings ($1.2 million in 2001) in the cost of power and electric energy by helping personnel to maintain total demand below monthly allowable levels, to manage the overall power factor to avoid low-power-factor penalties, and to use historical system data to identify opportunities for additional energy savings. The APMS also
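The real-time check at the heart of such a system, totaling meter readings and comparing against an allowable demand level, can be sketched as follows; the meter names, readings, and limit are illustrative, not those of the actual APMS:

```python
# Sketch of the real-time demand check at the heart of an APMS-style system.
# Meter names, readings, and the allowable-demand figure are illustrative.
def total_demand_mw(meter_readings):
    """Sum instantaneous demand (MW) across all power meters."""
    return sum(meter_readings.values())

def demand_alert(meter_readings, allowable_mw):
    """True when total center demand exceeds the allowable level."""
    return total_demand_mw(meter_readings) > allowable_mw

readings = {"wind_tunnel_a": 60.0, "wind_tunnel_b": 95.0, "site_base": 20.0}
print(total_demand_mw(readings))      # 175.0
print(demand_alert(readings, 200.0))  # False
```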

  17. Computer Vision Systems

    NASA Astrophysics Data System (ADS)

    Gunasekaran, Sundaram

    Food quality is of paramount consideration for all consumers, and its importance is perhaps only second to food safety. By some definition, food safety is also incorporated into the broad categorization of food quality. Hence, the need for careful and accurate evaluation of food quality is at the forefront of research and development both in academia and industry. Among the many available methods for food quality evaluation, computer vision has proven to be the most powerful, especially for nondestructively extracting and quantifying many features that have direct relevance to food quality assessment and control. Furthermore, computer vision systems serve to rapidly evaluate the most readily observable food quality attributes - the external characteristics such as color, shape, size, surface texture, etc. In addition, it is now possible, using advanced computer vision technologies, to “see” inside a food product and/or package to examine important quality attributes ordinarily unavailable to human evaluators. With rapid advances in electronic hardware and other associated imaging technologies, the cost-effectiveness and speed of computer vision systems have greatly improved and many practical systems are already in place in the food industry.

  18. Chromatin Computation

    PubMed Central

    Bryant, Barbara

    2012-01-01

    In living cells, DNA is packaged along with protein and RNA into chromatin. Chemical modifications to nucleotides and histone proteins are added, removed and recognized by multi-functional molecular complexes. Here I define a new computational model, in which chromatin modifications are information units that can be written onto a one-dimensional string of nucleosomes, analogous to the symbols written onto cells of a Turing machine tape, and chromatin-modifying complexes are modeled as read-write rules that operate on a finite set of adjacent nucleosomes. I illustrate the use of this “chromatin computer” to solve an instance of the Hamiltonian path problem. I prove that chromatin computers are computationally universal – and therefore more powerful than the logic circuits often used to model transcription factor control of gene expression. Features of biological chromatin provide a rich instruction set for efficient computation of nontrivial algorithms in biological time scales. Modeling chromatin as a computer shifts how we think about chromatin function, suggests new approaches to medical intervention, and lays the groundwork for the engineering of a new class of biological computing machines. PMID:22567109
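The paper's model of chromatin-modifying complexes as read-write rules over adjacent nucleosomes can be illustrated with a toy simulation; the marks and the single spreading rule below are invented for illustration and are not taken from the paper:

```python
# Toy sketch of the paper's chromatin-computer idea: nucleosomes carry
# modification symbols on a one-dimensional "tape", and chromatin-modifying
# complexes are read-write rules over pairs of adjacent nucleosomes.
# The marks ("A" = active, "u" = unmarked) and the rule are invented.
def apply_rules(tape, rules):
    """Apply pairwise rewrite rules until no rule changes the tape."""
    changed = True
    while changed:
        changed = False
        for i in range(len(tape) - 1):
            pair = (tape[i], tape[i + 1])
            if pair in rules and rules[pair] != pair:
                tape[i], tape[i + 1] = rules[pair]
                changed = True
    return tape

# One rule: an active mark spreads rightward over unmarked nucleosomes.
rules = {("A", "u"): ("A", "A")}
print(apply_rules(list("Auuu"), rules))  # ['A', 'A', 'A', 'A']
```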

  19. Distributed computing at the SSCL

    SciTech Connect

    Cormell, L.; White, R.

    1993-05-01

    The rapid increase in the availability of high performance, cost-effective RISC/UNIX workstations has been both a blessing and a curse. The blessing of having extremely powerful computing engines available on the desk top is well-known to many users. The user has tremendous freedom, flexibility, and control of his environment. That freedom can, however, become the curse of distributed computing. The user must become a system manager to some extent; he must worry about backups, maintenance, upgrades, etc. Traditionally these activities have been the responsibility of a central computing group. The central computing group, however, may find that it can no longer provide all of the traditional services. With the plethora of workstations now found on so many desktops throughout the entire campus or lab, the central computing group may be swamped by support requests. This talk will address several of these computer support and management issues by discussing the approach taken at the Superconducting Super Collider Laboratory. In addition, a brief review of the future directions of commercial products for distributed computing and management will be given.

  20. Computational psychiatry.

    PubMed

    Wang, Xiao-Jing; Krystal, John H

    2014-11-01

    Psychiatric disorders such as autism and schizophrenia arise from abnormalities in brain systems that underlie cognitive, emotional, and social functions. The brain is enormously complex and its abundant feedback loops on multiple scales preclude intuitive explication of circuit functions. In close interplay with experiments, theory and computational modeling are essential for understanding how, precisely, neural circuits generate flexible behaviors and their impairments give rise to psychiatric symptoms. This Perspective highlights recent progress in applying computational neuroscience to the study of mental disorders. We outline basic approaches, including identification of core deficits that cut across disease categories, biologically realistic modeling bridging cellular and synaptic mechanisms with behavior, and model-aided diagnosis. The need for new research strategies in psychiatry is urgent. Computational psychiatry potentially provides powerful tools for elucidating pathophysiology that may inform both diagnosis and treatment. To achieve this promise will require investment in cross-disciplinary training and research in this nascent field. PMID:25442941

  1. Computational Psychiatry

    PubMed Central

    Wang, Xiao-Jing; Krystal, John H.

    2014-01-01

    Psychiatric disorders such as autism and schizophrenia arise from abnormalities in brain systems that underlie cognitive, emotional and social functions. The brain is enormously complex and its abundant feedback loops on multiple scales preclude intuitive explication of circuit functions. In close interplay with experiments, theory and computational modeling are essential for understanding how, precisely, neural circuits generate flexible behaviors and their impairments give rise to psychiatric symptoms. This Perspective highlights recent progress in applying computational neuroscience to the study of mental disorders. We outline basic approaches, including identification of core deficits that cut across disease categories, biologically realistic modeling bridging cellular and synaptic mechanisms with behavior, and model-aided diagnosis. The need for new research strategies in psychiatry is urgent. Computational psychiatry potentially provides powerful tools for elucidating pathophysiology that may inform both diagnosis and treatment. To achieve this promise will require investment in cross-disciplinary training and research in this nascent field. PMID:25442941

  2. 26 CFR 1.1250-2 - Additional depreciation defined.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... additional depreciation for the property is $1,123, as computed in the table below: Year Actual depreciation... January 1, 1970, the additional depreciation for the property is $567, as computed in the table below... computed in the table below: Year Actual depreciation Straight line Additional depreciation (deficit)...

  3. Heterotic computing: exploiting hybrid computational devices.

    PubMed

    Kendon, Viv; Sebald, Angelika; Stepney, Susan

    2015-07-28

    Current computational theory deals almost exclusively with single models: classical, neural, analogue, quantum, etc. In practice, researchers use ad hoc combinations, realizing only recently that they can be fundamentally more powerful than the individual parts. A Theo Murphy meeting brought together theorists and practitioners of various types of computing, to engage in combining the individual strengths to produce powerful new heterotic devices. 'Heterotic computing' is defined as a combination of two or more computational systems such that they provide an advantage over either substrate used separately. This post-meeting collection of articles provides a wide-ranging survey of the state of the art in diverse computational paradigms, together with reflections on their future combination into powerful and practical applications. PMID:26078351

  4. A Generally Applicable Computer Algorithm Based on the Group Additivity Method for the Calculation of Seven Molecular Descriptors: Heat of Combustion, LogPO/W, LogS, Refractivity, Polarizability, Toxicity and LogBB of Organic Compounds; Scope and Limits of Applicability.

    PubMed

    Naef, Rudolf

    2015-01-01

    A generally applicable computer algorithm for the calculation of the seven molecular descriptors heat of combustion, logPoctanol/water, logS (water solubility), molar refractivity, molecular polarizability, aqueous toxicity (protozoan growth inhibition) and logBB (log (cblood/cbrain)) is presented. The method, an extendable form of the group-additivity method, is based on the complete break-down of the molecules into their constituting atoms and their immediate neighbourhood. The contribution of the resulting atom groups to the descriptor values is calculated using the Gauss-Seidel fitting method, based on experimental data gathered from literature. The plausibility of the method was tested for each descriptor by means of a k-fold cross-validation procedure demonstrating good to excellent predictive power for the former six descriptors and low reliability of logBB predictions. The goodness of fit (Q²) and the standard deviation of the 10-fold cross-validation calculation were >0.9999 and 25.2 kJ/mol, respectively (based on N = 1965 test compounds), for the heat of combustion, 0.9451 and 0.51 (N = 2640) for logP, 0.8838 and 0.74 (N = 1419) for logS, 0.9987 and 0.74 (N = 4045) for the molar refractivity, 0.9897 and 0.77 (N = 308) for the molecular polarizability, 0.8404 and 0.42 (N = 810) for the toxicity and 0.4709 and 0.53 (N = 383) for logBB. The latter descriptor, revealing a very low Q² for the test molecules (R² was 0.7068 and standard deviation 0.38 for N = 413 training molecules), is included as an example to show the limits of the group-additivity method. An eighth molecular descriptor, the heat of formation, was indirectly calculated from the heat of combustion data and correlated with published experimental heat of formation data with a correlation coefficient R² of 0.9974 (N = 2031). PMID:26457702
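The fitting step the abstract describes, solving for atom-group contributions from experimental data by Gauss-Seidel iteration, can be sketched on a tiny invented data set; the two groups, the per-molecule counts, and the "measured" values are hypothetical:

```python
# Sketch of group-additivity fitting: each molecule's descriptor value is
# modeled as the sum of its atom-group contributions, and the contributions
# are fitted to experimental data. The two groups, the per-molecule group
# counts X, and the "measured" values y are invented; the normal equations
# are solved by Gauss-Seidel iteration, as the abstract describes.
def gauss_seidel(A, b, iters=200):
    """Iteratively solve A x = b (A is diagonally dominant here)."""
    n = len(b)
    x = [0.0] * n
    for _ in range(iters):
        for i in range(n):
            s = sum(A[i][j] * x[j] for j in range(n) if j != i)
            x[i] = (b[i] - s) / A[i][i]
    return x

X = [[2, 1], [1, 3], [3, 2]]   # rows: molecules; columns: group counts
y = [8.0, 11.0, 13.0]          # hypothetical measured descriptor values

# Normal equations (X^T X) c = X^T y for the contribution vector c.
A = [[sum(r[i] * r[j] for r in X) for j in range(2)] for i in range(2)]
b = [sum(r[i] * v for r, v in zip(X, y)) for i in range(2)]
contrib = gauss_seidel(A, b)
print([round(c, 3) for c in contrib])  # [2.493, 2.827]
```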

  5. Comparison of Matching Pursuit Algorithm with Other Signal Processing Techniques for Computation of the Time-Frequency Power Spectrum of Brain Signals.

    PubMed

    Chandran K S, Subhash; Mishra, Ashutosh; Shirhatti, Vinay; Ray, Supratim

    2016-03-23

    Signals recorded from the brain often show rhythmic patterns at different frequencies, which are tightly coupled to the external stimuli as well as the internal state of the subject. In addition, these signals have very transient structures related to spiking or sudden onset of a stimulus, which have durations not exceeding tens of milliseconds. Further, brain signals are highly nonstationary because both behavioral state and external stimuli can change on a short time scale. It is therefore essential to study brain signals using techniques that can represent both rhythmic and transient components of the signal, something not always possible using standard signal processing techniques such as the short-time Fourier transform, multitaper method, wavelet transform, or Hilbert transform. In this review, we describe a multiscale decomposition technique based on an over-complete dictionary called matching pursuit (MP), and show that it is able to capture both a sharp stimulus-onset transient and a sustained gamma rhythm in local field potential recorded from the primary visual cortex. We compare the performance of MP with other techniques and discuss its advantages and limitations. Data and codes for generating all time-frequency power spectra are provided. PMID:27013668
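The core of matching pursuit is a greedy loop: pick the dictionary atom most correlated with the current residual, subtract its projection, and repeat. A minimal sketch, using a trivial two-atom orthonormal dictionary rather than the large over-complete dictionaries (e.g. of Gabor atoms) used for brain signals:

```python
# Minimal sketch of matching pursuit: greedily pick the dictionary atom most
# correlated with the residual, subtract its projection, and repeat. The
# two-atom orthonormal dictionary is illustrative only.
def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def matching_pursuit(signal, atoms, n_iter):
    """atoms are unit-norm vectors; returns coefficients and residual."""
    residual = list(signal)
    coeffs = [0.0] * len(atoms)
    for _ in range(n_iter):
        # Atom with the largest (absolute) inner product with the residual.
        k = max(range(len(atoms)), key=lambda j: abs(dot(residual, atoms[j])))
        c = dot(residual, atoms[k])
        coeffs[k] += c
        residual = [r - c * a for r, a in zip(residual, atoms[k])]
    return coeffs, residual

atoms = [[1.0, 0.0], [0.0, 1.0]]
coeffs, residual = matching_pursuit([3.0, 4.0], atoms, 2)
print(coeffs)    # [3.0, 4.0]
print(residual)  # [0.0, 0.0]
```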

  6. Comparison of Matching Pursuit Algorithm with Other Signal Processing Techniques for Computation of the Time-Frequency Power Spectrum of Brain Signals

    PubMed Central

    Chandran KS, Subhash; Mishra, Ashutosh; Shirhatti, Vinay

    2016-01-01

    Signals recorded from the brain often show rhythmic patterns at different frequencies, which are tightly coupled to the external stimuli as well as the internal state of the subject. In addition, these signals have very transient structures related to spiking or sudden onset of a stimulus, which have durations not exceeding tens of milliseconds. Further, brain signals are highly nonstationary because both behavioral state and external stimuli can change on a short time scale. It is therefore essential to study brain signals using techniques that can represent both rhythmic and transient components of the signal, something not always possible using standard signal processing techniques such as the short-time Fourier transform, multitaper method, wavelet transform, or Hilbert transform. In this review, we describe a multiscale decomposition technique based on an over-complete dictionary called matching pursuit (MP), and show that it is able to capture both a sharp stimulus-onset transient and a sustained gamma rhythm in local field potential recorded from the primary visual cortex. We compare the performance of MP with other techniques and discuss its advantages and limitations. Data and codes for generating all time-frequency power spectra are provided. PMID:27013668

  7. A low power Multi-Channel Analyzer

    SciTech Connect

    Anderson, G.A.; Brackenbush, L.W.

    1993-06-01

    The instrumentation used in nuclear spectroscopy is generally large, is not portable, and requires a lot of power. Key components of these counting systems are the computer and the Multi-Channel Analyzer (MCA). To assist in performing measurements requiring portable systems, a small, very low power MCA has been developed at Pacific Northwest Laboratory (PNL). This MCA is interfaced with a Hewlett Packard palm top computer for portable applications. The MCA can also be connected to an IBM/PC for data storage and analysis. In addition, a real-time display mode allows the user to view the spectra as they are collected.
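The essential operation of any MCA, digitizing each pulse height and incrementing the corresponding channel counter, can be sketched as follows; the channel count and pulse-height values are illustrative:

```python
# Sketch of the core MCA operation: each detector pulse height is digitized
# and the matching channel counter is incremented, accumulating a spectrum.
# The channel count and pulse-height values are illustrative.
def accumulate(pulse_heights, n_channels, full_scale):
    """Bin pulse heights in [0, full_scale] volts into n_channels counters."""
    spectrum = [0] * n_channels
    for v in pulse_heights:
        ch = min(int(v / full_scale * n_channels), n_channels - 1)
        spectrum[ch] += 1
    return spectrum

pulses = [0.1, 0.12, 0.5, 0.95, 1.0]  # pulse heights in volts
print(accumulate(pulses, 4, 1.0))     # [2, 0, 1, 2]
```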

  8. Computer viruses

    NASA Technical Reports Server (NTRS)

    Denning, Peter J.

    1988-01-01

    The worm, Trojan horse, bacterium, and virus are destructive programs that attack information stored in a computer's memory. Virus programs, which propagate by incorporating copies of themselves into other programs, are a growing menace in the late-1980s world of unprotected, networked workstations and personal computers. Limited immunity is offered by memory protection hardware, digitally authenticated object programs, and antibody programs that kill specific viruses. Additional immunity can be gained from the practice of digital hygiene, primarily the refusal to use software from untrusted sources. Full immunity requires attention in a social dimension, the accountability of programmers.

  9. Computer systems

    NASA Technical Reports Server (NTRS)

    Olsen, Lola

    1992-01-01

    In addition to the discussions, Ocean Climate Data Workshop hosts gave participants an opportunity to hear about, see, and test for themselves some of the latest computer tools now available for those studying climate change and the oceans. Six speakers described computer systems and their functions. The introductory talks were followed by demonstrations to small groups of participants and some opportunities for participants to get hands-on experience. After this familiarization period, attendees were invited to return during the course of the Workshop and have one-on-one discussions and further hands-on experience with these systems. Brief summaries or abstracts of introductory presentations are addressed.

  10. 47 CFR 68.318 - Additional limitations.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ....708 of this chapter (47 CFR 64.708). ... TERMINAL EQUIPMENT TO THE TELEPHONE NETWORK Conditions for Terminal Equipment Approval § 68.318 Additional... activation. Note to paragraph (b)(1): Emergency alarm dialers and dialers under external computer control...

  11. 47 CFR 68.318 - Additional limitations.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ....708 of this chapter (47 CFR 64.708). ... TERMINAL EQUIPMENT TO THE TELEPHONE NETWORK Conditions for Terminal Equipment Approval § 68.318 Additional... activation. Note to paragraph (b)(1): Emergency alarm dialers and dialers under external computer control...

  12. 47 CFR 68.318 - Additional limitations.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ....708 of this chapter (47 CFR 64.708). ... TERMINAL EQUIPMENT TO THE TELEPHONE NETWORK Conditions for Terminal Equipment Approval § 68.318 Additional... activation. Note to paragraph (b)(1): Emergency alarm dialers and dialers under external computer control...

  13. 47 CFR 68.318 - Additional limitations.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ....708 of this chapter (47 CFR 64.708). ... TERMINAL EQUIPMENT TO THE TELEPHONE NETWORK Conditions for Terminal Equipment Approval § 68.318 Additional... activation. Note to paragraph (b)(1): Emergency alarm dialers and dialers under external computer control...

  14. 47 CFR 68.318 - Additional limitations.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ....708 of this chapter (47 CFR 64.708). ... TERMINAL EQUIPMENT TO THE TELEPHONE NETWORK Conditions for Terminal Equipment Approval § 68.318 Additional... activation. Note to paragraph (b)(1): Emergency alarm dialers and dialers under external computer control...

  15. Argonne's Laboratory computing center - 2007 annual report.

    SciTech Connect

    Bair, R.; Pieper, G. W.

    2008-05-28

    Argonne National Laboratory founded the Laboratory Computing Resource Center (LCRC) in the spring of 2002 to help meet pressing program needs for computational modeling, simulation, and analysis. The guiding mission is to provide critical computing resources that accelerate the development of high-performance computing expertise, applications, and computations to meet the Laboratory's challenging science and engineering missions. In September 2002 the LCRC deployed a 350-node computing cluster from Linux NetworX to address Laboratory needs for mid-range supercomputing. This cluster, named 'Jazz', achieved over a teraflop of computing power (10^12 floating-point calculations per second) on standard tests, making it the Laboratory's first terascale computing system and one of the 50 fastest computers in the world at the time. Jazz was made available to early users in November 2002 while the system was undergoing development and configuration. In April 2003, Jazz was officially made available for production operation. Since then, the Jazz user community has grown steadily. By the end of fiscal year 2007, there were over 60 active projects representing a wide cross-section of Laboratory expertise, including work in biosciences, chemistry, climate, computer science, engineering applications, environmental science, geoscience, information science, materials science, mathematics, nanoscience, nuclear engineering, and physics. Most important, many projects have achieved results that would have been unobtainable without such a computing resource. The LCRC continues to foster growth in the computational science and engineering capability and quality at the Laboratory. Specific goals include expansion of the use of Jazz to new disciplines and Laboratory initiatives, teaming with Laboratory infrastructure providers to offer more scientific data management capabilities, expanding Argonne staff use of national computing facilities, and improving the scientific reach and

  16. Computational capabilities of physical systems.

    PubMed

    Wolpert, David H

    2002-01-01

    In this paper strong limits on the accuracy of real-world physical computation are established. To derive these results a non-Turing machine formulation of physical computation is used. First it is proven that there cannot be a physical computer C to which one can pose any and all computational tasks concerning the physical universe. Next it is proven that no physical computer C can correctly carry out every computational task in the subset of such tasks that could potentially be posed to C. This means in particular that there cannot be a physical computer that can be assured of correctly "processing information faster than the universe does." Because this result holds independent of how or if the computer is physically coupled to the rest of the universe, it also means that there cannot exist an infallible, general-purpose observation apparatus, nor an infallible, general-purpose control apparatus. These results do not rely on systems that are infinite, and/or nonclassical, and/or obey chaotic dynamics. They also hold even if one could use an infinitely fast, infinitely dense computer, with computational powers greater than that of a Turing machine (TM). After deriving these results analogs of the TM Halting theorem are derived for the novel kind of computer considered in this paper, as are results concerning the (im)possibility of certain kinds of error-correcting codes. In addition, an analog of algorithmic information complexity, "prediction complexity," is elaborated. A task-independent bound is derived on how much the prediction complexity of a computational task can differ for two different reference universal physical computers used to solve that task. This is analogous to the "encoding" bound governing how much the algorithm information complexity of a TM calculation can differ for two reference universal TMs. It is proven that either the Hamiltonian of our universe proscribes a certain type of computation, or prediction complexity is unique (unlike

  17. Computational capabilities of physical systems

    NASA Astrophysics Data System (ADS)

    Wolpert, David H.

    2002-01-01

    In this paper strong limits on the accuracy of real-world physical computation are established. To derive these results a non-Turing machine formulation of physical computation is used. First it is proven that there cannot be a physical computer C to which one can pose any and all computational tasks concerning the physical universe. Next it is proven that no physical computer C can correctly carry out every computational task in the subset of such tasks that could potentially be posed to C. This means in particular that there cannot be a physical computer that can be assured of correctly ``processing information faster than the universe does.'' Because this result holds independent of how or if the computer is physically coupled to the rest of the universe, it also means that there cannot exist an infallible, general-purpose observation apparatus, nor an infallible, general-purpose control apparatus. These results do not rely on systems that are infinite, and/or nonclassical, and/or obey chaotic dynamics. They also hold even if one could use an infinitely fast, infinitely dense computer, with computational powers greater than that of a Turing machine (TM). After deriving these results analogs of the TM Halting theorem are derived for the novel kind of computer considered in this paper, as are results concerning the (im)possibility of certain kinds of error-correcting codes. In addition, an analog of algorithmic information complexity, ``prediction complexity,'' is elaborated. A task-independent bound is derived on how much the prediction complexity of a computational task can differ for two different reference universal physical computers used to solve that task. This is analogous to the ``encoding'' bound governing how much the algorithm information complexity of a TM calculation can differ for two reference universal TMs. 
It is proven that either the Hamiltonian of our universe proscribes a certain type of computation, or prediction complexity is unique (unlike

  18. [Food additives and healthiness].

    PubMed

    Heinonen, Marina

    2014-01-01

    Additives are used for improving food structure or preventing its spoilage, for example. Many substances used as additives are also naturally present in food. The safety of additives is evaluated according to commonly agreed principles. If high concentrations of an additive cause adverse health effects for humans, a limit of acceptable daily intake (ADI) is set for it. An additive is a risk only when ADI is exceeded. The healthiness of food is measured on the basis of nutrient density and scientifically proven effects. PMID:24772784
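The ADI logic described above is simple arithmetic: estimated daily intake per kilogram of body weight is compared against the acceptable daily intake. A sketch with invented numbers:

```python
# Illustrative ADI arithmetic: an additive poses a risk only when estimated
# daily intake per kg body weight exceeds the ADI. All numbers are invented.
def daily_intake_mg_per_kg(portions_mg, body_weight_kg):
    """Total additive intake across foods, per kg of body weight."""
    return sum(portions_mg) / body_weight_kg

def exceeds_adi(portions_mg, body_weight_kg, adi_mg_per_kg):
    """True when estimated intake exceeds the acceptable daily intake."""
    return daily_intake_mg_per_kg(portions_mg, body_weight_kg) > adi_mg_per_kg

# 70 kg adult, three foods contributing 250 mg total, ADI of 5 mg/kg bw/day.
print(exceeds_adi([120.0, 80.0, 50.0], 70.0, 5.0))  # False (about 3.6 mg/kg)
```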

  19. Polyimide processing additives

    NASA Technical Reports Server (NTRS)

    Pratt, J. R.; St. Clair, T. L.; Burks, H. D.; Stoakley, D. M.

    1987-01-01

    A method has been found for enhancing the melt flow of thermoplastic polyimides during processing. A high molecular weight 422 copoly(amic acid) or copolyimide was fused with approximately 0.05 to 5 pct by weight of a low molecular weight amic acid or imide additive, and this melt was studied by capillary rheometry. Excellent flow and improved composite properties on graphite resulted from the addition of a PMDA-aniline additive to LARC-TPI. Solution viscosity studies imply that amic acid additives temporarily lower molecular weight and, hence, enlarge the processing window. Thus, compositions containing the additive have a lower melt viscosity for a longer time than those unmodified.

  20. Power system

    DOEpatents

    Hickam, Christopher Dale

    2008-03-18

    A power system includes a prime mover, a transmission, and a fluid coupler having a selectively engageable lockup clutch. The fluid coupler may be drivingly connected between the prime mover and the transmission. Additionally, the power system may include a motor/generator drivingly connected to at least one of the prime mover and the transmission. The power-system may also include power-system controls configured to execute a control method. The control method may include selecting one of a plurality of modes of operation of the power system. Additionally, the control method may include controlling the operating state of the lockup clutch dependent upon the mode of operation selected. The control method may also include controlling the operating state of the motor/generator dependent upon the mode of operation selected.
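The control method described, selecting a mode of operation and setting the lockup-clutch and motor/generator states accordingly, amounts to a mode-to-state mapping. The mode names and state values below are invented for illustration and are not taken from the patent:

```python
# Sketch of the patent's mode-dependent control idea: the selected operating
# mode determines the lockup-clutch state and the motor/generator state.
# Mode names and state values are invented for illustration.
MODE_TABLE = {
    "launch":  {"lockup_clutch": "open",   "motor_generator": "motor"},
    "cruise":  {"lockup_clutch": "locked", "motor_generator": "off"},
    "braking": {"lockup_clutch": "locked", "motor_generator": "generator"},
}

def control(mode):
    """Return (lockup clutch state, motor/generator state) for a mode."""
    states = MODE_TABLE[mode]
    return states["lockup_clutch"], states["motor_generator"]

print(control("cruise"))  # ('locked', 'off')
```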

  1. Introduction to Quantum Computation

    NASA Astrophysics Data System (ADS)

    Ekert, A.

    A computation is a physical process. It may be performed by a piece of electronics or on an abacus, or in your brain, but it is a process that takes place in nature and as such it is subject to the laws of physics. Quantum computers are machines that rely on characteristically quantum phenomena, such as quantum interference and quantum entanglement in order to perform computation. In this series of lectures I want to elaborate on the computational power of such machines.

  2. Additive synthesis with DIASS-M4C on Argonne National Laboratory's IBM POWERparallel System (SP)

    SciTech Connect

    Kaper, H.; Ralley, D.; Restrepo, J.; Tiepei, S.

    1995-12-31

    DIASS-M4C, a digital additive-synthesis instrument, was implemented on Argonne National Laboratory's IBM POWERparallel System (SP). This paper discusses the need for a massively parallel supercomputer and shows how the code was parallelized. The resulting sounds, and the degree of control the user can have over them, justify the effort and the use of such a large computer.
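Additive synthesis itself is conceptually simple, summing many sinusoidal partials sample by sample; the computational cost that motivates a parallel machine comes from scaling this to thousands of partials with time-varying control. A minimal sketch with three illustrative partials:

```python
# Additive synthesis in miniature: each partial is a sinusoid with its own
# frequency and amplitude, and the output is their sample-by-sample sum.
# A parallel implementation like DIASS-M4C distributes many such partials
# across processors; the three partials here are illustrative.
import math

def synthesize(partials, n_samples, sample_rate):
    """partials: list of (frequency_hz, amplitude) pairs."""
    out = []
    for n in range(n_samples):
        t = n / sample_rate
        out.append(sum(a * math.sin(2 * math.pi * f * t) for f, a in partials))
    return out

tone = synthesize([(440.0, 1.0), (880.0, 0.5), (1320.0, 0.25)], 8, 8000)
print(len(tone))  # 8
print(tone[0])    # 0.0
```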

  3. Additive usage levels.

    PubMed

    Langlais, R

    1996-01-01

    With the adoption of the European Parliament and Council Directives on sweeteners, colours and miscellaneous additives, the Commission is now embarking on the project of coordinating the activities of the European Union Member States in the collection of the data that are to make up the report on food additive intake requested by the European Parliament. This presentation looks at the inventory of available sources on additive use levels and concludes that, for the time being, national legislation is still the best source of information, considering that the directives have yet to be transposed into national legislation. Furthermore, this presentation covers the correlation of the food categories as found in the additives directives with those used by national consumption surveys, and finds that in a number of instances this correlation still leaves a lot to be desired. The intake of additives via food ingestion, and the intake of substances which are chemically identical to additives but which occur naturally in fruits and vegetables, is found in a number of cases to be higher than the intake of additives added during the manufacture of foodstuffs. While the difficulties in contributing to the compilation of food additive intake data are recognized, industry as a whole, i.e. the food manufacturing and food additive manufacturing industries, is confident that in a concerted effort, use data on food additives by industry can be made available. Lastly, the paper points out that several years will still go by between the transposition of the additives directives into national legislation and the time by which the food industry will be able to make use of the new food legislative environment; food additive use data from the food industry will thus have to be reviewed at the beginning of the next century. PMID:8792135

  4. Fast algorithm for computing a primitive (2^(p+1))p-th root of unity in GF(q^2)

    NASA Technical Reports Server (NTRS)

    Reed, I. S.; Truong, T. K.; Miller, R. L.

    1978-01-01

    A quick method is described for finding a primitive (2^(p+1))p-th root of unity in the Galois field GF(q^2), where q = 2^p - 1 is known as a Mersenne prime. Determination of this root is necessary to implement complex integer transforms of length (2^k)p over the Galois field, with k varying between 3 and p + 1.
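Under the stated conditions, since q = 2^p - 1 = 3 (mod 4), GF(q^2) can be represented as complex integers a + b·i modulo q, and an element of the required order N = (2^(p+1))p can be found by raising trial elements to the power (q^2 - 1)/N and testing for exact order. This brute-force sketch (illustrating the structure, not the paper's fast algorithm) uses p = 5, so q = 31 and N = 320:

```python
# Brute-force sketch (illustration only, not the paper's fast method) of
# finding a primitive (2^(p+1))p-th root of unity in GF(q^2), q = 2^p - 1.
# Since q = 3 (mod 4), GF(q^2) is represented as complex integers a + b*i
# modulo q. Shown for p = 5, so q = 31 and N = 320.
def mul(x, y, q):
    """Multiply complex integers (a + b*i)(c + d*i) modulo q."""
    a, b = x
    c, d = y
    return ((a * c - b * d) % q, (a * d + b * c) % q)

def power(x, e, q):
    """Square-and-multiply exponentiation in GF(q^2)."""
    r = (1, 0)
    while e:
        if e & 1:
            r = mul(r, x, q)
        x = mul(x, x, q)
        e >>= 1
    return r

def find_root(p):
    q = 2**p - 1
    N = 2**(p + 1) * p          # required order of the root
    group = q * q - 1           # order of the multiplicative group of GF(q^2)
    for a in range(q):
        for b in range(1, q):
            cand = power((a, b), group // N, q)
            # cand^N = 1 by construction; exact order N needs both checks:
            if all(power(cand, N // r, q) != (1, 0) for r in (2, p)):
                return cand
    return None

root = find_root(5)
print(root)
```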

  5. Low-Power Public Key Cryptography

    SciTech Connect

    BEAVER,CHERYL L.; DRAELOS,TIMOTHY J.; HAMILTON,VICTORIA A.; SCHROEPPEL,RICHARD C.; GONZALES,RITA A.; MILLER,RUSSELL D.; THOMAS,EDWARD V.

    2000-11-01

    This report presents research on public key, digital signature algorithms for cryptographic authentication in low-powered, low-computation environments. We assessed algorithms for suitability based on their signature size, and computation and storage requirements. We evaluated a variety of general purpose and special purpose computing platforms to address issues such as memory, voltage requirements, and special functionality for low-powered applications. In addition, we examined custom design platforms. We found that a custom design offers the most flexibility and can be optimized for specific algorithms. Furthermore, the entire platform can exist on a single Application Specific Integrated Circuit (ASIC) or can be integrated with commercially available components to produce the desired computing platform.

  6. An additional middle cuneiform?

    PubMed Central

    Brookes-Fazakerley, S.D.; Jackson, G.E.; Platt, S.R.

    2015-01-01

    Additional cuneiform bones of the foot have been described in reference to the medial bipartite cuneiform or as small accessory ossicles. An additional middle cuneiform has not been previously documented. We present the case of a patient with an additional ossicle that has the appearance and location of an additional middle cuneiform. Recognizing such an anatomical anomaly is essential for ruling out second metatarsal base or middle cuneiform fractures and for the preoperative planning of arthrodesis or open reduction and internal fixation procedures in this anatomical location. PMID:26224890

  7. Carbamate deposit control additives

    SciTech Connect

    Honnen, L.R.; Lewis, R.A.

    1980-11-25

    Deposit control additives for internal combustion engines are provided which maintain cleanliness of intake systems without contributing to combustion chamber deposits. The additives are poly(oxyalkylene) carbamates comprising a hydrocarbyloxy-terminated poly(oxyalkylene) chain of 2-5 carbon oxyalkylene units bonded through an oxycarbonyl group to a nitrogen atom of ethylenediamine.

  8. Analysis of large power systems

    NASA Technical Reports Server (NTRS)

    Dommel, H. W.

    1975-01-01

    Computer-oriented power systems analysis procedures in the electric utilities are surveyed. The growth of electric power systems is discussed along with the solution of sparse network equations, power flow, and stability studies.
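    As a toy illustration of the "power flow" step mentioned above, the simplest formulation, a linearized DC power flow, reduces to one linear solve B'·θ = P with the slack bus removed. The Python sketch below is a hedged example; the three-bus network, its susceptances, and the injections are invented for illustration and only loosely related to the full AC studies the survey covers.

```python
# DC power flow on an invented 3-bus network; bus 0 is the slack bus.

def solve2(a, b):
    """Solve the 2x2 linear system a.x = b by Cramer's rule."""
    det = a[0][0] * a[1][1] - a[0][1] * a[1][0]
    return [(b[0] * a[1][1] - a[0][1] * b[1]) / det,
            (a[0][0] * b[1] - a[1][0] * b[0]) / det]

# Line susceptances in per unit, for lines 0-1, 0-2, and 1-2.
b01, b02, b12 = 10.0, 10.0, 5.0

# Reduced nodal susceptance matrix for buses 1 and 2 (slack removed).
B = [[b01 + b12, -b12],
     [-b12, b02 + b12]]

P = [-1.0, 0.5]                  # net injections at buses 1 and 2 (p.u.)
theta1, theta2 = solve2(B, P)    # bus voltage angles in radians

flow01 = b01 * (0.0 - theta1)    # power flowing from slack bus 0 to bus 1
```

Real power-flow studies solve the nonlinear AC equations iteratively (Newton-Raphson on sparse network matrices); the linear solve above is only the zeroth-order picture.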

  9. Computing technology in the 1980's [computers]

    NASA Technical Reports Server (NTRS)

    Stone, H. S.

    1978-01-01

    Advances in computing technology have been led by consistently improving semiconductor technology. The semiconductor industry has turned out ever faster, smaller, and less expensive devices since transistorized computers were first introduced 20 years ago. For the next decade, there appear to be new advances possible, with the rate of introduction of improved devices at least equal to the historic trends. The implication of these projections is that computers will enter new markets and will truly be pervasive in business, home, and factory as their cost diminishes and their computational power expands to new levels. The computer industry as we know it today will be greatly altered in the next decade, primarily because the raw computer system will give way to computer-based turn-key information and control systems.

  10. Impact of Classroom Computer Use on Computer Anxiety.

    ERIC Educational Resources Information Center

    Lambert, Matthew E.; And Others

    Increasing use of computer programs for undergraduate psychology education has raised concern over the impact of computer anxiety on educational performance. Additionally, some researchers have indicated that classroom computer use can exacerbate pre-existing computer anxiety. To evaluate the relationship between in-class computer use and computer…

  11. The Computing Teacher: Selected Articles on Computer Literacy

    ERIC Educational Resources Information Center

    Moursund, David; And Others

    1985-01-01

    This document consists of a compilation of nine articles, on computer literacy, that have been extracted from the 1984-1985 issues of the journal "The Computing Teacher". The articles include: (1) "ICLEP (Individual Computer Literacy Education Plan): A Powerful Idea" (David Moursund); (2) "Computers, Kids, and Values" (Stephen J. Taffee); (3)…

  12. Cloud Computing for radiologists.

    PubMed

    Kharat, Amit T; Safvi, Amjad; Thind, SS; Singh, Amarjit

    2012-07-01

    Cloud computing is a concept wherein a computer grid is created using the Internet with the sole purpose of utilizing shared resources, such as computer software and hardware, on a pay-per-use model. Using Cloud computing, radiology users can efficiently manage multimodality imaging units by using the latest software and hardware without paying huge upfront costs. Cloud computing systems usually work on public, private, hybrid, or community models. Using the various components of a Cloud, such as applications, client, infrastructure, storage, services, and processing power, Cloud computing can help imaging units rapidly scale and descale operations and avoid huge spending on maintenance of costly applications and storage. Cloud computing allows flexibility in imaging. It frees radiology from the confines of a hospital and creates a virtual mobile office. The downsides to Cloud computing involve security and privacy issues, which need to be addressed to ensure the success of Cloud computing in the future. PMID:23599560

  13. Multichannel Phase and Power Detector

    NASA Technical Reports Server (NTRS)

    Li, Samuel; Lux, James; McMaster, Robert; Boas, Amy

    2006-01-01

    An electronic signal-processing system determines the phases of input signals arriving in multiple channels, relative to the phase of a reference signal with which the input signals are known to be coherent in both phase and frequency. The system also gives an estimate of the power levels of the input signals. A prototype of the system has four input channels that handle signals at a frequency of 9.5 MHz, but the basic principles of design and operation are extensible to other signal frequencies and greater numbers of channels. The prototype system consists mostly of three parts: an analog-to-digital-converter (ADC) board, which coherently digitizes the input signals in synchronism with the reference signal and performs some simple processing; a digital signal processor (DSP) in the form of a field-programmable gate array (FPGA) board, which performs most of the phase- and power-measurement computations on the digital samples generated by the ADC board; and a carrier board, which allows a personal computer to retrieve the phase and power data. The DSP contains four independent phase-only tracking loops, each of which tracks the phase of one of the preprocessed input signals relative to that of the reference signal (see figure). The phase values computed by these loops are averaged over intervals, the length of which is chosen to obtain output from the DSP at a desired rate. In addition, a simple sum of squares is computed for each channel as an estimate of the power of the signal in that channel. The relative phases and the power-level estimates computed by the DSP could be used for diverse purposes in different settings. For example, if the input signals come from different elements of a phased-array antenna, the phases could be used as indications of the direction of arrival of a received signal and/or as feedback for electronic or mechanical beam steering. The power levels could be used as feedback for automatic gain control in preprocessing of incoming signals.
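    The per-channel computation can be sketched in a few lines. The following Python is a hedged illustration of the general I/Q approach (correlating samples against quadrature copies of the reference, plus a sum of squares for power); it is not the prototype's actual FPGA tracking-loop design.

```python
import math

def phase_and_power(samples, ref_freq, sample_rate):
    """Estimate phase (radians, relative to a coherent cosine reference)
    and mean-square power of a digitized channel."""
    i_sum = q_sum = sq_sum = 0.0
    for n, s in enumerate(samples):
        t = n / sample_rate
        i_sum += s * math.cos(2 * math.pi * ref_freq * t)   # in-phase
        q_sum += s * math.sin(2 * math.pi * ref_freq * t)   # quadrature
        sq_sum += s * s
    phase = math.atan2(q_sum, i_sum)     # relative phase estimate
    power = sq_sum / len(samples)        # sum-of-squares power estimate
    return phase, power

# Usage: a 9.5 MHz tone sampled at 38 MHz, lagging the reference by 30 deg.
fs, f0 = 38e6, 9.5e6
samples = [math.cos(2 * math.pi * f0 * n / fs - math.radians(30))
           for n in range(3800)]
ph, pw = phase_and_power(samples, f0, fs)
```

For a unit-amplitude tone the recovered phase is 30 degrees and the mean-square power is 0.5, as expected.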

  14. Polyimide processing additives

    NASA Technical Reports Server (NTRS)

    Fletcher, James C. (Inventor); Pratt, J. Richard (Inventor); St.clair, Terry L. (Inventor); Stoakley, Diane M. (Inventor); Burks, Harold D. (Inventor)

    1992-01-01

    A process for preparing polyimides having enhanced melt flow properties is described. The process consists of heating a mixture of a high molecular weight poly-(amic acid) or polyimide with a low molecular weight amic acid or imide additive in the range of 0.05 to 15 percent by weight of additive. The polyimide powders so obtained show improved processability, as evidenced by lower melt viscosity by capillary rheometry. Likewise, films prepared from mixtures of polymers with additives show improved processability with earlier onset of stretching by TMA.

  15. Polyimide processing additives

    NASA Technical Reports Server (NTRS)

    Pratt, J. Richard (Inventor); St.clair, Terry L. (Inventor); Stoakley, Diane M. (Inventor); Burks, Harold D. (Inventor)

    1993-01-01

    A process for preparing polyimides having enhanced melt flow properties is described. The process consists of heating a mixture of a high molecular weight poly-(amic acid) or polyimide with a low molecular weight amic acid or imide additive in the range of 0.05 to 15 percent by weight of the additive. The polyimide powders so obtained show improved processability, as evidenced by lower melt viscosity by capillary rheometry. Likewise, films prepared from mixtures of polymers with additives show improved processability with earlier onset of stretching by TMA.

  16. Computational performance of a smoothed particle hydrodynamics simulation for shared-memory parallel computing

    NASA Astrophysics Data System (ADS)

    Nishiura, Daisuke; Furuichi, Mikito; Sakaguchi, Hide

    2015-09-01

    The computational performance of a smoothed particle hydrodynamics (SPH) simulation is investigated for three types of current shared-memory parallel computer devices: many integrated core (MIC) processors, graphics processing units (GPUs), and multi-core CPUs. We are especially interested in efficient shared-memory allocation methods for each chipset, because the efficient data access patterns differ between compute unified device architecture (CUDA) programming for GPUs and OpenMP programming for MIC processors and multi-core CPUs. We first introduce several parallel implementation techniques for the SPH code, and then examine these on our target computer architectures to determine the most effective algorithms for each processor unit. In addition, we evaluate the effective computing performance and power efficiency of the SPH simulation on each architecture, as these are critical metrics for overall performance in a multi-device environment. In our benchmark test, the GPU is found to produce the best arithmetic performance as a standalone device unit, and gives the most efficient power consumption. The multi-core CPU obtains the most effective computing performance. The computational speed of the MIC processor on Xeon Phi approached that of two Xeon CPUs. This indicates that using MICs is an attractive choice for existing SPH codes on multi-core CPUs parallelized by OpenMP, as it gains computational acceleration without the need for significant changes to the source code.
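    For readers unfamiliar with SPH, the kernel at the heart of such benchmarks is a density summation over neighboring particles. The Python sketch below is an illustration only (1-D, Gaussian kernel, all-pairs summation), not the benchmarked code, which uses spline kernels, neighbor lists, and OpenMP/CUDA parallelism over the outer particle loop.

```python
import math

def w_gauss(r, h):
    """Gaussian smoothing kernel in 1-D, normalized to integrate to 1."""
    return math.exp(-(r / h) ** 2) / (h * math.sqrt(math.pi))

def sph_density(positions, masses, h):
    """SPH density estimate rho_i = sum_j m_j W(|x_i - x_j|, h).
    The outer loop over i is what shared-memory codes parallelize."""
    rho = []
    for xi in positions:
        rho.append(sum(m * w_gauss(abs(xi - xj), h)
                       for xj, m in zip(positions, masses)))
    return rho

# Uniformly spaced unit-mass particles: interior density ~ mass / spacing.
dx = 0.1
xs = [i * dx for i in range(100)]
ms = [1.0] * len(xs)
rho = sph_density(xs, ms, h=2 * dx)
```

The memory-access pattern of the inner neighbor sum is exactly where the efficient allocation strategies for MIC, GPU, and multi-core CPU diverge in the paper.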

  17. Transmetalation from B to Rh in the course of the catalytic asymmetric 1,4-addition reaction of phenylboronic acid to enones: a computational comparison of diphosphane and diene ligands.

    PubMed

    Li, You-Gui; He, Gang; Qin, Hua-Li; Kantchev, Eric Assen B

    2015-02-14

    Transmetalation is a key elementary reaction of many important catalytic reactions. Among these, 1,4-addition of arylboronic acids to organic acceptors such as α,β-unsaturated ketones has emerged as one of the most important methods for asymmetric C-C bond formation. A key intermediate for the B-to-Rh transfer arising from quaternization on a boronic acid by a Rh-bound hydroxide (the active catalyst) has been proposed. Herein, DFT calculations (IEFPCM/PBE0/DGDZVP level of theory) establish the viability of this proposal, and characterize the associated pathways. The delivery of phenylboronic acid in the orientation suited for the B-to-Rh transfer from the very beginning is energetically preferable, and occurs with expulsion of Rh-coordinated water molecules. For the bulkier binap ligand, the barriers are higher (particularly for the phenylboronic acid activation step) due to a less favourable entropy term to the free energy, in accordance with the experimentally observed slower transmetalation rate. PMID:25422851

  18. 26 CFR 1.1250-2 - Additional depreciation defined.

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    ... is $1,123, as computed in the table below: Year Actual depreciation Straight line Additional... depreciation for the property is $567, as computed in the table below: Years Depreciation Straight line... additional depreciation for the property is $29,000, as computed in the table below: Year Actual...

  19. 26 CFR 1.1250-2 - Additional depreciation defined.

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    ... is $1,123, as computed in the table below: Year Actual depreciation Straight line Additional... depreciation for the property is $567, as computed in the table below: Years Depreciation Straight line... additional depreciation for the property is $29,000, as computed in the table below: Year Actual...

  20. 26 CFR 1.1250-2 - Additional depreciation defined.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... is $1,123, as computed in the table below: Year Actual depreciation Straight line Additional... depreciation for the property is $567, as computed in the table below: Years Depreciation Straight line... additional depreciation for the property is $29,000, as computed in the table below: Year Actual...

  1. Additional Security Considerations for Grid Management

    NASA Technical Reports Server (NTRS)

    Eidson, Thomas M.

    2003-01-01

    The use of Grid computing environments is growing in popularity. A Grid computing environment is primarily a wide area network that encompasses multiple local area networks, where some of the local area networks are managed by different organizations. A Grid computing environment also includes common interfaces for distributed computing software so that the heterogeneous set of machines that make up the Grid can be used more easily. The other key feature of a Grid is that the distributed computing software includes appropriate security technology. The focus of most Grid software is on the security involved with application execution, file transfers, and other remote computing procedures. However, there are other important security issues related to the management of a Grid and the users who use that Grid. This note discusses these additional security issues and makes several suggestions as to how they can be managed.

  2. Smog control fuel additives

    SciTech Connect

    Lundby, W.

    1993-06-29

    A method is described of controlling, reducing or eliminating, ozone and related smog resulting from photochemical reactions between ozone and automotive or industrial gases comprising the addition of iodine or compounds of iodine to hydrocarbon-base fuels prior to or during combustion in an amount of about 1 part iodine per 240 to 10,000,000 parts fuel, by weight, to be accomplished by: (a) the addition of these inhibitors during or after the refining or manufacturing process of liquid fuels; (b) the production of these inhibitors for addition into fuel tanks, such as automotive or industrial tanks; or (c) the addition of these inhibitors into combustion chambers of equipment utilizing solid fuels for the purpose of reducing ozone.

  3. Food Additives and Hyperkinesis

    ERIC Educational Resources Information Center

    Wender, Ester H.

    1977-01-01

    The hypothesis that food additives are causally associated with hyperkinesis and learning disabilities in children is reviewed, and available data are summarized. Available from: American Medical Association 535 North Dearborn Street Chicago, Illinois 60610. (JG)

  4. Additional Types of Neuropathy

    MedlinePlus

    Charcot's Joint, also called neuropathic arthropathy, ... can stop bone destruction and aid healing. Cranial neuropathy affects the 12 pairs of nerves ...

  5. Multi-heat addition turbine engine

    NASA Technical Reports Server (NTRS)

    Franciscus, Leo C. (Inventor); Brabbs, Theodore A. (Inventor)

    1993-01-01

    A multi-heat addition turbine engine (MHATE) incorporates a plurality of heat addition devices to transfer energy to air and a plurality of turbines to extract energy from the air while converting it to work. The MHATE provides dry power and lower fuel consumption or lower combustor exit temperatures.

  6. NOAA OTEC CWP (National Oceanic and Atmospheric Administration Ocean Thermal Energy Conversion Cold Water Pipe) at-sea test. Volume 3: Additional tabulation of the power spectra, part 2

    NASA Astrophysics Data System (ADS)

    1983-12-01

    Data collected during the Ocean Thermal Energy Conversion (OTEC) Cold Water Pipe At-Sea Test are analyzed. Also included are the following items: (1) sensor factors and offsets, and the data processing algorithms used to convert the recorded sensor measurements from electrical to engineering units; (2) plots of the power spectra estimates obtained from a fast Fourier transform (FFT) analysis of selected channels; (3) plots of selected sensor measurements as a function of time; and (4) plots of bending strain along the pipe, with the associated statistics and values.

  7. Computers and Computer Resources.

    ERIC Educational Resources Information Center

    Bitter, Gary

    1980-01-01

    This resource directory provides brief evaluative descriptions of six popular home computers and lists selected sources of educational software, computer books, and magazines. For a related article on microcomputers in the schools, see p53-58 of this journal issue. (SJL)

  8. Computer-Based Procedures for Field Workers in Nuclear Power Plants: Development of a Model of Procedure Usage and Identification of Requirements

    SciTech Connect

    Katya Le Blanc; Johanna Oxstrand

    2012-04-01

    The nuclear industry is constantly trying to find ways to decrease the human error rate, especially the human errors associated with procedure use. As a step toward the goal of improving procedure use performance, researchers, together with the nuclear industry, have been looking at replacing the current paper-based procedures with computer-based procedure systems. The concept of computer-based procedures is not new by any means; however, most research has focused on procedures used in the main control room. Procedures reviewed in these efforts are mainly emergency operating procedures and normal operating procedures. Based on lessons learned from these previous efforts, we are now exploring a less-studied application for computer-based procedures: field procedures, i.e., procedures used by nuclear equipment operators and maintenance technicians. The Idaho National Laboratory and participants from the U.S. commercial nuclear industry are collaborating in an applied research effort with the objective of developing requirements and specifications for a computer-based procedure system to be used by field workers. The goal is to identify the types of human errors that can be mitigated by using computer-based procedures and how best to design the computer-based procedures to do so. This paper describes the development of a Model of Procedure Use and the qualitative study on which the model is based. The study was conducted in collaboration with four nuclear utilities and five research institutes. During the qualitative study and the model development, requirements for computer-based procedures were identified.

  9. Additive Manufacturing Infrared Inspection

    NASA Technical Reports Server (NTRS)

    Gaddy, Darrell

    2014-01-01

    Additive manufacturing is a rapid prototyping technology that allows parts to be built in a series of thin layers from plastic, ceramics, and metallics. Metallic additive manufacturing is an emerging form of rapid prototyping that allows complex structures to be built using various metallic powders. Significant time and cost savings have also been observed using the metallic additive manufacturing compared with traditional techniques. Development of the metallic additive manufacturing technology has advanced significantly over the last decade, although many of the techniques to inspect parts made from these processes have not advanced significantly or have limitations. Several external geometry inspection techniques exist such as Coordinate Measurement Machines (CMM), Laser Scanners, Structured Light Scanning Systems, or even traditional calipers and gages. All of the aforementioned techniques are limited to external geometry and contours or must use a contact probe to inspect limited internal dimensions. This presentation will document the development of a process for real-time dimensional inspection technique and digital quality record of the additive manufacturing process using Infrared camera imaging and processing techniques.

  10. Computational study of the transition state for H2 addition to Vaska-type complexes (trans-Ir(L)2(CO)X). Substituent effects on the energy barrier and the origin of the small H2/D2 kinetic isotope effect

    SciTech Connect

    Abu-Hasanayn, F.; Goldman, A.S.; Krogh-Jespersen, K. )

    1993-06-03

    Ab initio molecular orbital methods have been used to study transition state properties for the concerted addition reaction of H2 to Vaska-type complexes, trans-Ir(L)2(CO)X, 1 (L = PH3 and X = F, Cl, Br, I, CN, or H; L = NH3 and X = Cl). Stationary points on the reaction path retaining the trans-L2 arrangement were located at the Hartree-Fock level using relativistic effective core potentials and valence basis sets of double-ζ quality. The identities of the stationary points were confirmed by normal mode analysis. Activation energy barriers were calculated with electron correlation effects included via Møller-Plesset perturbation theory carried fully through fourth order, MP4(SDTQ). The more reactive complexes feature structurally earlier transition states and larger reaction exothermicities, in accord with the Hammond postulate. The experimentally observed increase in reactivity of Ir(PPh3)2(CO)X complexes toward H2 addition upon going from X = F to X = I is reproduced well by the calculations and is interpreted to be a consequence of diminished halide-to-Ir π-donation by the heavier halogens. Computed activation barriers (L = PH3) range from 6.1 kcal/mol (X = H) to 21.4 kcal/mol (X = F). Replacing PH3 by NH3 when X = Cl increases the barrier from 14.1 to 19.9 kcal/mol. Using conventional transition state theory, the kinetic isotope effects for H2/D2 addition are computed to lie between 1.1 and 1.7, with larger values corresponding to earlier transition states. Judging from the computational data presented here, tunneling appears to be unimportant for H2 addition to these iridium complexes. 51 refs., 4 tabs.
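    Within conventional transition state theory, kinetic isotope effects of this size correspond to very small differences in activation free energy between the H2 and D2 reactions. The Python sketch below illustrates the arithmetic; the ΔΔG‡ values are chosen for illustration and are not taken from the paper.

```python
import math

R = 1.987e-3  # gas constant in kcal/(mol*K)

def kie(ddg_kcal, T=298.15):
    """k_H / k_D from transition state theory, k ~ exp(-dG/(R*T)),
    where ddg_kcal = dG(D2) - dG(H2) in kcal/mol."""
    return math.exp(ddg_kcal / (R * T))

# Differences of only ~0.06-0.31 kcal/mol at room temperature span
# the 1.1-1.7 range of isotope effects quoted in the abstract.
examples = {ddg: kie(ddg) for ddg in (0.06, 0.2, 0.31)}
```

This makes plain why the computed H2/D2 effects are so small: with no tunneling contribution, the isotope effect reflects only a modest zero-point-energy shift between reactant and transition state.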

  11. Phenylethynyl Containing Reactive Additives

    NASA Technical Reports Server (NTRS)

    Connell, John W. (Inventor); Smith, Joseph G., Jr. (Inventor); Hergenrother, Paul M. (Inventor)

    2002-01-01

    Phenylethynyl-containing reactive additives were prepared from an aromatic diamine containing phenylethynyl groups and various ratios of phthalic anhydride and 4-phenylethynylphthalic anhydride, in glacial acetic acid to form the imide in one step, or in N-methyl-2-pyrrolidinone to form the amide acid intermediate. The reactive additives were mixed in various amounts (10% to 90%) with oligomers containing either terminal or pendent phenylethynyl groups (or both) to reduce the melt viscosity and thereby enhance processability. Upon thermal cure, the additives react and become chemically incorporated into the matrix, effecting an increase in crosslink density relative to that of the host resin. This increase in crosslink density has advantageous consequences for the cured resin properties, such as a higher glass transition temperature and higher modulus compared to those of the host resin.

  12. Thermal Hydraulic Computer Code System.

    1999-07-16

    Version 00 RELAP5 was developed to describe the behavior of a light water reactor (LWR) subjected to postulated transients such as loss of coolant from large or small pipe breaks, pump failures, etc. RELAP5 calculates fluid conditions such as velocities, pressures, densities, qualities, and temperatures; thermal conditions such as surface temperatures, temperature distributions, and heat fluxes; pump conditions; trip conditions; reactor power and reactivity from point reactor kinetics; and control system variables. In addition to reactor applications, the program can be applied to transient analysis of other thermal-hydraulic systems with water as the fluid. This package contains RELAP5/MOD1/029 for CDC computers and RELAP5/MOD1/025 for VAX or IBM mainframe computers.

  13. Additives in plastics.

    PubMed Central

    Deanin, R D

    1975-01-01

    The polymers used in plastics are generally harmless. However, they are rarely used in pure form. In almost all commercial plastics, they are "compounded" with monomeric ingredients to improve their processing and end-use performance. In order of total volume used, these monomeric additives may be classified as follows: reinforcing fibers, fillers, and coupling agents; plasticizers; colorants; stabilizers (halogen stabilizers, antioxidants, ultraviolet absorbers, and biological preservatives); processing aids (lubricants, others, and flow controls); flame retardants, peroxides; and antistats. Some information is already available, and much more is needed, on potential toxicity and safe handling of these additives during processing and manufacture of plastics products. PMID:1175566

  14. Computing in Research.

    ERIC Educational Resources Information Center

    Ashenhurst, Robert L.

    The introduction and diffusion of automatic computing facilities during the 1960's is reviewed; it is described as a time when research strategies in a broad variety of disciplines changed to take advantage of the newfound power provided by the computer. Several types of typical problems encountered by researchers who adopted the new technologies,…

  15. COMPUTER MODELS/EPANET

    EPA Science Inventory

    Pipe network flow analysis was among the first civil engineering applications programmed for solution on the early commercial mainframe computers in the 1960s. Since that time, advancements in analytical techniques and computing power have enabled us to solve systems with tens o...

  16. Experimental study of matrix carbon field-emission cathodes and computer aided design of electron guns for microwave power devices, exploring these cathodes

    SciTech Connect

    Grigoriev, Y.A.; Petrosyan, A.I.; Penzyakov, V.V.; Pimenov, V.G.; Rogovin, V.I.; Shesterkin, V.I.; Kudryashov, V.P.; Semyonov, V.C.

    1997-03-01

    The experimental study of matrix carbon field-emission cathodes (MCFECs), which has led to the stable operation of the cathodes with emission currents up to 100 mA, is described. A method of computer-aided design of TWT electron guns (EGs) with MCFECs, based on the results of the MCFEC emission experimental study, is presented. The experimental MCFEC emission characteristics are used to define the field gain coefficient K and the cathode effective emission area S_eff. The EG program computes the electric field upon the MCFEC surface, multiplies it by the K value, and uses the Fowler-Nordheim law and the S_eff value to calculate the MCFEC current; the electron trajectories are computed as well. © 1997 American Vacuum Society.
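    The current-calculation step described above can be sketched as follows. This Python fragment uses the elementary Fowler-Nordheim expression; the constants, work function, and numerical values are illustrative assumptions, not taken from the authors' program.

```python
import math

# Elementary Fowler-Nordheim constants (SI-compatible units:
# A_FN in A*eV/V^2, B_FN in V/(eV^1.5 * m)).
A_FN = 1.541434e-6
B_FN = 6.830890e9

def fn_current(E_macro, K, S_eff, phi_eV):
    """Emission current I = S_eff * J_FN(K * E_macro):
    the macroscopic field is scaled by the field gain K, inserted in
    the Fowler-Nordheim current density, and multiplied by the
    effective emission area, as the abstract describes."""
    F = K * E_macro                                       # local tip field
    J = (A_FN / phi_eV) * F**2 * math.exp(-B_FN * phi_eV**1.5 / F)
    return S_eff * J

# Illustrative numbers: 10 V/um macroscopic field, field gain 300,
# effective area 1e-10 m^2, carbon work function ~4.6 eV (all assumed).
I = fn_current(E_macro=1e7, K=300, S_eff=1e-10, phi_eV=4.6)
```

The exponential dependence on the local field F = K·E is why fitting K and S_eff from measured emission characteristics, as the authors do, is the natural calibration route.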

  17. Scientific Computing Kernels on the Cell Processor

    SciTech Connect

    Williams, Samuel W.; Shalf, John; Oliker, Leonid; Kamil, Shoaib; Husbands, Parry; Yelick, Katherine

    2007-04-04

    The slowing pace of commodity microprocessor performance improvements combined with ever-increasing chip power demands has become of utmost concern to computational scientists. As a result, the high performance computing community is examining alternative architectures that address the limitations of modern cache-based designs. In this work, we examine the potential of using the recently-released STI Cell processor as a building block for future high-end computing systems. Our work contains several novel contributions. First, we introduce a performance model for Cell and apply it to several key scientific computing kernels: dense matrix multiply, sparse matrix vector multiply, stencil computations, and 1D/2D FFTs. The difficulty of programming Cell, which requires assembly level intrinsics for the best performance, makes this model useful as an initial step in algorithm design and evaluation. Next, we validate the accuracy of our model by comparing results against published hardware results, as well as our own implementations on a 3.2GHz Cell blade. Additionally, we compare Cell performance to benchmarks run on leading superscalar (AMD Opteron), VLIW (Intel Itanium2), and vector (Cray X1E) architectures. Our work also explores several different mappings of the kernels and demonstrates a simple and effective programming model for Cell's unique architecture. Finally, we propose modest microarchitectural modifications that could significantly increase the efficiency of double-precision calculations. Overall results demonstrate the tremendous potential of the Cell architecture for scientific computations in terms of both raw performance and power efficiency.

  18. Biobased lubricant additives

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Fully biobased lubricants are those formulated using all biobased ingredients, i.e. biobased base oils and biobased additives. Such formulations provide the maximum environmental, safety, and economic benefits expected from a biobased product. Currently, there are a number of biobased base oils that...

  19. Multifunctional fuel additives

    SciTech Connect

    Baillargeon, D.J.; Cardis, A.B.; Heck, D.B.

    1991-03-26

    This paper discusses a composition comprising a major amount of a liquid hydrocarbyl fuel and a minor low-temperature flow properties improving amount of an additive product of the reaction of a suitable diol and product of a benzophenone tetracarboxylic dianhydride and a long-chain hydrocarbyl aminoalcohol.

  20. Heterogeneous high throughput scientific computing with APM X-Gene and Intel Xeon Phi

    SciTech Connect

    Abdurachmanov, David; Bockelman, Brian; Elmer, Peter; Eulisse, Giulio; Knight, Robert; Muzaffar, Shahzad

    2015-01-01

    Electrical power requirements will be a constraint on the future growth of Distributed High Throughput Computing (DHTC) as used by High Energy Physics. Performance-per-watt is a critical metric for the evaluation of computer architectures for cost-efficient computing. Additionally, future performance growth will come from heterogeneous, many-core, and high-computing-density platforms with specialized processors. In this paper, we examine the Intel Xeon Phi Many Integrated Core (MIC) co-processor and the Applied Micro X-Gene ARMv8 64-bit low-power server system-on-a-chip (SoC) solutions for scientific computing applications. We report our experience on software porting, performance, and energy efficiency, and evaluate the potential for use of such technologies in the context of distributed computing systems such as the Worldwide LHC Computing Grid (WLCG).