Science.gov

Sample records for additional computing power

  1. Computational Process Modeling for Additive Manufacturing

    NASA Technical Reports Server (NTRS)

    Bagg, Stacey; Zhang, Wei

    2014-01-01

    Computational process and material modeling of powder-bed additive manufacturing of IN718. The objectives are to optimize material build parameters with reduced time and cost through modeling, increase understanding of build properties, increase the reliability of builds, decrease the time to adoption of the process for critical hardware, and potentially decrease post-build heat treatments. The approach is to conduct single-track and coupon builds at various build parameters; record build-parameter information and QM Meltpool data; refine the Applied Optimization powder-bed AM process model using these data; report thermal modeling results; conduct metallography of the build samples; calibrate STK models using the metallography findings; run the STK models using AO thermal profiles and report the STK modeling results; and validate the modeling with an additional build. Photodiode intensity measurements were found to be highly linear with power input, melt pool intensity was highly correlated with melt pool size, and melt pool size and intensity increase with power. Applied Optimization will use these data to develop a powder-bed additive manufacturing process model.

  2. Powered Tate Pairing Computation

    NASA Astrophysics Data System (ADS)

    Kang, Bo Gyeong; Park, Je Hong

    In this letter, we provide a simple proof of bilinearity for the eta pairing. Based on it, we show an efficient method to compute the powered Tate pairing as well. Although the efficiency of our method is equivalent to that of the eta pairing approach to the Tate pairing, ours is more general in principle.

  3. Computer Maintenance Operations Center (CMOC), additional computer support equipment ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    Computer Maintenance Operations Center (CMOC), additional computer support equipment - Beale Air Force Base, Perimeter Acquisition Vehicle Entry Phased-Array Warning System, Technical Equipment Building, End of Spencer Paul Road, north of Warren Shingle Road (14th Street), Marysville, Yuba County, CA

  4. Power consumption monitoring using additional monitoring device

    SciTech Connect

    Truşcă, M. R. C.; Albert, Ş.; Tudoran, C.; Soran, M. L.; Fărcaş, F.; Abrudean, M.

    2013-11-13

    Today, emphasis is placed on reducing power consumption. Computers are large consumers; therefore it is important to know the total consumption of computing systems. Since their optimal functioning requires fairly strict environmental conditions, without much variation in temperature and humidity, reducing energy consumption cannot be done without monitoring environmental parameters. Thus, the present work uses a multifunctional electric meter, the UPT 210, for power consumption monitoring. Two applications were developed: software that collects the readings provided by the meter electronically and facilitates its remote programming, and a device for temperature monitoring and control. By following the temperature variations that occur both in the cooling system and in the ambient air, energy consumption can be reduced. For this purpose, some air conditioning units or some computers are stopped in different time slots. These intervals were set so that the savings are high but the datacenter's work is not disturbed.

  5. Power throttling of collections of computing elements

    DOEpatents

    Bellofatto, Ralph E.; Coteus, Paul W.; Crumley, Paul G.; Gara, Alan G.; Giampapa, Mark E.; Gooding, Thomas M.; Haring, Rudolf A.; Megerian, Mark G.; Ohmacht, Martin; Reed, Don D.; Swetz, Richard A.; Takken, Todd

    2011-08-16

    An apparatus and method for controlling power usage in computers include a plurality of computers communicating with a local control device, and a power source supplying power to the local control device and the computers. A plurality of sensors communicate with the computers for ascertaining their power usage, and a system control device communicates with the computers for controlling their power usage.

  6. Changing computing paradigms towards power efficiency

    PubMed Central

    Klavík, Pavel; Malossi, A. Cristiano I.; Bekas, Costas; Curioni, Alessandro

    2014-01-01

    Power awareness is fast becoming immensely important in computing, ranging from traditional high-performance computing applications to the new generation of data-centric workloads. In this work, we describe our efforts towards a power-efficient computing paradigm that combines low- and high-precision arithmetic. We showcase our ideas for the widely used kernel of solving systems of linear equations, which finds numerous applications in scientific and engineering disciplines as well as in large-scale data analytics, statistics and machine learning. Towards this goal, we developed tools for the seamless power profiling of applications at a fine-grained level. In addition, we verify here previous work on post-FLOPS/W metrics and show that these can shed much more light on the power/energy profile of important applications. PMID:24842033

  7. Changing computing paradigms towards power efficiency.

    PubMed

    Klavík, Pavel; Malossi, A Cristiano I; Bekas, Costas; Curioni, Alessandro

    2014-06-28

    Power awareness is fast becoming immensely important in computing, ranging from traditional high-performance computing applications to the new generation of data-centric workloads. In this work, we describe our efforts towards a power-efficient computing paradigm that combines low- and high-precision arithmetic. We showcase our ideas for the widely used kernel of solving systems of linear equations, which finds numerous applications in scientific and engineering disciplines as well as in large-scale data analytics, statistics and machine learning. Towards this goal, we developed tools for the seamless power profiling of applications at a fine-grained level. In addition, we verify here previous work on post-FLOPS/W metrics and show that these can shed much more light on the power/energy profile of important applications.
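
    The low-/high-precision combination described in the two records above is, in spirit, classical mixed-precision iterative refinement: factorize once in low precision, then recover full accuracy through high-precision residual corrections. The numpy/scipy sketch below illustrates that general idea; it is not the authors' implementation.

    ```python
    # Mixed-precision iterative refinement: factorize cheaply in float32,
    # recover float64 accuracy via residual correction. A minimal sketch of
    # the low-/high-precision idea, not the authors' implementation.
    import numpy as np
    from scipy.linalg import lu_factor, lu_solve

    def mixed_precision_solve(A, b, tol=1e-12, max_iter=20):
        lu, piv = lu_factor(A.astype(np.float32))   # low-precision factorization
        x = lu_solve((lu, piv), b.astype(np.float32)).astype(np.float64)
        for _ in range(max_iter):
            r = b - A @ x                           # high-precision residual
            if np.linalg.norm(r) <= tol * np.linalg.norm(b):
                break
            d = lu_solve((lu, piv), r.astype(np.float32)).astype(np.float64)
            x += d                                  # correct the solution
        return x

    rng = np.random.default_rng(0)
    A = rng.standard_normal((200, 200)) + 200 * np.eye(200)  # well-conditioned
    b = rng.standard_normal(200)
    x = mixed_precision_solve(A, b)
    print(np.linalg.norm(A @ x - b))                # residual near float64 level
    ```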

  8. Computed tomography characterisation of additive manufacturing materials.

    PubMed

    Bibb, Richard; Thompson, Darren; Winder, John

    2011-06-01

    Additive manufacturing, covering processes frequently referred to as rapid prototyping and rapid manufacturing, provides new opportunities in the manufacture of highly complex and custom-fitting medical devices and products. Whilst many medical applications of AM have been explored and the physical properties of the resulting parts have been studied, the characterisation of AM materials in computed tomography has not been explored. The aim of this study was to determine the CT number of commonly used AM materials. There are many potential applications of the information resulting from this study in the design and manufacture of wearable medical devices, implants, prostheses and medical imaging test phantoms. A selection of 19 AM material samples were CT scanned and the resultant images analysed to ascertain the materials' CT numbers and appearance in the images. It was found that some AM materials have CT numbers very similar to those of human tissues; that FDM, SLA and SLS produce samples that appear uniform on CT images; and that 3D-printed materials show variation in internal structure.

  9. Computational Process Modeling for Additive Manufacturing (OSU)

    NASA Technical Reports Server (NTRS)

    Bagg, Stacey; Zhang, Wei

    2015-01-01

    Powder-Bed Additive Manufacturing (AM) through Direct Metal Laser Sintering (DMLS) or Selective Laser Melting (SLM) is being used by NASA and the aerospace industry to "print" parts that traditionally are very complex, high-cost, or long-lead-time items. The process spreads a thin layer of metal powder over a build platform, then melts the powder in a series of welds in a desired shape. The next layer of powder is applied, and the process is repeated until, layer by layer, a very complex part can be built. This reduces cost and schedule by eliminating very complex tooling and processes traditionally used in aerospace component manufacturing. To use the process to print end-use items, NASA seeks to understand SLM material well enough to develop a method of qualifying parts for spaceflight operation. Traditionally, a new material process takes many years and high investment to generate statistical databases and experiential knowledge, but computational modeling can truncate the schedule and cost: many experiments can be run quickly in a model that would take years and a high material cost to run empirically. This project seeks to optimize material build parameters with reduced time and cost through modeling.

  10. Shifted power method for computing tensor eigenpairs.

    SciTech Connect

    Mayo, Jackson R.; Kolda, Tamara Gibson

    2010-10-01

    Recent work on eigenvalues and eigenvectors for tensors of order m ≥ 3 has been motivated by applications in blind source separation, magnetic resonance imaging, molecular conformation, and more. In this paper, we consider methods for computing real symmetric-tensor eigenpairs of the form Ax^(m-1) = λx subject to ||x|| = 1, which is closely related to optimal rank-1 approximation of a symmetric tensor. Our contribution is a novel shifted symmetric higher-order power method (SS-HOPM), which we show is guaranteed to converge to a tensor eigenpair. SS-HOPM can be viewed as a generalization of the power iteration method for matrices or of the symmetric higher-order power method. Additionally, using fixed point analysis, we can characterize exactly which eigenpairs can and cannot be found by the method. Numerical examples are presented, including examples from an extension of the method to finding complex eigenpairs.

  11. Shifted power method for computing tensor eigenvalues.

    SciTech Connect

    Mayo, Jackson R.; Kolda, Tamara Gibson

    2010-07-01

    Recent work on eigenvalues and eigenvectors for tensors of order m ≥ 3 has been motivated by applications in blind source separation, magnetic resonance imaging, molecular conformation, and more. In this paper, we consider methods for computing real symmetric-tensor eigenpairs of the form Ax^(m-1) = λx subject to ||x|| = 1, which is closely related to optimal rank-1 approximation of a symmetric tensor. Our contribution is a shifted symmetric higher-order power method (SS-HOPM), which we show is guaranteed to converge to a tensor eigenpair. SS-HOPM can be viewed as a generalization of the power iteration method for matrices or of the symmetric higher-order power method. Additionally, using fixed point analysis, we can characterize exactly which eigenpairs can and cannot be found by the method. Numerical examples are presented, including examples from an extension of the method to finding complex eigenpairs.
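
    Records 10 and 11 describe the same iteration: x is repeatedly mapped to the normalization of Ax^(m-1) + αx. A minimal numpy sketch for a third-order symmetric tensor follows; it uses a fixed, user-chosen shift, whereas the paper sets the shift adaptively to guarantee convergence, so treat this as an illustration rather than the authors' code.

    ```python
    # Shifted symmetric higher-order power method (SS-HOPM), sketched for a
    # third-order symmetric tensor: iterate x <- normalize(A x^{m-1} + alpha x).
    import numpy as np

    def ss_hopm(A, alpha=1.0, iters=500, seed=0):
        n = A.shape[0]
        x = np.random.default_rng(seed).standard_normal(n)
        x /= np.linalg.norm(x)
        for _ in range(iters):
            Axm1 = np.einsum('ijk,j,k->i', A, x, x)   # A x^{m-1} for m = 3
            y = Axm1 + alpha * x                       # shifted update
            x = y / np.linalg.norm(y)
        lam = x @ np.einsum('ijk,j,k->i', A, x, x)     # eigenvalue A x^m
        return lam, x

    # Symmetrize a random tensor so the method applies.
    T = np.random.default_rng(1).standard_normal((5, 5, 5))
    A = (T + T.transpose(0, 2, 1) + T.transpose(1, 0, 2) +
         T.transpose(1, 2, 0) + T.transpose(2, 0, 1) + T.transpose(2, 1, 0)) / 6
    print(ss_hopm(A))
    ```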

  12. 50 CFR 453.06 - Additional Committee powers.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... ADMINISTRATION, DEPARTMENT OF COMMERCE); ENDANGERED SPECIES COMMITTEE REGULATIONS ENDANGERED SPECIES EXEMPTION PROCESS ENDANGERED SPECIES COMMITTEE § 453.06 Additional Committee powers. (a) Secure information....

  13. 50 CFR 453.06 - Additional Committee powers.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... ADMINISTRATION, DEPARTMENT OF COMMERCE); ENDANGERED SPECIES COMMITTEE REGULATIONS ENDANGERED SPECIES EXEMPTION PROCESS ENDANGERED SPECIES COMMITTEE § 453.06 Additional Committee powers. (a) Secure information....

  14. 50 CFR 453.06 - Additional Committee powers.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... ADMINISTRATION, DEPARTMENT OF COMMERCE); ENDANGERED SPECIES COMMITTEE REGULATIONS ENDANGERED SPECIES EXEMPTION PROCESS ENDANGERED SPECIES COMMITTEE § 453.06 Additional Committee powers. (a) Secure information....

  15. 50 CFR 453.06 - Additional Committee powers.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... ADMINISTRATION, DEPARTMENT OF COMMERCE); ENDANGERED SPECIES COMMITTEE REGULATIONS ENDANGERED SPECIES EXEMPTION PROCESS ENDANGERED SPECIES COMMITTEE § 453.06 Additional Committee powers. (a) Secure information....

  16. 50 CFR 453.06 - Additional Committee powers.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... ADMINISTRATION, DEPARTMENT OF COMMERCE); ENDANGERED SPECIES COMMITTEE REGULATIONS ENDANGERED SPECIES EXEMPTION PROCESS ENDANGERED SPECIES COMMITTEE § 453.06 Additional Committee powers. (a) Secure information....

  17. Computer Power: Part 1: Distribution of Power (and Communications).

    ERIC Educational Resources Information Center

    Price, Bennett J.

    1988-01-01

    Discussion of the distribution of power to personal computers and computer terminals addresses options such as extension cords, perimeter raceways, and interior raceways. Sidebars explain: (1) the National Electrical Code; (2) volts, amps, and watts; (3) transformers, circuit breakers, and circuits; and (4) power vs. data wiring. (MES)

  18. Framework Resources Multiply Computing Power

    NASA Technical Reports Server (NTRS)

    2010-01-01

    As an early proponent of grid computing, Ames Research Center awarded Small Business Innovation Research (SBIR) funding to 3DGeo Development Inc. of Santa Clara, California (now FusionGeo Inc. of The Woodlands, Texas), to demonstrate a virtual computer environment that linked geographically dispersed computer systems over the Internet to help solve large computational problems. By adding to an existing product, FusionGeo enabled access to resources for calculation- or data-intensive applications whenever and wherever they were needed. Commercially available as Accelerated Imaging and Modeling, the product is used by oil companies and seismic service companies, which require large processing and data storage capacities.

  19. Children, Computers, and Powerful Ideas

    ERIC Educational Resources Information Center

    Bull, Glen

    2005-01-01

    Today it is commonplace that computers and technology permeate almost every aspect of education. In the late 1960s, though, the idea that computers could serve as a catalyst for thinking about the way children learn was a radical concept. In the early 1960s, Seymour Papert joined the faculty of MIT and founded the Artificial Intelligence Lab with…

  20. Additional development of the XTRAN3S computer program

    NASA Technical Reports Server (NTRS)

    Borland, C. J.

    1989-01-01

    Additional developments and enhancements to the XTRAN3S computer program, a code for calculation of steady and unsteady aerodynamics, and associated aeroelastic solutions, for 3-D wings in the transonic flow regime are described. Algorithm improvements for the XTRAN3S program were provided including an implicit finite difference scheme to enhance the allowable time step and vectorization for improved computational efficiency. The code was modified to treat configurations with a fuselage, multiple stores/nacelles/pylons, and winglets. Computer program changes (updates) for error corrections and updates for version control are provided.

  1. Exploring human inactivity in computer power consumption

    NASA Astrophysics Data System (ADS)

    Candrawati, Ria; Hashim, Nor Laily Binti

    2016-08-01

    Managing computer power consumption has become an important challenge for the computing community, consistent with a trend in which computer systems are ever more central to modern life and the demand for computing power and functionality grows continuously. Unfortunately, previous approaches are still inadequately designed to handle the power consumption problem, because the workload of a system driven by human behavior is unpredictable. This stems from a lack of knowledge within the software system, and software self-adaptation is one approach to dealing with this source of uncertainty. Human inactivity is handled by adapting to the behavioral changes of the users. This paper observes human inactivity during computer usage and finds that computer power usage can be reduced if idle periods can be intelligently sensed from user activities. The study introduces a Control, Learn and Knowledge model that adapts the Monitor, Analyze, Plan, Execute control loop and integrates it with the Q-learning algorithm to learn human inactivity periods and minimize computer power consumption. An experiment to evaluate this model was conducted using three case studies with the same activities. The results show that the proposed model reduced power consumption in 5 out of 12 activities relative to the alternatives.
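
    To make the learning step concrete, below is a minimal tabular Q-learning sketch over binned idle-time states. The state encoding, actions, reward values, and toy transition function are hypothetical illustrations, not the paper's actual design.

    ```python
    # Tabular Q-learning over idle-time states, in the spirit of the paper's
    # MAPE-loop-plus-Q-learning model. All specifics here are hypothetical.
    import random

    STATES  = range(4)        # binned idle duration: 0 = active ... 3 = long idle
    ACTIONS = (0, 1)          # 0 = stay awake, 1 = enter low-power state
    ALPHA, GAMMA, EPS = 0.1, 0.9, 0.1
    Q = {(s, a): 0.0 for s in STATES for a in ACTIONS}

    def reward(state, action):
        # Assumed reward: sleeping saves energy when idle is long,
        # but is penalized when the user is actually active.
        if action == 1:
            return 1.0 if state >= 2 else -2.0
        return 0.0

    def step(state, action):  # toy transition: idle bin drifts at random
        return min(3, max(0, state + random.choice((-1, 0, 1))))

    state = 0
    for _ in range(10000):
        a = random.choice(ACTIONS) if random.random() < EPS else \
            max(ACTIONS, key=lambda act: Q[(state, act)])
        nxt = step(state, a)
        target = reward(state, a) + GAMMA * max(Q[(nxt, b)] for b in ACTIONS)
        Q[(state, a)] += ALPHA * (target - Q[(state, a)])
        state = nxt

    # Learned policy: sleep only in the longer idle bins.
    print({s: max(ACTIONS, key=lambda a: Q[(s, a)]) for s in STATES})
    ```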

  2. Computed Tomography Inspection and Analysis for Additive Manufacturing Components

    NASA Technical Reports Server (NTRS)

    Beshears, Ronald D.

    2016-01-01

    Computed tomography (CT) inspection was performed on test articles additively manufactured from metallic materials. Metallic AM and machined wrought alloy test articles with programmed flaws were inspected using a 2 MeV linear-accelerator-based CT system. The performance of CT inspection on identically configured wrought and AM components with programmed flaws was assessed using standard image analysis techniques to determine the impact of additive manufacturing on the inspectability of objects with complex geometries.

  3. Computing Efficiency Of Transfer Of Microwave Power

    NASA Technical Reports Server (NTRS)

    Pinero, L. R.; Acosta, R.

    1995-01-01

    The BEAM computer program enables the user to calculate microwave power-transfer efficiency between two circular apertures at arbitrary range. The power-transfer efficiency is obtained numerically. The two apertures may have different sizes and arbitrary taper illuminations. BEAM also analyzes the effect of distance and taper illumination on transmission efficiency for two apertures of equal size. Written in FORTRAN.
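
    BEAM evaluates the efficiency numerically for arbitrary tapers; for the special case of optimally tapered circular apertures there is a well-known closed-form approximation attributed to Goubau, sketched below as a rough cross-check. The function and all numeric values are illustrative assumptions, not part of BEAM.

    ```python
    # Closed-form Goubau approximation for power-transfer efficiency between
    # two optimally tapered circular apertures: eta = 1 - exp(-tau^2), where
    # tau = sqrt(A_tx * A_rx) / (wavelength * distance). Values hypothetical.
    import math

    def beam_efficiency(d_tx, d_rx, wavelength, distance):
        a_tx = math.pi * (d_tx / 2) ** 2      # transmit aperture area
        a_rx = math.pi * (d_rx / 2) ** 2      # receive aperture area
        tau = math.sqrt(a_tx * a_rx) / (wavelength * distance)
        return 1.0 - math.exp(-tau ** 2)

    # e.g. 1 m and 2 m apertures, 12.5 cm wavelength (2.4 GHz), 20 m apart
    print(beam_efficiency(1.0, 2.0, 0.125, 20.0))
    ```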

  4. Computer Power. Part 2: Electrical Power Problems and Their Amelioration.

    ERIC Educational Resources Information Center

    Price, Bennett J.

    1989-01-01

    Describes electrical power problems that affect computer users, including spikes, sags, outages, noise, frequency variations, and static electricity. Ways in which these problems may be diagnosed and cured are discussed. Sidebars consider transformers; power distribution units; surge currents/linear and non-linear loads; and sizing the power…

  5. Power of one qumode for quantum computation

    NASA Astrophysics Data System (ADS)

    Liu, Nana; Thompson, Jayne; Weedbrook, Christian; Lloyd, Seth; Vedral, Vlatko; Gu, Mile; Modi, Kavan

    2016-05-01

    Although quantum computers are capable of solving problems like factoring exponentially faster than the best-known classical algorithms, determining the resources responsible for their computational power remains unclear. An important class of problems where quantum computers possess an advantage is phase estimation, which includes applications like factoring. We introduce a computational model based on a single squeezed state resource that can perform phase estimation, which we call the power of one qumode. This model is inspired by an interesting computational model known as deterministic quantum computing with one quantum bit (DQC1). Using the power of one qumode, we identify that the amount of squeezing is sufficient to quantify the resource requirements of different computational problems based on phase estimation. In particular, we can use the amount of squeezing to quantitatively relate the resource requirements of DQC1 and factoring. Furthermore, we can connect the squeezing to other known resources like precision, energy, qudit dimensionality, and qubit number. We show the circumstances under which they can likewise be considered good resources.

  6. Software Support for Transiently Powered Computers

    SciTech Connect

    Van Der Woude, Joel Matthew

    2015-06-01

    With the continued reduction in the size and cost of computing, power becomes an increasingly heavy burden on system designers for embedded applications. While energy harvesting techniques are an increasingly desirable solution for many deeply embedded applications where size and lifetime are a priority, previous work has shown that energy harvesting provides insufficient power for long-running computation. We present Ratchet, which to the authors' knowledge is the first automatic, software-only checkpointing system for energy harvesting platforms. We show that Ratchet provides a means to extend computation across power cycles, consistent with those experienced by energy harvesting devices. We demonstrate the correctness of our system under frequent failures and show that it has an average overhead of 58.9% across a suite of benchmarks representative of embedded applications.

  7. Computational Challenges for Power System Operation

    SciTech Connect

    Chen, Yousu; Huang, Zhenyu; Liu, Yan; Rice, Mark J.; Jin, Shuangshuang

    2012-02-06

    As the evolution of power grid technology and the information technology revolution converge, power grids are witnessing a revolutionary transition, represented by emerging grid technologies and the large-scale deployment of new sensors and meters in networks. This transition brings opportunities, as well as computational challenges, in the field of power grid analysis and operation. This paper presents research outcomes in the areas of parallel state estimation using the preconditioned conjugate gradient method, parallel contingency analysis with a dynamic load balancing scheme, and distributed system architecture. Based on this research, three types of computational challenges are identified: highly coupled applications, loosely coupled applications, and centralized and distributed applications. Recommendations for future work on power grid applications are also presented.
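
    The preconditioned conjugate gradient method named above is standard; a minimal serial, Jacobi-preconditioned version for a symmetric positive-definite system is sketched below as a reference point. This is the textbook skeleton, not the paper's parallel implementation.

    ```python
    # Jacobi-preconditioned conjugate gradient for a symmetric positive-definite
    # system: the serial skeleton of the method the paper parallelizes.
    import numpy as np

    def pcg(A, b, tol=1e-10, max_iter=1000):
        x = np.zeros_like(b)
        r = b - A @ x
        Minv = 1.0 / np.diag(A)        # Jacobi preconditioner
        z = Minv * r
        p = z.copy()
        rz = r @ z
        for _ in range(max_iter):
            Ap = A @ p
            alpha = rz / (p @ Ap)
            x += alpha * p
            r -= alpha * Ap
            if np.linalg.norm(r) <= tol * np.linalg.norm(b):
                break
            z = Minv * r
            rz_new = r @ z
            p = z + (rz_new / rz) * p
            rz = rz_new
        return x

    A = np.array([[4.0, 1.0], [1.0, 3.0]])   # toy SPD "gain matrix"
    print(pcg(A, np.array([1.0, 2.0])))
    ```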

  8. Reducing power consumption during execution of an application on a plurality of compute nodes

    SciTech Connect

    Archer, Charles J.; Blocksome, Michael A.; Peters, Amanda E.; Ratterman, Joseph D.; Smith, Brian E.

    2013-09-10

    Methods, apparatus, and products are disclosed for reducing power consumption during execution of an application on a plurality of compute nodes that include: powering up, during compute node initialization, only a portion of computer memory of the compute node, including configuring an operating system for the compute node in the powered up portion of computer memory; receiving, by the operating system, an instruction to load an application for execution; allocating, by the operating system, additional portions of computer memory to the application for use during execution; powering up the additional portions of computer memory allocated for use by the application during execution; and loading, by the operating system, the application into the powered up additional portions of computer memory.

  9. Cloud Computing and the Power to Choose

    ERIC Educational Resources Information Center

    Bristow, Rob; Dodds, Ted; Northam, Richard; Plugge, Leo

    2010-01-01

    Some of the most significant changes in information technology are those that have given the individual user greater power to choose. The first of these changes was the development of the personal computer. The PC liberated the individual user from the limitations of the mainframe and minicomputers and from the rules and regulations of centralized…

  10. Additional extensions to the NASCAP computer code, volume 1

    NASA Technical Reports Server (NTRS)

    Mandell, M. J.; Katz, I.; Stannard, P. R.

    1981-01-01

    Extensions and revisions to a computer code that comprehensively analyzes problems of spacecraft charging (NASCAP) are documented. Using a fully three-dimensional approach, it can accurately predict spacecraft potentials under a variety of conditions. Among the extensions are a multiple electron/ion gun test tank capability and the ability to model anisotropic and time-dependent space environments. Also documented are a greatly extended MATCHG program and the preliminary version of NASCAP/LEO. The interactive MATCHG code was developed into an extremely powerful tool for the study of material-environment interactions. NASCAP/LEO, a three-dimensional code to study current collection under conditions of high voltages and short Debye lengths, was distributed for preliminary testing.

  11. 4. FLOOR PLAN AND SECTIONS, ADDITION TO POWER HOUSE. United ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    4. FLOOR PLAN AND SECTIONS, ADDITION TO POWER HOUSE. United Engineering Company Ltd., Alameda Shipyard. Also includes plot plan at 1 inch to 100 feet. John Hudspeth, architect, foot of Main Street, Alameda, California. Sheet 3. Plan no. 10,548. Scale 1/4 inch and 1/2 inch to the foot. April 30, 1945, last revised 6/22/45. pencil on vellum - United Engineering Company Shipyard, Boiler House, 2900 Main Street, Alameda, Alameda County, CA

  12. 3. ELEVATIONS, ADDITION TO POWER HOUSE. United Engineering Company Ltd., ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    3. ELEVATIONS, ADDITION TO POWER HOUSE. United Engineering Company Ltd., Alameda Shipyard. John Hudspeth, architect, foot of Main Street, Alameda, California. Sheet 4. Plan no. 10,548. Scale 1/4 inch to the foot, elevations, and one inch to the foot, sections and details. April 30, 1945, last revised 6/19/45. pencil on vellum - United Engineering Company Shipyard, Boiler House, 2900 Main Street, Alameda, Alameda County, CA

  13. Lithium Dinitramide as an Additive in Lithium Power Cells

    NASA Technical Reports Server (NTRS)

    Gorkovenko, Alexander A.

    2007-01-01

    Lithium dinitramide, LiN(NO2)2, has shown promise as an additive to nonaqueous electrolytes in rechargeable and non-rechargeable lithium-ion-based electrochemical power cells. Such non-aqueous electrolytes consist of lithium salts dissolved in mixtures of organic ethers, esters, carbonates, or acetals. The benefits of adding lithium dinitramide (which is also a lithium salt) include lower irreversible loss of capacity on the first charge/discharge cycle, higher cycle life, lower self-discharge, greater flexibility in the selection of electrolyte solvents, and greater charge capacity. The need for a suitable electrolyte additive arises as follows: The metallic lithium in the anode of a lithium-ion-based power cell is so highly reactive that, in addition to the desired main electrochemical reaction, it engages in side reactions that cause formation of resistive films and dendrites, which degrade performance as quantified in terms of charge capacity, cycle life, shelf life, first-cycle irreversible capacity loss, specific power, and specific energy. The incidence of side reactions can be reduced through the formation of a solid-electrolyte interface (SEI), a thin film that prevents direct contact between the lithium anode material and the electrolyte. Ideally, an SEI should chemically protect the anode and the electrolyte from each other while exhibiting high conductivity for lithium ions and little or no conductivity for electrons. A suitable additive can act as an SEI promoter. Heretofore, most SEI promotion was thought to derive from organic molecules in electrolyte solutions. In contrast, lithium dinitramide is inorganic. Dinitramide compounds are known as oxidizers in rocket-fuel chemistry and, until now, were not known as SEI promoters in battery chemistry. Although the exact reason for the improvement afforded by the addition of lithium dinitramide is not clear, it has been hypothesized that lithium dinitramide competes with other electrolyte constituents to react with

  14. X-ray computed tomography for additive manufacturing: a review

    NASA Astrophysics Data System (ADS)

    Thompson, A.; Maskery, I.; Leach, R. K.

    2016-07-01

    In this review, the use of x-ray computed tomography (XCT) is examined, identifying the requirement for volumetric dimensional measurements in industrial verification of additively manufactured (AM) parts. The XCT technology and AM processes are summarised, and their historical use is documented. The use of XCT and AM as tools for medical reverse engineering is discussed, and the transition of XCT from a tool used solely for imaging to a vital metrological instrument is documented. The current states of the combined technologies are then examined in detail, separated into porosity measurements and general dimensional measurements. In the conclusions of this review, the limitation of resolution on improvement of porosity measurements and the lack of research regarding the measurement of surface texture are identified as the primary barriers to ongoing adoption of XCT in AM. The limitations of both AM and XCT regarding slow speeds and high costs, when compared to other manufacturing and measurement techniques, are also noted as general barriers to continued adoption of XCT and AM.

  15. Additional support for the TDK/MABL computer program

    NASA Technical Reports Server (NTRS)

    Nickerson, G. R.; Dunn, Stuart S.

    1993-01-01

    An advanced version of the Two-Dimensional Kinetics (TDK) computer program was developed under contract and released to the propulsion community in early 1989. Exposure of the code to this community indicated a need for improvements in certain areas. In particular, the TDK code needed to be adapted to the special requirements imposed by the Space Transportation Main Engine (STME) development program. This engine utilizes injection of the gas generator exhaust into the primary nozzle by means of a set of slots. The subsequent mixing of this secondary stream with the primary stream with finite rate chemical reaction can have a major impact on the engine performance and the thermal protection of the nozzle wall. In attempting to calculate this reacting boundary layer problem, the Mass Addition Boundary Layer (MABL) module of TDK was found to be deficient in several respects. For example, when finite rate chemistry was used to determine gas properties, (MABL-K option) the program run times became excessive because extremely small step sizes were required to maintain numerical stability. A robust solution algorithm was required so that the MABL-K option could be viable as a rocket propulsion industry design tool. Solving this problem was a primary goal of the phase 1 work effort.

  16. Computing GIC in large power systems

    SciTech Connect

    Prabhakara, F.S. ); Ponder, J.Z.; Towle, J.N.

    1992-01-01

    On March 13, 1989, a severe geomagnetic disturbance affected power and communications systems in the North American continent. Since the geomagnetic disturbance, several other disturbances have occurred. The Pennsylvania, New Jersey, and Maryland (PJM) Interconnection system, its member companies, and some of the neighboring utilities experienced the geomagnetic induced current (GIC) effects on March 13, 1989, as well as during the subsequent geomagnetic disturbances. As a result, considerable effort is being focused on measurement, analysis, and mitigation of GIC in the PJM system. Some of the analytical and computational work completed so far is summarized in this article.

  17. Additional extensions to the NASCAP computer code, volume 3

    NASA Technical Reports Server (NTRS)

    Mandell, M. J.; Cooke, D. L.

    1981-01-01

    The ION computer code is designed to calculate charge exchange ion densities, electric potentials, plasma temperatures, and current densities external to a neutralized ion engine in R-Z geometry. The present version assumes the beam ion current and density to be known and specified, and the neutralizing electrons to originate from a hot-wire ring surrounding the beam orifice. The plasma is treated as being resistive, with an electron relaxation time comparable to the plasma frequency. Together with the thermal and electrical boundary conditions described below and other straightforward engine parameters, these assumptions suffice to determine the required quantities. The ION code, written in ASCII FORTRAN for UNIVAC 1100 series computers, is designed to be run interactively, although it can also be run in batch mode. The input is free-format, and the output is mainly graphical, using the machine-independent graphics developed for the NASCAP code. The executive routine calls the code's major subroutines in user-specified order, and the code allows great latitude for restart and parameter change.

  18. Additive Manufacturing of Anatomical Models from Computed Tomography Scan Data.

    PubMed

    Gür, Y

    2014-12-01

    The purpose of the study presented here was to investigate the manufacturability of human anatomical models from Computed Tomography (CT) scan data via a 3D desktop printer which uses fused deposition modelling (FDM) technology. First, Digital Imaging and Communications in Medicine (DICOM) CT scan data were converted to 3D Standard Triangle Language (STL) format by using the InVesalius digital imaging program. Once this STL file is obtained, a 3D physical version of the anatomical model can be fabricated by a desktop 3D FDM printer. As a case study, a patient's skull CT scan data was considered, and a tangible version of the skull was manufactured by a 3D FDM desktop printer. During the 3D printing process, the skull was built using acrylonitrile-butadiene-styrene (ABS) co-polymer plastic. The printed model showed that 3D FDM printing technology is able to fabricate anatomical models with high accuracy. As a result, the skull model can be used for preoperative surgical planning, medical training activities, and implant design and simulation, showing the potential of FDM technology in the medical field. It will also improve communication between medical staff and patients. The current results indicate that a 3D desktop printer which uses FDM technology can be used to obtain accurate anatomical models.

  19. Computational calculation of equilibrium constants: addition to carbonyl compounds.

    PubMed

    Gómez-Bombarelli, Rafael; González-Pérez, Marina; Pérez-Prior, María Teresa; Calle, Emilio; Casado, Julio

    2009-10-22

    Hydration reactions are relevant for understanding many organic mechanisms. Since the experimental determination of hydration and hemiacetalization equilibrium constants is fairly complex, computational calculations now offer a useful alternative to experimental measurements. In this work, carbonyl hydration and hemiacetalization constants were calculated from the free energy differences between compounds in solution, using absolute and relative approaches. The following conclusions can be drawn: (i) The use of a relative approach in the calculation of hydration and hemiacetalization constants allows compensation of systematic errors in the solvation energies. (ii) On average, the methodology proposed here can predict hydration constants within ±0.5 log K_hyd units for aldehydes. (iii) Hydration constants can be calculated for ketones and carboxylic acid derivatives within less than ±1.0 log K_hyd, on average, at the CBS-Q level of theory. (iv) The proposed methodology can predict hemiacetal formation constants accurately at the MP2 6-31++G(d,p) level using a common reference. If group references are used, the results obtained using the much cheaper DFT-B3LYP 6-31++G(d,p) level are almost as accurate. (v) In general, the best results are obtained if a common reference for all compounds is used. The use of group references improves the results at the lower levels of theory, but at higher levels, this becomes unnecessary. PMID:19761202

  20. Computational Calculation of Equilibrium Constants: Addition to Carbonyl Compounds

    NASA Astrophysics Data System (ADS)

    Gómez-Bombarelli, Rafael; González-Pérez, Marina; Pérez-Prior, María Teresa; Calle, Emilio; Casado, Julio

    2009-09-01

    Hydration reactions are relevant for understanding many organic mechanisms. Since the experimental determination of hydration and hemiacetalization equilibrium constants is fairly complex, computational calculations now offer a useful alternative to experimental measurements. In this work, carbonyl hydration and hemiacetalization constants were calculated from the free energy differences between compounds in solution, using absolute and relative approaches. The following conclusions can be drawn: (i) The use of a relative approach in the calculation of hydration and hemiacetalization constants allows compensation of systematic errors in the solvation energies. (ii) On average, the methodology proposed here can predict hydration constants within ± 0.5 log Khyd units for aldehydes. (iii) Hydration constants can be calculated for ketones and carboxylic acid derivatives within less than ± 1.0 log Khyd, on average, at the CBS-Q level of theory. (iv) The proposed methodology can predict hemiacetal formation constants accurately at the MP2 6-31++G(d,p) level using a common reference. If group references are used, the results obtained using the much cheaper DFT-B3LYP 6-31++G(d,p) level are almost as accurate. (v) In general, the best results are obtained if a common reference for all compounds is used. The use of group references improves the results at the lower levels of theory, but at higher levels, this becomes unnecessary.
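
    The bridge from the computed free-energy differences to the reported log K values is the standard thermodynamic relation ΔG° = -RT ln K; a one-line check in Python, with a purely hypothetical ΔG value:

    ```python
    # Convert a computed reaction free energy to log10 K via dG = -RT ln K.
    # The -10.5 kJ/mol value is hypothetical, purely for illustration.
    import math

    R, T = 8.314462618, 298.15             # J/(mol K), K
    dG = -10.5e3                           # J/mol, hypothetical hydration free energy
    log10_K = -dG / (R * T * math.log(10))
    print(f"log10 K_hyd = {log10_K:.2f}")  # ~1.84
    ```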

  1. Additive Manufacturing of Anatomical Models from Computed Tomography Scan Data.

    PubMed

    Gür, Y

    2014-12-01

    The purpose of the study presented here was to investigate the manufacturability of human anatomical models from Computed Tomography (CT) scan data via a 3D desktop printer which uses fused deposition modelling (FDM) technology. First, Digital Imaging and Communications in Medicine (DICOM) CT scan data were converted to 3D Standard Triangle Language (STL) format by using the InVesalius digital imaging program. Once this STL file is obtained, a 3D physical version of the anatomical model can be fabricated by a desktop 3D FDM printer. As a case study, a patient's skull CT scan data was considered, and a tangible version of the skull was manufactured by a 3D FDM desktop printer. During the 3D printing process, the skull was built using acrylonitrile-butadiene-styrene (ABS) co-polymer plastic. The printed model showed that 3D FDM printing technology is able to fabricate anatomical models with high accuracy. As a result, the skull model can be used for preoperative surgical planning, medical training activities, and implant design and simulation, showing the potential of FDM technology in the medical field. It will also improve communication between medical staff and patients. The current results indicate that a 3D desktop printer which uses FDM technology can be used to obtain accurate anatomical models. PMID:26336695
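
    The DICOM-to-STL step this record describes can be approximated with open Python tooling; the sketch below assumes pydicom, scikit-image and numpy-stl are installed, with the input path and bone threshold hypothetical. It is a generic pipeline, not the InVesalius workflow used in the study.

    ```python
    # DICOM stack -> STL surface, approximating the record's CT-to-print
    # pipeline with open tooling. Paths and the bone threshold are hypothetical.
    import glob
    import numpy as np
    import pydicom
    from skimage import measure
    from stl import mesh

    # Load slices and order them along the scan axis.
    slices = sorted((pydicom.dcmread(f) for f in glob.glob("ct_scan/*.dcm")),
                    key=lambda s: float(s.ImagePositionPatient[2]))
    volume = np.stack([s.pixel_array for s in slices]).astype(np.float32)

    # Extract an isosurface at a bone-like intensity threshold.
    verts, faces, _, _ = measure.marching_cubes(volume, level=300)

    surface = mesh.Mesh(np.zeros(faces.shape[0], dtype=mesh.Mesh.dtype))
    surface.vectors[:] = verts[faces]     # (n_faces, 3, 3) triangle vertices
    surface.save("skull.stl")             # ready for a 3D-printing slicer
    ```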

  2. A COMPUTATIONAL WORKBENCH ENVIRONMENT FOR VIRTUAL POWER PLANT SIMULATION

    SciTech Connect

    Mike Bockelie; Dave Swensen; Martin Denison; Connie Senior; Zumao Chen; Temi Linjewile; Adel Sarofim; Bene Risio

    2003-01-25

    This is the eighth Quarterly Technical Report for DOE Cooperative Agreement No: DE-FC26-00NT41047. The goal of the project is to develop and demonstrate a computational workbench for simulating the performance of Vision 21 Power Plant Systems. Within the last quarter, good progress has been made on all aspects of the project. Calculations for a full Vision 21 plant configuration have been performed for two coal types and two gasifier types. Good agreement with DOE computed values has been obtained for the Vision 21 configuration under ''baseline'' conditions. Additional model verification has been performed for the flowing slag model that has been implemented into the CFD based gasifier model. Comparisons for the slag, wall and syngas conditions predicted by our model versus values from predictive models that have been published by other researchers show good agreement. The software infrastructure of the Vision 21 workbench has been modified to use a recently released, upgraded version of SCIRun.

  3. A COMPUTATIONAL WORKBENCH ENVIRONMENT FOR VIRTUAL POWER PLANT SIMULATION

    SciTech Connect

    Mike Bockelie; Dave Swensen; Martin Denison; Zumao Chen; Temi Linjewile; Mike Maguire; Adel Sarofim; Connie Senior; Changguan Yang; Hong-Shig Shim

    2004-04-28

    This is the fourteenth Quarterly Technical Report for DOE Cooperative Agreement No: DE-FC26-00NT41047. The goal of the project is to develop and demonstrate a Virtual Engineering-based framework for simulating the performance of Advanced Power Systems. Within the last quarter, good progress has been made on all aspects of the project. Software development efforts have focused primarily on completing a prototype detachable user interface for the framework and on integrating Carnegie Mellon University's IECM model core with the computational engine. In addition to this work, progress has been made on several other development and modeling tasks for the program. These include: (1) improvements to the infrastructure code of the computational engine, (2) enhancements to the model interfacing specifications, (3) additional development to increase the robustness of all framework components, (4) enhanced coupling of the computational and visualization engine components, (5) a series of detailed simulations studying the effects of gasifier inlet conditions on the heat flux to the gasifier injector, and (6) detailed plans for implementing models for mercury capture for both warm and cold gas cleanup.

  4. Computer memory power control for the Galileo spacecraft

    NASA Technical Reports Server (NTRS)

    Detwiler, R. C.

    1983-01-01

    The developmental history, major design drivers, and final topology of the computer memory power system on the Galileo spacecraft are described. A unique method of generating memory backup power directly from the fault current drawn during a spacecraft power overload or fault condition allows this system to provide continuous memory power. This concept provides a unique solution to the problem of volatile memory loss without the use of a battery or other large energy storage elements usually associated with uninterruptible power supply designs.

  5. IBM Cloud Computing Powering a Smarter Planet

    NASA Astrophysics Data System (ADS)

    Zhu, Jinzy; Fang, Xing; Guo, Zhe; Niu, Meng Hua; Cao, Fan; Yue, Shuang; Liu, Qin Yu

    With the increasing need for intelligent systems supporting the world's businesses, Cloud Computing has emerged as a dominant trend providing a dynamic infrastructure to make such intelligence possible. This article introduces how to build a smarter planet with cloud computing technology. First, it explains why we need the cloud and traces the evolution of cloud technology. Second, it analyzes the value of cloud computing and how to apply cloud technology. Finally, it predicts the future of the cloud in the smarter planet.

  6. The Power of Language in Computer-Mediated Groups.

    ERIC Educational Resources Information Center

    Adkins, Mark; Brashers, Dale E.

    1995-01-01

    Discusses an experiment to find the effects of "powerful" and "powerless" language on small computer-mediated groups. Explains that subjects were asked to communicate via computer in a decision-making context. Describes the three conditions. Finds that language style has significant impact on impression formation in computer groups and that…

  7. "Using Power Tables to Compute Statistical Power in Multilevel Experimental Designs"

    ERIC Educational Resources Information Center

    Konstantopoulos, Spyros

    2009-01-01

    Power computations for one-level experimental designs that assume simple random samples are greatly facilitated by power tables such as those presented in Cohen's book about statistical power analysis. However, in education and the social sciences experimental designs have naturally nested structures and multilevel models are needed to compute the…
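
    As a concrete illustration of the kind of computation such tables support, below is a minimal normal-approximation power calculation for a two-level, cluster-randomized design. The design-effect formula is the standard textbook approximation, assuming scipy; it is not the article's exact tabulated method, and the example numbers are hypothetical.

    ```python
    # Normal-approximation power for a two-level (cluster-randomized) design:
    # J clusters of n subjects, standardized effect d, intraclass correlation rho.
    from math import sqrt
    from scipy.stats import norm

    def cluster_power(d, J, n, rho, alpha=0.05):
        design_effect = 1 + (n - 1) * rho      # variance inflation from clustering
        se = sqrt(4 * design_effect / (J * n))  # SE of treatment-control contrast
        z_crit = norm.ppf(1 - alpha / 2)
        return 1 - norm.cdf(z_crit - d / se)

    print(cluster_power(d=0.3, J=40, n=25, rho=0.10))   # ~0.73
    ```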

  8. Additives

    NASA Technical Reports Server (NTRS)

    Smalheer, C. V.

    1973-01-01

    The chemistry of lubricant additives is discussed to show what the additives are chemically and what functions they perform in the lubrication of various kinds of equipment. Current theories regarding the mode of action of lubricant additives are presented. The additive groups discussed include the following: (1) detergents and dispersants, (2) corrosion inhibitors, (3) antioxidants, (4) viscosity index improvers, (5) pour point depressants, and (6) antifouling agents.

  9. 1995 IEEE power industry computer applications conference: Proceedings

    SciTech Connect

    1995-05-12

    This is a collection of papers presented at the Power Industry Computer Applications Conference of 1995 in Salt Lake City, Utah. The topics of the papers include new control center functions in a deregulated environment, maintaining secure voltage profiles in power transmission systems, parallel processing applications, optimal power flow, graphical user interface, hydroelectric power scheduling, control and modeling, power distribution, transmission network security assessment, transmission planning, artificial intelligence applications, energy management systems migration, network analysis, power system protection, energy management systems design, software engineering, and planning of power systems.

  10. System and method for high power diode based additive manufacturing

    DOEpatents

    El-Dasher, Bassem S.; Bayramian, Andrew; Demuth, James A.; Farmer, Joseph C.; Torres, Sharon G.

    2016-04-12

    A system is disclosed for performing an Additive Manufacturing (AM) fabrication process on a powdered material forming a substrate. The system may make use of a diode array for generating an optical signal sufficient to melt a powdered material of the substrate. A mask may be used for preventing a first predetermined portion of the optical signal from reaching the substrate, while allowing a second predetermined portion to reach the substrate. At least one processor may be used for controlling an output of the diode array.

  11. On source radiation. [power output computation]

    NASA Technical Reports Server (NTRS)

    Levine, H.

    1980-01-01

    The power output from given sources is usually ascertained via an energy flux integral over the normal directions to a remote (farfield) surface; an alternative procedure, which utilizes an integral that specifies the direct rate of working by the source on the resultant field, is described and illustrated for both point and continuous source distributions. A comparison between the respective procedures is made in the analysis of sound radiated from a periodic dipole source whose axis rotates in a plane, on a full or partial angular range, with prescribed frequency. Thus, adopting a conventional approach, Sretenskii (1956) characterizes the rotating dipole in terms of an infinite number of stationary ones along a pair of orthogonal directions in the plane and, through the farfield representation of the latter, arrives at a series development for the instantaneous radiated power, whereas the local manner of power calculation dispenses with the equivalent infinite aggregate of sources and yields a compact analytical result.

  12. Parallel Computing Environments and Methods for Power Distribution System Simulation

    SciTech Connect

    Lu, Ning; Taylor, Zachary T.; Chassin, David P.; Guttromson, Ross T.; Studham, Scott S.

    2005-11-10

    The development of cost-effective high-performance parallel computing on multiprocessor supercomputers makes it attractive to port excessively time-consuming simulation software from personal computers (PCs) to supercomputers. The power distribution system simulator (PDSS) takes a bottom-up approach and simulates load at the appliance level, where detailed thermal models for appliances are used. This approach works well for a small power distribution system consisting of a few thousand appliances. When the number of appliances increases, the simulation uses up the PC's memory, and its run time increases to a point where the approach is no longer feasible for modeling a practical large power distribution system. This paper presents an effort to port the PC-based PDSS to a 128-processor shared-memory supercomputer. The paper offers an overview of the parallel computing environment and a description of the modifications made to the PDSS model. The performance of the PDSS running on a standalone PC and on the supercomputer is compared. Future research directions for utilizing parallel computing in power distribution system simulation are also addressed.
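
    The appliance-level thermal models mentioned above are, generically, first-order RC ("equivalent thermal parameter") models; one thermostat-controlled cooling load is sketched under that assumption below. All parameter values are hypothetical, and this is the generic model form rather than the PDSS code.

    ```python
    # First-order RC ("equivalent thermal parameter") model of a thermostat-
    # controlled cooling load: the generic shape of an appliance-level thermal
    # model in a bottom-up simulator like PDSS. Parameter values hypothetical.

    def simulate_hvac(hours=24.0, dt=1 / 60, t_out=35.0, setpoint=22.0, band=1.0,
                      R=2.0, C=4.0, q_cool=-8.0, p_elec=2.0):
        """R in degC/kW, C in kWh/degC, q_cool in kW (thermal), p_elec in kW."""
        t_in, on, energy = setpoint, False, 0.0
        for _ in range(int(hours / dt)):
            if t_in > setpoint + band:      # thermostat with hysteresis band
                on = True
            elif t_in < setpoint - band:
                on = False
            q = q_cool if on else 0.0
            # Euler step of C * dT/dt = (t_out - t_in) / R + q
            t_in += dt * ((t_out - t_in) / R + q) / C
            if on:
                energy += p_elec * dt       # electrical energy consumed (kWh)
        return energy

    print(simulate_hvac())                  # kWh over a hot 24 h day
    ```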

  13. Computer optimization of reactor-thermoelectric space power systems

    NASA Technical Reports Server (NTRS)

    Maag, W. L.; Finnegan, P. M.; Fishbach, L. H.

    1973-01-01

    A computer simulation and optimization code that has been developed for nuclear space power systems is described. The results of using this code to analyze two reactor-thermoelectric systems are presented.

  14. DETAIL VIEW OF THE POWER CONNECTIONS (FRONT) AND COMPUTER PANELS ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    DETAIL VIEW OF THE POWER CONNECTIONS (FRONT) AND COMPUTER PANELS (REAR), ROOM 8A - Cape Canaveral Air Force Station, Launch Complex 39, Mobile Launcher Platforms, Launcher Road, East of Kennedy Parkway North, Cape Canaveral, Brevard County, FL

  15. Controlling High Power Devices with Computers or TTL Logic Circuits

    ERIC Educational Resources Information Center

    Carlton, Kevin

    2002-01-01

    Computers are routinely used to control experiments in modern science laboratories. This should be reflected in laboratories in an educational setting. There is a mismatch between the power that can be delivered by a computer interfacing card or a TTL logic circuit and that required by many practical pieces of laboratory equipment. One common way…

  16. Saving Energy and Money: A Lesson in Computer Power Management

    ERIC Educational Resources Information Center

    Lazaros, Edward J.; Hua, David

    2012-01-01

    In this activity, students will develop an understanding of the economic impact of technology by estimating the cost savings of power management strategies in the classroom. Students will learn how to adjust computer display settings to influence the impact that the computer has on the financial burden to the school. They will use mathematics to…

  17. The Utility of Computer-Assisted Power Analysis Lab Instruction

    ERIC Educational Resources Information Center

    Petrocelli, John V.

    2007-01-01

    Undergraduate students (N = 47), enrolled in 2 separate psychology research methods classes, evaluated a power analysis lab demonstration and homework assignment. Students attended 1 of 2 lectures that included a basic introduction to power analysis and sample size analysis. One lecture included a demonstration of how to use a computer-based power…

  18. Transitions in the computational power of thermal states for measurement-based quantum computation

    SciTech Connect

    Barrett, Sean D.; Bartlett, Stephen D.; Jennings, David; Doherty, Andrew C.; Rudolph, Terry

    2009-12-15

    We show that the usefulness of the thermal state of a specific spin-lattice model for measurement-based quantum computing exhibits a transition between two distinct 'phases': one in which every state is a universal resource for quantum computation, and another in which any local measurement sequence can be simulated efficiently on a classical computer. Remarkably, this transition in computational power does not coincide with any phase transition, classical or quantum, in the underlying spin-lattice model.

  19. A Computational Workbench Environment For Virtual Power Plant Simulation

    SciTech Connect

    Bockelie, Michael J.; Swensen, David A.; Denison, Martin K.; Sarofim, Adel F.

    2001-11-06

    In this paper we describe our progress toward creating a computational workbench for performing virtual simulations of Vision 21 power plants. The workbench provides a framework for incorporating a full complement of models, ranging from simple heat/mass balance reactor models that run in minutes to detailed models that can require several hours to execute. The workbench is being developed using the SCIRun software system. To leverage a broad range of visualization tools, the OpenDX visualization package has been interfaced to the workbench. In Year One our efforts focused on developing a prototype workbench for a conventional pulverized-coal-fired power plant. The prototype workbench uses a CFD model for the radiant furnace box and reactor models for downstream equipment. In Year Two and Year Three, the focus of the project will be on creating models for gasifier-based systems and implementing these models into an improved workbench. In this paper we describe our work effort for Year One and outline our plans for future work. We discuss the models included in the prototype workbench and the software design issues that have been addressed to incorporate such a diverse range of models into a single software environment. In addition, we highlight our plans for the energyplex-based workbench that will be developed in Year Two and Year Three.

  20. A COMPUTATIONAL WORKBENCH ENVIRONMENT FOR VIRTUAL POWER PLANT SIMULATION

    SciTech Connect

    Mike Bockelie; Dave Swensen; Martin Denison; Zumao Chen; Mike Maguire; Adel Sarofim; Changguan Yang; Hong-Shig Shim

    2004-01-28

    This is the thirteenth Quarterly Technical Report for DOE Cooperative Agreement No: DE-FC26-00NT41047. The goal of the project is to develop and demonstrate a Virtual Engineering-based framework for simulating the performance of Advanced Power Systems. Within the last quarter, good progress has been made on all aspects of the project. Software development efforts have focused on a preliminary detailed software design for the enhanced framework. Given the complexity of the individual software tools from each team (i.e., Reaction Engineering International, Carnegie Mellon University, Iowa State University), a robust, extensible design is required for the success of the project. In addition to achieving a preliminary software design, significant progress has been made on several development tasks for the program. These include: (1) the enhancement of the controller user interface to support detachment from the Computational Engine and support for multiple computer platforms, (2) modification of the Iowa State University interface-to-kernel communication mechanisms to meet the requirements of the new software design, (3) decoupling of the Carnegie Mellon University computational models from their parent IECM (Integrated Environmental Control Model) user interface for integration with the new framework and (4) development of a new CORBA-based model interfacing specification. A benchmarking exercise to compare process and CFD based models for entrained flow gasifiers was completed. A summary of our work on intrinsic kinetics for modeling coal gasification has been completed. Plans for implementing soot and tar models into our entrained flow gasifier models are outlined. Plans for implementing a model for mercury capture based on conventional capture technology, but applied to an IGCC system, are outlined.

  1. Vector computer implementation of power flow outage studies

    SciTech Connect

    Granelli, G.P.; Montagna, M.; Pasini, G.L. ); Marannino, P. )

    1992-05-01

    This paper presents an application of vector and parallel processing to power flow outage studies on large-scale networks. Standard sparsity programming is not well suited to the capabilities of vector and parallel computers because of the extremely short vectors processed in load flow studies. In order to improve computational efficiency, the operations required to perform both forward/backward solution and power residual calculation are gathered in the form of long FORTRAN DO loops. Two algorithms are proposed and compared with the results of a program written for scalar processing. Simulations for the outage studies on IEEE standard networks and some different configurations of the Italian and European (UCPTE) EHV systems are run on a CRAY Y-MP8/432 vector computer (and partially on an IBM 3090/200S VF). The multitasking facility of the CRAY computer is also exploited in order to shorten the wall clock time required by a complete outage simulation.

  2. Future computing platforms for science in a power constrained era

    DOE PAGES

    Abdurachmanov, David; Elmer, Peter; Eulisse, Giulio; Knight, Robert

    2015-01-01

    Power consumption will be a key constraint on the future growth of Distributed High Throughput Computing (DHTC) as used by High Energy Physics (HEP). This makes performance-per-watt a crucial metric for selecting cost-efficient computing solutions. For this paper, we have done a wide survey of current and emerging architectures becoming available on the market including x86-64 variants, ARMv7 32-bit, ARMv8 64-bit, Many-Core and GPU solutions, as well as newer System-on-Chip (SoC) solutions. We compare performance and energy efficiency using an evolving set of standardized HEP-related benchmarks and power measurement techniques we have been developing. In conclusion, we evaluate the potential for use of such computing solutions in the context of DHTC systems, such as the Worldwide LHC Computing Grid (WLCG).
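
    Performance-per-watt is simply useful work divided by energy consumed. On Linux machines that expose Intel's RAPL counters through the powercap interface, the metric can be estimated for a benchmark run roughly as in the sketch below; the sysfs path and single-socket assumption are platform-specific, and the paper's own measurement techniques are more elaborate.

        import time

        RAPL = "/sys/class/powercap/intel-rapl:0/energy_uj"  # package-0 energy counter

        def read_energy_uj():
            with open(RAPL) as f:
                return int(f.read())

        def perf_per_watt(benchmark, work_units):
            # Sample the cumulative energy counter around the benchmark run;
            # the counter is in microjoules and may wrap on very long runs.
            e0, t0 = read_energy_uj(), time.time()
            benchmark()
            e1, t1 = read_energy_uj(), time.time()
            joules = (e1 - e0) / 1e6
            return work_units / joules          # e.g. events processed per joule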

  3. Future computing platforms for science in a power constrained era

    SciTech Connect

    Abdurachmanov, David; Elmer, Peter; Eulisse, Giulio; Knight, Robert

    2015-01-01

    Power consumption will be a key constraint on the future growth of Distributed High Throughput Computing (DHTC) as used by High Energy Physics (HEP). This makes performance-per-watt a crucial metric for selecting cost-efficient computing solutions. For this paper, we have done a wide survey of current and emerging architectures becoming available on the market including x86-64 variants, ARMv7 32-bit, ARMv8 64-bit, Many-Core and GPU solutions, as well as newer System-on-Chip (SoC) solutions. We compare performance and energy efficiency using an evolving set of standardized HEP-related benchmarks and power measurement techniques we have been developing. In conclusion, we evaluate the potential for use of such computing solutions in the context of DHTC systems, such as the Worldwide LHC Computing Grid (WLCG).

  4. Future Computing Platforms for Science in a Power Constrained Era

    NASA Astrophysics Data System (ADS)

    Abdurachmanov, David; Elmer, Peter; Eulisse, Giulio; Knight, Robert

    2015-12-01

    Power consumption will be a key constraint on the future growth of Distributed High Throughput Computing (DHTC) as used by High Energy Physics (HEP). This makes performance-per-watt a crucial metric for selecting cost-efficient computing solutions. For this paper, we have done a wide survey of current and emerging architectures becoming available on the market including x86-64 variants, ARMv7 32-bit, ARMv8 64-bit, Many-Core and GPU solutions, as well as newer System-on-Chip (SoC) solutions. We compare performance and energy efficiency using an evolving set of standardized HEP-related benchmarks and power measurement techniques we have been developing. We evaluate the potential for use of such computing solutions in the context of DHTC systems, such as the Worldwide LHC Computing Grid (WLCG).

  5. Transforming Power Grid Operations via High Performance Computing

    SciTech Connect

    Huang, Zhenyu; Nieplocha, Jarek

    2008-07-31

    Past power grid blackout events revealed the inadequacy of grid operations in responding to adverse situations, partially due to low computational efficiency in grid operation functions. High performance computing (HPC) provides a promising solution to this problem. HPC applications in power grid computation also become necessary to take advantage of parallel computing platforms as the computer industry undergoes a significant change from the traditional single-processor environment to an era of multi-processor computing platforms. HPC applications to power grid operations are multi-fold. HPC can improve today’s grid operation functions, such as state estimation and contingency analysis, and reduce the solution time from minutes to seconds, comparable to SCADA measurement cycles. HPC also enables the integration of dynamic analysis into real-time grid operations. Dynamic state estimation, look-ahead dynamic simulation, and real-time dynamic contingency analysis can be implemented and would be three key dynamic functions in future control centers. HPC applications call for better decision support tools, which also need HPC support to handle large volumes of data and large numbers of cases. Given the complexity of the grid and the sheer number of possible configurations, HPC is considered to be an indispensable element in the next generation of control centers.
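
    Contingency analysis is a natural first target for HPC because each N-1 outage case is an independent power-flow solution. A minimal sketch of that parallel structure, with a stub standing in for the actual solver:

        from multiprocessing import Pool

        def solve_outage_case(outage_id):
            # Stand-in for one contingency: a real tool would re-solve the
            # network power flow with element `outage_id` removed and return
            # the worst post-outage limit violation.
            return outage_id, 0.0

        def contingency_screen(n_outages, workers=8):
            # Every case is independent, so the screening parallelizes almost
            # perfectly -- the property that lets HPC cut solution times from
            # minutes to seconds.
            with Pool(workers) as pool:
                return pool.map(solve_outage_case, range(n_outages))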

  6. Jaguar: The World's Most Powerful Computer

    SciTech Connect

    Bland, Arthur S Buddy; Rogers, James H; Kendall, Ricky A; Kothe, Douglas B; Shipman, Galen M

    2009-01-01

    The Cray XT system at ORNL is the world's most powerful computer, with several applications exceeding one-petaflops performance. This paper describes the architecture of Jaguar with combined XT4 and XT5 nodes along with an external Lustre file system and external login nodes. We also present some early results from Jaguar.

  7. Modeling and Analysis of Power Processing Systems. [use of a digital computer for designing power plants

    NASA Technical Reports Server (NTRS)

    Fegley, K. A.; Hayden, J. H.; Rehmann, D. W.

    1974-01-01

    The feasibility of formulating a methodology for the modeling and analysis of aerospace electrical power processing systems is investigated. It is shown that a digital computer may be used in an interactive mode for the design, modeling, analysis, and comparison of power processing systems.

  8. Flash on disk for low-power multimedia computing

    NASA Astrophysics Data System (ADS)

    Singleton, Leo; Nathuji, Ripal; Schwan, Karsten

    2007-01-01

    Mobile multimedia computers require large amounts of data storage, yet must consume low power in order to prolong battery life. Solid-state storage offers low power consumption, but its capacity is an order of magnitude smaller than the hard disks needed for high-resolution photos and digital video. In order to create a device with the space of a hard drive, yet the low power consumption of solid-state storage, hardware manufacturers have proposed using flash memory as a write buffer on mobile systems. This paper evaluates the power savings of such an approach and also considers other possible flash allocation algorithms, using both hardware- and software-level flash management. Its contributions also include a set of typical multimedia-rich workloads for mobile systems and power models based upon current disk and flash technology. Based on these workloads, we demonstrate an average power savings of 267 mW (53% of disk power) using hardware-only approaches. Next, we propose another algorithm, termed Energy-efficient Virtual Storage using Application-Level Framing (EVS-ALF), which uses both hardware and software for power management. By collecting information from the applications and using this metadata to perform intelligent flash allocation and prefetching, EVS-ALF achieves an average power savings of 307 mW (61%), another 8% improvement over hardware-only techniques.
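
    The power argument behind flash write buffering can be made concrete with a back-of-the-envelope energy model: buffering lets the disk stay spun down and pays one spin-up per buffer flush rather than one per scattered write. The constants below are placeholders for illustration, not the paper's measured disk and flash models.

        FLASH_WRITE_J_PER_MB = 0.01   # illustrative values only
        DISK_WRITE_J_PER_MB = 0.10
        DISK_SPINUP_J = 6.0

        def energy_buffered(total_mb, buffer_mb):
            # Writes land in flash; the disk spins up once per full buffer.
            spinups = max(1, int(total_mb // buffer_mb))
            return (total_mb * FLASH_WRITE_J_PER_MB
                    + total_mb * DISK_WRITE_J_PER_MB
                    + spinups * DISK_SPINUP_J)

        def energy_direct(total_mb, cold_writes):
            # Every write that finds the disk spun down pays a spin-up.
            return total_mb * DISK_WRITE_J_PER_MB + cold_writes * DISK_SPINUP_J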

  9. CHARMM additive and polarizable force fields for biophysics and computer-aided drug design

    PubMed Central

    Vanommeslaeghe, K.

    2014-01-01

    Background Molecular Mechanics (MM) is the method of choice for computational studies of biomolecular systems owing to its modest computational cost, which makes it possible to routinely perform molecular dynamics (MD) simulations on chemical systems of biophysical and biomedical relevance. Scope of Review As one of the main factors limiting the accuracy of MD results is the empirical force field used, the present paper offers a review of recent developments in the CHARMM additive force field, one of the most popular biomolecular force fields. Additionally, we present a detailed discussion of the CHARMM Drude polarizable force field, anticipating a growth in the importance and utilization of polarizable force fields in the near future. Throughout the discussion emphasis is placed on the force fields’ parametrization philosophy and methodology. Major Conclusions Recent improvements in the CHARMM additive force field are mostly related to newly found weaknesses in the previous generation of additive force fields. Beyond the additive approximation is the newly available CHARMM Drude polarizable force field, which allows for MD simulations of up to 1 microsecond on proteins, DNA, lipids and carbohydrates. General Significance Addressing the limitations ensures the reliability of the new CHARMM36 additive force field for the types of calculations that are presently coming into routine computational reach, while the availability of the Drude polarizable force field offers an inherently more accurate model of the underlying physical forces driving macromolecular structures and dynamics. PMID:25149274

  10. Utilizing a Collaborative Cross Number Puzzle Game to Develop the Computing Ability of Addition and Subtraction

    ERIC Educational Resources Information Center

    Chen, Yen-Hua; Looi, Chee-Kit; Lin, Chiu-Pin; Shao, Yin-Juan; Chan, Tak-Wai

    2012-01-01

    While addition and subtraction is a key mathematical skill for young children, a typical activity for them in classrooms involves doing repetitive arithmetic calculation exercises. In this study, we explore a collaborative way for students to learn these skills in a technology-enabled way with wireless computers. Two classes, comprising a total of…

  11. GridPACK Toolkit for Developing Power Grid Simulations on High Performance Computing Platforms

    SciTech Connect

    Palmer, Bruce J.; Perkins, William A.; Glass, Kevin A.; Chen, Yousu; Jin, Shuangshuang; Callahan, Charles D.

    2013-11-30

    This paper describes the GridPACK™ framework, which is designed to help power grid engineers develop modeling software capable of running on today's high performance computers. The framework contains modules for setting up distributed power grid networks, assigning buses and branches with arbitrary behaviors to the network, creating distributed matrices and vectors, using parallel linear and non-linear solvers to solve algebraic equations, and mapping functionality to create matrices and vectors based on properties of the network. In addition, the framework provides functionality to support I/O and error management.

  12. High Performance Computing - Power Application Programming Interface Specification.

    SciTech Connect

    Laros, James H.,; Kelly, Suzanne M.; Pedretti, Kevin; Grant, Ryan; Olivier, Stephen Lecler; Levenhagen, Michael J.; DeBonis, David

    2014-08-01

    Measuring and controlling the power and energy consumption of high performance computing systems by various components in the software stack is an active research area [13, 3, 5, 10, 4, 21, 19, 16, 7, 17, 20, 18, 11, 1, 6, 14, 12]. Implementations in lower level software layers are beginning to emerge in some production systems, which is very welcome. To be most effective, a portable interface to measurement and control features would significantly facilitate participation by all levels of the software stack. We present a proposal for a standard power Application Programming Interface (API) that endeavors to cover the entire software space, from generic hardware interfaces to the input from the computer facility manager.
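
    The shape of such a portable interface can be suggested with a small sketch. The names below are hypothetical illustrations of the measure/control split the proposal calls for, not the actual API specification.

        from abc import ABC, abstractmethod

        class PowerInterface(ABC):
            # Hypothetical portable power interface spanning the software
            # stack, from facility manager down to application level.

            @abstractmethod
            def measure(self, component: str) -> float:
                """Return the current draw of a component, in watts."""

            @abstractmethod
            def set_cap(self, component: str, watts: float) -> None:
                """Request an upper bound on a component's power draw."""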

  13. Value of Faster Computation for Power Grid Operation

    SciTech Connect

    Chen, Yousu; Huang, Zhenyu; Elizondo, Marcelo A.

    2012-09-30

    As a result of the grid evolution meeting the information revolution, the power grid is becoming far more complex than it used to be. How to feed data in, perform analysis, and extract information in a real-time manner is a fundamental challenge in today’s power grid operation, not to mention the significantly increased complexity in the smart grid environment. Therefore, high performance computing (HPC) becomes one of the advanced technologies used to meet the requirement of real-time operation. This paper presents benefit case studies to show the value of fast computation for operation. Two fundamental operation functions, state estimation (SE) and contingency analysis (CA), are used as examples. In contrast with today’s tools, fast SE can estimate system status in a few seconds—comparable to measurement cycles. Fast CA can solve more contingencies in a shorter period, reducing the possibility of missing critical contingencies. The benefit case study results clearly show the value of faster computation for increasing the reliability and efficiency of power system operation.
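
    For the state-estimation example, the core computation is a weighted least-squares fit of the system state to SCADA measurements. A minimal linearized version, assuming a known measurement matrix H, is sketched below; accelerating exactly this solve is what brings SE down to measurement-cycle rates.

        import numpy as np

        def wls_state_estimate(H, z, weights):
            # Solve min_x (z - Hx)^T W (z - Hx), the normal-equation form of
            # linear(ized) power-system state estimation.
            W = np.diag(weights)
            A = H.T @ W @ H
            b = H.T @ W @ z
            return np.linalg.solve(A, b)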

  14. Rotating Detonation Combustion: A Computational Study for Stationary Power Generation

    NASA Astrophysics Data System (ADS)

    Escobar, Sergio

    The increased availability of gaseous fossil fuels in the US has led to the substantial growth of stationary Gas Turbine (GT) usage for electrical power generation. In fact, from 2013 to 2014, of the 11 terawatt-hours per day produced from fossil fuels, approximately 27% was generated through the combustion of natural gas in stationary GTs. The thermodynamic efficiency of simple-cycle GTs has increased from 20% to 40% during the last six decades, mainly due to research and development in the fields of combustion science, material science and machine design. However, additional improvements have become more costly and more difficult to obtain as technology is further refined. An alternative to improve GT thermal efficiency is the implementation of a combustion regime leading to pressure gain, rather than pressure loss, across the combustor. One concept being considered for such purpose is Rotating Detonation Combustion (RDC). RDC refers to a combustion regime in which a detonation wave propagates continuously in the azimuthal direction of a cylindrical annular chamber. In RDC, the fuel and oxidizer, injected from separate streams, are mixed near the injection plane and are then consumed by the detonation front traveling inside the annular gap of the combustion chamber. The detonation products then expand in the azimuthal and axial direction away from the detonation front and exit through the combustion chamber outlet. In the present study Computational Fluid Dynamics (CFD) is used to predict the performance of Rotating Detonation Combustion (RDC) at operating conditions relevant to GT applications. As part of this study, a modeling strategy for RDC simulations was developed. The validation of the model was performed using benchmark cases with different levels of complexity. First, 2D simulations of non-reactive shock tubes and detonation tubes were performed. The numerical predictions that were obtained using different modeling parameters were compared with…

  15. Power/energy use cases for high performance computing.

    SciTech Connect

    Laros, James H.,; Kelly, Suzanne M; Hammond, Steven; Elmore, Ryan; Munch, Kristin

    2013-12-01

    Power and Energy have been identified as a first order challenge for future extreme scale high performance computing (HPC) systems. In practice the breakthroughs will need to be provided by the hardware vendors. But to make the best use of the solutions in an HPC environment, it will likely require periodic tuning by facility operators and software components. This document describes the actions and interactions needed to maximize power resources. It strives to cover the entire operational space that an HPC system occupies. The descriptions are presented as formal use cases, as documented in the Unified Modeling Language Specification [1]. The document is intended to provide a common understanding to the HPC community of the necessary management and control capabilities. Assuming a common understanding can be achieved, the next step will be to develop a set of Application Programming Interfaces (APIs) that hardware vendors and software developers could utilize to steer power consumption.

  16. Profiling an application for power consumption during execution on a compute node

    DOEpatents

    Archer, Charles J; Blocksome, Michael A; Peters, Amanda E; Ratterman, Joseph D; Smith, Brian E

    2013-09-17

    Methods, apparatus, and products are disclosed for profiling an application for power consumption during execution on a compute node that include: receiving an application for execution on a compute node; identifying a hardware power consumption profile for the compute node, the hardware power consumption profile specifying power consumption for compute node hardware during performance of various processing operations; determining a power consumption profile for the application in dependence upon the application and the hardware power consumption profile for the compute node; and reporting the power consumption profile for the application.
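
    The claimed method amounts to costing an application's observed operation mix against a per-node hardware power profile. A toy rendition follows; the profile numbers and operation classes are hypothetical.

        # Watts drawn by compute-node hardware during each class of
        # processing operation -- the "hardware power consumption profile".
        HARDWARE_PROFILE = {"compute": 95.0, "memory": 40.0, "network": 25.0}

        def application_power_profile(op_timeline):
            # op_timeline: list of (operation_class, seconds) observed during
            # execution; returns joules attributed to each class.
            report = {}
            for op, seconds in op_timeline:
                report[op] = report.get(op, 0.0) + HARDWARE_PROFILE[op] * seconds
            return report, sum(report.values())

        profile, total_j = application_power_profile(
            [("compute", 10.0), ("memory", 4.0), ("network", 2.0)])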

  17. Profiling an application for power consumption during execution on a plurality of compute nodes

    DOEpatents

    Archer, Charles J.; Blocksome, Michael A.; Peters, Amanda E.; Ratterman, Joseph D.; Smith, Brian E.

    2012-08-21

    Methods, apparatus, and products are disclosed for profiling an application for power consumption during execution on a compute node that include: receiving an application for execution on a compute node; identifying a hardware power consumption profile for the compute node, the hardware power consumption profile specifying power consumption for compute node hardware during performance of various processing operations; determining a power consumption profile for the application in dependence upon the application and the hardware power consumption profile for the compute node; and reporting the power consumption profile for the application.

  18. Reducing power consumption during execution of an application on a plurality of compute nodes

    DOEpatents

    Archer, Charles J.; Blocksome, Michael A.; Peters, Amanda E.; Ratterman, Joseph D.; Smith, Brian E.

    2012-06-05

    Methods, apparatus, and products are disclosed for reducing power consumption during execution of an application on a plurality of compute nodes that include: executing, by each compute node, an application, the application including power consumption directives corresponding to one or more portions of the application; identifying, by each compute node, the power consumption directives included within the application during execution of the portions of the application corresponding to those identified power consumption directives; and reducing power, by each compute node, to one or more components of that compute node according to the identified power consumption directives during execution of the portions of the application corresponding to those identified power consumption directives.

  19. PSD computations using Welch's method. [Power Spectral Density (PSD)

    SciTech Connect

    Solomon, Jr, O M

    1991-12-01

    This report describes Welch's method for computing Power Spectral Densities (PSDs). We first describe the bandpass filter method, which uses filtering, squaring, and averaging operations to estimate a PSD. Second, we delineate the relationship of Welch's method to the bandpass filter method. Third, the frequency-domain signal-to-noise ratio for a sine wave in white noise is derived. This derivation includes the computation of the noise floor due to quantization noise. The signal-to-noise ratio and noise floor depend on the FFT length and window. Fourth, the variance of the Welch PSD is discussed via chi-square random variables and degrees of freedom. This report contains many examples, figures and tables to illustrate the concepts. 26 refs.
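
    For reference, the method is available off the shelf. A short example in the spirit of the report's sine-wave-in-white-noise case, using SciPy; the segment length and overlap control the resolution-versus-variance trade-off the report analyzes.

        import numpy as np
        from scipy import signal

        fs = 10_000.0                          # sample rate, Hz
        t = np.arange(0, 1.0, 1.0 / fs)
        x = np.sin(2 * np.pi * 1_000 * t) + 0.1 * np.random.randn(t.size)

        # Welch's method: window overlapping segments, average the squared
        # FFT magnitudes; more averages lower the variance of the estimate.
        f, Pxx = signal.welch(x, fs=fs, window="hann", nperseg=1024, noverlap=512)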

  20. Computational effects of inlet representation on powered hypersonic, airbreathing models

    NASA Technical Reports Server (NTRS)

    Huebner, Lawrence D.; Tatum, Kenneth E.

    1993-01-01

    Computational results are presented to illustrate the powered aftbody effects of representing the scramjet inlet on a generic hypersonic vehicle with a fairing, to divert the external flow, as compared to an operating flow-through scramjet inlet. This study is pertinent to the ground testing of hypersonic, airbreathing models employing scramjet exhaust flow simulation in typical small-scale hypersonic wind tunnels. The comparison of aftbody effects due to inlet representation is well-suited for computational study, since small model size typically precludes the ability to ingest flow into the inlet and perform exhaust simulation at the same time. Two-dimensional analysis indicates that, although flowfield differences exist for the two types of inlet representations, little, if any, difference in surface aftbody characteristics is caused by fairing over the inlet.

  1. A COMPUTATIONAL WORKBENCH ENVIRONMENT FOR VIRTUAL POWER PLANT SIMULATION

    SciTech Connect

    Mike Bockelie; Dave Swensen; Martin Denison; Connie Senior; Adel Sarofim; Bene Risio

    2002-07-28

    This is the seventh Quarterly Technical Report for DOE Cooperative Agreement No.: DE-FC26-00NT41047. The goal of the project is to develop and demonstrate a computational workbench for simulating the performance of Vision 21 Power Plant Systems. Within the last quarter, good progress has been made on the development of the IGCC workbench. A series of parametric CFD simulations for single stage and two stage generic gasifier configurations have been performed. An advanced flowing slag model has been implemented into the CFD based gasifier model. A literature review has been performed on published gasification kinetics. Reactor models have been developed and implemented into the workbench for the majority of the heat exchangers, the gas clean-up system and the power generation system for the Vision 21 reference configuration. Modifications to the software infrastructure of the workbench have commenced to allow interfacing of workbench reactor models that utilize the CAPE-Open software interface protocol.

  2. HMcode: Halo-model matter power spectrum computation

    NASA Astrophysics Data System (ADS)

    Mead, Alexander

    2015-08-01

    HMcode computes the halo-model matter power spectrum. It is written in Fortran90 and has been designed to quickly (~0.5s for 200 k-values across 16 redshifts on a single core) produce matter spectra for a wide range of cosmological models. In testing it was shown to match spectra produced by the 'Coyote Emulator' to an accuracy of 5 per cent for k less than 10h Mpc^-1. However, it can also produce spectra well outside of the parameter space of the emulator.
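
    For orientation, the basic decomposition that any halo-model code of this kind evaluates is the sum of a two-halo and a one-halo term; HMcode adds calibrated, physically motivated modifications on top of this standard form.

        \[
        P(k) = P_{2\mathrm{h}}(k) + P_{1\mathrm{h}}(k), \qquad
        P_{1\mathrm{h}}(k) = \int \frac{\mathrm{d}n}{\mathrm{d}M}
        \left(\frac{M}{\bar{\rho}}\right)^{2} \lvert u(k\,|\,M)\rvert^{2}\,\mathrm{d}M ,
        \]

    where dn/dM is the halo mass function and u(k|M) is the normalized Fourier transform of the halo density profile.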

  3. Quantum ring-polymer contraction method: Including nuclear quantum effects at no additional computational cost in comparison to ab initio molecular dynamics.

    PubMed

    John, Christopher; Spura, Thomas; Habershon, Scott; Kühne, Thomas D

    2016-04-01

    We present a simple and accurate computational method which facilitates ab initio path-integral molecular dynamics simulations, where the quantum-mechanical nature of the nuclei is explicitly taken into account, at essentially no additional computational cost in comparison to the corresponding calculation using classical nuclei. The predictive power of the proposed quantum ring-polymer contraction method is demonstrated by computing various static and dynamic properties of liquid water at ambient conditions using density functional theory. This development will enable routine inclusion of nuclear quantum effects in ab initio molecular dynamics simulations of condensed-phase systems. PMID:27176426
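
    The object being contracted is the standard path-integral ring polymer: N copies (beads) of the system joined by harmonic springs. Its classical Hamiltonian, the common starting point for methods of this kind, is

        \[
        H_N = \sum_{j=1}^{N} \left[ \frac{p_j^{2}}{2m}
              + \frac{1}{2} m \omega_N^{2} \left(q_j - q_{j+1}\right)^{2}
              + V(q_j) \right], \qquad
        \omega_N = \frac{N}{\beta\hbar}, \quad q_{N+1} \equiv q_1 ,
        \]

    and contraction schemes save cost by evaluating the expensive potential V on fewer beads than the spring terms require.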

  4. Quantum ring-polymer contraction method: Including nuclear quantum effects at no additional computational cost in comparison to ab initio molecular dynamics

    NASA Astrophysics Data System (ADS)

    John, Christopher; Spura, Thomas; Habershon, Scott; Kühne, Thomas D.

    2016-04-01

    We present a simple and accurate computational method which facilitates ab initio path-integral molecular dynamics simulations, where the quantum-mechanical nature of the nuclei is explicitly taken into account, at essentially no additional computational cost in comparison to the corresponding calculation using classical nuclei. The predictive power of the proposed quantum ring-polymer contraction method is demonstrated by computing various static and dynamic properties of liquid water at ambient conditions using density functional theory. This development will enable routine inclusion of nuclear quantum effects in ab initio molecular dynamics simulations of condensed-phase systems.

  5. Addition of flexible body option to the TOLA computer program, part 1

    NASA Technical Reports Server (NTRS)

    Dick, J. W.; Benda, B. J.

    1975-01-01

    This report describes a flexible body option that was developed and added to the Takeoff and Landing Analysis (TOLA) computer program. The addition of the flexible body option to TOLA allows it to be used to study essentially any conventional type of airplane in the ground operating environment. It provides the capability to predict the total motion of selected points on the airplane. The analytical methods incorporated in the program and the operating instructions for the option are described. A program listing is included, along with several example problems to aid in interpretation of the operating instructions and to illustrate program usage.

  6. Improving the predictive accuracy of hurricane power outage forecasts using generalized additive models.

    PubMed

    Han, Seung-Ryong; Guikema, Seth D; Quiring, Steven M

    2009-10-01

    Electric power is a critical infrastructure service after hurricanes, and rapid restoration of electric power is important in order to minimize losses in the impacted areas. However, rapid restoration of electric power after a hurricane depends on obtaining the necessary resources, primarily repair crews and materials, before the hurricane makes landfall and then appropriately deploying these resources as soon as possible after the hurricane. This, in turn, depends on having sound estimates of both the overall severity of the storm and the relative risk of power outages in different areas. Past studies have developed statistical, regression-based approaches for estimating the number of power outages in advance of an approaching hurricane. However, these approaches have either not been applicable for future events or have had lower predictive accuracy than desired. This article shows that a different type of regression model, a generalized additive model (GAM), can outperform the types of models used previously. This is done by developing and validating a GAM based on power outage data during past hurricanes in the Gulf Coast region and comparing the results from this model to the previously used generalized linear models.
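
    A GAM replaces the linear predictor of a GLM with a sum of smooth functions of the covariates. A sketch of how such an outage-count model might be fit with the pygam package; the covariates and data here are synthetic stand-ins, not the study's hurricane dataset.

        import numpy as np
        from pygam import PoissonGAM, s

        rng = np.random.default_rng(0)
        X = rng.uniform(size=(500, 3))   # e.g. wind speed, rainfall, customers
        y = rng.poisson(np.exp(1.0 + 0.8 * X[:, 0] + np.sin(3 * X[:, 1])))

        # Each s(i) term fits a smooth spline of covariate i; this added
        # flexibility is what lets GAMs outperform generalized linear models.
        gam = PoissonGAM(s(0) + s(1) + s(2)).fit(X, y)
        predicted_outages = gam.predict(X)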

  7. Computational study of the rate constants and free energies of intramolecular radical addition to substituted anilines.

    PubMed

    Gansäuer, Andreas; Seddiqzai, Meriam; Dahmen, Tobias; Sure, Rebecca; Grimme, Stefan

    2013-01-01

    The intramolecular radical addition to aniline derivatives was investigated by DFT calculations. The computational methods were benchmarked by comparing the calculated values of the rate constant for the 5-exo cyclization of the hexenyl radical with the experimental values. The dispersion-corrected PW6B95-D3 functional provided very good results with deviations for the free activation barrier compared to the experimental values of only about 0.5 kcal mol⁻¹ and was therefore employed in further calculations. Corrections for intramolecular London dispersion and solvation effects in the quantum chemical treatment are essential to obtain consistent and accurate theoretical data. For the investigated radical addition reaction it turned out that the polarity of the molecules is important and that a combination of electrophilic radicals with preferably nucleophilic arenes results in the highest rate constants. This is opposite to the Minisci reaction, where the radical acts as nucleophile and the arene as electrophile. The substitution at the N-atom of the aniline is crucial. Methyl substitution leads to slower addition than phenyl substitution. Carbamates as substituents are suitable only when the radical center is not too electrophilic. No correlations between free reaction barriers and energies (ΔG‡ and ΔG_R) are found. Addition reactions leading to indanes or dihydrobenzofurans are too slow to be useful synthetically.
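
    The bridge between the computed free-energy barriers and the rate constants quoted here is the standard Eyring relation of transition-state theory (a textbook result, not specific to this paper):

        \[
        k(T) = \frac{k_{\mathrm{B}} T}{h}
               \exp\!\left(-\frac{\Delta G^{\ddagger}}{RT}\right) ,
        \]

    which is why an error of only ~0.5 kcal mol⁻¹ in ΔG‡ corresponds to about a factor of 2 in k at room temperature.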

  8. Computational study of the rate constants and free energies of intramolecular radical addition to substituted anilines.

    PubMed

    Gansäuer, Andreas; Seddiqzai, Meriam; Dahmen, Tobias; Sure, Rebecca; Grimme, Stefan

    2013-01-01

    The intramolecular radical addition to aniline derivatives was investigated by DFT calculations. The computational methods were benchmarked by comparing the calculated values of the rate constant for the 5-exo cyclization of the hexenyl radical with the experimental values. The dispersion-corrected PW6B95-D3 functional provided very good results with deviations for the free activation barrier compared to the experimental values of only about 0.5 kcal mol⁻¹ and was therefore employed in further calculations. Corrections for intramolecular London dispersion and solvation effects in the quantum chemical treatment are essential to obtain consistent and accurate theoretical data. For the investigated radical addition reaction it turned out that the polarity of the molecules is important and that a combination of electrophilic radicals with preferably nucleophilic arenes results in the highest rate constants. This is opposite to the Minisci reaction, where the radical acts as nucleophile and the arene as electrophile. The substitution at the N-atom of the aniline is crucial. Methyl substitution leads to slower addition than phenyl substitution. Carbamates as substituents are suitable only when the radical center is not too electrophilic. No correlations between free reaction barriers and energies (ΔG‡ and ΔG_R) are found. Addition reactions leading to indanes or dihydrobenzofurans are too slow to be useful synthetically. PMID:24062821

  9. Computational study of the rate constants and free energies of intramolecular radical addition to substituted anilines

    PubMed Central

    Seddiqzai, Meriam; Dahmen, Tobias; Sure, Rebecca

    2013-01-01

    Summary The intramolecular radical addition to aniline derivatives was investigated by DFT calculations. The computational methods were benchmarked by comparing the calculated values of the rate constant for the 5-exo cyclization of the hexenyl radical with the experimental values. The dispersion-corrected PW6B95-D3 functional provided very good results with deviations for the free activation barrier compared to the experimental values of only about 0.5 kcal mol⁻¹ and was therefore employed in further calculations. Corrections for intramolecular London dispersion and solvation effects in the quantum chemical treatment are essential to obtain consistent and accurate theoretical data. For the investigated radical addition reaction it turned out that the polarity of the molecules is important and that a combination of electrophilic radicals with preferably nucleophilic arenes results in the highest rate constants. This is opposite to the Minisci reaction, where the radical acts as nucleophile and the arene as electrophile. The substitution at the N-atom of the aniline is crucial. Methyl substitution leads to slower addition than phenyl substitution. Carbamates as substituents are suitable only when the radical center is not too electrophilic. No correlations between free reaction barriers and energies (ΔG‡ and ΔG_R) are found. Addition reactions leading to indanes or dihydrobenzofurans are too slow to be useful synthetically. PMID:24062821

  10. Budget-based power consumption for application execution on a plurality of compute nodes

    DOEpatents

    Archer, Charles J; Blocksome, Michael A; Peters, Amanda E; Ratterman, Joseph D; Smith, Brian E

    2013-02-05

    Methods, apparatus, and products are disclosed for budget-based power consumption for application execution on a plurality of compute nodes that include: assigning an execution priority to each of one or more applications; executing, on the plurality of compute nodes, the applications according to the execution priorities assigned to the applications at an initial power level provided to the compute nodes until a predetermined power consumption threshold is reached; and applying, upon reaching the predetermined power consumption threshold, one or more power conservation actions to reduce power consumption of the plurality of compute nodes during execution of the applications.
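
    In outline, the claimed control flow is: run applications in priority order at the initial power level, and trigger conservation actions once measured consumption crosses the budget threshold. A schematic sketch follows; the hooks are placeholders for platform-specific mechanisms.

        def run_with_budget(apps, read_node_power_w, conserve, budget_w):
            # apps: objects with .priority and .start(); read_node_power_w and
            # conserve stand in for hardware measurement and capping hooks.
            for app in sorted(apps, key=lambda a: a.priority, reverse=True):
                app.start()
                if read_node_power_w() > budget_w:
                    conserve()   # e.g. lower CPU frequency on the compute nodes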

  11. Budget-based power consumption for application execution on a plurality of compute nodes

    DOEpatents

    Archer, Charles J; Inglett, Todd A; Ratterman, Joseph D

    2012-10-23

    Methods, apparatus, and products are disclosed for budget-based power consumption for application execution on a plurality of compute nodes that include: assigning an execution priority to each of one or more applications; executing, on the plurality of compute nodes, the applications according to the execution priorities assigned to the applications at an initial power level provided to the compute nodes until a predetermined power consumption threshold is reached; and applying, upon reaching the predetermined power consumption threshold, one or more power conservation actions to reduce power consumption of the plurality of compute nodes during execution of the applications.

  12. Laser powder bed fusion additive manufacturing of metals; physics, computational, and materials challenges

    NASA Astrophysics Data System (ADS)

    King, W. E.; Anderson, A. T.; Ferencz, R. M.; Hodge, N. E.; Kamath, C.; Khairallah, S. A.; Rubenchik, A. M.

    2015-12-01

    The production of metal parts via laser powder bed fusion additive manufacturing is growing exponentially. However, the transition of this technology from production of prototypes to production of critical parts is hindered by a lack of confidence in the quality of the part. Confidence can be established via a fundamental understanding of the physics of the process. It is generally accepted that this understanding will be increasingly achieved through modeling and simulation. However, there are significant physics, computational, and materials challenges stemming from the broad range of length and time scales and temperature ranges associated with the process. In this paper, we review the current state of the art and describe the challenges that need to be met to achieve the desired fundamental understanding of the physics of the process.

  13. Laser powder bed fusion additive manufacturing of metals; physics, computational, and materials challenges

    SciTech Connect

    King, W. E.; Anderson, A. T.; Ferencz, R. M.; Hodge, N. E.; Kamath, C.; Khairallah, S. A.; Rubencik, A. M.

    2015-12-29

    The production of metal parts via laser powder bed fusion additive manufacturing is growing exponentially. However, the transition of this technology from production of prototypes to production of critical parts is hindered by a lack of confidence in the quality of the part. Confidence can be established via a fundamental understanding of the physics of the process. It is generally accepted that this understanding will be increasingly achieved through modeling and simulation. However, there are significant physics, computational, and materials challenges stemming from the broad range of length and time scales and temperature ranges associated with the process. In this study, we review the current state of the art and describe the challenges that need to be met to achieve the desired fundamental understanding of the physics of the process.

  14. Laser powder bed fusion additive manufacturing of metals; physics, computational, and materials challenges

    DOE PAGES

    King, W. E.; Anderson, A. T.; Ferencz, R. M.; Hodge, N. E.; Kamath, C.; Khairallah, S. A.; Rubencik, A. M.

    2015-12-29

    The production of metal parts via laser powder bed fusion additive manufacturing is growing exponentially. However, the transition of this technology from production of prototypes to production of critical parts is hindered by a lack of confidence in the quality of the part. Confidence can be established via a fundamental understanding of the physics of the process. It is generally accepted that this understanding will be increasingly achieved through modeling and simulation. However, there are significant physics, computational, and materials challenges stemming from the broad range of length and time scales and temperature ranges associated with the process. In this study, we review the current state of the art and describe the challenges that need to be met to achieve the desired fundamental understanding of the physics of the process.

  15. Additive Manufacturing and High-Performance Computing: a Disruptive Latent Technology

    NASA Astrophysics Data System (ADS)

    Goodwin, Bruce

    2015-03-01

    This presentation will discuss the relationship between recent advances in Additive Manufacturing (AM) technology, High-Performance Computing (HPC) simulation and design capabilities, and related advances in Uncertainty Quantification (UQ), and then examines their impacts upon national and international security. The presentation surveys how AM accelerates the fabrication process, while HPC combined with UQ provides a fast track for the engineering design cycle. The combination of AM and HPC/UQ almost eliminates the iterative engineering design and prototype cycle, thereby dramatically reducing cost of production and time-to-market. These methods thereby present significant benefits for US national interests, both civilian and military, in an age of austerity. Finally, considering cyber security issues and the advent of the "cloud," these disruptive, currently latent technologies may well enable proliferation and so challenge both nuclear and non-nuclear aspects of international security.

  16. Laser powder bed fusion additive manufacturing of metals; physics, computational, and materials challenges

    SciTech Connect

    King, W. E.; Anderson, A. T.; Ferencz, R. M.; Hodge, N. E.; Khairallah, S. A.; Kamath, C.; Rubenchik, A. M.

    2015-12-15

    The production of metal parts via laser powder bed fusion additive manufacturing is growing exponentially. However, the transition of this technology from production of prototypes to production of critical parts is hindered by a lack of confidence in the quality of the part. Confidence can be established via a fundamental understanding of the physics of the process. It is generally accepted that this understanding will be increasingly achieved through modeling and simulation. However, there are significant physics, computational, and materials challenges stemming from the broad range of length and time scales and temperature ranges associated with the process. In this paper, we review the current state of the art and describe the challenges that need to be met to achieve the desired fundamental understanding of the physics of the process.

  17. Brain-computer interfaces: a powerful tool for scientific inquiry

    PubMed Central

    Wander, Jeremiah D; Rao, Rajesh P N

    2014-01-01

    Brain-computer interfaces (BCIs) are devices that record from the nervous system, provide input directly to the nervous system, or do both. Sensory BCIs such as cochlear implants have already had notable clinical success and motor BCIs have shown great promise for helping patients with severe motor deficits. Clinical and engineering outcomes aside, BCIs can also be tremendously powerful tools for scientific inquiry into the workings of the nervous system. They allow researchers to inject and record information at various stages of the system, permitting investigation of the brain in vivo and facilitating the reverse engineering of brain function. Most notably, BCIs are emerging as a novel experimental tool for investigating the tremendous adaptive capacity of the nervous system. PMID:24709603

  18. Brain-computer interfaces: a powerful tool for scientific inquiry.

    PubMed

    Wander, Jeremiah D; Rao, Rajesh P N

    2014-04-01

    Brain-computer interfaces (BCIs) are devices that record from the nervous system, provide input directly to the nervous system, or do both. Sensory BCIs such as cochlear implants have already had notable clinical success and motor BCIs have shown great promise for helping patients with severe motor deficits. Clinical and engineering outcomes aside, BCIs can also be tremendously powerful tools for scientific inquiry into the workings of the nervous system. They allow researchers to inject and record information at various stages of the system, permitting investigation of the brain in vivo and facilitating the reverse engineering of brain function. Most notably, BCIs are emerging as a novel experimental tool for investigating the tremendous adaptive capacity of the nervous system.

  19. Intrinsic universality and the computational power of self-assembly.

    PubMed

    Woods, Damien

    2015-07-28

    Molecular self-assembly, the formation of large structures by small pieces of matter sticking together according to simple local interactions, is a ubiquitous phenomenon. A challenging engineering goal is to design a few molecules so that large numbers of them can self-assemble into desired complicated target objects. Indeed, we would like to understand the ultimate capabilities and limitations of this bottom-up fabrication process. We look to theoretical models of algorithmic self-assembly, where small square tiles stick together according to simple local rules in order to carry out a crystal growth process. In this survey, we focus on the use of simulation between such models to classify and separate their computational and expressive powers. Roughly speaking, one model simulates another if they grow the same structures, via the same dynamical growth processes. Our journey begins with the result that there is a single intrinsically universal tile set that, with appropriate initialization and spatial scaling, simulates any instance of Winfree's abstract Tile Assembly Model. This universal tile set exhibits something stronger than Turing universality: it captures the geometry and dynamics of any simulated system in a very direct way. From there we find that there is no such tile set in the more restrictive non-cooperative model, proving it weaker than the full Tile Assembly Model. In the two-handed model, where large structures can bind together in one step, we encounter an infinite set of infinite hierarchies of strictly increasing simulation power. Towards the end of our trip, we find one tile to rule them all: a single rotatable, flippable polygonal tile that simulates any tile assembly system. We find another tile that aperiodically tiles the plane (but with small gaps). These and other recent results show that simulation is giving rise to a kind of computational complexity theory for self-assembly. It seems this could be the beginning of a much longer journey.
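
    The local rule at the heart of the abstract Tile Assembly Model is easy to state in code: a tile may attach at a site if the glue strengths it shares with already-placed neighbours sum to at least the temperature. A minimal sketch at temperature 2, the cooperative regime the survey shows to be computationally powerful:

        N, E, S, W = 0, 1, 2, 3                    # side indices of a square tile
        OPP = {N: S, E: W, S: N, W: E}
        STEP = {N: (0, 1), E: (1, 0), S: (0, -1), W: (-1, 0)}

        def can_attach(tile, pos, assembly, temperature=2):
            # tile: 4-tuple of (glue_label, glue_strength) for sides N, E, S, W;
            # assembly: dict mapping occupied (x, y) positions to placed tiles.
            x, y = pos
            total = 0
            for side, (dx, dy) in STEP.items():
                nbr = assembly.get((x + dx, y + dy))
                if nbr is not None:
                    label, s1 = tile[side]
                    nlabel, s2 = nbr[OPP[side]]
                    if label is not None and label == nlabel:
                        total += min(s1, s2)       # matching glues bind
            return total >= temperature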

  20. A COMPUTATIONAL WORKBENCH ENVIRONMENT FOR VIRTUAL POWER PLANT SIMULATION

    SciTech Connect

    Mike Bockelie; Dave Swensen; Martin Denison; Connie Senior; Zumao Chen; Temi Linjewile; Adel Sarofim; Bene Risio

    2003-04-25

    This is the tenth Quarterly Technical Report for DOE Cooperative Agreement No: DE-FC26-00NT41047. The goal of the project is to develop and demonstrate a computational workbench for simulating the performance of Vision 21 Power Plant Systems. Within the last quarter, good progress has been made on all aspects of the project. Calculations for a full Vision 21 plant configuration have been performed for two gasifier types. An improved process model for simulating entrained flow gasifiers has been implemented into the workbench. Model development has focused on: a pre-processor module to compute global gasification parameters from standard fuel properties and intrinsic rate information; a membrane-based water-gas shift; and reactors to oxidize fuel cell exhaust gas. The data visualization capabilities of the workbench have been extended by implementing the VTK visualization software, which supports advanced visualization methods, including inexpensive Virtual Reality techniques. The ease-of-use, functionality and plug-and-play features of the workbench were highlighted through demonstrations of the workbench at a DOE sponsored coal utilization conference. A white paper has been completed that contains recommendations on the use of component architectures, model interface protocols and software frameworks for developing a Vision 21 plant simulator.

  1. Systems analysis of the space shuttle. [communication systems, computer systems, and power distribution

    NASA Technical Reports Server (NTRS)

    Schilling, D. L.; Oh, S. J.; Thau, F.

    1975-01-01

    Developments in communications systems, computer systems, and power distribution systems for the space shuttle are described. The use of high speed delta modulation for bit rate compression in the transmission of television signals is discussed. Simultaneous Multiprocessor Organization, an approach to computer organization, is presented. Methods of computer simulation and automatic malfunction detection for the shuttle power distribution system are also described.
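
    Delta modulation compresses the bit rate by transmitting one bit per sample: the sign of the difference between the input and a running estimate. A minimal encoder/decoder pair illustrating the idea; the step size is a tuning parameter.

        import numpy as np

        def delta_modulate(x, step=0.05):
            est, bits = 0.0, []
            for sample in x:
                bit = 1 if sample >= est else 0    # transmit only this bit
                bits.append(bit)
                est += step if bit else -step      # tracker follows the signal
            return np.array(bits)

        def delta_demodulate(bits, step=0.05):
            # The receiver integrates the bit stream to rebuild the waveform.
            return np.cumsum(np.where(bits == 1, step, -step))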

  2. 47 CFR 15.102 - CPU boards and power supplies used in personal computers.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... computers. 15.102 Section 15.102 Telecommunication FEDERAL COMMUNICATIONS COMMISSION GENERAL RADIO FREQUENCY DEVICES Unintentional Radiators § 15.102 CPU boards and power supplies used in personal computers. (a... modifications that must be made to a personal computer, peripheral device, CPU board or power supply...

  3. 47 CFR 15.32 - Test procedures for CPU boards and computer power supplies.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... 47 Telecommunication 1 2012-10-01 2012-10-01 false Test procedures for CPU boards and computer... FREQUENCY DEVICES General § 15.32 Test procedures for CPU boards and computer power supplies. Power supplies and CPU boards used with personal computers and for which separate authorizations are required to...

  4. 47 CFR 15.32 - Test procedures for CPU boards and computer power supplies.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... 47 Telecommunication 1 2014-10-01 2014-10-01 false Test procedures for CPU boards and computer... FREQUENCY DEVICES General § 15.32 Test procedures for CPU boards and computer power supplies. Power supplies and CPU boards used with personal computers and for which separate authorizations are required to...

  5. 47 CFR 15.32 - Test procedures for CPU boards and computer power supplies.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... 47 Telecommunication 1 2011-10-01 2011-10-01 false Test procedures for CPU boards and computer... FREQUENCY DEVICES General § 15.32 Test procedures for CPU boards and computer power supplies. Power supplies and CPU boards used with personal computers and for which separate authorizations are required to...

  6. 47 CFR 15.102 - CPU boards and power supplies used in personal computers.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... computers. 15.102 Section 15.102 Telecommunication FEDERAL COMMUNICATIONS COMMISSION GENERAL RADIO FREQUENCY DEVICES Unintentional Radiators § 15.102 CPU boards and power supplies used in personal computers. (a... modifications that must be made to a personal computer, peripheral device, CPU board or power supply...

  7. 47 CFR 15.102 - CPU boards and power supplies used in personal computers.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... computers. 15.102 Section 15.102 Telecommunication FEDERAL COMMUNICATIONS COMMISSION GENERAL RADIO FREQUENCY DEVICES Unintentional Radiators § 15.102 CPU boards and power supplies used in personal computers. (a... modifications that must be made to a personal computer, peripheral device, CPU board or power supply...

  8. 47 CFR 15.102 - CPU boards and power supplies used in personal computers.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... computers. 15.102 Section 15.102 Telecommunication FEDERAL COMMUNICATIONS COMMISSION GENERAL RADIO FREQUENCY DEVICES Unintentional Radiators § 15.102 CPU boards and power supplies used in personal computers. (a... modifications that must be made to a personal computer, peripheral device, CPU board or power supply...

  9. 47 CFR 15.102 - CPU boards and power supplies used in personal computers.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... computers. 15.102 Section 15.102 Telecommunication FEDERAL COMMUNICATIONS COMMISSION GENERAL RADIO FREQUENCY DEVICES Unintentional Radiators § 15.102 CPU boards and power supplies used in personal computers. (a... modifications that must be made to a personal computer, peripheral device, CPU board or power supply...

  10. 47 CFR 15.32 - Test procedures for CPU boards and computer power supplies.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... 47 Telecommunication 1 2010-10-01 2010-10-01 false Test procedures for CPU boards and computer... FREQUENCY DEVICES General § 15.32 Test procedures for CPU boards and computer power supplies. Power supplies and CPU boards used with personal computers and for which separate authorizations are required to...

  11. 47 CFR 15.32 - Test procedures for CPU boards and computer power supplies.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... 47 Telecommunication 1 2013-10-01 2013-10-01 false Test procedures for CPU boards and computer... FREQUENCY DEVICES General § 15.32 Test procedures for CPU boards and computer power supplies. Power supplies and CPU boards used with personal computers and for which separate authorizations are required to...

  12. Reducing power consumption while synchronizing a plurality of compute nodes during execution of a parallel application

    DOEpatents

    Archer, Charles J.; Blocksome, Michael A.; Peters, Amanda A.; Ratterman, Joseph D.; Smith, Brian E.

    2012-01-10

    Methods, apparatus, and products are disclosed for reducing power consumption while synchronizing a plurality of compute nodes during execution of a parallel application that include: beginning, by each compute node, performance of a blocking operation specified by the parallel application, each compute node beginning the blocking operation asynchronously with respect to the other compute nodes; reducing, for each compute node, power to one or more hardware components of that compute node in response to that compute node beginning the performance of the blocking operation; and restoring, for each compute node, the power to the hardware components having power reduced in response to all of the compute nodes beginning the performance of the blocking operation.
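
    The reduce-then-restore sequence maps naturally onto a barrier: each node drops power to idle components as soon as it enters the blocking operation, and power returns once every node has arrived. A thread-based sketch of that ordering; the power hooks are placeholders.

        import threading

        NODES = 8
        barrier = threading.Barrier(NODES)   # stands in for the blocking operation

        def node_main(node_id, reduce_power, restore_power):
            reduce_power(node_id)   # this node is now only waiting
            barrier.wait()          # returns once all nodes have entered
            restore_power(node_id)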

  13. Reducing power consumption while synchronizing a plurality of compute nodes during execution of a parallel application

    DOEpatents

    Archer, Charles J.; Blocksome, Michael A.; Peters, Amanda E.; Ratterman, Joseph D.; Smith, Brian E.

    2012-04-17

    Methods, apparatus, and products are disclosed for reducing power consumption while synchronizing a plurality of compute nodes during execution of a parallel application that include: beginning, by each compute node, performance of a blocking operation specified by the parallel application, each compute node beginning the blocking operation asynchronously with respect to the other compute nodes; reducing, for each compute node, power to one or more hardware components of that compute node in response to that compute node beginning the performance of the blocking operation; and restoring, for each compute node, the power to the hardware components having power reduced in response to all of the compute nodes beginning the performance of the blocking operation.

  14. Dynamic effect of sodium-water reaction in fast flux test facility power addition sodium pipes

    SciTech Connect

    Huang, S.N.; Anderson, M.J.

    1990-03-01

    The Fast Flux Test Facility (FFTF) is a demonstration and test facility for the sodium-cooled fast breeder reactor. A "power addition" to the facility is being considered to convert some of the dumped, unused heat into electricity generation. Components and piping systems to be added are sodium-water steam generators, sodium loop extensions from the existing dump heat exchangers to the sodium-water steam generators, and conventional water/steam loops. The sodium loops can be subjected to the dynamic loadings of pressure pulses that are caused by postulated sodium leaks and the subsequent sodium-water reaction in the steam generator. The existing FFTF secondary pipes and the new power addition sodium loops were evaluated for exposure to the dynamic effect of the sodium-water reaction. Elastic and simplified inelastic dynamic analyses were used in this feasibility study. The results indicate that both the maximum strain and the strain range are within the allowable limits. Several cycles of the sodium-water reaction can be sustained by sodium pipes that are supported by ordinary pipe supports and seismic restraints. Expensive axial pipe restraints to withstand the sodium-water reaction loads are not needed, because the pressure-pulse-induced alternating bending stresses act as secondary stresses and the pressure pulse dynamic effect is a deformation-controlled quantity and is self-limiting. 14 refs., 7 figs., 3 tabs.

  15. A COMPUTATIONAL WORKBENCH ENVIRONMENT FOR VIRTUAL POWER PLANT SIMULATION

    SciTech Connect

    Mike Bockelie; Dave Swensen; Martin Denison; Adel Sarofim; Connie Senior

    2004-12-22

    , immersive environment. The Virtual Engineering Framework (VEF), in effect a prototype framework, was developed through close collaboration with NETL supported research teams from Iowa State University Virtual Reality Applications Center (ISU-VRAC) and Carnegie Mellon University (CMU). The VEF is open source, compatible across systems ranging from inexpensive desktop PCs to large-scale, immersive facilities and provides support for heterogeneous distributed computing of plant simulations. The ability to compute plant economics through an interface that coupled the CMU IECM tool to the VEF was demonstrated, and the ability to couple the VEF to Aspen Plus, a commercial flowsheet modeling tool, was demonstrated. Models were interfaced to the framework using VES-Open. Tests were performed for interfacing CAPE-Open-compliant models to the framework. Where available, the developed models and plant simulations have been benchmarked against data from the open literature. The VEF has been installed at NETL. The VEF provides simulation capabilities not available in commercial simulation tools. It provides DOE engineers, scientists, and decision makers with a flexible and extensible simulation system that can be used to reduce the time, technical risk, and cost to develop the next generation of advanced, coal-fired power systems that will have low emissions and high efficiency. Furthermore, the VEF provides a common simulation system that NETL can use to help manage Advanced Power Systems Research projects, including both combustion- and gasification-based technologies.

  16. Computational design of an experimental laser-powered thruster

    NASA Technical Reports Server (NTRS)

    Jeng, San-Mou; Litchford, Ronald; Keefer, Dennis

    1988-01-01

    An extensive numerical experiment, using the developed computer code, was conducted to design an optimized laser-sustained hydrogen plasma thruster. The plasma was sustained using a 30 kW CO2 laser beam operated at 10.6 micrometers focused inside the thruster. The adopted physical model considers the two-dimensional compressible Navier-Stokes equations coupled with the laser power absorption process, geometric ray tracing for the laser beam, and the local thermodynamic equilibrium (LTE) assumption for the plasma thermophysical and optical properties. A pressure-based Navier-Stokes solver using body-fitted coordinates was used to calculate the laser-supported rocket flow, which consists of both recirculating and transonic flow regions. The computer code was used to study the behavior of laser-sustained plasmas within a pipe over a wide range of forced convection and optical arrangements before it was applied to the thruster design, and these theoretical calculations agree well with existing experimental results. Several thrusters with different throat sizes operated at 150 and 300 kPa chamber pressure were evaluated in the numerical experiment. It is found that the thruster performance (vacuum specific impulse) is highly dependent on the operating conditions, and that an adequately designed laser-supported thruster can have a specific impulse of around 1500 seconds. The heat loading on the walls of the calculated thrusters was also estimated; it is comparable to the heat loading on a conventional chemical rocket. It was also found that the specific impulse of the calculated thrusters can be reduced by 200 seconds due to finite chemical reaction rates.

  17. Using additive manufacturing in accuracy evaluation of reconstructions from computed tomography.

    PubMed

    Smith, Erin J; Anstey, Joseph A; Venne, Gabriel; Ellis, Randy E

    2013-05-01

    Bone models derived from patient imaging and fabricated using additive manufacturing technology have many potential uses including surgical planning, training, and research. This study evaluated the accuracy of bone surface reconstruction of two diarthrodial joints, the hip and shoulder, from computed tomography. Image segmentation of the tomographic series was used to develop a three-dimensional virtual model, which was fabricated using fused deposition modelling. Laser scanning was used to compare cadaver bones, printed models, and intermediate segmentations. The overall bone reconstruction process had a reproducibility of 0.3 ± 0.4 mm. Production of the model had an accuracy of 0.1 ± 0.1 mm, while the segmentation had an accuracy of 0.3 ± 0.4 mm, indicating that segmentation accuracy was the key factor in reconstruction. Generally, the shape of the articular surfaces was reproduced accurately, with poorer accuracy near the periphery of the articular surfaces, particularly in regions with periosteum covering and where osteophytes were apparent.
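
    The comparison step can be sketched as follows (a minimal illustration, not the authors' pipeline; the synthetic point clouds and noise level are invented stand-ins for the laser-scan and model surfaces):

      import numpy as np
      from scipy.spatial import cKDTree

      def surface_error(reference_pts, model_pts):
          """Mean and SD of distances from each model point to the reference surface."""
          tree = cKDTree(reference_pts)      # spatial index over the reference scan
          d, _ = tree.query(model_pts)       # nearest-neighbour distance per point
          return d.mean(), d.std()

      # Synthetic stand-ins: a 'scan' and a 'model' offset by ~0.3 mm noise (metres).
      rng = np.random.default_rng(0)
      ref = rng.normal(size=(10_000, 3))
      model = ref + rng.normal(scale=0.3e-3, size=ref.shape)
      mean_err, sd_err = surface_error(ref, model)
      print(f"accuracy: {mean_err * 1e3:.2f} +/- {sd_err * 1e3:.2f} mm")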

  18. Computing an operating parameter of a unified power flow controller

    DOEpatents

    Wilson, David G; Robinett, III, Rush D

    2015-01-06

    A Unified Power Flow Controller described herein comprises a sensor that outputs at least one sensed condition, a processor that receives the at least one sensed condition, a memory that comprises control logic that is executable by the processor; and power electronics that comprise power storage, wherein the processor causes the power electronics to selectively cause the power storage to act as one of a power generator or a load based at least in part upon the at least one sensed condition output by the sensor and the control logic, and wherein at least one operating parameter of the power electronics is designed to facilitate maximal transmittal of electrical power generated at a variable power generation system to a grid system while meeting power constraints set forth by the electrical power grid.
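
    The claimed control loop can be caricatured in a few lines (entirely hypothetical names and thresholds; the patent does not specify which condition is sensed):

      def storage_mode(sensed_frequency_hz, nominal_hz=60.0, deadband_hz=0.02):
          """Decide whether the power storage acts as a generator or a load."""
          error = sensed_frequency_hz - nominal_hz
          if error < -deadband_hz:   # under-frequency: storage injects power
              return "generator"
          if error > deadband_hz:    # over-frequency: storage absorbs power
              return "load"
          return "idle"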

  19. Increasing the Power of a University Computing System with Attached Array Processors.

    ERIC Educational Resources Information Center

    Grimison, Alec

    1982-01-01

    Array processors are emerging as one cost-effective way of increasing the computing power of existing university computer systems. Two array processor installations at Cornell University and implications for other colleges and universities are discussed. (Author/JN)

  20. A COMPUTATIONAL WORKBENCH ENVIRONMENT FOR VIRTUAL POWER PLANT SIMULATION

    SciTech Connect

    Mike Bockelie; Dave Swensen; Martin Denison

    2002-01-31

    This is the fifth Quarterly Technical Report for DOE Cooperative Agreement No: DE-FC26-00NT41047. The goal of the project is to develop and demonstrate a computational workbench for simulating the performance of Vision 21 Power Plant Systems. Within the last quarter, our efforts have become focused on developing an improved workbench for simulating a gasifier based Vision 21 energyplex. To provide for interoperability of models developed under Vision 21 and other DOE programs, discussions have been held with DOE and other organizations developing plant simulator tools to review the possibility of establishing a common software interface or protocol to use when developing component models. A component model that employs the CCA protocol has successfully been interfaced to our CCA enabled workbench. To investigate the software protocol issue, DOE has selected a gasifier based Vision 21 energyplex configuration for use in testing and evaluating the impacts of different software interface methods. A Memo of Understanding with the Cooperative Research Centre for Coal in Sustainable Development (CCSD) in Australia has been completed that will enable collaborative research efforts on gasification issues. Preliminary results have been obtained for a CFD model of a pilot scale, entrained flow gasifier. A paper was presented at the Vision 21 Program Review Meeting at NETL (Morgantown) that summarized our accomplishments for Year One and plans for Year Two and Year Three.

  1. A COMPUTATIONAL WORKBENCH ENVIRONMENT FOR VIRTUAL POWER PLANT SIMULATION

    SciTech Connect

    Mike Bockelie; Dave Swensen; Martin Denison

    2002-04-30

    This is the sixth Quarterly Technical Report for DOE Cooperative Agreement No: DE-FC26-00NT41047. The goal of the project is to develop and demonstrate a computational workbench for simulating the performance of Vision 21 Power Plant Systems. Within the last quarter, good progress has been made on the development of our IGCC workbench. Preliminary CFD simulations for single stage and two stage "generic" gasifiers using firing conditions based on the Vision 21 reference configuration have been performed. Work is continuing on implementing an advanced slagging model into the CFD based gasifier model. An investigation into published gasification kinetics has highlighted a wide variance in predicted performance due to the choice of kinetic parameters. A plan has been outlined for developing the reactor models required to simulate the heat transfer and gas clean up equipment downstream of the gasifier. Three models that utilize the CCA software protocol have been integrated into a version of the IGCC workbench. Tests of a CCA implementation of our CFD code into the workbench demonstrated that the CCA CFD module can execute on a geographically remote PC (linked via the Internet) in a manner that is transparent to the user. Software tools to create "walk-through" visualizations of the flow field within a gasifier have been demonstrated.

  2. How to produce personality neuroscience research with high statistical power and low additional cost.

    PubMed

    Mar, Raymond A; Spreng, R Nathan; Deyoung, Colin G

    2013-09-01

    Personality neuroscience involves examining relations between cognitive or behavioral variability and neural variables like brain structure and function. Such studies have uncovered a number of fascinating associations but require large samples, which are expensive to collect. Here, we propose a system that capitalizes on neuroimaging data commonly collected for separate purposes and combines it with new behavioral data to test novel hypotheses. Specifically, we suggest that groups of researchers compile a database of structural (i.e., anatomical) and resting-state functional scans produced for other task-based investigations and pair these data with contact information for the participants who contributed the data. This contact information can then be used to collect additional cognitive, behavioral, or individual-difference data that are then reassociated with the neuroimaging data for analysis. This would allow for novel hypotheses regarding brain-behavior relations to be tested on the basis of large sample sizes (with adequate statistical power) for low additional cost. This idea can be implemented at small scales at single institutions, among a group of collaborating researchers, or perhaps even within a single lab. It can also be implemented at a large scale across institutions, although doing so would entail a number of additional complications.
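
    To make the power argument concrete (my own illustration using the standard Fisher-z approximation, not a method from the paper), the sample size needed to detect a brain-behavior correlation grows steeply as the effect shrinks:

      from math import atanh, ceil
      from scipy.stats import norm

      def n_for_correlation(r, alpha=0.05, power=0.80):
          """Participants needed to detect correlation r (two-sided test)."""
          z_a = norm.ppf(1 - alpha / 2)
          z_b = norm.ppf(power)
          return ceil(((z_a + z_b) / atanh(r)) ** 2 + 3)

      print(n_for_correlation(0.2))   # ~194 participants for a small effect

    Samples of that size are rarely collected for a single neuroimaging study, which is the gap the proposed shared database is meant to close.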

  3. Complex additive systems for Mn-Zn ferrites with low power loss

    SciTech Connect

    Töpfer, J. Angermann, A.

    2015-05-07

    Mn-Zn ferrites were prepared via an oxalate-based wet-chemical synthesis process. Nanocrystalline ferrite powders with a particle size of 50 nm were sintered at 1150 °C with 500 ppm CaO and 100 ppm SiO₂ as standard additives. A fine-grained, dense microstructure with a grain size of 4–5 μm was obtained. Simultaneous addition of Nb₂O₅, ZrO₂, V₂O₅, and SnO₂ results in low power losses, e.g., 65 mW/cm³ (500 kHz, 50 mT, 80 °C) and 55 mW/cm³ (1 MHz, 25 mT, 80 °C). Loss analysis shows that eddy current and residual losses were minimized through the formation of insulating grain boundary phases, which is confirmed by transmission electron microscopy. Addition of SnO₂ increases the ferrous ion concentration and affects anisotropy, as reflected in permeability measurements μ(T)

  4. Reducing power consumption while performing collective operations on a plurality of compute nodes

    DOEpatents

    Archer, Charles J.; Blocksome, Michael A.; Peters, Amanda E.; Ratterman, Joseph D.; Smith, Brian E.

    2011-10-18

    Methods, apparatus, and products are disclosed for reducing power consumption while performing collective operations on a plurality of compute nodes that include: receiving, by each compute node, instructions to perform a type of collective operation; selecting, by each compute node from a plurality of collective operations for the collective operation type, a particular collective operation in dependence upon power consumption characteristics for each of the plurality of collective operations; and executing, by each compute node, the selected collective operation.
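
    In outline, the selection step amounts to a lookup over per-variant power characteristics (a minimal sketch with invented numbers; the patent does not publish its profiling data):

      # Hypothetical per-node power profile, milliwatts per collective variant.
      POWER_PROFILE_MW = {
          "allreduce": {"recursive_doubling": 410, "ring": 355, "tree": 380},
          "broadcast": {"binomial_tree": 120, "scatter_allgather": 145},
      }

      def select_collective(op_type: str) -> str:
          """Pick the lowest-power implementation of the requested collective."""
          variants = POWER_PROFILE_MW[op_type]
          return min(variants, key=variants.get)

      assert select_collective("allreduce") == "ring"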

  5. Power conversion efficiency enhancement in OPV devices using spin 1/2 molecular additives

    NASA Astrophysics Data System (ADS)

    Basel, Tek; Vardeny, Valy; Yu, Luping

    2014-03-01

    We investigated the power conversion efficiency of bulk heterojunction OPV cells based on the low-bandgap polymer PTB7 blended with C61-PCBM. We also employed photo-induced absorption (PA), electrical, and magneto-PA (MPA) techniques to understand the details of the photocurrent generation process in this blend. We found that spin 1/2 molecular additives, such as Galvinoxyl (Gxl) radicals, dramatically enhance the cell efficiency; we obtained a 20% increase in photocurrent upon Gxl doping at 2% by weight. We explain our finding by the ability of the spin 1/2 radicals to interfere with the major known loss mechanism in the cell, namely recombination of charge-transfer excitons at the D-A interface via triplet excitons in the polymer donors. Supported by National Science Foundation-Material Science & Engineering Center (NSF-MRSEC), University of Utah.

  6. Avoiding Split Attention in Computer-Based Testing: Is Neglecting Additional Information Facilitative?

    ERIC Educational Resources Information Center

    Jarodzka, Halszka; Janssen, Noortje; Kirschner, Paul A.; Erkens, Gijsbert

    2015-01-01

    This study investigated whether design guidelines for computer-based learning can be applied to computer-based testing (CBT). Twenty-two students completed a CBT exam with half of the questions presented in a split-screen format that was analogous to the original paper-and-pencil version and half in an integrated format. Results show that students…

  7. Addition of flexible body option to the TOLA computer program. Part 2: User and programmer documentation

    NASA Technical Reports Server (NTRS)

    Dick, J. W.; Benda, B. J.

    1975-01-01

    User- and programmer-oriented documentation for the flexible body option of the Takeoff and Landing Analysis (TOLA) computer program is provided. The user information provides sufficient knowledge of the development and use of the option to enable the engineering user to successfully operate the modified program and understand the results. The programmer's information describes the option structure and logic, enabling a programmer to make major revisions to this part of the TOLA computer program.

  8. Development and Evaluation of the Diagnostic Power for a Computer-Based Two-Tier Assessment

    ERIC Educational Resources Information Center

    Lin, Jing-Wen

    2016-01-01

    This study adopted a quasi-experimental design with follow-up interview to develop a computer-based two-tier assessment (CBA) regarding the science topic of electric circuits and to evaluate the diagnostic power of the assessment. Three assessment formats (i.e., paper-and-pencil, static computer-based, and dynamic computer-based tests) using…

  9. Companies Reaching for the Clouds for Computing Power

    SciTech Connect

    Madison, Alison L.

    2012-10-07

    By now, we’ve likely all at least heard of cloud computing, and to some extent may grasp what it’s all about. But after delving into a recent article in The New York Times, I came to realize just how big of a deal it is--much bigger than my own limited experience with it had allowed me to see. Cloud computing is the use of hardware or software computing resources that are delivered as a service over a network, typically via the web. The gist of it is, almost anything you can imagine doing with your computer system doesn’t have to physically exist on your system or in your office in order to be accessible to you. You can entrust remote services with your data, software, and computation. It’s easier, and also much less expensive.

  10. Can Computer-Assisted Discovery Learning Foster First Graders' Fluency with the Most Basic Addition Combinations?

    ERIC Educational Resources Information Center

    Baroody, Arthur J.; Eiland, Michael D.; Purpura, David J.; Reid, Erin E.

    2013-01-01

    In a 9-month training experiment, 64 first graders with a risk factor were randomly assigned to computer-assisted structured discovery of the add-1 rule (e.g., the sum of 7 + 1 is the number after "seven" when we count), unstructured discovery learning of this regularity, or an active-control group. Planned contrasts revealed that the add-1…

  11. Computers, Invention, and the Power to Change Student Writing.

    ERIC Educational Resources Information Center

    Strickland, James

    1987-01-01

    Appraises the computer as a prewriting aid. Evaluates both the quality and quantity of ideas produced by various invention techniques and programs, and compares results of similar studies by Hugh Burns and Helen Schwartz. (NKA)

  12. Measurements and computations of electromagnetic fields in electric power substations

    SciTech Connect

    Daily, W.K. ); Dawalibi, F. )

    1994-01-01

    The magnetic fields generated by a typical distribution substation were measured and calculated based on a computer model which takes into account currents in the grounding systems, distribution feeder neutrals, overhead ground wires, and induced currents in equipment structures and ground grid loops. Both measured and computed results indicate that magnetic fields are significantly influenced by ground currents, as well as by induced currents in structures and ground system loops. All currents in the network modeled were computed based on the measured currents impressed at the boundary points (ends of the conductor network). The agreement between the measured and computed values is good. Small differences were observed and are attributed mainly to uncertainties in the geometry of the network model and the phase angles of some of the currents in the neutral conductors, which were not measured in the field. Further measurements, including more accurate geometrical information and phase angles, are planned.

  13. The computational power of time dilation in special relativity

    NASA Astrophysics Data System (ADS)

    Biamonte, Jacob

    2014-03-01

    The Lorentzian length of a timelike curve connecting both endpoints of a classical computation is a function of the path taken through Minkowski spacetime. The associated runtime difference is due to time-dilation: the phenomenon whereby an observer finds that another's physically identical ideal clock has ticked at a different rate than their own clock. Using ideas appearing in the framework of computational complexity theory, time-dilation is quantified as an algorithmic resource by relating relativistic energy to an nth order polynomial time reduction at the completion of an observer's journey. These results enable a comparison between the optimal quadratic Grover speedup from quantum computing and an n=2 speedup using classical computers and relativistic effects. The goal is not to propose a practical model of computation, but to probe the ultimate limits physics places on computation. Parts of this talk are based on [J.Phys.Conf.Ser. 229:012020 (2010), arXiv:0907.1579]. Support is acknowledged from the Foundational Questions Institute (FQXi) and the Compagnia di San Paolo Foundation.
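
    For orientation (textbook special relativity, not a result of the talk): a computation that runs for coordinate time t while its user travels at constant speed v completes, by the traveller's clock, after the proper time

      \tau = t\,\sqrt{1 - v^2/c^2} = \frac{t}{\gamma},
      \qquad \gamma = \frac{1}{\sqrt{1 - v^2/c^2}},

    so the subjective runtime shrinks by the Lorentz factor γ, and the relativistic energy needed to reach a given γ is what the talk casts as the algorithmic resource.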

  14. The Effects of Computer-Assisted Instruction on Student Achievement in Addition and Subtraction at First Grade Level.

    ERIC Educational Resources Information Center

    Spivey, Patsy M.

    This study was conducted to determine whether the traditional classroom approach to instruction involving the addition and subtraction of number facts (digits 0-6) is more or less effective than the traditional classroom approach plus a commercially-prepared computer game. A pretest-posttest control group design was used with two groups of first…

  15. Identification of Students' Intuitive Mental Computational Strategies for 1, 2 and 3 Digits Addition and Subtraction: Pedagogical and Curricular Implications

    ERIC Educational Resources Information Center

    Ghazali, Munirah; Alias, Rohana; Ariffin, Noor Asrul Anuar; Ayub, Ayminsyadora

    2010-01-01

    This paper reports on a study to examine mental computation strategies used by Year 1, Year 2, and Year 3 students to solve addition and subtraction problems. The participants in this study were twenty five 7 to 9 year-old students identified as excellent, good and satisfactory in their mathematics performance from a school in Penang, Malaysia.…

  16. Solid-state Isotopic Power Source for Computer Memory Chips

    NASA Technical Reports Server (NTRS)

    Brown, Paul M.

    1993-01-01

    Recent developments in materials technology now make it possible to fabricate nonthermal thin-film radioisotopic energy converters (REC) with a specific power of 24 W/kg and a 10 year working life at 5 to 10 watts. This creates applications never before possible, such as placing the power supply directly on integrated circuit chips. The efficiency of the REC is about 25 percent, which is two to three times greater than the 6 to 8 percent capabilities of current thermoelectric systems. Radioisotopic energy converters have the potential to meet many future space power requirements for a wide variety of applications with less mass, better efficiency, and less total area than other power conversion options. These benefits result in significant dollar savings over the projected mission lifetime.

  17. Subsonic flutter analysis addition to NASTRAN. [for use with CDC 6000 series digital computers

    NASA Technical Reports Server (NTRS)

    Doggett, R. V., Jr.; Harder, R. L.

    1973-01-01

    A subsonic flutter analysis capability has been developed for NASTRAN, and a developmental version of the program has been installed on the CDC 6000 series digital computers at the Langley Research Center. The flutter analysis is of the modal type, uses doublet lattice unsteady aerodynamic forces, and solves the flutter equations by using the k-method. Surface and one-dimensional spline functions are used to transform from the aerodynamic degrees of freedom to the structural degrees of freedom. Some preliminary applications of the method to a beamlike wing, a platelike wing, and a platelike wing with a folded tip are compared with existing experimental and analytical results.

  18. Restructuring the introductory physics lab with the addition of computer-based laboratories.

    PubMed

    Pierri-Galvao, Monica

    2011-07-01

    Nowadays, data acquisition software and sensors are widely used in introductory physics laboratories. This allows students to spend more time exploring the data collected by the computer, hence focusing more on the physical concepts. Very often, a faculty member is faced with the challenge of updating or introducing a microcomputer-based laboratory (MBL) at his or her institution. This article provides a list of experiments and equipment needed to convert about half of the traditional labs in a 1-year introductory physics lab course into MBLs.

  19. Five Mass Power Transmission Line of a Ship Computer Modelling

    NASA Astrophysics Data System (ADS)

    Kazakoff, Alexander Borisoff; Marinov, Boycho Ivanov

    2016-03-01

    The work presented in this paper is a natural continuation of work reported before on the design of a power transmission line of a ship, but with a different multi-mass model. Some data from the previous investigations, mainly the analytical frequency and modal analysis of a five mass model of a power transmission line of a ship, are used as reference data. In this paper, a profound dynamic analysis of a concrete five mass dynamic model of the power transmission line of a ship is performed using Finite Element Analysis (FEA), based on the model recommended and investigated in the previous research. Thus, the five mass model of a power transmission line of a ship, partially validated by frequency analysis, is subjected to dynamic analysis. The objective of the work presented in this paper is dynamic modelling of a five mass transmission line of a ship, partial validation of the model, and calculation of the von Mises stresses with the help of Finite Element Analysis (FEA), together with comparison of the derived results with the analytically calculated values. The partially validated five mass power transmission line of a ship can be used for the definition of many dynamic parameters, particularly amplitudes of displacement, velocity and acceleration, in the time and frequency domains respectively. The frequency behaviour of the model parameters is investigated in the frequency domain and corresponds to the predicted one.

  20. Computational tool for simulation of power and refrigeration cycles

    NASA Astrophysics Data System (ADS)

    Córdoba Tuta, E.; Reyes Orozco, M.

    2016-07-01

    Small improvements in the thermal efficiency of power cycles bring huge cost savings in the production of electricity; for that reason, a tool for the simulation of power cycles allows modeling the optimal changes for best performance. There is also a big boom in research on the Organic Rankine Cycle (ORC), which aims to generate electricity at low power through cogeneration, and in which the working fluid is usually a refrigerant. A tool for designing the elements of an ORC cycle and selecting the working fluid would be helpful, because cogeneration heat sources vary widely and each case requires a custom design. This work presents the development of multiplatform software for the simulation of power and refrigeration cycles, implemented in C++ with a graphical interface developed in the multiplatform environment Qt; it runs on the Windows and Linux operating systems. The tool allows the design of custom power cycles and the selection of the working fluid (thermodynamic properties are calculated through the CoolProp library), calculates the plant efficiency, identifies the flow fractions in each branch, and finally generates a very instructive report in PDF format via LaTeX.
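
    The property lookups at the heart of such a tool can be sketched with CoolProp's Python bindings (CoolProp is the library named in the abstract, but the cycle, fluid, and pressures below are illustrative choices of mine, not the authors'):

      from CoolProp.CoolProp import PropsSI

      fluid = "Water"
      p_boil, p_cond = 8e6, 10e3            # boiler and condenser pressures, Pa

      # State 1: saturated liquid leaving the condenser.
      h1 = PropsSI("H", "P", p_cond, "Q", 0, fluid)
      v1 = 1 / PropsSI("D", "P", p_cond, "Q", 0, fluid)

      # State 2: after the (idealised, incompressible) pump.
      w_pump = v1 * (p_boil - p_cond)
      h2 = h1 + w_pump

      # State 3: saturated vapour leaving the boiler.
      h3 = PropsSI("H", "P", p_boil, "Q", 1, fluid)
      s3 = PropsSI("S", "P", p_boil, "Q", 1, fluid)

      # State 4: isentropic expansion through the turbine.
      h4 = PropsSI("P", "P", p_cond, "S", s3, fluid) and PropsSI("H", "P", p_cond, "S", s3, fluid)

      eta = ((h3 - h4) - w_pump) / (h3 - h2)
      print(f"Ideal Rankine efficiency: {eta:.1%}")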

  1. Designing high power targets with computational fluid dynamics (CFD)

    SciTech Connect

    Covrig, S. D.

    2013-11-07

    High power liquid hydrogen (LH2) targets, up to 850 W, have been widely used at Jefferson Lab for the 6 GeV physics program. The typical luminosity loss of a 20 cm long LH2 target was 20% for a beam current of 100 μA rastered on a square of side 2 mm on the target. The 35 cm long, 2500 W LH2 target for the Qweak experiment had a luminosity loss of 0.8% at 180 μA beam rastered on a square of side 4 mm at the target. The Qweak target was the highest power liquid hydrogen target in the world and with the lowest noise figure. The Qweak target was the first one designed with CFD at Jefferson Lab. A CFD facility is being established at Jefferson Lab to design, build and test a new generation of low noise high power targets.

  3. Addition of higher order plate and shell elements into NASTRAN computer program

    NASA Technical Reports Server (NTRS)

    Narayanaswami, R.; Goglia, G. L.

    1976-01-01

    Two higher order plate elements, the linear strain triangular membrane element and the quintic bending element, along with a shallow shell element, suitable for inclusion into the NASTRAN (NASA Structural Analysis) program are described. Additions to the NASTRAN Theoretical Manual, Users' Manual, Programmers' Manual and the NASTRAN Demonstration Problem Manual, for inclusion of these elements into the NASTRAN program are also presented.

  4. Harmonic Resonance in Power Transmission Systems due to the Addition of Shunt Capacitors

    NASA Astrophysics Data System (ADS)

    Patil, Hardik U.

    Shunt capacitors are often added in transmission networks at suitable locations to improve the voltage profile. In this thesis, the transmission system in Arizona is considered as a test bed. Many shunt capacitors already exist in the Arizona transmission system and more are planned to be added. Addition of these shunt capacitors may create resonance conditions in response to harmonic voltages and currents. Such resonance, if it occurs, may create problems in the system. The main objective of this thesis is to identify potential problematic effects that could occur after placing new shunt capacitors at selected buses in the Arizona network. Part of the objective is to create a systematic plan for the avoidance of resonance issues. For this study, a method of capacitance scan is proposed. The bus admittance matrix is used as a model of the networked transmission system. The calculations on the admittance matrix were done using Matlab. The test bed is the actual transmission system in Arizona; however, for proprietary reasons, bus names are masked in the thesis copy intended for the public domain. The admittance matrix was obtained from data using the PowerWorld Simulator after equivalencing the 2016 summer peak load (planning case). The full Western Electricity Coordinating Council (WECC) system data were used. The equivalencing procedure retains only the Arizona portion of the WECC. The capacitor scan results for single capacitor placement and multiple capacitor placement cases are presented. Problematic cases are identified in the form of 'forbidden responses'. The harmonic voltage impact of known sources of harmonics, mainly large-scale HVDC sources, is also presented. Specific key results of the study include: (1) The forbidden zones obtained as per the IEEE 519 standard indicate bus 10 to be the most problematic bus. (2) The forbidden zones also indicate that switching values for the switched shunt capacitor (if used) at bus 3 should be
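
    A common back-of-the-envelope companion to such a scan (a standard estimate, not the thesis's admittance-matrix method; the bus strength and bank sizes below are invented) flags capacitor sizes whose parallel resonance lands near a characteristic harmonic:

      import math

      def resonant_harmonic(s_sc_mva: float, q_cap_mvar: float) -> float:
          """Approximate parallel-resonance harmonic order at the capacitor bus."""
          return math.sqrt(s_sc_mva / q_cap_mvar)

      def problematic_sizes(s_sc_mva, sizes_mvar, avoid=(5, 7, 11, 13), tol=0.25):
          """Map each bank size to its resonance order and any nearby harmonics."""
          flags = {}
          for q in sizes_mvar:
              h = resonant_harmonic(s_sc_mva, q)
              flags[q] = (round(h, 2), [a for a in avoid if abs(h - a) < tol])
          return flags

      # At a 2000 MVA bus, a 40 Mvar bank resonates near the 7th harmonic.
      print(problematic_sizes(2000, [20, 40, 80]))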

  5. Computers, Invention, and the Power to Change Student Writing.

    ERIC Educational Resources Information Center

    Strickland, James

    A study examined the quantity and quality of ideas produced in freshman composition students' writing to determine whether computer assisted instruction (CAI) stimulates invention as well as or better than current invention instruction in traditional classrooms. Two CAI programs were used: QUEST, the systematic program that examines an item/event…

  6. Powering Down from the Bottom up: Greener Client Computing

    ERIC Educational Resources Information Center

    O'Donnell, Tom

    2009-01-01

    A decade ago, people wanting to practice "green computing" recycled their printer paper, turned their personal desktop systems off from time to time, and tried their best to donate old equipment to a nonprofit instead of throwing it away. A campus IT department can shave a few watts off just about any IT process--the real trick is planning and…

  7. Computer Security for Commercial Nuclear Power Plants - Literature Review for Korea Hydro Nuclear Power Central Research Institute

    SciTech Connect

    Duran, Felicia Angelica; Waymire, Russell L.

    2013-10-01

    Sandia National Laboratories (SNL) is providing training and consultation activities on security planning and design for the Korea Hydro and Nuclear Power Central Research Institute (KHNP-CRI). As part of this effort, SNL performed a literature review on computer security requirements, guidance, and best practices that are applicable to an advanced nuclear power plant. This report documents the review of reports generated by SNL and other organizations [U.S. Nuclear Regulatory Commission, Nuclear Energy Institute, and International Atomic Energy Agency] related to the protection of information technology resources, primarily digital controls and computer resources and their data networks. Copies of the key documents have also been provided to KHNP-CRI.

  8. Computer control of the high-voltage power supply for the DIII-D Electron Cyclotron Heating system

    SciTech Connect

    Clow, D.D.; Kellman, D.H.

    1991-10-01

    The DIII-D Electron Cyclotron Heating (ECH) high voltage power supply is controlled by a computer. Operational control is input via keyboard and mouse, and the computer/power supply interface is accomplished with a Computer Assisted Monitoring and Control (CAMAC) system. User-friendly tools allow the design and layout of simulated control panels on the computer screen. Panel controls and indicators can be changed, added, or deleted, and simple editing of user-specific processes can quickly modify control and fault logic. Databases can be defined, and control panel functions are easily referred to various data channels. User-specific processes are written and linked using Fortran to manage control and data acquisition through CAMAC. The resulting control system has significant advantages over the hardware it emulates: changes in logic, layout, and function are quickly and easily incorporated; data storage, retrieval, and processing are flexible and simply accomplished; and physical components subject to wear and degradation are minimized. In addition, the system can be expanded to multiplex control of several power supplies, each with its own database, through a single computer and console. 5 refs., 4 figs., 1 tab.

  9. Large Advanced Space Systems (LASS) computer-aided design program additions

    NASA Technical Reports Server (NTRS)

    Farrell, C. E.

    1982-01-01

    The LSS preliminary and conceptual design requires extensive iterative analysis because of the effects of structural, thermal, and control intercoupling. A computer-aided design program that permits integrating and interfacing of the required large space system (LSS) analyses is discussed. The primary objective of this program is the implementation of modeling techniques and analysis algorithms that permit interactive design and tradeoff studies of LSS concepts. Eight software modules were added to the program. The existing rigid body controls module was modified to include solar pressure effects. The new model generator modules and appendage synthesizer module are integrated (interfaced) to permit interactive definition and generation of LSS concepts. The mass properties module permits interactive specification of discrete masses and their locations. The other modules permit interactive analysis of orbital transfer requirements, the antenna primary beam, and attitude control requirements.

  10. Efficient method for computing the maximum-likelihood quantum state from measurements with additive Gaussian noise.

    PubMed

    Smolin, John A; Gambetta, Jay M; Smith, Graeme

    2012-02-17

    We provide an efficient method for computing the maximum-likelihood mixed quantum state (with density matrix ρ) given a set of measurement outcomes in a complete orthonormal operator basis subject to Gaussian noise. Our method works by first changing basis, yielding a candidate density matrix μ which may have nonphysical (negative) eigenvalues, and then finding the nearest physical state under the 2-norm. Our algorithm takes at worst O(d⁴) for the basis change plus O(d³) for finding ρ, where d is the dimension of the quantum state. In the special case where the measurement basis is strings of Pauli operators, the basis change takes only O(d³) as well. The workhorse of the algorithm is a new linear-time method for finding the closest probability distribution (in Euclidean distance) to a set of real numbers summing to one.
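
    The eigenvalue-truncation step they describe can be written compactly (a sketch following the paper's published algorithm; variable names are mine):

      import numpy as np

      def nearest_density_matrix(mu):
          """Closest physical state (2-norm) to a unit-trace Hermitian matrix mu."""
          evals, evecs = np.linalg.eigh(mu)      # ascending eigenvalues
          lam = evals[::-1].copy()               # work in descending order
          d, acc, i = lam.size, 0.0, lam.size
          # Zero out negative eigenvalues, accumulating the removed weight...
          while i > 0 and lam[i - 1] + acc / i < 0:
              acc += lam[i - 1]
              lam[i - 1] = 0.0
              i -= 1
          # ...then spread that weight evenly over the surviving eigenvalues.
          lam[:i] += acc / i
          lam = lam[::-1]                        # back to eigh's ordering
          return (evecs * lam) @ evecs.conj().T

    On a trace-one Hermitian input the loop always leaves at least one eigenvalue positive, so the division is safe.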

  11. Addition of visual noise boosts evoked potential-based brain-computer interface.

    PubMed

    Xie, Jun; Xu, Guanghua; Wang, Jing; Zhang, Sicong; Zhang, Feng; Li, Yeping; Han, Chengcheng; Li, Lili

    2014-05-14

    Although noise has a proven beneficial role in brain function, there have not been any attempts to exploit the stochastic resonance effect in neural engineering applications, especially in research on brain-computer interfaces (BCIs). In our study, a steady-state motion visual evoked potential (SSMVEP)-based BCI with periodic visual stimulation plus moderate spatiotemporal noise achieved better offline and online performance due to enhancement of periodic components in brain responses, which was accompanied by suppression of high harmonics. Offline results exhibited a bell-shaped, resonance-like dependence on noise, and 7-36% online performance improvements were achieved when identical visual noise was adopted for different stimulation frequencies. Using neural encoding modeling, these phenomena can be explained as noise-induced input-output synchronization in human sensory systems, which commonly possess a low-pass property. Our work demonstrates that noise can boost BCIs in addressing human needs.

  12. Computer simulation of the scaled power bipolar SHF transistor structures

    NASA Astrophysics Data System (ADS)

    Nelayev, V. V.; Efremov, V. A.; Snitovsky, Yu. P.

    2007-04-01

    A new advanced technology for the creation of an npn power silicon bipolar SHF transistor structure is proposed. Advantages of the advanced technology in comparison with the standard technology are demonstrated. Simulation of both technology flows was performed, with emphasis on scaling of the discussed device structure.

  13. Theory and computer simulation for the equation of state of additive hard-disk fluid mixtures

    NASA Astrophysics Data System (ADS)

    Barrio, C.; Solana, J. R.

    2001-01-01

    A procedure previously developed by the authors to obtain the equation of state for a mixture of additive hard spheres on the basis of a pure fluid equation of state is applied here to a binary mixture of additive hard disks in two dimensions. The equation of state depends on two parameters which are determined from the second and third virial coefficients for the mixture, which are known exactly. Results are compared with Monte Carlo calculations which are also reported. The agreement between theory and simulation is very good. For the fourth and fifth virial coefficients of the mixture, the equation of state gives results which are also in close agreement with exact numerical values reported in the literature.
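
    For reference (a standard statistical-mechanics result consistent with the paper's setup, not quoted from it), the exactly known second virial coefficient that anchors such a mixture equation of state is

      B_2 = \sum_{i,j} x_i x_j \,\frac{\pi}{2}\,\sigma_{ij}^{2},
      \qquad \sigma_{ij} = \frac{\sigma_i + \sigma_j}{2},

    where x_i are the mole fractions and σ_i the disk diameters; additivity enters through the arithmetic-mean cross diameter σ_ij.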

  14. Computer program for afterheat temperature distribution for mobile nuclear power plant

    NASA Technical Reports Server (NTRS)

    Parker, W. G.; Vanbibber, L. E.

    1972-01-01

    ESATA computer program was developed to analyze thermal safety aspects of post-impacted mobile nuclear power plants. Program is written in FORTRAN 4 and designed for IBM 7094/7044 direct coupled system.

  15. Computational power and generative capacity of genetic systems.

    PubMed

    Igamberdiev, Abir U; Shklovskiy-Kordi, Nikita E

    2016-01-01

    Semiotic characteristics of genetic sequences are based on the general principles of linguistics formulated by Ferdinand de Saussure, such as the arbitrariness of the sign and the linear nature of the signifier. Besides these semiotic features, which are attributable to the basic structure of the genetic code, the principle of generativity of genetic language is important for understanding biological transformations. The problem of generativity in genetic systems arises from the possibility of different interpretations of genetic texts, and corresponds to what Alexander von Humboldt called "the infinite use of finite means". These interpretations appear in individual development as the spatiotemporal sequences of realizations of different textual meanings, as well as in the emergence of hyper-textual statements about the text itself, which underlies the process of biological evolution. These interpretations are accomplished at the level of the readout of genetic texts by the structures defined by Efim Liberman as "the molecular computer of cell", which includes DNA, RNA and the corresponding enzymes operating with molecular addresses. The molecular computer performs physically manifested mathematical operations and possesses both reading and writing capacities. Generativity paradoxically resides in the biological computational system as a possibility to incorporate meta-statements about the system, and thus establishes the internal capacity for its evolution. PMID:26829769

  16. MORT: a powerful foundational library for computational biology and CADD

    PubMed Central

    2014-01-01

    Background A foundational library called MORT (Molecular Objects and Relevant Templates) for the development of new software packages and tools employed in computational biology and computer-aided drug design (CADD) is described here. Results MORT offers several advantages compared with other libraries. Firstly, MORT, written in C++, natively supports the paradigm of object-oriented design, and thus it can be understood and extended easily. Secondly, MORT employs the relational model to represent a molecule, which is more convenient and flexible than the traditional hierarchical model employed by many other libraries. Thirdly, many functions have been included in this library, and a molecule can be manipulated easily at different levels. For example, it can parse a variety of popular molecular formats (MOL/SDF, MOL2, PDB/ENT, SMILES/SMARTS, etc.), create the topology and coordinate files for the simulations supported by AMBER, calculate the energy of a specific molecule based on the AMBER force fields, etc. Conclusions We believe that MORT can be used as a foundational library for programmers to develop new programs and applications for computational biology and CADD. Source code of MORT is available at http://cadd.suda.edu.cn/MORT/index.htm.

  18. Integration of computer systems for California aqueduct power plant systems

    SciTech Connect

    Delfin, E.L. ); Gaushell, D.J. )

    1993-06-01

    The California State Water Project is one of the largest water and power systems in the world and includes over 130 hydroelectric units. This paper provides an overview of the planning and implementation of the control and communication systems replacement for the entire Project. New control system features include a multi-agency control center, off-site backup control center, four area control systems, ten major pumping/generating plant control systems, and a 400 mile fiber optic communication system.

  19. Enantioselective conjugate addition of nitro compounds to α,β-unsaturated ketones: an experimental and computational study.

    PubMed

    Manzano, Rubén; Andrés, José M; Álvarez, Rosana; Muruzábal, María D; de Lera, Ángel R; Pedrosa, Rafael

    2011-05-16

    A series of chiral thioureas derived from easily available diamines, prepared from α-amino acids, have been tested as catalysts in the enantioselective Michael additions of nitroalkanes to α,β-unsaturated ketones. The best results are obtained with the bifunctional catalyst prepared from L-valine. This thiourea promotes the reaction with high enantioselectivities and chemical yields for aryl/vinyl ketones, but the enantiomeric ratio for alkyl/vinyl derivatives is very modest. The addition of substituted nitromethanes led to the corresponding adducts with excellent enantioselectivity but very poor diastereoselectivity. Evidence for the isomerization of the addition products has been obtained from the reaction of chalcone with [D₃]nitromethane, which shows that the final addition products epimerize under the reaction conditions. The epimerization explains the low diastereoselectivity observed in the formation of adducts with two adjacent tertiary stereocenters. Density functional studies of the transition structures corresponding to two alternative activation modes of the nitroalkanes and α,β-unsaturated ketones by the bifunctional organocatalyst have been carried out at the B3LYP/3-21G* level. The computations are consistent with a reaction model involving the Michael addition of the thiourea-activated nitronate to the ketone activated by the protonated amine of the organocatalyst. The enantioselectivities predicted by the computations are consistent with the experimental values obtained for aryl- and alkyl-substituted α,β-unsaturated ketones.

  20. A dc model for power switching transistors suitable for computer-aided design and analysis

    NASA Technical Reports Server (NTRS)

    Wilson, P. M.; George, R. T., Jr.; Owen, H. A.; Wilson, T. G.

    1979-01-01

    A model for bipolar junction power switching transistors whose parameters can be readily obtained by the circuit design engineer, and which can be conveniently incorporated into standard computer-based circuit analysis programs is presented. This formulation results from measurements which may be made with standard laboratory equipment. Measurement procedures, as well as a comparison between actual and computed results, are presented.

  1. Energy Use and Power Levels in New Monitors and Personal Computers

    SciTech Connect

    Roberson, Judy A.; Homan, Gregory K.; Mahajan, Akshay; Nordman, Bruce; Webber, Carrie A.; Brown, Richard E.; McWhinney, Marla; Koomey, Jonathan G.

    2002-07-23

    Our research was conducted in support of the EPA ENERGY STAR Office Equipment program, whose goal is to reduce the amount of electricity consumed by office equipment in the U.S. The most energy-efficient models in each office equipment category are eligible for the ENERGY STAR label, which consumers can use to identify and select efficient products. As the efficiency of each category improves over time, the ENERGY STAR criteria need to be revised accordingly. The purpose of this study was to provide reliable data on the energy consumption of the newest personal computers and monitors that the EPA can use to evaluate revisions to current ENERGY STAR criteria as well as to improve the accuracy of ENERGY STAR program savings estimates. We report the results of measuring the power consumption and power management capabilities of a sample of new monitors and computers. These results will be used to improve estimates of program energy savings and carbon emission reductions, and to inform revisions of the ENERGY STAR criteria for these products. Our sample consists of 35 monitors and 26 computers manufactured between July 2000 and October 2001; it includes cathode ray tube (CRT) and liquid crystal display (LCD) monitors, Macintosh and Intel-architecture computers, desktop and laptop computers, and integrated computer systems, in which power consumption of the computer and monitor cannot be measured separately. For each machine we measured power consumption when off, on, and in each low-power level. We identify trends in and opportunities to reduce power consumption in new personal computers and monitors. Our results include a trend among monitor manufacturers to provide a single very low low-power level, well below the current ENERGY STAR criteria for sleep power consumption. These very low sleep power results mean that energy consumed when monitors are off or in active use has become more important in terms of contribution to the overall unit energy consumption (UEC
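
    The UEC figure such a study feeds into is just a duty-cycle-weighted sum over power states (a toy illustration; the wattages and duty cycle below are assumed, not the report's measurements):

      HOURS_PER_YEAR = 8760
      duty = {"on": 0.25, "sleep": 0.35, "off": 0.40}   # assumed fractions of time
      watts = {"on": 70.0, "sleep": 3.0, "off": 2.0}    # assumed draw per state

      uec_kwh = sum(duty[s] * watts[s] * HOURS_PER_YEAR for s in duty) / 1000
      print(f"UEC ~ {uec_kwh:.0f} kWh/yr")   # off/sleep states still contribute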

  2. Computer simulation of magnetization-controlled shunt reactors for calculating electromagnetic transients in power systems

    SciTech Connect

    Karpov, A. S.

    2013-01-15

    A computer procedure for simulating magnetization-controlled dc shunt reactors is described, which enables the electromagnetic transients in electric power systems to be calculated. It is shown that, by taking technically simple measures in the control system, one can obtain high-speed reactors sufficient for many purposes, and dispense with the use of high-power devices for compensating higher harmonic components.

  3. Phosphoric acid fuel cell power plant system performance model and computer program

    NASA Technical Reports Server (NTRS)

    Alkasab, K. A.; Lu, C. Y.

    1984-01-01

    A FORTRAN computer program was developed for analyzing the performance of phosphoric acid fuel cell power plant systems. Energy, mass, and electrochemical analyses of the reformer, the shaft converters, the heat exchangers, and the fuel cell stack were combined to develop a mathematical model of the power plant for both atmospheric and pressurized conditions, and for several commercial fuels.

  4. PowerPoint Presentations: A Creative Addition to the Research Process.

    ERIC Educational Resources Information Center

    Perry, Alan E.

    2003-01-01

    Contends that the requirement of a PowerPoint presentation as part of the research process would benefit students in the following ways: learning how to conduct research; starting their research project sooner; honing presentation and public speaking skills; improving cooperative and social skills; and enhancing technology skills. Outlines the…

  5. CIDER: Enabling Robustness-Power Tradeoffs on a Computational Eyeglass

    PubMed Central

    Mayberry, Addison; Tun, Yamin; Hu, Pan; Smith-Freedman, Duncan; Ganesan, Deepak; Marlin, Benjamin; Salthouse, Christopher

    2016-01-01

    The human eye offers a fascinating window into an individual’s health, cognitive attention, and decision making, but we lack the ability to continually measure these parameters in the natural environment. The challenges lie in: a) handling the complexity of continuous high-rate sensing from a camera and processing the image stream to estimate eye parameters, and b) dealing with the wide variability in illumination conditions in the natural environment. This paper explores the power–robustness tradeoffs inherent in the design of a wearable eye tracker, and proposes a novel staged architecture that enables graceful adaptation across the spectrum of real-world illumination. We propose CIDER, a system that operates in a highly optimized low-power mode under indoor settings by using a fast Search-Refine controller to track the eye, but detects when the environment switches to more challenging outdoor sunlight and switches models to operate robustly under this condition. Our design is holistic and tackles a) power consumption in digitizing pixels, estimating pupillary parameters, and illuminating the eye via near-infrared, b) error in estimating pupil center and pupil dilation, and c) model training procedures that involve zero effort from a user. We demonstrate that CIDER can estimate pupil center with error less than two pixels (0.6°), and pupil diameter with error of one pixel (0.22mm). Our end-to-end results show that we can operate at power levels of roughly 7mW at a 4Hz eye tracking rate, or roughly 32mW at rates upwards of 250Hz. PMID:27042165

  6. Enhancing the human-computer interface of power system applications

    SciTech Connect

    Azevedo, G.P. de; Souza, C.S. de; Feijo, B.

    1995-12-31

    This paper examines a topic of increasing importance: the interpretation of the massive amount of data available to power system engineers. The solutions currently adopted in the presentation of data in graphical interfaces are discussed. It is demonstrated that the representations of electric diagrams can be considerably enhanced through the adequate exploitation of resources available in full-graphics screens and the use of basic concepts from human-factors research. Enhanced representations of electric diagrams are proposed and tested. The objective is to let the user see the behavior of the system, allowing for better interpretation of program data and results and improving user's productivity.

  7. Negative capacitance for ultra-low power computing

    NASA Astrophysics Data System (ADS)

    Khan, Asif Islam

    Owing to the fundamental physics of the Boltzmann distribution, the ever-increasing power dissipation in nanoscale transistors threatens an end to the almost-four-decade-old cadence of continued performance improvement in complementary metal-oxide-semiconductor (CMOS) technology. It is now agreed that the introduction of new physics into the operation of field-effect transistors---in other words, "reinventing the transistor"---is required to avert such a bottleneck. In this dissertation, we present the experimental demonstration of a novel physical phenomenon, called the negative capacitance effect in ferroelectric oxides, which could dramatically reduce power dissipation in nanoscale transistors. It was theoretically proposed in 2008 that by introducing a ferroelectric negative capacitance material into the gate oxide of a metal-oxide-semiconductor field-effect transistor (MOSFET), the subthreshold slope could be reduced below the fundamental Boltzmann limit of 60 mV/dec, which, in turn, could arbitrarily lower the power supply voltage and the power dissipation. The research presented in this dissertation establishes the theoretical concept of ferroelectric negative capacitance as an experimentally verified fact. The main results presented in this dissertation are threefold. To start, we present the first direct measurement of negative capacitance in isolated, single crystalline, epitaxially grown thin film capacitors of ferroelectric Pb(Zr0.2Ti0.8)O3. By constructing a simple resistor-ferroelectric capacitor series circuit, we show that, during ferroelectric switching, the ferroelectric voltage decreases, while the stored charge in it increases, which directly shows a negative slope in the charge-voltage characteristics of a ferroelectric capacitor. Such a situation is completely opposite to what would be observed in a regular resistor-positive capacitor series circuit. This measurement could serve as a canonical test for negative capacitance in any novel

  9. Computer controlled pump unit cuts power, increases output

    SciTech Connect

    Rosman, A.; Nofal, M.

    1996-11-01

    OroNegro, Inc., a small, high-tech Southern California operating company with a stated mission to find and utilize innovations that lower production costs, adopted that philosophy in applying a new sucker rod pumping system in its shallow, heavy oil fields in Newhall and Bakersfield, California. Six new hydraulic, computer-controlled pumping (CCP) units developed and supplied by DynaPump, Inc., also of Southern California, have been installed, and are producing significant operating and economic benefits. Basic CCP unit features include a very long stroke with a charged gas (nitrogen) counterbalance and automatic computer-controlled speed, to maximize flow. In one case described, an industry-standard 456 (456,000 in. lb torque), 100-hp unit was replaced by a 60-hp CCP unit, nearly doubling pump output. Field installations and pumping systems in Newhall field, and tests in Kern Front field are described, along with the operator's views on other CCP applications, including its use in deep wells.

  10. Power Computations in Time Series Analyses for Traffic Safety Interventions

    PubMed Central

    McLeod, A. Ian; Vingilis, E. R.

    2008-01-01

    The evaluation of traffic safety interventions or other policies that can affect road safety often requires the collection of administrative time series data, such as monthly motor vehicle collision data that may be difficult and/or expensive to collect. Furthermore, since policy decisions may be based on the results found from the intervention analysis of the policy, it is important to ensure that the statistical tests have enough power, that is, that we have collected enough time series data both before and after the intervention so that a meaningful change in the series will likely be detected. In this short paper we present a simple methodology for doing this. It is expected that the methodology presented will be useful for sample size determination in a wide variety of traffic safety intervention analysis applications. Our method is illustrated with a proposed traffic safety study that was funded by NIH. PMID:18460394
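
    The paper's idea of checking power before committing to data collection can be imitated by simulation (a generic sketch, not the authors' procedure; the AR(1) noise model, effect size, and HAC inference are my assumptions):

      import numpy as np
      import statsmodels.api as sm

      def its_power(n_before, n_after, effect, phi=0.5, sigma=1.0, reps=1000, seed=1):
          """Fraction of simulated series in which the level shift is detected."""
          rng = np.random.default_rng(seed)
          n = n_before + n_after
          step = np.r_[np.zeros(n_before), np.ones(n_after)]   # intervention dummy
          X = sm.add_constant(step)
          hits = 0
          for _ in range(reps):
              e = rng.normal(0, sigma, n)
              y = np.empty(n)
              y[0] = e[0]
              for t in range(1, n):          # AR(1) noise around the step change
                  y[t] = phi * y[t - 1] + e[t]
              y = y + effect * step
              res = sm.OLS(y, X).fit(cov_type="HAC", cov_kwds={"maxlags": 6})
              hits += res.pvalues[1] < 0.05
          return hits / reps

      print(its_power(36, 36, effect=1.0))   # e.g. 36 months before and after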

  11. On the Computational Power of Spiking Neural P Systems with Self-Organization

    PubMed Central

    Wang, Xun; Song, Tao; Gong, Faming; Zheng, Pan

    2016-01-01

    Neural-like computing models are versatile computing mechanisms in the field of artificial intelligence. Spiking neural P systems (SN P systems for short) are one of the recently developed spiking neural network models inspired by the way neurons communicate. The communications among neurons are essentially achieved by spikes, i.e., short electrical pulses. In terms of motivation, SN P systems fall into the third generation of neural network models. In this study, a novel variant of SN P systems, namely SN P systems with self-organization, is introduced, and the computational power of the system is investigated and evaluated. It is proved that SN P systems with self-organization are capable of computing and accepting the family of sets of Turing-computable natural numbers. Moreover, with 87 neurons the system can compute any Turing-computable recursive function, thus achieving Turing universality. These results demonstrate promising initiatives toward solving an open problem raised by Gh. Păun. PMID:27283843

  12. On the Computational Power of Spiking Neural P Systems with Self-Organization.

    PubMed

    Wang, Xun; Song, Tao; Gong, Faming; Zheng, Pan

    2016-01-01

    Neural-like computing models are versatile computing mechanisms in the field of artificial intelligence. Spiking neural P systems (SN P systems for short) are one of the recently developed spiking neural network models inspired by the way neurons communicate. The communications among neurons are essentially achieved by spikes, i.e., short electrical pulses. In terms of motivation, SN P systems fall into the third generation of neural network models. In this study, a novel variant of SN P systems, namely SN P systems with self-organization, is introduced, and the computational power of the system is investigated and evaluated. It is proved that SN P systems with self-organization are capable of computing and accepting the family of sets of Turing-computable natural numbers. Moreover, with 87 neurons the system can compute any Turing-computable recursive function, thus achieving Turing universality. These results demonstrate promising initiatives to solve an open problem raised by Gh. Păun. PMID:27283843

  13. Computations on the primary photoreaction of Br2 with CO2: stepwise vs concerted addition of Br atoms.

    PubMed

    Xu, Kewei; Korter, Timothy M; Braiman, Mark S

    2015-04-01

    It was proposed previously that Br2-sensitized photolysis of liquid CO2 proceeds through a metastable primary photoproduct, CO2Br2. Possible mechanisms for such a photoreaction are explored here computationally. First, it is shown that the CO2Br radical is not stable in any geometry. This rules out a free-radical mechanism, for example, photochemical splitting of Br2 followed by stepwise addition of Br atoms to CO2, which in turn accounts for the lack of previously observed Br2 + CO2 photochemistry in the gas phase. A possible alternative mechanism in the liquid phase is formation of a weakly bound CO2:Br2 complex, followed by concerted photoaddition of Br2. This hypothesis is suggested by the previously published spectroscopic detection of a binary CO2:Br2 complex in the supersonically cooled gas phase. We compute a global binding-energy minimum of -6.2 kJ mol(-1) for such complexes, in a linear geometry. Two additional local minima were computed for perpendicular (C2v) and nearly parallel asymmetric planar geometries, both with binding energies near -5.4 kJ mol(-1). In these two latter geometries, C-Br and O-Br bond distances are simultaneously in the range of 3.5-3.8 Å, that is, perhaps suitable for a concerted photoaddition under the temperature and pressure conditions where Br2 + CO2 photochemistry has been observed.
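
    As a rough check on how these binding energies compare, a Boltzmann estimate (an illustrative assumption: thermal equilibrium at 298 K, ignoring entropic and liquid-phase effects) gives the relative population of the linear global minimum versus either local minimum:

    \[
    \frac{N_{\text{linear}}}{N_{\text{local}}} \approx \exp\!\left(\frac{\Delta E}{RT}\right)
    = \exp\!\left(\frac{0.8 \times 10^{3}\ \mathrm{J\,mol^{-1}}}{8.314\ \mathrm{J\,mol^{-1}\,K^{-1}} \times 298\ \mathrm{K}}\right) \approx 1.4,
    \]

    so all three complex geometries would be appreciably populated, consistent with the concerted-addition hypothesis.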

  14. Evaluation of computer-aided design and drafting for the electric power industry. Final report

    SciTech Connect

    Anuskiewicz, T.; Barduhn, G.; Lowther, B.; Osman, I.

    1984-01-01

    This report reviews current and future computer-aided design and drafting (CADD) technology relative to utility needs and identifies useful development projects that may be undertaken by EPRI. The principal conclusions are that computer aids offer substantial cost and time savings and that computer systems are being developed to take advantage of the savings. Databases are not yet available for direct communication between computers used by the power industry, and this will limit benefits to the industry. Recommendations are made for EPRI to take the initiative to develop the databases for direct communication between power industry computers and to research, develop, and demonstrate new applications within the industry. Key components of a CADD system are described. The state of the art of two- and three-dimensional CADD systems to perform graphics and project management control functions is assessed. Comparison is made of three-dimensional electronic models and plastic models.

  15. Biologically relevant molecular transducer with increased computing power and iterative abilities.

    PubMed

    Ratner, Tamar; Piran, Ron; Jonoska, Natasha; Keinan, Ehud

    2013-05-23

    As computing devices, which process data and interconvert information, transducers can encode new information and use their output for subsequent computing, offering high computational power that may be equivalent to a universal Turing machine. We report on an experimental DNA-based molecular transducer that computes iteratively and produces biologically relevant outputs. As a proof of concept, the transducer accomplished division of numbers by 3. The iterative power was demonstrated by recursive application to an obtained output. This device reads plasmids as input and processes the information according to a predetermined algorithm, which is represented by molecular software. The device writes new information on the plasmid using hardware that comprises DNA-manipulating enzymes. The computation produces dual output: a quotient, represented by newly encoded DNA, and a remainder, represented by E. coli phenotypes. This device algorithmically manipulates genetic codes. PMID:23706637
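
    The division-by-3 behavior has a compact abstract counterpart: a three-state finite-state transducer whose state is the running remainder. The Python sketch below mirrors that abstraction only; it says nothing about the molecular implementation.

    ```python
    def divide_by_3(bits):
        """Read a binary numeral most-significant-bit first; emit quotient
        bits and keep the remainder (mod 3) as the machine state."""
        remainder, quotient = 0, []
        for b in bits:
            v = 2 * remainder + b        # shift in the next input bit
            quotient.append(v // 3)      # output bit of the quotient
            remainder = v % 3            # new transducer state
        return quotient, remainder

    q, r = divide_by_3([1, 0, 1, 1])     # 11 decimal
    print(q, r)                          # -> [0, 0, 1, 1] (= 3), remainder 2
    ```

    Feeding the quotient bits back through the same function is the software analogue of the recursive application described above.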

  16. The effectiveness of power-generating complexes constructed on the basis of nuclear power plants combined with additional sources of energy determined taking risk factors into account

    NASA Astrophysics Data System (ADS)

    Aminov, R. Z.; Khrustalev, V. A.; Portyankin, A. V.

    2015-02-01

    The effectiveness of combining nuclear power plants equipped with water-cooled water-moderated power-generating reactors (VVER) with other sources of energy within unified power-generating complexes is analyzed. The use of such power-generating complexes makes it possible to achieve the necessary load pickup capability and flexibility in performing the mandatory selective primary and emergency control of load, as well as participation in passing the night minimums of electric load curves, while retaining high values of the capacity utilization factor of the entire power-generating complex at higher levels of steam-turbine efficiency. Versions involving combined use of nuclear power plants with hydrogen toppings and gas turbine units for generating electricity are considered. In view of the fact that hydrogen is an unsafe energy carrier, the use of which introduces additional elements of risk, a procedure for evaluating these risks under different conditions of implementing the fuel-and-hydrogen cycle at nuclear power plants is proposed. A risk-accounting technique based on statistical data is considered, including the characteristics of hydrogen and gas pipelines and the occurrence rate of tightness loss in process pipeline equipment. The expected rates of fires and explosions at nuclear power plants fitted with hydrogen toppings and gas turbine units are calculated. In estimating the damage inflicted by events (fires and explosions) occurring in nuclear power plant turbine buildings, US statistical data were used. Conservative scenarios of fires and explosions of hydrogen-air mixtures in nuclear power plant turbine buildings are presented. Results from calculations of the ratio of the introduced annual risk to the attained net annual profit for comparable versions are given. This ratio can be used in selecting projects characterized by the most technically attainable and socially acceptable safety.

  17. Evaluation of Different Power of Near Addition in Two Different Multifocal Intraocular Lenses

    PubMed Central

    Unsal, Ugur; Baser, Gonen

    2016-01-01

    Purpose. To compare near, intermediate, and distance vision and quality of vision when refractive rotational multifocal intraocular lenses with 3.0 diopters or diffractive multifocal intraocular lenses with 2.5 diopters of near addition are implanted. Methods. 41 eyes of 41 patients in whom rotational +3.0 diopter near-addition IOLs were implanted and 30 eyes of 30 patients in whom diffractive +2.5 diopter near-addition IOLs were implanted after cataract surgery were reviewed. Uncorrected and corrected distance visual acuity, intermediate visual acuity, near visual acuity, and patient satisfaction were evaluated 6 months later. Results. The corrected and uncorrected distance visual acuities were the same in both groups (p = 0.50 and p = 0.509, resp.). The uncorrected intermediate and the corrected intermediate and near visual acuities were better in the +2.5 near-addition IOL group (p = 0.049, p = 0.005, and p = 0.001, resp.), and the uncorrected near visual acuity was better in the +3.0 near-addition IOL group (p = 0.001). Patient satisfaction was similar in both groups. Conclusion. The +2.5 diopter near addition could be a better choice in younger patients with more distance and intermediate visual requirements (driving, outdoor activities), whereas the +3.0 diopter addition should be considered for patients with more near vision requirements (reading). PMID:27340560

  18. EXAMINING PLANNED U.S. POWER PLANT CAPACITY ADDITIONS IN THE CONTEXT OF CLIMATE CHANGE

    SciTech Connect

    Dooley, James J.; Dahowski, Robert T.; Gale, J.; Kaya, Y.

    2003-01-01

    This paper seeks to assess the degree to which the 471 planned fossil fired power plants announced to be built within the next decade in the continental U.S. are amenable to significant carbon dioxide emissions mitigation via carbon dioxide capture and disposal in geologic reservoirs. The combined generating capacity of these 471 planned plants is 320 GW. In particular, we seek to assess the looming "carbon liability" (i.e., the nearly 1 billion tons of CO2 these plants are likely to emit annually) that these power plants represent for their owners and for the nation as the U.S. begins to address climate change. Significant emission reductions will likely be brought about through the use of advanced technologies such as carbon capture and disposal. We find that less than half of these plants are located in the immediate vicinity of potentially suitable geologic carbon dioxide disposal reservoirs. The authors discuss the implications of this potential carbon liability that these plants may come to represent.
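
    The ~1 billion ton figure can be sanity-checked with rough fleet-level arithmetic (the capacity factor and average emission intensity below are illustrative assumptions, not values from the paper):

    \[
    320\ \mathrm{GW} \times 8760\ \mathrm{h/yr} \times 0.65 \approx 1.8 \times 10^{3}\ \mathrm{TWh/yr} = 1.8 \times 10^{9}\ \mathrm{MWh/yr},
    \]
    \[
    1.8 \times 10^{9}\ \mathrm{MWh/yr} \times 0.55\ \mathrm{t\,CO_2/MWh} \approx 1.0 \times 10^{9}\ \mathrm{t\,CO_2/yr}.
    \]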

  19. Origins of stereoselectivity in the Diels-Alder addition of chiral hydroxyalkyl vinyl ketones to cyclopentadiene: a quantitative computational study.

    PubMed

    Bakalova, Snezhana M; Kaneti, Jose

    2008-12-18

    Modest basis set level MP2/6-31G(d,p) calculations on the Diels-Alder addition of S-1-alkyl-1-hydroxy-but-3-en-2-ones (1-hydroxy-1-alkyl methyl vinyl ketones) to cyclopentadiene correctly reproduce the trends in known experimental endo/exo and diastereoface selectivity. B3LYP theoretical results at the same or significantly higher basis set level, on the other hand, do not satisfactorily model observed endo/exo selectivities and are thus unsuitable for quantitative studies. The same also holds for subtle effects originating from, for example, conformational distributions of reactants. The latter shortcomings are not alleviated by the fact that observed diastereoface selectivities are well reproduced by DFT calculations. Quantitative computational studies of large cycloaddition systems would require higher basis sets and a better account of electron correlation than MP2, such as, for example, CCSD. Presently, however, with 30 or more non-hydrogen atoms, these computations are hardly feasible. We present quantitatively correct stereochemical predictions using a hybrid layered ONIOM computational approach, including the chiral carbon atom and the intramolecular hydrogen bond in a higher-level layer, MP2/6-311G(d,p) or CCSD/6-311G(d,p). Significant computational economy is achieved by treating the surrounding bulky (alkyl) residues at 6-31G(d) in a low-level HF layer. We conclude that theoretical calculations based on explicit correlated MO treatment of the reaction site are sufficiently reliable for the prediction of both endo/exo and diastereoface selectivity of Diels-Alder addition reactions. This is in line with the understanding of endo/exo selectivity originating from dynamic electron correlation effects of interacting pi fragments and diastereofacial selectivity originating from steric interactions of fragments outside of the Diels-Alder reaction site. PMID:18637663
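
    For reference, the standard two-layer ONIOM extrapolation underlying this hybrid approach combines a high-level calculation on the small model system (here the reaction site) with low-level calculations on both the model and the full (real) system:

    \[
    E_{\text{ONIOM2}} = E_{\text{low}}(\text{real}) + E_{\text{high}}(\text{model}) - E_{\text{low}}(\text{model}),
    \]

    with high = MP2/6-311G(d,p) or CCSD/6-311G(d,p) and low = HF/6-31G(d) in the layering described above.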

  1. Possible applicability of artificial neural network hardware to power system computation

    SciTech Connect

    Connor, J.T.; Damborg, M.J.; Atlas, L.E. . Dept. of Electrical Engineering)

    1992-01-01

    The paper reviews two very distinct suggestions for using artificial neural network hardware in power systems. The majority of our discussion concerns taking advantage of the hardware for fine-grained parallel computation. We also discuss our experience with recurrent artificial neural networks for load forecasting. A constant theme in power system analysis is faster computation. Sometimes the need for speed is to implement analysis on-line, while at other times the need is simply to perform more computation to explore a problem more thoroughly. Computation speed has historically been sought through algorithms. More recently, this search has been supplemented with attempts to exploit parallel computation. These parallel approaches have typically involved a few CPUs on a supercomputer or up to 32 in hypercube experiments. The application of SIMD computers designed for neural network simulations to the problem of power flow calculations is discussed. Clustering techniques are introduced to enable power flow calculation times that are independent of system size. Results of recurrent network electric load forecasting are also discussed.

  2. A Latency-Tolerant Partitioner for Distributed Computing on the Information Power Grid

    NASA Technical Reports Server (NTRS)

    Das, Sajal K.; Harvey, Daniel J.; Biwas, Rupak; Kwak, Dochan (Technical Monitor)

    2001-01-01

    NASA's Information Power Grid (IPG) is an infrastructure designed to harness the power of geographically distributed computers, databases, and human expertise, in order to solve large-scale realistic computational problems. This type of meta-computing environment is necessary to present a unified virtual machine to application developers that hides the intricacies of a highly heterogeneous environment and yet maintains adequate security. In this paper, we present a novel partitioning scheme, called MinEX, that dynamically balances processor workloads while minimizing data movement and runtime communication, for applications that are executed in a parallel distributed fashion on the IPG. We also analyze the conditions that are required for the IPG to be an effective tool for such distributed computations. Our results show that MinEX is a viable load balancer provided the nodes of the IPG are connected by a high-speed asynchronous interconnection network.

  3. Fourier, Gegenbauer and Jacobi Expansions for a Power-Law Fundamental Solution of the Polyharmonic Equation and Polyspherical Addition Theorems

    NASA Astrophysics Data System (ADS)

    Cohl, Howard S.

    2013-06-01

    We develop complex Jacobi, Gegenbauer and Chebyshev polynomial expansions for the kernels associated with power-law fundamental solutions of the polyharmonic equation on d-dimensional Euclidean space. From these series representations we derive Fourier expansions in certain rotationally-invariant coordinate systems and Gegenbauer polynomial expansions in Vilenkin's polyspherical coordinates. We compare both of these expansions to generate addition theorems for the azimuthal Fourier coefficients.
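
    The prototype of the expansions in question is the classical Gegenbauer series for a power-law kernel. Writing $r_< = \min(r, r')$, $r_> = \max(r, r')$, $t = r_</r_>$, and $\gamma$ for the angle between $\mathbf{x}$ and $\mathbf{x}'$,

    \[
    \frac{1}{|\mathbf{x}-\mathbf{x}'|^{2\mu}}
    = \frac{1}{r_>^{2\mu}}\left(1 - 2t\cos\gamma + t^{2}\right)^{-\mu}
    = \sum_{n=0}^{\infty} C_{n}^{\mu}(\cos\gamma)\,\frac{r_<^{\,n}}{r_>^{\,n+2\mu}},
    \]

    which follows directly from the generating function $(1 - 2xt + t^{2})^{-\mu} = \sum_{n \ge 0} C_{n}^{\mu}(x)\,t^{n}$ of the Gegenbauer polynomials.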

  4. Power levels in office equipment: Measurements of new monitors and personal computers

    SciTech Connect

    Roberson, Judy A.; Brown, Richard E.; Nordman, Bruce; Webber, Carrie A.; Homan, Gregory H.; Mahajan, Akshay; McWhinney, Marla; Koomey, Jonathan G.

    2002-05-14

    Electronic office equipment has proliferated rapidly over the last twenty years and is projected to continue growing in the future. Efforts to reduce the growth in office equipment energy use have focused on power management to reduce the power consumption of electronic devices when they are not being used for their primary purpose. The EPA ENERGY STAR® program has been instrumental in gaining widespread support for power management in office equipment, and accurate information about the energy used by office equipment at all power levels is important to improving program design and evaluation. This paper presents the results of a field study conducted during 2001 to measure the power levels of new monitors and personal computers. We measured off, on, and low-power levels in about 60 units manufactured since July 2000. The paper summarizes the power data collected, explores differences within the sample (e.g., between CRT and LCD monitors), and discusses some issues that arise in metering office equipment. We also present conclusions to help improve the success of future power management programs. Our findings include a trend among monitor manufacturers to provide a single very low low-power level, and the need to standardize methods for measuring monitor on-power in order to more accurately estimate the annual energy consumption of office equipment, as well as actual and potential energy savings from power management.

  5. A Power Efficient Exaflop Computer Design for Global Cloud System Resolving Climate Models.

    NASA Astrophysics Data System (ADS)

    Wehner, M. F.; Oliker, L.; Shalf, J.

    2008-12-01

    Exascale computers would allow routine ensemble modeling of the global climate system at the cloud system resolving scale. Power and cost requirements of traditional architecture systems are likely to delay such capability for many years. We present an alternative route to the exascale using embedded processor technology to design a system optimized for ultra high resolution climate modeling. These power efficient processors, used in consumer electronic devices such as mobile phones, portable music players, cameras, etc., can be tailored to the specific needs of scientific computing. We project that a system capable of integrating a kilometer scale climate model a thousand times faster than real time could be designed and built in a five year time scale for US$75M with a power consumption of 3MW. This is cheaper, more power efficient and sooner than any other existing technology.

  6. A computer program to determine the specific power of prismatic-core reactors

    SciTech Connect

    Dobranich, D.

    1987-05-01

    A computer program has been developed to determine the maximum specific power for prismatic-core reactors as a function of maximum allowable fuel temperature, core pressure drop, and coolant velocity. The prismatic-core reactors consist of hexagonally shaped fuel elements grouped together to form a cylindrically shaped core. A gas coolant flows axially through circular channels within the elements, and the fuel is dispersed within the solid element material either as a composite or in the form of coated pellets. Different coolant, fuel, coating, and element materials can be selected to represent different prismatic-core concepts. The computer program allows the user to divide the core into any arbitrary number of axial levels to account for different axial power shapes. An option in the program allows the automatic determination of the core height that results in the maximum specific power. The results of parametric specific power calculations using this program are presented for various reactor concepts.

  7. Computational Research Challenges and Opportunities for the Optimization of Fossil Energy Power Generation System

    SciTech Connect

    Zitney, S.E.

    2007-06-01

    Emerging fossil energy power generation systems must operate with unprecedented efficiency and near-zero emissions, while optimizing profitably amid cost fluctuations for raw materials, finished products, and energy. To help address these challenges, the fossil energy industry will have to rely increasingly on the use of advanced computational tools for modeling and simulating complex process systems. In this paper, we present the computational research challenges and opportunities for the optimization of fossil energy power generation systems across the plant lifecycle, from process synthesis and design to plant operations. We also look beyond the plant gates to discuss research challenges and opportunities for enterprise-wide optimization, including planning, scheduling, and supply chain technologies.

  8. A digital computer simulation and study of a direct-energy-transfer power-conditioning system

    NASA Technical Reports Server (NTRS)

    Burns, W. W., III; Owen, H. A., Jr.; Wilson, T. G.; Rodriguez, G. E.; Paulkovich, J.

    1974-01-01

    A digital computer simulation technique for studying composite power-conditioning systems was applied to a spacecraft direct-energy-transfer power-processing system. The results obtained duplicate actual system performance with considerable accuracy. The validity of the approach and its usefulness in studying various aspects of system performance, such as steady-state characteristics and transient responses to severely varying operating conditions, are demonstrated experimentally.

  9. Enhancing Specific Energy and Power in Asymmetric Supercapacitors - A Synergetic Strategy based on the Use of Redox Additive Electrolytes

    NASA Astrophysics Data System (ADS)

    Singh, Arvinder; Chandra, Amreesh

    2016-05-01

    The strategy of using a redox additive electrolyte in combination with multiwall carbon nanotube/metal oxide composites leads to substantial improvements in the specific energy and power of asymmetric supercapacitors (ASCs). When the pure electrolyte is optimally modified with a redox additive, viz. KI, a ~105% increase in the specific energy is obtained, with good cyclic stability over 3,000 charge-discharge cycles and ~14.7% capacitance fade. This increase is a direct consequence of the iodine/iodide redox pairs that strongly modify the faradaic and non-faradaic type reactions occurring on the surface of the electrodes. Contrary to what is shown in a few earlier reports, it is established that an indiscriminate increase in the concentration of redox additives will lead to performance loss. Suitable explanations are given based on theoretical laws. The specific energy and power values reported for the fabricated ASCs are comparable to or higher than those reported for ASCs based on toxic acetonitrile or expensive ionic liquids. The paper shows that the use of a redox additive is an economically favorable strategy for obtaining cost-effective and environmentally friendly ASCs.

  10. Enhancing Specific Energy and Power in Asymmetric Supercapacitors - A Synergetic Strategy based on the Use of Redox Additive Electrolytes.

    PubMed

    Singh, Arvinder; Chandra, Amreesh

    2016-01-01

    The strategy of using a redox additive electrolyte in combination with multiwall carbon nanotube/metal oxide composites leads to substantial improvements in the specific energy and power of asymmetric supercapacitors (ASCs). When the pure electrolyte is optimally modified with a redox additive, viz. KI, a ~105% increase in the specific energy is obtained, with good cyclic stability over 3,000 charge-discharge cycles and ~14.7% capacitance fade. This increase is a direct consequence of the iodine/iodide redox pairs that strongly modify the faradaic and non-faradaic type reactions occurring on the surface of the electrodes. Contrary to what is shown in a few earlier reports, it is established that an indiscriminate increase in the concentration of redox additives will lead to performance loss. Suitable explanations are given based on theoretical laws. The specific energy and power values reported for the fabricated ASCs are comparable to or higher than those reported for ASCs based on toxic acetonitrile or expensive ionic liquids. The paper shows that the use of a redox additive is an economically favorable strategy for obtaining cost-effective and environmentally friendly ASCs. PMID:27184260

  11. Enhancing Specific Energy and Power in Asymmetric Supercapacitors - A Synergetic Strategy based on the Use of Redox Additive Electrolytes

    PubMed Central

    Singh, Arvinder; Chandra, Amreesh

    2016-01-01

    The strategy of using a redox additive electrolyte in combination with multiwall carbon nanotube/metal oxide composites leads to substantial improvements in the specific energy and power of asymmetric supercapacitors (ASCs). When the pure electrolyte is optimally modified with a redox additive, viz. KI, a ~105% increase in the specific energy is obtained, with good cyclic stability over 3,000 charge-discharge cycles and ~14.7% capacitance fade. This increase is a direct consequence of the iodine/iodide redox pairs that strongly modify the faradaic and non-faradaic type reactions occurring on the surface of the electrodes. Contrary to what is shown in a few earlier reports, it is established that an indiscriminate increase in the concentration of redox additives will lead to performance loss. Suitable explanations are given based on theoretical laws. The specific energy and power values reported for the fabricated ASCs are comparable to or higher than those reported for ASCs based on toxic acetonitrile or expensive ionic liquids. The paper shows that the use of a redox additive is an economically favorable strategy for obtaining cost-effective and environmentally friendly ASCs. PMID:27184260

  12. Characterization of Steel-Ta Dissimilar Metal Builds Made Using Very High Power Ultrasonic Additive Manufacturing (VHP-UAM)

    NASA Astrophysics Data System (ADS)

    Sridharan, Niyanth; Norfolk, Mark; Babu, Sudarsanam Suresh

    2016-05-01

    Ultrasonic additive manufacturing is a solid-state additive manufacturing technique that utilizes ultrasonic vibrations to bond metal tapes into near net-shaped components. The major advantage of this process is the ability to manufacture layered structures with dissimilar materials without any intermetallic formation. The majority of the published literature has focused only on the bond-formation mechanism in aluminum alloys. The current work explains the microstructure evolution during dissimilar joining of iron and tantalum using very high power ultrasonic additive manufacturing, with characterization of the interfaces using electron back-scattered diffraction and nano-indentation measurements. The results showed extensive grain refinement at the bonded interfaces of these metals. This phenomenon was attributed to a continuous dynamic recrystallization process driven by the high strain rate plastic deformation and associated adiabatic heating, which remains well below 50 pct of the melting point of both iron and Ta.

  13. Thread selection according to power characteristics during context switching on compute nodes

    DOEpatents

    Archer, Charles J.; Blocksome, Michael A.; Randles, Amanda E.; Ratterman, Joseph D.; Smith, Brian E.

    2016-10-04

    Methods, apparatus, and products are disclosed for thread selection during context switching on a plurality of compute nodes that includes: executing, by a compute node, an application using a plurality of threads of execution, including executing one or more of the threads of execution; selecting, by the compute node from a plurality of available threads of execution for the application, a next thread of execution in dependence upon power characteristics for each of the available threads; determining, by the compute node, whether criteria for a thread context switch are satisfied; and performing, by the compute node, the thread context switch if the criteria for a thread context switch are satisfied, including executing the next thread of execution.
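
    The claim language leaves the selection policy open; the Python sketch below shows one plausible reading (all names, the per-thread wattage figures, and the budget policy are assumptions for illustration): at each context switch, pick the runnable thread whose power characteristic best fits a node-level power budget.

    ```python
    from dataclasses import dataclass

    @dataclass
    class Thread:
        tid: int
        est_power_w: float        # hypothetical per-thread power characteristic
        runnable: bool = True

    def select_next_thread(threads, power_budget_w):
        """Prefer the highest-power runnable thread that still fits the
        budget; fall back to the lowest-power runnable thread otherwise."""
        candidates = [t for t in threads if t.runnable]
        within = [t for t in candidates if t.est_power_w <= power_budget_w]
        choose = max if within else min
        return choose(within or candidates, key=lambda t: t.est_power_w)

    threads = [Thread(0, 3.2), Thread(1, 1.1), Thread(2, 2.4)]
    print(select_next_thread(threads, power_budget_w=2.5).tid)   # -> 2
    ```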

  14. Thread selection according to predefined power characteristics during context switching on compute nodes

    DOEpatents

    None

    2013-06-04

    Methods, apparatus, and products are disclosed for thread selection during context switching on a plurality of compute nodes that includes: executing, by a compute node, an application using a plurality of threads of execution, including executing one or more of the threads of execution; selecting, by the compute node from a plurality of available threads of execution for the application, a next thread of execution in dependence upon power characteristics for each of the available threads; determining, by the compute node, whether criteria for a thread context switch are satisfied; and performing, by the compute node, the thread context switch if the criteria for a thread context switch are satisfied, including executing the next thread of execution.

  15. A personal computer based interactive software for power system operation education

    SciTech Connect

    Hsu, Y.Y.; Yang, C.C. ); Su, C.C. )

    1992-11-01

    The use of a personal computer based interactive software package to aid instruction in power system operation is described in this paper. The software is designed to be used as a teaching aid for the course Power System Operation at National Taiwan University. The main programs in the package cover short-term load forecasting and unit commitment. Other supporting routines include power flow analysis, static security assessment, small-signal stability analysis, and transient stability analysis. To promote the students' interest in the course, a user-friendly interface and interactive windows have been developed. The integrated software package proves to be useful for educational and research purposes.

  16. Simulation tools for computer-aided design and numerical investigations of high-power gyrotrons

    NASA Astrophysics Data System (ADS)

    Damyanova, M.; Balabanova, E.; Kern, S.; Illy, S.; Sabchevski, S.; Thumm, M.; Vasileva, E.; Zhelyazkov, I.

    2012-03-01

    Modelling and simulation are essential tools for computer-aided design (CAD), analysis and optimization of high-power gyrotrons used as radiation sources for electron cyclotron resonance heating (ECRH) and current drive (ECCD) of magnetically confined plasmas in the thermonuclear reactor ITER. In this communication, we present the current status of our simulation tools and discuss their further development.

  17. The Power of Computer-aided Tomography to Investigate Marine Benthic Communities

    EPA Science Inventory

    Computer-aided tomography (CT) is a powerful tool for investigating benthic communities in aquatic systems. In this presentation, we will attempt to summarize our 15 years of experience in developing specific CT methods and applications for marine benthic communities.

  18. Manual of phosphoric acid fuel cell power plant cost model and computer program

    NASA Technical Reports Server (NTRS)

    Lu, C. Y.; Alkasab, K. A.

    1984-01-01

    Cost analysis of a phosphoric acid fuel cell power plant includes two parts: a method for estimation of system capital costs, and an economic analysis which determines the levelized annual cost of operating the system used in the capital cost estimation. A FORTRAN computer program has been developed for this cost analysis.
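
    The levelizing step in such an analysis typically rests on the capital recovery factor. With an illustrative discount rate $i = 10\%$ and lifetime $n = 20$ years (assumed values, not the report's),

    \[
    \mathrm{CRF} = \frac{i(1+i)^{n}}{(1+i)^{n}-1} = \frac{0.1 \times 1.1^{20}}{1.1^{20}-1} \approx 0.117,
    \]

    so the levelized annual cost is roughly $C_{\text{capital}} \times 0.117$ plus annual fuel and O&M costs.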

  19. Current status and future trends in computer modeling of high-power travelling-wave tubes

    SciTech Connect

    DeHope, W.J.

    1996-12-31

    The interaction of a slow electromagnetic wave and a linearly propagating electron stream has been utilized for many years for microwave amplification. Pulsed devices of high peak and average power are typically based on periodic, filter-type circuits, and interaction takes place on the first forward-wave branch of a fundamental backward-wave dispersion curve. These devices have served as useful test vehicles over the years in the development of advanced computational methods and models. A working relationship has thereby developed between the plasma computation community and the microwave tube industry. The talk will describe the operational principles and design steps in modern high-power TWT development. The major computational stages that the industry has seen over the last four decades in both 2-D and 3-D modeling will be reviewed, with comments on their relevance to current work and future trends.

  20. Computer program for design and performance analysis of navigation-aid power systems

    NASA Technical Reports Server (NTRS)

    Weiner, H.; Wiener, P.; Williams, K.

    1976-01-01

    The paper examines the requirements, design rationale, operation, and verification of the design synthesis/performance analysis (DSPA) computer program, which is capable of performing all the calculations necessary to understand the overall characteristics of solar array/battery power systems for navigation-aid applications. Despite the uncertainties in the erratic solar array degradation data and the potential impact on actual battery behavior, verification of the DSPA is considered successful. The program is shown to have the capability of simulating the performance of solar array/battery navigation-aid power systems. It can also be used to synthesize power system designs and provide essential design and cost data.

  1. Computational models of an inductive power transfer system for electric vehicle battery charge

    NASA Astrophysics Data System (ADS)

    Anele, A. O.; Hamam, Y.; Chassagne, L.; Linares, J.; Alayli, Y.; Djouani, K.

    2015-09-01

    One of the issues to be solved for electric vehicles (EVs) to become a success is the technical solution of their charging systems. In this paper, computational models of an inductive power transfer (IPT) system for EV battery charging are presented. Based on the fundamental principles behind IPT systems, 3 kW single-phase and 22 kW three-phase IPT systems for the Renault ZOE are designed in MATLAB/Simulink. The results obtained, based on the technical specifications of the lithium-ion battery and charger type of the Renault ZOE, show that the models are able to provide the total voltage required by the battery. Also, considering the charging time for each IPT model, they are capable of delivering the electricity needed to power the ZOE. In conclusion, this study shows that the designed computational IPT models may be employed as a support structure needed to effectively power any viable EV.

  2. Computer-based procedure for field activities: Results from three evaluations at nuclear power plants

    SciTech Connect

    Oxstrand, Johanna; bly, Aaron; LeBlanc, Katya

    2014-09-01

    Nearly all activities that involve human interaction with the systems of a nuclear power plant are guided by procedures. The paper-based procedures (PBPs) currently used by industry have a demonstrated history of ensuring safety; however, improving procedure use could yield tremendous savings in increased efficiency and safety. One potential way to improve procedure-based activities is through the use of computer-based procedures (CBPs). Computer-based procedures provide the opportunity to incorporate context-driven job aids, such as drawings, photos, just-in-time training, etc., into the CBP system. One obvious advantage of this capability is reducing the time spent tracking down the applicable documentation. Additionally, human performance tools can be integrated in the CBP system in such a way that they help the worker focus on the task rather than the tools. Some tools can be completely incorporated into the CBP system, such as pre-job briefs, placekeeping, correct component verification, and peer checks. Other tools can be partly integrated in a fashion that reduces the time and labor required, such as concurrent and independent verification. Another benefit of CBPs compared to PBPs is dynamic procedure presentation. PBPs are static documents, which limits the degree to which the information presented can be tailored to the task and conditions when the procedure is executed. The CBP system could be configured to display only the relevant steps based on operating mode, plant status, and the task at hand. A dynamic presentation of the procedure (also known as a context-sensitive procedure) will guide the user down the path of relevant steps based on the current conditions. This feature will reduce the user's workload and inherently reduce the risk of incorrectly marking a step as not applicable and the risk of incorrectly performing a step that should be marked as not applicable. As part of the Department of Energy's (DOE) Light Water Reactors Sustainability Program

  3. Linking process, structure, property, and performance for metal-based additive manufacturing: computational approaches with experimental support

    NASA Astrophysics Data System (ADS)

    Smith, Jacob; Xiong, Wei; Yan, Wentao; Lin, Stephen; Cheng, Puikei; Kafka, Orion L.; Wagner, Gregory J.; Cao, Jian; Liu, Wing Kam

    2016-04-01

    Additive manufacturing (AM) methods for rapid prototyping of 3D materials (3D printing) have become increasingly popular with a particular recent emphasis on those methods used for metallic materials. These processes typically involve an accumulation of cyclic phase changes. The widespread interest in these methods is largely stimulated by their unique ability to create components of considerable complexity. However, modeling such processes is exceedingly difficult due to the highly localized and drastic material evolution that often occurs over the course of the manufacture time of each component. Final product characterization and validation are currently driven primarily by experimental means as a result of the lack of robust modeling procedures. In the present work, the authors discuss primary detrimental hurdles that have plagued effective modeling of AM methods for metallic materials while also providing logical speculation into preferable research directions for overcoming these hurdles. The primary focus of this work encompasses the specific areas of high-performance computing, multiscale modeling, materials characterization, process modeling, experimentation, and validation for final product performance of additively manufactured metallic components.

  4. System and method for controlling power consumption in a computer system based on user satisfaction

    DOEpatents

    Yang, Lei; Dick, Robert P; Chen, Xi; Memik, Gokhan; Dinda, Peter A; Shy, Alex; Ozisikyilmaz, Berkin; Mallik, Arindam; Choudhary, Alok

    2014-04-22

    Systems and methods for controlling power consumption in a computer system. For each of a plurality of interactive applications, the method changes a frequency at which a processor of the computer system runs, receives an indication of user satisfaction, determines a relationship between the changed frequency and the user satisfaction of the interactive application, and stores the determined relationship information. The determined relationship can distinguish between different users and different interactive applications. A frequency may be selected from the discrete frequencies at which the processor of the computer system runs based on the determined relationship information for a particular user and a particular interactive application running on the processor of the computer system. The processor may be adapted to run at the selected frequency.

  5. Power and Performance Management in Nonlinear Virtualized Computing Systems via Predictive Control

    PubMed Central

    Wen, Chengjian; Mu, Yifen

    2015-01-01

    The problem of power and performance management has captured growing research interest in both academic and industrial fields. Virtualization, as an advanced technology to conserve energy, has become the basic architecture for most data centers. Accordingly, more sophisticated and finer control is desired in virtualized computing systems, where multiple types of control actions exist as well as time delay effects, which make the problem complicated to formulate and solve. Furthermore, because of improvements in chips and the reduction of idle power, power consumption in modern machines shows significant nonlinearity, making linear power models (which are commonly adopted in previous work) no longer suitable. To deal with this, we build a discrete system state model, in which all control actions and time delay effects are included by state transition, and performance and power can be defined on each state. Then, we design the predictive controller, via which the quadratic cost function integrating performance and power can be dynamically optimized. Experimental results show the effectiveness of the controller. By choosing a moderate weight, a good balance can be achieved between performance and power: 99.76% of requirements can be dealt with, and power consumption can be saved by 33% compared to the case with an open-loop controller. PMID:26225769
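
    A minimal sketch of the receding-horizon idea follows, assuming a made-up three-state model (every state, metric, cost weight, and transition rule below is an illustrative assumption): enumerate short action sequences, score them with a quadratic performance/power cost, and apply only the first action of the best sequence.

    ```python
    import itertools

    # state -> (performance, power_watts); purely illustrative numbers
    METRICS = {0: (0.5, 80.0), 1: (0.8, 120.0), 2: (1.0, 180.0)}
    ACTIONS = [-1, 0, +1]                     # e.g., remove/keep/add a VM

    def cost(state, perf_target=0.9, weight=0.7):
        perf, power = METRICS[state]
        return weight * (perf - perf_target) ** 2 + (1 - weight) * (power / 200.0)

    def mpc_step(state, horizon=3):
        best_action, best_total = 0, float("inf")
        for seq in itertools.product(ACTIONS, repeat=horizon):
            s, total = state, 0.0
            for a in seq:
                s = min(max(s + a, 0), max(METRICS))   # saturate transitions
                total += cost(s)
            if total < best_total:
                best_total, best_action = total, seq[0]
        return best_action                    # receding horizon: first action only

    print(mpc_step(state=0))   # moves toward the performance target
    ```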

  6. A fission matrix based validation protocol for computed power distributions in the advanced test reactor

    SciTech Connect

    Nielsen, J. W.; Nigg, D. W.; LaPorta, A. W.

    2013-07-01

    The Idaho National Laboratory (INL) has been engaged in a significant multi-year effort to modernize the computational reactor physics tools and validation procedures used to support operations of the Advanced Test Reactor (ATR) and its companion critical facility (ATRC). Several new protocols for validation of computed neutron flux distributions and spectra, as well as for validation of computed fission power distributions, based on new experiments and well-recognized least-squares statistical analysis techniques, have been under development. In the case of power distributions, estimates of the a priori ATR-specific fuel element-to-element fission power correlation and covariance matrices are required for validation analysis. A practical method for generating these matrices using the element-to-element fission matrix is presented, along with a high-order scheme for estimating the underlying fission matrix itself. The proposed methodology is illustrated using the MCNP5 neutron transport code for the required neutronics calculations. The general approach is readily adaptable for implementation using any multidimensional stochastic or deterministic transport code that offers the required level of spatial, angular, and energy resolution in the computed solution for the neutron flux and fission source. (authors)
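
    For context, the fission matrix method in its usual form poses the eigenproblem

    \[
    \sum_{j} F_{ij}\, s_{j} = k\, s_{i},
    \]

    where $F_{ij}$ is the expected number of next-generation fission neutrons produced in element $i$ per fission neutron born in element $j$. The fundamental eigenvector $\mathbf{s}$ gives the element-wise fission source, from which relative element powers, and hence their correlation and covariance structure, can be estimated.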

  7. Computation of the Mutual Inductance between Air-Cored Coils of Wireless Power Transformer

    NASA Astrophysics Data System (ADS)

    Anele, A. O.; Hamam, Y.; Chassagne, L.; Linares, J.; Alayli, Y.; Djouani, K.

    2015-09-01

    Wireless power transfer is a modern technology which allows the transfer of electric power between the air-cored coils of its transformer via high-frequency magnetic fields. However, due to coil separation distance and misalignment, maximum power transfer is not guaranteed. Based on a more efficient and general model available in the literature, rederived mathematical models for evaluating the mutual inductance between circular coils with and without lateral and angular misalignment are presented. Rather than being presented numerically, the computed results are implemented graphically using MATLAB code. The results are compared with the published ones, and clarification regarding the errors made is presented. In conclusion, this study shows that the power transfer efficiency of the system can be improved if a higher-frequency alternating current is supplied to the primary coil, the reactive parts of the coils are compensated with capacitors, and ferrite cores are added to the coils.
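
    For the zero-misalignment special case, the classical Maxwell formula for two coaxial circular filaments is compact enough to sketch directly (the lateral and angular misalignment cases treated in the paper require more elaborate expressions; the coil dimensions below are arbitrary examples):

    ```python
    import numpy as np
    from scipy.special import ellipk, ellipe

    MU0 = 4e-7 * np.pi

    def mutual_inductance_coaxial(a, b, d):
        """Maxwell's formula for two coaxial circular filaments of radii
        a and b (meters) separated axially by d (meters)."""
        m = 4 * a * b / ((a + b) ** 2 + d ** 2)   # scipy uses parameter m = k^2
        k = np.sqrt(m)
        return MU0 * np.sqrt(a * b) * ((2 / k - k) * ellipk(m) - (2 / k) * ellipe(m))

    # e.g., 15 cm and 10 cm radius coils, 5 cm apart:
    print(mutual_inductance_coaxial(0.15, 0.10, 0.05))   # henries
    ```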

  8. Manual of phosphoric acid fuel cell power plant optimization model and computer program

    NASA Technical Reports Server (NTRS)

    Lu, C. Y.; Alkasab, K. A.

    1984-01-01

    An optimized cost and performance model for a phosphoric acid fuel cell power plant system was derived and developed into a modular FORTRAN computer code. Cost, energy, mass, and electrochemical analyses were combined to develop a mathematical model for optimizing the steam-to-methane ratio in the reformer, the hydrogen utilization in the PAFC, and the number of plates per stack. The nonlinear programming code, COMPUTE, was used to solve this model, in which the method of a mixed penalty function combined with Hooke and Jeeves pattern search was chosen to evaluate this specific optimization problem.
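
    Hooke and Jeeves pattern search itself is simple to sketch. The version below handles only the unconstrained search (the report couples it with a mixed penalty function to enforce constraints), and the test function is an arbitrary stand-in.

    ```python
    import numpy as np

    def hooke_jeeves(f, x0, step=0.5, shrink=0.5, tol=1e-6, max_iter=10_000):
        """Minimal Hooke-Jeeves pattern search: exploratory moves along each
        coordinate, then a pattern move through the improved point; the step
        is shrunk when no exploratory move helps."""
        def explore(base, s):
            x = base.copy()
            for i in range(len(x)):
                for delta in (s, -s):
                    trial = x.copy()
                    trial[i] += delta
                    if f(trial) < f(x):
                        x = trial
                        break
            return x

        base = np.asarray(x0, dtype=float)
        for _ in range(max_iter):
            new = explore(base, step)
            if f(new) < f(base):
                # pattern move: extrapolate along the successful direction
                pattern = explore(new + (new - base), step)
                base = pattern if f(pattern) < f(new) else new
            else:
                step *= shrink
                if step < tol:
                    break
        return base

    # e.g., minimize a smooth 2-D test function (optimum near [1, -2]):
    print(hooke_jeeves(lambda x: (x[0] - 1) ** 2 + 10 * (x[1] + 2) ** 2, [0.0, 0.0]))
    ```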

  9. 78 FR 47011 - Software Unit Testing for Digital Computer Software Used in Safety Systems of Nuclear Power Plants

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-08-02

    ... identification as Draft Regulatory Guide, DG-1208 on August 22, 2012 (77 FR 50722) for a 60-day public comment... COMMISSION Software Unit Testing for Digital Computer Software Used in Safety Systems of Nuclear Power Plants..., ``Software Unit Testing for Digital Computer Software Used in Safety Systems of Nuclear Power Plants.''...

  10. A High Performance Computing Network and System Simulator for the Power Grid: NGNS^2

    SciTech Connect

    Villa, Oreste; Tumeo, Antonino; Ciraci, Selim; Daily, Jeffrey A.; Fuller, Jason C.

    2012-11-11

    Designing and planning next generation power grid systems composed of large power distribution networks, monitoring and control networks, and autonomous generators and consumers of power requires advanced simulation infrastructures. The objective is to predict and analyze in time the behavior of networks of systems for unexpected events such as loss of connectivity, malicious attacks and power loss scenarios. This ultimately allows one to answer questions such as: "What could happen to the power grid if ...". We want to be able to answer as many questions as possible in the shortest possible time for the largest possible systems. In this paper we present a new High Performance Computing (HPC) oriented simulation infrastructure named Next Generation Network and System Simulator (NGNS2). NGNS2 allows for the distribution of a single simulation among multiple computing elements by using MPI and OpenMP threads. NGNS2 provides the extensive configuration, fault tolerance, and load balancing capabilities needed to simulate large and dynamic systems for long periods of time. We show preliminary results of the simulator running approximately two million simulated entities both on a 64-node commodity Infiniband cluster and a 48-core SMP workstation.

  11. Computer Literacy Act of 1984. Report together with Minority Views [and] Computer Literacy Act of 1983. Report together with Additional Views. To accompany H.R. 3750.

    ERIC Educational Resources Information Center

    Congress of the U.S., Washington, DC. House Committee on Education and Labor.

    These two reports contain supporting material intended to accompany the Computer Literacy Acts of 1983 and 1984 (H. R. 3750). This bill was designed to promote the use of computer technologies in elementary and secondary schools by authorizing: (1) grants to local school districts, particularly in poor areas, to purchase computer hardware; (2)…

  12. Accelerating the Gauss-Seidel Power Flow Solver on a High Performance Reconfigurable Computer

    SciTech Connect

    Byun, Jong-Ho; Ravindran, Arun; Mukherjee, Arindam; Joshi, Bharat; Chassin, David P.

    2009-09-01

    The computationally intensive power flow problem determines the voltage magnitude and phase angle at each bus in a power system, for hundreds of thousands of buses, under balanced three-phase steady-state conditions. We report an FPGA acceleration of the Gauss-Seidel based power flow solver employed in the transmission module of the GridLAB-D power distribution simulator and analysis tool. The prototype hardware is implemented on an SGI Altix-RASC system equipped with a Xilinx Virtex II 6000 FPGA. Due to capacity limitations of the FPGA, only the bus voltage calculations of the power network are implemented in hardware, while the branch current calculations are implemented in software. For a 200,000-bus system, the bus voltage calculation on the FPGA achieves a 48x speed-up for PQ buses and 62x for PV buses over an equivalent sequential software implementation. The average overall speed-up of the FPGA-CPU implementation with 100 iterations of the Gauss-Seidel power solver is 2.6x over a software implementation, with the branch calculations on the CPU accounting for 85% of the total execution time. The FPGA-CPU implementation also shows linear scaling with increase in the size of the input power network.
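
    The inner kernel moved to the FPGA is essentially the classical Gauss-Seidel bus-voltage update. The Python sketch below shows that update for PQ buses only (PV-bus handling and convergence checks are omitted, and the toy two-bus system is an illustrative assumption, not GridLAB-D's implementation):

    ```python
    import numpy as np

    def gauss_seidel_pq(Y, V, P, Q, slack=0, iters=100):
        """Classical Gauss-Seidel update for PQ buses:
        V_i <- (1/Y_ii) * ((P_i - jQ_i)/conj(V_i) - sum_{j != i} Y_ij V_j)."""
        n = len(V)
        for _ in range(iters):
            for i in range(n):
                if i == slack:
                    continue                      # slack voltage stays fixed
                s = Y[i] @ V - Y[i, i] * V[i]     # sum over j != i
                V[i] = ((P[i] - 1j * Q[i]) / np.conj(V[i]) - s) / Y[i, i]
        return V

    # Toy 2-bus system: bus 0 is slack, bus 1 a PQ load drawing 0.5 + j0.2 p.u.
    y01 = 1.0 / (0.01 + 0.05j)                    # line admittance
    Y = np.array([[y01, -y01], [-y01, y01]])
    V = np.array([1.0 + 0j, 1.0 + 0j])
    print(gauss_seidel_pq(Y, V, P=[0, -0.5], Q=[0, -0.2])[1])
    ```

    Because each bus update reads only a row of the admittance matrix and the current voltage vector, the update maps naturally onto the parallel hardware pipeline described above.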

  13. Computer simulation of industrial power systems for improving plant design and energy management

    SciTech Connect

    Delfino, B.; Denegri, G.B.; Pinceti, P.

    1987-01-01

    The growing size and complexity of industrial power systems, plus the requirement of more and more reliable operation, particularly in continuous process plants, call for the utilization of structured approaches which make use of off-line computer programs at both the design and control stages. As a general rule, the use of such computer programs has been restricted to the analysis of load-flow and fault conditions without taking into account the dynamic behavior of the system. The aim of the paper is to introduce dynamic simulation into industrial power system analysis and to point out the fall-out for the design and management of such plants. In particular, reference is made to a large steel plant, supplied from the electrical utility in connection with on-site generation; knowledge of the dynamic performance of the system is shown to provide the engineer the essential information to optimize system protection and operating reliability.

  14. Problem-Oriented Simulation Packages and Computational Infrastructure for Numerical Studies of Powerful Gyrotrons

    NASA Astrophysics Data System (ADS)

    Damyanova, M.; Sabchevski, S.; Zhelyazkov, I.; Vasileva, E.; Balabanova, E.; Dankov, P.; Malinov, P.

    2016-05-01

    Powerful gyrotrons are necessary as sources of strong microwaves for electron cyclotron resonance heating (ECRH) and electron cyclotron current drive (ECCD) of magnetically confined plasmas in various reactors (most notably ITER) for controlled thermonuclear fusion. Adequate physical models and efficient problem-oriented software packages are essential tools for numerical studies, analysis, optimization and computer-aided design (CAD) of such high-performance gyrotrons operating in a CW mode and delivering output power of the order of 1-2 MW. In this report we present the current status of our simulation tools (physical models, numerical codes, pre- and post-processing programs, etc.) as well as the computational infrastructure on which they are being developed, maintained and executed.

  15. Computer study of emergency shutdowns of a 60-kilowatt reactor Brayton space power system

    NASA Technical Reports Server (NTRS)

    Tew, R. C.; Jefferies, K. S.

    1974-01-01

    A digital computer study of emergency shutdowns of a 60-kWe reactor Brayton power system was conducted. Malfunctions considered were (1) loss of reactor coolant flow, (2) loss of Brayton system gas flow, (3) turbine overspeed, and (4) a reactivity insertion error. Loss of reactor coolant flow was the most serious malfunction for the reactor. Methods for moderating the reactor transients due to this malfunction are considered.

  16. The super-Turing computational power of plastic recurrent neural networks.

    PubMed

    Cabessa, Jérémie; Siegelmann, Hava T

    2014-12-01

    We study the computational capabilities of a biologically inspired neural model where the synaptic weights, the connectivity pattern, and the number of neurons can evolve over time rather than stay static. Our study focuses on the mere concept of plasticity of the model, so that the nature of the updates is assumed to be unconstrained. In this context, we show that the so-called plastic recurrent neural networks (RNNs) are capable of precisely the super-Turing computational power of static analog neural networks, irrespective of whether their synaptic weights are modeled by rational or real numbers, and moreover, irrespective of whether their patterns of plasticity are restricted to bi-valued updates or expressed by any other more general form of updating. Consequently, the incorporation of only bi-valued plastic capabilities in a basic model of RNNs suffices to break the Turing barrier and achieve the super-Turing level of computation. The consideration of more general mechanisms of architectural plasticity or of real synaptic weights does not further increase the capabilities of the networks. These results support the claim that the general mechanism of plasticity is crucially involved in the computational and dynamical capabilities of biological neural networks. They further show that the super-Turing level of computation reflects in a suitable way the capabilities of brain-like models of computation.

  17. 77 FR 50722 - Software Unit Testing for Digital Computer Software Used in Safety Systems of Nuclear Power Plants

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-08-22

    ... COMMISSION Software Unit Testing for Digital Computer Software Used in Safety Systems of Nuclear Power Plants... regulatory guide (DG), DG-1208, ``Software Unit Testing for Digital Computer Software used in Safety Systems... revision endorses, with clarifications, the enhanced consensus practices for testing of computer...

  18. 77 FR 50720 - Test Documentation for Digital Computer Software Used in Safety Systems of Nuclear Power Plants

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-08-22

    ... COMMISSION Test Documentation for Digital Computer Software Used in Safety Systems of Nuclear Power Plants... regulatory guide (DG), DG-1207, ``Test Documentation for Digital Computer Software used in Safety Systems of... software and computer systems as described in the Institute of Electrical and Electronics Engineers...

  19. Integrated Computing, Communication, and Distributed Control of Deregulated Electric Power Systems

    SciTech Connect

    Bajura, Richard; Feliachi, Ali

    2008-09-24

    Restructuring of the electricity market has affected all aspects of the power industry from generation to transmission, distribution, and consumption. Transmission circuits, in particular, are stressed, often exceeding their stability limits, because of the difficulty in building new transmission lines due to environmental concerns and financial risk. Deregulation has resulted in the need for tighter control strategies to maintain reliability even in the event of considerable structural changes, such as loss of a large generating unit or a transmission line, and changes in loading conditions due to continuously varying power consumption. Our research efforts under the DOE EPSCoR Grant focused on Integrated Computing, Communication and Distributed Control of Deregulated Electric Power Systems. This research is applicable to operating and controlling modern electric energy systems. The controls developed by APERC provide for a more efficient, economical, reliable, and secure operation of these systems. Under this program, we developed distributed control algorithms suitable for large-scale geographically dispersed power systems, as well as economic tools to evaluate their effectiveness and impact on power markets. Progress was made in the development of distributed intelligent control agents for reliable and automated operation of integrated electric power systems. The methodologies employed combine information technology, control and communication, agent technology, and power systems engineering in the development of these agents. In the event of scheduled load changes or unforeseen disturbances, the power system is expected to minimize the effects and costs of disturbances and to maintain critical infrastructure operational.

  20. Stellar wind-magnetosphere interaction at exoplanets: computations of auroral radio powers

    NASA Astrophysics Data System (ADS)

    Nichols, J. D.; Milan, S. E.

    2016-09-01

    We present calculations of the auroral radio powers expected from exoplanets with magnetospheres driven by an Earth-like magnetospheric interaction with the solar wind. Specifically, we compute the twin-cell vortical ionospheric flows, currents, and resulting radio powers arising from a Dungey cycle process driven by dayside and nightside magnetic reconnection, as a function of planetary orbital distance and magnetic field strength. We include saturation of the magnetospheric convection, as observed at the terrestrial magnetosphere, and we present power-law approximations for the convection potentials, radio powers, and spectral flux densities. We specifically consider a solar-age system and a young (1 Gyr) system. We show that the radio power increases with magnetic field strength for magnetospheres with saturated convection potential, and broadly decreases with increasing orbital distance. We show that the magnetospheric convection at hot Jupiters will be saturated, and thus unable to dissipate the full available incident Poynting flux, such that the magnetic Radiometric Bode's Law (RBL) substantially overestimates the radio powers for hot Jupiters. Our radio powers for hot Jupiters with field strengths of 0.1-10 BJ orbiting a Sun-like star are ~5-1300 TW, while competing effects yield essentially identical powers for hot Jupiters orbiting a young Sun-like star. However, particularly for planets with weaker magnetic fields, our powers are higher at larger orbital distances than given by the RBL, and many planetary configurations are expected to be detectable using the SKA.

  1. A Gateway for Phylogenetic Analysis Powered by Grid Computing Featuring GARLI 2.0

    PubMed Central

    Bazinet, Adam L.; Zwickl, Derrick J.; Cummings, Michael P.

    2014-01-01

    We introduce molecularevolution.org, a publicly available gateway for high-throughput, maximum-likelihood phylogenetic analysis powered by grid computing. The gateway features a garli 2.0 web service that enables a user to quickly and easily submit thousands of maximum likelihood tree searches or bootstrap searches that are executed in parallel on distributed computing resources. The garli web service allows one to easily specify partitioned substitution models using a graphical interface, and it performs sophisticated post-processing of phylogenetic results. Although the garli web service has been used by the research community for over three years, here we formally announce the availability of the service, describe its capabilities, highlight new features and recent improvements, and provide details about how the grid system efficiently delivers high-quality phylogenetic results. [garli, gateway, grid computing, maximum likelihood, molecular evolution portal, phylogenetics, web service.] PMID:24789072

  2. Integration of distributed plant process computer systems to nuclear power generation facilities

    SciTech Connect

    Bogard, T.; Finlay, K.

    1996-11-01

    Many operating nuclear power generation facilities are replacing their plant process computers. Such replacement projects are driven by equipment obsolescence issues and associated objectives to improve plant operability, increase plant information access, improve man-machine interface characteristics, and reduce operation and maintenance costs. This paper describes a few recently completed and on-going replacement projects, with emphasis upon the application of integrated distributed plant process computer systems. These recent projects illustrate how variations in distributed system design can address needs for flexibility, open architecture, and integration of technological advancements in instrumentation and control technology. Architectural considerations for optimal integration of the plant process computer and plant process instrumentation and control are evident from the variations in design features.

  3. Computer-Aided Modeling and Analysis of Power Processing Systems (CAMAPPS), phase 1

    NASA Technical Reports Server (NTRS)

    Kim, S.; Lee, J.; Cho, B. H.; Lee, F. C.

    1986-01-01

    The large-signal behaviors of a regulator depend largely on the type of power circuit topology and control. Thus, for maximum flexibility, it is best to develop models for each functional block as independent modules. A regulator can then be configured by collecting appropriate pre-defined modules for each functional block. In order to complete the component model generation for a comprehensive spacecraft power system, the following modules were developed: solar array switching unit and control, shunt regulators, and battery discharger. The capability of each module is demonstrated using a simplified Direct Energy Transfer (DET) system. Large-signal behaviors of solar array power systems were analyzed, including the stability of the solar array system operating points with a nonlinear load. The state-plane analysis illustrates trajectories of the system operating point under various conditions. Stability and transient responses of the system operating near the solar array's maximum power point are also analyzed. The solar array system mode of operation is described using the DET spacecraft power system, and the DET system is simulated for various operating conditions. Transfer of the software program CAMAPPS (Computer Aided Modeling and Analysis of Power Processing Systems) to NASA/GSFC (Goddard Space Flight Center) was accomplished.

  4. Building ceramics with an addition of pulverized combustion fly ash from the thermal power plant Nováky

    NASA Astrophysics Data System (ADS)

    Húlan, Tomáš; Trník, Anton; Medved, Igor; Štubňa, Igor; Kaljuvee, Tiit

    2016-07-01

    Pulverized combustion fly ash (PFA) from the Power plant Nováky (Slovakia) is analyzed for its potential use in the production of building ceramics. Three materials are used to prepare the mixtures: illite-rich clay (IRC), PFA, and IRC fired at 1000 °C (called grog). The mixtures contain 60 % IRC and 40 % of a non-plastic component (grog or PFA). Various amounts of the grog are replaced by PFA, and the effect of this substitution is studied. Thermal analyses (TGA, DTA, thermodilatometry, and dynamical thermomechanical analysis) are used to analyze the processes occurring during firing. The flexural strength and thermal conductivity are determined at room temperature after firing in the temperature interval from 800 to 1100 °C. The results show that an addition of PFA slightly decreases the flexural strength. The thermal conductivity and porosity are practically unaffected by the presence of PFA. Thus, PFA from the Power plant Nováky is a convenient non-plastic component for manufacturing building ceramics.

  5. Optimal welding parameters for very high power ultrasonic additive manufacturing of smart structures with aluminum 6061 matrix

    NASA Astrophysics Data System (ADS)

    Wolcott, Paul J.; Hehr, Adam; Dapino, Marcelo J.

    2014-03-01

    Ultrasonic additive manufacturing (UAM) is a recent solid state manufacturing process that combines additive joining of thin metal tapes with subtractive milling operations to generate near net shape metallic parts. Due to the minimal heating during the process, UAM is a proven method of embedding Ni-Ti, Fe-Ga, and PVDF to create active metal matrix composites. Recently, advances in the UAM process utilizing 9 kW very high power (VHP) welding have improved bonding properties, enabling joining of high strength materials previously unweldable with 1 kW low power UAM. Consequently, a design of experiments study was conducted to optimize welding conditions for aluminum 6061 components. This understanding is critical in the design of UAM parts containing smart materials. Build parameters, including weld force, weld speed, amplitude, and temperature, were varied based on a Taguchi experimental design matrix and tested for mechanical strength. Optimal weld parameters were identified with statistical methods including a generalized linear model for analysis of variance (ANOVA), mean effects plots, and interaction effects plots.

  6. Computer Assisted Fluid Power Instruction: A Comparison of Hands-On and Computer-Simulated Laboratory Experiences for Post-Secondary Students

    ERIC Educational Resources Information Center

    Wilson, Scott B.

    2005-01-01

    The primary purpose of this study was to examine the effectiveness of utilizing a combination of lecture and computer resources to train personnel to assume roles as hydraulic system technicians and specialists in the fluid power industry. This study compared computer simulated laboratory instruction to traditional hands-on laboratory instruction,…

  7. Measured energy savings and performance of power-managed personal computers and monitors

    SciTech Connect

    Nordman, B.; Piette, M.A.; Kinney, K.

    1996-08-01

    Personal computers and monitors are estimated to use 14 billion kWh/year of electricity, with power management potentially saving $600 million/year by the year 2000. The effort to capture these savings is led by the US Environmental Protection Agency's Energy Star program, which specifies a 30 W maximum demand for the computer and for the monitor when in a 'sleep' or idle mode. In this paper the authors discuss measured energy use and estimated savings for power-managed (Energy Star compliant) PCs and monitors. They collected electricity use measurements of six power-managed PCs and monitors in their office and five from two other research projects. The devices are diverse in machine type, use patterns, and context. The analysis method estimates the time spent in each system operating mode (off, low-, and full-power) and combines these with real power measurements to derive hours of use per mode, energy use, and energy savings. Three schedules are explored in the 'As-operated', 'Standardized', and 'Maximum' savings estimates. Energy savings are established by comparing the measurements to a baseline with power management disabled. As-operated energy savings for the eleven PCs and monitors ranged from zero to 75 kWh/year. Under the standard operating schedule (on 20% of nights and weekends), the savings are about 200 kWh/year. An audit of power management features and configurations for several dozen Energy Star machines found only 11% of CPUs fully enabled, while about two thirds of monitors were successfully power managed. The highest priority for greater power management savings is to enable monitors, as opposed to CPUs, since they are generally easier to configure, less likely to interfere with system operation, and offer greater savings. The difficulty of properly configuring PCs and monitors is the largest current barrier to achieving the savings potential from power management.
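
    The mode-based estimation method lends itself to a few lines of arithmetic. The sketch below is a hedged illustration of that bookkeeping; the per-mode power draws and schedule fractions are invented example values, not the paper's measurements.

    # Combine hours spent in each operating mode with per-mode power draws.
    MODE_POWER_W = {"off": 2.0, "low": 25.0, "full": 120.0}  # assumed draws
    HOURS_PER_YEAR = 8760

    def annual_energy_kwh(mode_fractions, mode_power_w=MODE_POWER_W):
        """Energy use given the fraction of the year spent in each mode."""
        assert abs(sum(mode_fractions.values()) - 1.0) < 1e-9
        watts = sum(mode_power_w[m] * f for m, f in mode_fractions.items())
        return watts * HOURS_PER_YEAR / 1000.0

    # Power management enabled: nights and weekends mostly in low-power mode.
    managed = annual_energy_kwh({"off": 0.30, "low": 0.45, "full": 0.25})
    # Baseline: power management disabled, machine at full power when on.
    baseline = annual_energy_kwh({"off": 0.30, "low": 0.00, "full": 0.70})
    print(f"managed {managed:.0f} kWh/yr, baseline {baseline:.0f} kWh/yr, "
          f"savings {baseline - managed:.0f} kWh/yr")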

  8. The Effect of Emphasizing Mathematical Structure in the Acquisition of Whole Number Computation Skills (Addition and Subtraction) By Seven- and Eight-Year Olds: A Clinical Investigation.

    ERIC Educational Resources Information Center

    Uprichard, A. Edward; Collura, Carolyn

    This investigation sought to determine the effect of emphasizing mathematical structure in the acquisition of computational skills by seven- and eight-year-olds. The meaningful development-of-structure approach emphasized closure, commutativity, associativity, and the identity element of addition; the inverse relationship between addition and…

  9. Towards Real-Time High Performance Computing For Power Grid Analysis

    SciTech Connect

    Hui, Peter SY; Lee, Barry; Chikkagoudar, Satish

    2012-11-16

    Real-time computing has traditionally been considered largely in the context of single-processor and embedded systems, and indeed, the terms real-time computing, embedded systems, and control systems are often mentioned in closely related contexts. However, real-time computing in the context of multinode systems, specifically high-performance, cluster-computing systems, remains relatively unexplored. Imposing real-time constraints on a parallel (cluster) computing environment introduces a variety of challenges with respect to the formal verification of the system's timing properties. In this paper, we give a motivating example that demonstrates the need for such a system, an application that estimates the electromechanical states of the power grid, and we introduce a formal method for verifying certain temporal properties within a system of parallel processes. We describe our work towards a full real-time implementation of the target application: our progress towards extracting a key mathematical kernel from the application, the formal process by which we analyze the intricate timing behavior of the processes on the cluster, and timing measurements taken on our test cluster to demonstrate the use of these concepts.

  10. A 10-kW SiC Inverter with A Novel Printed Metal Power Module With Integrated Cooling Using Additive Manufacturing

    SciTech Connect

    Chinthavali, Madhu Sudhan; Ayers, Curtis William; Campbell, Steven L; Wiles, Randy H; Ozpineci, Burak

    2014-01-01

    With efforts to reduce the cost, size, and thermal management systems of the power electronics drivetrain in hybrid electric vehicles (HEVs) and plug-in hybrid electric vehicles (PHEVs), wide band gap semiconductors, including silicon carbide (SiC), have been identified as a possible partial solution. This paper focuses on the development of a 10-kW all-SiC inverter built around a high-power-density printed metal power module with cooling integrated through additive manufacturing techniques. This is the first heat sink ever printed for a power electronics application. About 50% of the inverter was built using additive manufacturing techniques.

  11. Development and Evaluation of the Diagnostic Power for a Computer-Based Two-Tier Assessment

    NASA Astrophysics Data System (ADS)

    Lin, Jing-Wen

    2016-06-01

    This study adopted a quasi-experimental design with follow-up interviews to develop a computer-based two-tier assessment (CBA) on the science topic of electric circuits and to evaluate the diagnostic power of the assessment. Three assessment formats (i.e., paper-and-pencil, static computer-based, and dynamic computer-based tests) using two-tier items were administered to Grade 4 (n = 90) and Grade 5 (n = 86) students, respectively. One-way ANCOVA was conducted to investigate whether the different assessment formats affected these students' posttest scores on both the phenomenon and reason tiers, and confidence ratings for answers were assessed to diagnose the nature of students' responses (i.e., scientific answer, guessing, alternative conceptions, or knowledge deficiency). Follow-up interviews were used to explore whether and how the various CBA representations influenced the responses of students in both grades. Results showed that the CBA, in particular the dynamic representation format, allowed students who lacked prior knowledge (Grade 4) to easily understand the question stems. The various CBA representations also potentially encouraged students who already had learning experience (Grade 5) to enhance the metacognitive judgment of their responses. Therefore, CBA could reduce students' use of test-taking strategies and provide better diagnostic power for a two-tier instrument than the traditional paper-based version.

  12. Computer-controlled flux monitoring using high-density, self-powered incore detectors

    SciTech Connect

    Crowe, R.D.; Samuel, T.J.

    1989-11-01

    The paper describes the integration of computer hardware and software to automate the operation of an advanced incore neutron monitoring system installed in the DOE's Hanford Site N Reactor. The system, which is unique in the nuclear industry, will use a system of networked computers to automatically record data from 640 self-powered detectors positioned throughout the reactor core. Selected detectors will be monitored 10 times per second, providing real-time indication of the reactor condition for the operators and detailed core performance data for later offline analysis. The data acquisition computer coordinates the transfer of acquired data for graphic display of flux information, deconvolutes and converts detector currents to flux and power levels, and records and stores data from the movable detector system. This system, supplied by Babcock & Wilcox, represents the present level of technology in reactor flux monitoring and associated data acquisition. It performs several competing tasks simultaneously, requiring significant multitasking capability. The demands placed on the central processor by this flux monitoring application will be presented. 2 refs., 5 figs.

  13. Use of Transition Modeling to Enable the Computation of Losses for Variable-Speed Power Turbine

    NASA Technical Reports Server (NTRS)

    Ameri, Ali A.

    2012-01-01

    To investigate the penalties associated with using a variable speed power turbine (VSPT) in a rotorcraft capable of vertical takeoff and landing, various analysis tools are required. Such analysis tools must be able to model the flow accurately within the operating envelope of the VSPT. For power turbines, this envelope is characterized by low Reynolds numbers and a wide range of incidence angles, positive and negative, due to the variation in shaft speed at relatively fixed corrected flows. The flow in the turbine passage is expected to be transitional and separated at high incidence. The turbulence model of Walters and Leylek was implemented in the NASA Glenn-HT code to enable a more accurate analysis of such flows. Two-dimensional heat transfer predictions of flat plate flow and two- and three-dimensional heat transfer predictions on a turbine blade were performed and are reported herein. Heat transfer computations were performed because heat transfer is a good marker for transition. The final goal is to be able to compute the aerodynamic losses. Armed with the new transition model, total pressure losses for three-dimensional flow of an Energy Efficient Engine (E3) tip section cascade were computed for a range of incidence angles in anticipation of the experimental data. The results obtained form a loss bucket for the chosen blade.

  14. Computational modeling of pulsed-power-driven magnetized target fusion experiments

    SciTech Connect

    Sheehey, P.; Kirkpatrick, R.; Lindemuth, I.

    1995-08-01

    Direct magnetic drive using electrical pulsed power has been considered impractically slow for traditional inertial confinement implosion of fusion targets. However, if the target contains a preheated, magnetized plasma, magnetothermal insulation may allow the near-adiabatic compression of such a target to fusion conditions on a much slower time scale. 100-MJ-class explosive flux compression generators, with implosion kinetic energies far beyond those available with conventional fusion drivers, are an inexpensive means to investigate such magnetized target fusion (MTF) systems. One means of obtaining the preheated and magnetized plasma required for an MTF system is the recently reported "MAGO" concept. MAGO is a unique, explosive-pulsed-power driven discharge in two cylindrical chambers joined by an annular nozzle. Joint Russian-American MAGO experiments have reported D-T neutron yields in excess of 10¹³ from this plasma preparation stage alone, without going on to the proposed separately driven implosion of the main plasma chamber. Two-dimensional MHD computational modeling of MAGO discharges shows good agreement with experiment. The calculations suggest that after the observed neutron pulse, a diffuse Z-pinch plasma with temperature in excess of 100 eV is created, which may be suitable for subsequent MTF implosion in a heavy liner magnetically driven by explosive pulsed power. Other MTF concepts, such as fiber-initiated Z-pinch target plasmas, are also being computationally and theoretically evaluated. The status of our modeling efforts is reported.

  15. A more efficient formulation for computation of the maximum loading points in electric power systems

    SciTech Connect

    Chiang, H.D.; Jean-Jumeau, R.

    1995-05-01

    This paper presents a more efficient formulation for computation of the maximum loading points. A distinguishing feature of the new formulation is that it is of dimension (n + 1), instead of the existing formulation of dimension (2n + 1), for n-dimensional load flow equations. This feature makes computation of the maximum loading points very inexpensive in comparison with those required in the existing formulation. A theoretical basis for the new formulation is provided. The new problem formulation is derived by using a simple reparameterization scheme and exploiting the special properties of the power flow model. Moreover, the proposed test function is shown to be monotonic in the vicinity of a maximum loading point. Therefore, it allows one to monitor the approach to maximum loading points during the solution search process. Simulation results on a 234-bus system are presented.

  16. Computer program for calculating flow parameters and power requirements for cryogenic wind tunnels

    NASA Technical Reports Server (NTRS)

    Dress, D. A.

    1985-01-01

    A computer program has been written that performs the flow parameter calculations for cryogenic wind tunnels which use nitrogen as the test gas. The flow parameters calculated include static pressure, static temperature, compressibility factor, ratio of specific heats, dynamic viscosity, total and static density, velocity, dynamic pressure, mass-flow rate, and Reynolds number. Simplifying assumptions have been made so that the calculations of Reynolds number, as well as the other flow parameters, can be made on relatively small desktop digital computers. The program, which also includes various power calculations, has been developed to the point where it has become a very useful tool for the users and possible future designers of fan-driven continuous-flow cryogenic wind tunnels.
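
    For a sense of what such a flow-parameter calculation involves, here is a minimal sketch using ideal-gas isentropic relations for nitrogen. The real program uses real-gas properties (compressibility factor, etc.); the constants and the Sutherland-type viscosity fit below are simplified assumptions.

    import math

    GAMMA = 1.4     # ratio of specific heats, ideal diatomic gas
    R_N2 = 296.8    # J/(kg K), specific gas constant of nitrogen

    def flow_parameters(mach, p_total_pa, t_total_k, chord_m=0.1):
        t_static = t_total_k / (1 + 0.5 * (GAMMA - 1) * mach**2)
        p_static = p_total_pa * (t_static / t_total_k) ** (GAMMA / (GAMMA - 1))
        rho = p_static / (R_N2 * t_static)                  # static density
        v = mach * math.sqrt(GAMMA * R_N2 * t_static)       # velocity
        q = 0.5 * rho * v**2                                # dynamic pressure
        mu = 1.417e-6 * t_static**1.5 / (t_static + 111.0)  # viscosity fit
        return {"T_K": t_static, "p_Pa": p_static, "q_Pa": q,
                "Re": rho * v * chord_m / mu}

    # Cryogenic operation raises density and lowers viscosity, so the
    # achievable Reynolds number climbs sharply at low temperature:
    print(flow_parameters(0.8, 3.0e5, 110.0))   # cold
    print(flow_parameters(0.8, 3.0e5, 300.0))   # ambient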

  17. Large-Scale Distributed Computational Fluid Dynamics on the Information Power Grid Using Globus

    NASA Technical Reports Server (NTRS)

    Barnard, Stephen; Biswas, Rupak; Saini, Subhash; VanderWijngaart, Robertus; Yarrow, Maurice; Zechtzer, Lou; Foster, Ian; Larsson, Olle

    1999-01-01

    This paper describes an experiment in which a large-scale scientific application developed for tightly-coupled parallel machines is adapted to the distributed execution environment of the Information Power Grid (IPG). A brief overview of the IPG and a description of the computational fluid dynamics (CFD) algorithm are given. The Globus metacomputing toolkit is used as the enabling device for the geographically-distributed computation. Modifications related to latency hiding and load balancing were required for an efficient implementation of the CFD application in the IPG environment. Performance results on a pair of SGI Origin 2000 machines indicate that real scientific applications can be effectively implemented on the IPG; however, a significant amount of continued effort is required to make such an environment useful and accessible to scientists and engineers.

  19. Simple and effective calculations about spectral power distributions of outdoor light sources for computer vision.

    PubMed

    Tian, Jiandong; Duan, Zhigang; Ren, Weihong; Han, Zhi; Tang, Yandong

    2016-04-01

    The spectral power distributions (SPD) of outdoor light sources are not constant over time and atmospheric conditions, which causes the appearance variation of a scene and common natural illumination phenomena such as twilight, shadow, and haze/fog. Calculating the SPD of outdoor light sources at different times (or zenith angles) and under different atmospheric conditions is of interest to physically-based vision. In this paper, for computer vision and its applications, we propose a feasible, simple, and effective SPD calculation method based on analyzing the transmittance functions of absorption and scattering along the path of solar radiation through the atmosphere in the visible spectrum. Compared with previous SPD calculation methods, our model has fewer parameters and is accurate enough to be applied directly in computer vision. It can be applied in computer vision tasks including spectral inverse calculation, lighting conversion, and shadowed image processing. The experimental results of the applications demonstrate that our calculation methods have practical value in computer vision, establishing a bridge between image and physical environmental information, e.g., time, location, and weather conditions. PMID:27137018
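
    A toy version of the transmittance idea conveys the structure: attenuate a top-of-atmosphere spectrum by wavelength-dependent scattering along the slant path. The optical-depth terms and coefficients below are illustrative textbook-style approximations, not the authors' published model.

    import math

    def relative_airmass(zenith_deg):
        # Simple secant approximation, adequate away from the horizon.
        return 1.0 / math.cos(math.radians(zenith_deg))

    def ground_spd(spd_top, zenith_deg, turbidity=1.0):
        """spd_top maps wavelength (nm) to irradiance; returns attenuated SPD."""
        m = relative_airmass(zenith_deg)
        out = {}
        for lam_nm, e0 in spd_top.items():
            lam_um = lam_nm / 1000.0
            tau_rayleigh = 0.0087 / lam_um**4             # molecular scattering
            tau_aerosol = 0.05 * turbidity / lam_um**1.3  # Angstrom-type term
            out[lam_nm] = e0 * math.exp(-m * (tau_rayleigh + tau_aerosol))
        return out

    spd0 = {450: 2.0, 550: 1.9, 650: 1.6}     # illustrative irradiance values
    print(ground_spd(spd0, zenith_deg=60.0))  # blue attenuates fastest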

  1. Influences of Bi2O3 additive on the microstructure, permeability, and power loss characteristics of Ni-Zn ferrites

    NASA Astrophysics Data System (ADS)

    Su, Hua; Tang, Xiaoli; Zhang, Huaiwu; Jia, Lijun; Zhong, Zhiyong

    2009-10-01

    Nickel-zinc ferrite materials containing different Bi2O3 concentrations have been prepared by the conventional ceramic technique. Micrographs have clearly revealed that the Bi2O3 additive promoted grain growth. When the Bi2O3 content reached 0.15 wt%, a dual microstructure with both small grains (<5 μm) and some extremely large grains (>50 μm) appeared. With higher Bi2O3 content, the samples exhibited a very large average grain size of more than 30 μm. The initial permeability gradually decreased with increasing Bi2O3 content. When the Bi2O3 content exceeded 0.15 wt%, the permeability gradually decreased with frequency due to the low-frequency resonance induced by the large grain size. Neither the sintering density nor the saturation magnetization was obviously influenced by the Bi2O3 content or microstructure of the samples. However, power loss (Pcv) characteristics were evidently influenced. At low flux density, the sample with 0.10 wt% Bi2O3, which was characterized by an average grain size of 3-4 μm and few closed pores, displayed the lowest Pcv, irrespective of frequency. When the flux density was equal to or greater than the critical value of 40 mT, the sample with 0.20 wt% Bi2O3, which had the largest average grain size, displayed the lowest Pcv.

  2. Unraveling the Fundamental Mechanisms of Solvent-Additive-Induced Optimization of Power Conversion Efficiencies in Organic Photovoltaic Devices.

    PubMed

    Herath, Nuradhika; Das, Sanjib; Zhu, Jiahua; Kumar, Rajeev; Chen, Jihua; Xiao, Kai; Gu, Gong; Browning, James F; Sumpter, Bobby G; Ivanov, Ilia N; Lauter, Valeria

    2016-08-10

    The realization of controllable morphologies of bulk heterojunctions (BHJ) in organic photovoltaics (OPVs) is one of the key factors enabling high-efficiency devices. We provide new insights into the fundamental mechanisms essential for the optimization of power conversion efficiencies (PCEs) with additive processing applied to the PBDTTT-CF:PC71BM system. We have studied the underlying mechanisms by monitoring the 3D nanostructural modifications in BHJs and correlating the modifications with optical analysis and theoretical modeling of charge transport. Our results demonstrate profound effects of diiodooctane (DIO) on morphology and charge transport in the active layers. For small amounts of DIO (<3 vol %), DIO promotes the formation of a well-mixed donor-acceptor compact film and augments charge transfer and PCE. In contrast, for large amounts of DIO (>3 vol %), DIO facilitates a loosely packed mixed morphology with large clusters of PC71BM, leading to deterioration in PCE. Theoretical modeling of charge transport reveals that DIO increases the mobility of electrons and holes (the charge carriers) by affecting the energetic disorder and the electric field dependence of the mobility. Our findings show the implications of phase separation and carrier transport pathways for achieving optimal device performance. PMID:27403964

  4. Power-law defect energy in a single-crystal gradient plasticity framework: a computational study

    NASA Astrophysics Data System (ADS)

    Bayerschen, E.; Böhlke, T.

    2016-07-01

    A single-crystal gradient plasticity model is presented that includes a power-law type defect energy depending on the gradient of an equivalent plastic strain. Numerical regularization for the case of vanishing gradients is employed in the finite element discretization of the theory. Three exemplary choices of the defect energy exponent are compared in finite element simulations of elastic-plastic tricrystals under tensile loading. The influence of the power-law exponent is discussed in relation to the distribution of gradients and with regard to size effects. In addition, an analytical solution is presented for the single-slip case, supporting the numerical results. The influence of the power-law exponent is contrasted with the influence of the normalization constant.
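
    For orientation, a power-law defect energy of the kind described is often written in a form such as the following (the normalization and the internal-length factor here are assumptions, not necessarily the paper's exact expression):

        W_D(\nabla \gamma_{eq}) = \frac{c_0}{p} \, l^{p} \, \lVert \nabla \gamma_{eq} \rVert^{p}, \qquad p > 0,

    so that p = 2 recovers the familiar quadratic defect energy, while for p < 2 the derived gradient stress scales like \lVert \nabla \gamma_{eq} \rVert^{p-1} and becomes singular as the gradient vanishes, which is plausibly why the numerical regularization mentioned in the abstract is needed.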

  5. Computer-controlled, variable-frequency power supply for driving multipole ion guides.

    PubMed

    Robbins, Matthew D; Yoon, Oh Kyu; Zuleta, Ignacio; Barbula, Griffin K; Zare, Richard N

    2008-03-01

    A high voltage, variable-frequency driver circuit for powering resonant multipole ion guides is presented. Two key features of this design are (1) the use of integrated circuits in the driver stage and (2) the use of a stepper motor for tuning a large variable capacitor in the resonant stage. In the present configuration the available frequency range spans a factor of 2. The actual values of the minimum and maximum frequencies depend on the chosen inductor and the capacitance of the ion guide. Feedback allows for stabilized, computer-adjustable rf amplitudes over the range of 5-500 V. The rf power supply was characterized over the range of 350-750 kHz and evaluated by driving a quadrupole ion guide in an electrospray time-of-flight mass spectrometer.
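
    A quick calculation shows why a variable capacitor yields a factor-of-2 frequency span: the resonant frequency of an LC tank is f0 = 1/(2*pi*sqrt(L*C)), so doubling f0 requires quartering the total capacitance. The component values below are illustrative assumptions chosen to land roughly in the 350-750 kHz range mentioned above, not values from the paper.

    import math

    def resonant_frequency_hz(l_henry, c_farad):
        return 1.0 / (2.0 * math.pi * math.sqrt(l_henry * c_farad))

    L = 1.0e-3        # assumed inductor, 1 mH
    C_GUIDE = 30e-12  # assumed fixed capacitance contributed by the ion guide

    for c_var in (20e-12, 200e-12):  # ends of the variable capacitor's travel
        c_total = C_GUIDE + c_var
        print(f"C_total = {c_total * 1e12:.0f} pF -> "
              f"f0 = {resonant_frequency_hz(L, c_total) / 1e3:.0f} kHz")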

  6. Assessment of computer codes for VVER-440/213-type nuclear power plants

    SciTech Connect

    Szabados, L.; Ezsol, Gy.; Perneczky

    1995-09-01

    Nuclear power plants of the VVER-440/213 type, designed in the former USSR, have a number of special features. As a consequence of these features, the transient behaviour of such a reactor system differs from that of a PWR system. To study the transient behaviour of the Hungarian Paks Nuclear Power Plant of the VVER-440/213 type, both analytical and experimental activities have been performed. The experimental basis of the research is the PMK-2 integral-type test facility, which is a scaled-down model of the plant. Experiments performed on this facility have been used to assess thermal-hydraulic system codes. Four tests were selected for "Standard Problem Exercises" of the International Atomic Energy Agency. Results of the 4th Exercise, of high international interest, are presented in the paper, focusing on the essential findings of the assessment of computer codes.

  7. Computer-assisted high-speed balancing of T53 and T55 power turbines

    NASA Technical Reports Server (NTRS)

    Pojeta, T. J.; Walter, T. J.

    1979-01-01

    Standard overhaul procedures for U.S. Army helicopter engines require operational vibration acceptance testing after rebuild. Engines frequently experience vibrations which exceed allowable overhaul work requirement limits. The rework/retest cycle for these engines constitutes a significant cost penalty to the overhaul center. This paper reviews both analytical and test data which indicate bending critical speeds within the operating speed range of the low-speed power turbine rotor as the cause of most test cell rejections. High-speed balancing techniques are applicable and are capable of significantly reducing this reject rate. A complete prototype computer-assisted high-speed balancing system for assembled T53 and T55 power turbine rotors is described.

  8. Numerical ray-tracing approach with laser intensity distribution for LIDAR signal power function computation

    NASA Astrophysics Data System (ADS)

    Shi, Guangyuan; Li, Song; Huang, Ke; Li, Zile; Zheng, Guoxing

    2016-10-01

    We have developed a new numerical ray-tracing approach for LIDAR signal power function computation, in which the light round-trip propagation is analyzed by geometrical optics and a simple experiment is employed to acquire the laser intensity distribution. It is relatively more accurate and flexible than previous methods. We emphatically discuss the relationship between the inclined angle and the dynamic range of detector output signal in biaxial LIDAR system. Results indicate that an appropriate negative angle can compress the signal dynamic range. This technique has been successfully proved by comparison with real measurements.

  10. Computer program for thermodynamic analysis of open cycle multishaft power system with multiple reheat and intercool

    NASA Technical Reports Server (NTRS)

    Glassman, A. J.

    1974-01-01

    A computer program to analyze power systems having any number of shafts up to a maximum of five is presented. On each shaft there can be as many as five compressors and five turbines, along with any specified number of intervening intercoolers and reheaters. A recuperator can be included, and turbine coolant flow can be accounted for. Any fuel consisting entirely of hydrogen and/or carbon can be used. The program is valid for maximum temperatures up to about 2000 K (3600 R). The system description, the analysis method, a detailed explanation of program input and output including an illustrative example, a dictionary of program variables, and the program listing are provided.
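
    A small fragment of the kind of cycle bookkeeping such a program performs, shown here for the benefit intercooling brings to compression work under ideal-gas assumptions; the working-fluid properties, efficiency, and pressure ratio are invented example values.

    CP = 1005.0   # J/(kg K), air-like working fluid (assumed)
    GAMMA = 1.4

    def compressor_work(t_in_k, pressure_ratio, efficiency=0.88):
        """Specific work (J/kg) to compress an ideal gas in one stage."""
        t_out_ideal = t_in_k * pressure_ratio ** ((GAMMA - 1) / GAMMA)
        return CP * (t_out_ideal - t_in_k) / efficiency

    T_IN, PR = 300.0, 16.0
    single = compressor_work(T_IN, PR)
    # Two stages at sqrt(PR) each, intercooled back to T_IN between stages:
    two_stage = 2 * compressor_work(T_IN, PR ** 0.5)
    print(f"single stage: {single / 1000:.0f} kJ/kg, "
          f"intercooled two-stage: {two_stage / 1000:.0f} kJ/kg")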

  11. Computation of the power spectrum in chaotic ¼λφ⁴ inflation

    SciTech Connect

    Rojas, Clara; Villalba, Víctor M. E-mail: Victor.Villalba@monash.edu

    2012-01-01

    The phase-integral approximation devised by Fröman and Fröman is used for computing cosmological perturbations in the quartic chaotic inflationary model. The phase-integral formulas for the scalar power spectrum are explicitly obtained up to the fifth order of the phase-integral approximation. As in previous reports (Rojas 2007b, 2007c and 2009), we point out that the accuracy of the phase-integral approximation compares favorably with the numerical results and those obtained using the slow-roll and uniform approximation methods.

  12. Analysis and Design of Bridgeless Switched Mode Power Supply for Computers

    NASA Astrophysics Data System (ADS)

    Singh, S.; Bhuvaneswari, G.; Singh, B.

    2014-09-01

    Switched mode power supplies (SMPSs) used in computers need multiple isolated and stiffly regulated output dc voltages with different current ratings. These isolated multiple output dc voltages are obtained by using a multi-winding high frequency transformer (HFT). A half-bridge dc-dc converter is used here for obtaining different isolated and well regulated dc voltages. At the front end, non-isolated Single Ended Primary Inductance Converters (SEPICs) are added to improve the power quality in terms of low input current harmonics and high power factor (PF). Two non-isolated SEPICs are connected in a way that completely eliminates the need for a single-phase diode-bridge rectifier at the front end. Output dc voltages at both the non-isolated and isolated stages are controlled and regulated separately for power quality improvement. A voltage mode control approach is used in the non-isolated SEPIC stage for simple and effective control, whereas average current control is used in the second, isolated stage.
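
    As a point of reference for the SEPIC stage, in continuous conduction mode an ideal SEPIC has the steady-state conversion ratio Vout = Vin * D / (1 - D), so it can step the rectified input either up or down. The sketch below is a hedged illustration of that relation; the input voltage is an assumed example, not a design value from the paper.

    def sepic_vout(vin, duty):
        """Ideal SEPIC output voltage in continuous conduction mode."""
        assert 0.0 < duty < 1.0
        return vin * duty / (1.0 - duty)

    for d in (0.3, 0.5, 0.7):
        print(f"D = {d:.1f}: Vout = {sepic_vout(325.0, d):.0f} V "
              f"(from an assumed 325 V rectified peak)")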

  13. Design analysis and computer-aided performance evaluation of shuttle orbiter electrical power system. Volume 2: SYSTID user's guide

    NASA Technical Reports Server (NTRS)

    1974-01-01

    The manual for the use of the computer program SYSTID under the Univac operating system is presented. The computer program is used in the simulation and evaluation of the space shuttle orbiter electric power supply. The models described in the handbook are those which were available in the original versions of SYSTID. The subjects discussed are: (1) program description, (2) input language, (3) node typing, (4) problem submission, and (5) basic and power system SYSTID libraries.

  14. 17 CFR Appendix B to Part 4 - Adjustments for Additions and Withdrawals in the Computation of Rate of Return

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    ... Return Method Rate of return for a period may be calculated by computing the net performance divided by the beginning net asset value for each trading day in the period and compounding each daily rate of... commodity pool operator or commodity trading advisor may present to the Commission proposals regarding...
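
    The daily-compounding computation described here reduces to a short loop: each trading day's rate is that day's net performance divided by its beginning net asset value, and the period's rate of return compounds those daily rates. The figures below are invented for illustration.

    def period_rate_of_return(daily_begin_nav, daily_net_performance):
        rate = 1.0
        for nav, perf in zip(daily_begin_nav, daily_net_performance):
            rate *= 1.0 + perf / nav   # compound each daily rate of return
        return rate - 1.0

    navs = [1_000_000, 1_004_000, 998_000]  # beginning NAV each trading day
    perfs = [4_000, -6_000, 5_000]          # net performance each day
    print(f"period rate of return: {period_rate_of_return(navs, perfs):.4%}")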

  15. 17 CFR Appendix B to Part 4 - Adjustments for Additions and Withdrawals in the Computation of Rate of Return

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... Return Method Rate of return for a period may be calculated by computing the net performance divided by the beginning net asset value for each trading day in the period and compounding each daily rate of... commodity pool operator or commodity trading advisor may present to the Commission proposals regarding...

  16. 17 CFR Appendix B to Part 4 - Adjustments for Additions and Withdrawals in the Computation of Rate of Return

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ... Return Method Rate of return for a period may be calculated by computing the net performance divided by the beginning net asset value for each trading day in the period and compounding each daily rate of... commodity pool operator or commodity trading advisor may present to the Commission proposals regarding...

  17. 17 CFR Appendix B to Part 4 - Adjustments for Additions and Withdrawals in the Computation of Rate of Return

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    ... Return Method Rate of return for a period may be calculated by computing the net performance divided by the beginning net asset value for each trading day in the period and compounding each daily rate of... commodity pool operator or commodity trading advisor may present to the Commission proposals regarding...

  18. 17 CFR Appendix B to Part 4 - Adjustments for Additions and Withdrawals in the Computation of Rate of Return

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... Return Method Rate of return for a period may be calculated by computing the net performance divided by the beginning net asset value for each trading day in the period and compounding each daily rate of... commodity pool operator or commodity trading advisor may present to the Commission proposals regarding...

  19. Controlling the phase locking of stochastic magnetic bits for ultra-low power computation

    NASA Astrophysics Data System (ADS)

    Mizrahi, Alice; Locatelli, Nicolas; Lebrun, Romain; Cros, Vincent; Fukushima, Akio; Kubota, Hitoshi; Yuasa, Shinji; Querlioz, Damien; Grollier, Julie

    2016-07-01

    When fabricating magnetic memories, one of the main challenges is to maintain the bit stability while downscaling. Indeed, for magnetic volumes of a few thousand nm³, the energy barrier between magnetic configurations becomes comparable to the thermal energy at room temperature. Then, switches of the magnetization spontaneously occur. These volatile, superparamagnetic nanomagnets are generally considered useless. But what if we could use them as low power computational building blocks? Remarkably, they can oscillate without the need of any external dc drive, and despite their stochastic nature, they can beat in unison with an external periodic signal. Here we show that the phase locking of superparamagnetic tunnel junctions can be induced and suppressed by electrical noise injection. We develop a comprehensive model giving the conditions for synchronization, and predict that it can be achieved with a total energy cost lower than 10⁻¹³ J. Our results open the path to ultra-low power computation based on the controlled synchronization of oscillators.

  2. Direct Methanol Fuel Cell Power Supply For All-Day True Wireless Mobile Computing

    SciTech Connect

    Brian Wells

    2008-11-30

    PolyFuel has developed state-of-the-art portable fuel cell technology for the portable computing market. A novel approach to passive water recycling within the MEA has led to significant system simplification and size reduction. Miniature stack technology with very high area utilization and minimalist seals has been developed, and a highly integrated balance of plant with very low parasitic losses has been constructed around the new stack design. Demonstration prototype systems integrated with laptop computers have been shown in recent months to leading OEM computer manufacturers. PolyFuel intends to provide this technology to its customers as a reference design as a means of accelerating the commercialization of portable fuel cell technology. The primary goal of the project was to match the energy density of a commercial lithium ion battery for laptop computers. PolyFuel made large strides toward this goal and has now demonstrated 270 Wh/liter, compared with lithium ion energy densities of 300 Wh/liter. Further incremental improvements in energy density are envisioned, with additional gains of 20-30% possible in each of the next two years given further research and development.

  3. Stochastic optimal control methods for investigating the power of morphological computation.

    PubMed

    Rückert, Elmar A; Neumann, Gerhard

    2013-01-01

    One key idea behind morphological computation is that many difficulties of a control problem can be absorbed by the morphology of a robot. The performance of the controlled system naturally depends on the control architecture and on the morphology of the robot. Because of this strong coupling, most of the impressive applications in morphological computation typically apply minimalistic control architectures. Ideally, adapting the morphology of the plant and optimizing the control law interact so that finally, optimal physical properties of the system and optimal control laws emerge. As a first step toward this vision, we apply optimal control methods for investigating the power of morphological computation. We use a probabilistic optimal control method to acquire control laws, given the current morphology. We show that by changing the morphology of our robot, control problems can be simplified, resulting in optimal controllers with reduced complexity and higher performance. This concept is evaluated on a compliant four-link model of a humanoid robot, which has to keep balance in the presence of external pushes. PMID:23186345

  4. Measurement of rotary pump flow and pressure by computation of driving motor power and speed.

    PubMed

    Qian, K X; Zeng, P; Ru, W M; Yuan, H Y; Feng, Z G; Li, L

    2000-01-01

    Measurement of pump flow and pressure during ventricular assist is important but difficult to achieve. On the one hand, the pump flow and pressure are indicators of pump performance and of the physiologic status of the recipient, while also providing a control basis for the blood pump itself. On the other hand, direct measurement forces the recipient to be connected to a flow meter and a manometer, and the sensors of these meters may cause haematological problems and increase the danger of infection. A novel method for measuring the flow rate and pressure of a rotary pump has been developed recently. First, the pump is operated at several rotating speeds, and at each speed the flow rate, pump head, and motor power (voltage x current) are recorded and plotted, yielding P (motor power)-Q (pump flow) curves as well as P-H (pump head) curves. Secondly, the P and n (rotating speed) values are loaded into the input layer of a 3-layer BP (back propagation) neural network and the Q and H values into the output layer, to convert the P-Q and P-H relations into Q = f(P, n) and H = g(P, n) functions. Thirdly, these functions are stored by a computer to establish a database as an archive of this pump. Finally, the pump flow and pressure can be computed from motor power and speed during animal experiments or clinical trials. This new method was used in the authors' impeller pump. The results demonstrated that the error for pump head was less than 2% and that for pump flow was under 5%, so its accuracy is better than that of non-invasive measuring methods.
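
    The scheme maps directly onto a small regression exercise: learn Q = f(P, n) and H = g(P, n) from bench calibration data, then estimate flow and head from motor power and speed alone. The sketch below uses a modern multilayer perceptron as a stand-in for the paper's 3-layer BP network; the synthetic "calibration" pump model, units, and network size are illustrative assumptions.

    import numpy as np
    from sklearn.neural_network import MLPRegressor

    rng = np.random.default_rng(0)

    def bench_measurements(n_samples=400):
        power = rng.uniform(2.0, 15.0, n_samples)    # motor power, W (assumed)
        speed = rng.uniform(8000, 12000, n_samples)  # rotating speed, rpm
        # Invented pump characteristics standing in for bench calibration data:
        flow = 0.9 * power / (speed / 10000) + rng.normal(0, 0.05, n_samples)
        head = 60.0 * (speed / 10000) ** 2 + 2.0 * power \
               + rng.normal(0, 1.0, n_samples)
        return np.column_stack([power, speed]), np.column_stack([flow, head])

    X, y = bench_measurements()
    scale = X.max(axis=0)
    net = MLPRegressor(hidden_layer_sizes=(16,), max_iter=5000, random_state=0)
    net.fit(X / scale, y)   # learn Q = f(P, n) and H = g(P, n) jointly

    # During a trial, only motor power and speed are available:
    q_est, h_est = net.predict(np.array([[10.0, 11000.0]]) / scale)[0]
    print(f"estimated flow {q_est:.2f}, head {h_est:.1f} (illustrative units)")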

  5. Block diagonal decompositions for parallel computations of large power systems. Final report

    SciTech Connect

    Silijak, D.D.

    1995-05-01

    In this report we present the algorithm and C code for balanced bordered block diagonal (BBD) decompositions of large sparse matrices, as well as a variety of experimental results relating to the algorithm's performance. The software has been tested on a number of large matrices, including models of the West Coast power network (1,993 x 1,993 matrix). The algorithm was found to compare very well with the symmetric minimal degree ordering in terms of sparsity preservation: in the test cases considered, the BBD decomposition produced only up to 15% more fill-in. This is more than satisfactory considering that BBD structures are far better suited for parallel computing than the scattered and unpredictable element patterns obtained by minimal degree ordering. For some denser matrices, the BBD decomposition was actually seen to produce lower fill-in than the minimal degree ordering. In applications to power systems, the execution time for the BBD decomposition was found to have a quadratic upper bound on its complexity, which is comparable to a number of other sparse matrix orderings. Simulation results indicate that the actual execution time is similar to the execution time of the symmetric minimal degree ordering in Matlab 4.0. The special structural advantages of balanced BBD decompositions have been utilized to parallelize the process of LU factorization. The speedups obtained with respect to solutions using symmetric minimal degree ordering on a single processor have confirmed the significant potential of BBD decomposition in parallel computing. For the 1,993 bus power system, a speedup of 11.2 times was obtained using 14 processors under PVM 2.4.
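
    The reason BBD structure parallelizes so well can be shown in a few lines: each diagonal block can be factorized independently, and only the small border (the Schur complement) couples them. The dense toy system below is a hedged illustration of that block-elimination pattern; real implementations work on sparse factors.

    import numpy as np

    rng = np.random.default_rng(1)

    def solve_bbd(blocks, cols, rows, corner, rhs_blocks, rhs_corner):
        """Solve a system with diagonal blocks A_i, border columns B_i,
        border rows C_i, and corner block D via the Schur complement."""
        schur, rhs_s, partials = corner.copy(), rhs_corner.copy(), []
        for A, B, C, f in zip(blocks, cols, rows, rhs_blocks):
            AinvB = np.linalg.solve(A, B)  # independent per block (parallel)
            Ainvf = np.linalg.solve(A, f)
            schur -= C @ AinvB             # accumulate the Schur complement
            rhs_s -= C @ Ainvf
            partials.append((AinvB, Ainvf))
        x_border = np.linalg.solve(schur, rhs_s)
        x_blocks = [Ainvf - AinvB @ x_border for AinvB, Ainvf in partials]
        return x_blocks, x_border

    n_blocks, nb, nc = 4, 5, 2
    blocks = [rng.normal(size=(nb, nb)) + 5 * np.eye(nb) for _ in range(n_blocks)]
    cols = [rng.normal(size=(nb, nc)) for _ in range(n_blocks)]
    rows = [rng.normal(size=(nc, nb)) for _ in range(n_blocks)]
    corner = rng.normal(size=(nc, nc)) + 5 * np.eye(nc)
    rhs_b = [rng.normal(size=nb) for _ in range(n_blocks)]
    rhs_c = rng.normal(size=nc)

    x_blocks, x_border = solve_bbd(blocks, cols, rows, corner, rhs_b, rhs_c)
    print("border unknowns:", x_border)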

  6. PowerGrid - A Computation Engine for Large-Scale Electric Networks

    SciTech Connect

    Chika Nwankpa

    2011-01-31

    This Final Report discusses work on an approach for analog emulation of large scale power systems using Analog Behavioral Models (ABMs) and analog devices in the PSpice design environment. ABMs are models based on sets of mathematical equations or transfer functions describing the behavior of a circuit element or an analog building block. The ABM concept provides an efficient strategy for feasibility analysis, quick insight when developing a top-down design methodology for large systems, and model verification prior to full structural design and implementation. Analog emulation in this report uses an electric circuit equivalent of mathematical equations and scaled relationships that describe the states and behavior of a real power system to create its solution trajectory. Analog solutions are produced as quickly as the circuit itself responds. Emulation, therefore, is the representation of the desired physical characteristics of a real-life object using an electric circuit equivalent. The circuit equivalent contains within it the model of a real system as well as the method of solution. This report presents a methodology for the core computation through the development of ABMs for generators, transmission lines, and loads. Results of ABMs used for 3-, 6-, and 14-bus power systems are presented and compared with industrial grade numerical simulators for validation.

  7. Computation and Experiment: A Powerful Combination to Understand and Predict Reactivities.

    PubMed

    Sperger, Theresa; Sanhueza, Italo A; Schoenebeck, Franziska

    2016-06-21

    Computational chemistry has become an established tool for the study of the origins of chemical phenomena and examination of molecular properties. Because of major advances in theory, hardware and software, calculations of molecular processes can nowadays be done with reasonable accuracy on a time-scale that is competitive or even faster than experiments. This overview will highlight broad applications of computational chemistry in the study of organic and organometallic reactivities, including catalytic (NHC-, Cu-, Pd-, Ni-catalyzed) and noncatalytic examples of relevance to organic synthesis. The selected examples showcase the ability of computational chemistry to rationalize and also predict reactivities of broad significance. A particular emphasis is placed on the synergistic interplay of computations and experiments. It is discussed how this approach allows one to (i) gain greater insight than the isolated techniques, (ii) inspire novel chemistry avenues, and (iii) assist in reaction development. Examples of successful rationalizations of reactivities are discussed, including the elucidation of mechanistic features (radical versus polar) and origins of stereoselectivity in NHC-catalyzed reactions as well as the rationalization of ligand effects on ligation states and selectivity in Pd- and Ni-catalyzed transformations. Beyond explaining, the synergistic interplay of computation and experiments is then discussed, showcasing the identification of the likely catalytically active species as a function of ligand, additive, and solvent in Pd-catalyzed cross-coupling reactions. These may vary between mono- or bisphosphine-bound or even anionic Pd complexes in polar media in the presence of coordinating additives. These fundamental studies also inspired avenues in catalysis via dinuclear Pd(I) cycles. Detailed mechanistic studies supporting the direct reactivity of Pd(I)-Pd(I) with aryl halides as well as applications of air-stable dinuclear Pd(I) catalysts are

  8. A general and accurate approach for computing the statistical power of the transmission disequilibrium test for complex disease genes.

    PubMed

    Chen, W M; Deng, H W

    2001-07-01

    The transmission disequilibrium test (TDT) is a nuclear-family-based analysis that can test linkage in the presence of association. It has gained extensive attention in both theoretical investigation and practical application; in both cases, the accuracy and generality of the power computation of the TDT are crucial. Despite extensive investigation, previous approaches for computing the statistical power of the TDT are neither accurate nor general. In this paper, we develop a general and highly accurate approach to compute the power of the TDT analytically. We compare the results from our approach with those from several other recent papers, all against the results obtained from computer simulations. We show that the results computed with our approach are more accurate than, or at least as accurate as, those from other approaches. More importantly, our approach can handle various situations, including (1) families that consist of one or more children and that have any configuration of affected and nonaffected sibs; (2) families ascertained through the affection status of the parent(s); (3) any mixed sample with different types of families in (1) and (2); (4) a marker locus that is not a disease susceptibility locus; and (5) the existence of allelic heterogeneity. We implement this approach in a user-friendly computer program: the TDT Power Calculator. Its applications are demonstrated. The approach and the program developed here should be significant for theoreticians accurately investigating the statistical power of the TDT in various situations, and for empirical geneticists planning efficient studies using the TDT.
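
    For orientation, the sketch below implements the textbook power approximation for the TDT (a noncentral chi-square with noncentrality (b - c)^2/(b + c)), not the more general method of the paper; the sample size, transmission probability, and significance level are illustrative assumptions.

```python
from scipy.stats import chi2, ncx2

# Hedged sketch of the classical TDT power approximation: the statistic
# (b - c)^2 / (b + c) is chi-square(1) under the null and approximately
# noncentral chi-square under the alternative.
def tdt_power(n_transmissions, p_transmit, alpha=1e-4):
    """n_transmissions: expected transmissions from heterozygous parents;
    p_transmit: probability the associated allele is transmitted."""
    b = n_transmissions * p_transmit          # expected allele transmissions
    c = n_transmissions * (1.0 - p_transmit)  # expected non-transmissions
    noncentrality = (b - c) ** 2 / (b + c)
    critical = chi2.ppf(1.0 - alpha, df=1)
    return ncx2.sf(critical, df=1, nc=noncentrality)

print(f"power: {tdt_power(500, 0.6):.3f}")
```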

  9. An efficient computational method for characterizing the effects of random surface errors on the average power pattern of reflectors

    NASA Technical Reports Server (NTRS)

    Rahmat-Samii, Y.

    1983-01-01

    Based on the works of Ruze (1966) and Vu (1969), a novel mathematical model has been developed to determine efficiently the average power pattern degradations caused by random surface errors. In this model, both nonuniform root mean square (rms) surface errors and nonuniform illumination functions are employed. In addition, the model incorporates the dependence on F/D in the construction of the solution. The mathematical foundation of the model rests on the assumption that in each prescribed annular region of the antenna, the geometrical rms surface value is known. It is shown that closed-form expressions can then be derived, which result in a very efficient computational method for the average power pattern. Detailed parametric studies are performed with these expressions to determine the effects of different random errors and illumination tapers on parameters such as gain loss and sidelobe levels. The results clearly demonstrate that as sidelobe levels decrease, their dependence on the surface rms/wavelength becomes much stronger and, for a specified tolerance level, a considerably smaller rms/wavelength is required to maintain the low sidelobes within the required bounds.
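
    For context, the sketch below evaluates the classical Ruze gain-loss relation underlying such models; the paper's generalizations to nonuniform rms errors, nonuniform illumination, and explicit F/D dependence are not reproduced here.

```python
import numpy as np

# Classical Ruze relation: average gain loss of a reflector with
# uncorrelated random surface errors of rms value epsilon.
def ruze_gain_loss_db(rms_over_wavelength):
    return -10.0 * np.log10(np.exp(-(4.0 * np.pi * rms_over_wavelength) ** 2))

for r in (0.01, 0.02, 0.05):
    print(f"rms/wavelength = {r:.2f} -> gain loss = {ruze_gain_loss_db(r):.2f} dB")
```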

  10. HTSFinder: Powerful Pipeline of DNA Signature Discovery by Parallel and Distributed Computing

    PubMed Central

    Karimi, Ramin; Hajdu, Andras

    2016-01-01

    The comprehensive effort toward low-cost sequencing in the past few years has led to the growth of complete genome databases. In parallel with this effort, fast and cost-effective methods and applications have been developed to accelerate sequence analysis. Identification is the very first step of this task. Due to the difficulties, high costs, and computational challenges of alignment-based approaches, an alternative universal identification method is much needed. As an alignment-free approach, DNA signatures have provided new opportunities for the rapid identification of species. In this paper, we present an effective pipeline, HTSFinder (high-throughput signature finder), with a corresponding k-mer generator, GkmerG (genome k-mers generator). Using this pipeline, we determine the frequency of k-mers from the available complete genome databases for the detection of extensive DNA signatures in a reasonably short time. Our application can detect both unique and common signatures in arbitrarily selected target and nontarget databases. Hadoop and MapReduce, as parallel and distributed computing tools running on commodity hardware, are used in this pipeline. This approach brings the power of high-performance computing to ordinary desktop personal computers for discovering DNA signatures in large databases such as bacterial genomes. The considerable number of detected unique and common DNA signatures of the target database brings opportunities to improve the identification process not only for polymerase chain reaction and microarray assays but also for more complex scenarios such as metagenomics and next-generation sequencing analysis. PMID:26884678
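
    As a minimal illustration of the MapReduce pattern the pipeline builds on, the sketch below counts k-mers with separate map and reduce steps on a single machine; the sequences and k value are toy assumptions, and HTSFinder's actual Hadoop jobs and GkmerG output format are not reproduced.

```python
from collections import Counter
from functools import reduce

# Map step: each genome chunk yields its k-mer counts.
def mapper(sequence, k=12):
    return Counter(sequence[i:i + k] for i in range(len(sequence) - k + 1))

# Reduce step: merge per-chunk counts by summation.
def reducer(acc, counts):
    acc.update(counts)
    return acc

genome_chunks = ["ATGCGTACGTTAGC", "GCGTACGTTAGCAA"]  # toy stand-ins
kmer_counts = reduce(reducer, (mapper(c) for c in genome_chunks))
print(kmer_counts.most_common(3))
```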

  13. A supervisor for the successive 3D computations of magnetic, mechanical and acoustic quantities in power oil inductors and transformers

    SciTech Connect

    Reyne, G.; Magnin, H.; Berliat, G.; Clerc, C.

    1994-09-01

    A supervisor has been developed to allow successive 3D computations of different quantities by different software packages on the same physical problem. The noise of a given power oil transformer can be deduced from the surface vibrations of its tank. These vibrations are obtained through a mechanical computation whose inputs are the electromagnetic forces provided by an electromagnetic computation. Magnetic, mechanical, and acoustic experimental data are compared with the results of the 3D computations. Stress is put on the main characteristics of the supervisor, such as the transfer of a given quantity from one mesh to the other.

  14. Computational fluid dynamics analysis of a steam power plant low-pressure turbine downward exhaust hood

    SciTech Connect

    Tindell, R.H.; Alston, T.M.; Sarro, C.A.; Stegmann, G.C.; Gray, L.; Davids, J.

    1996-01-01

    Computational fluid dynamics (CFD) methods are applied to the analysis of a low-pressure turbine exhaust hood at a typical steam power generating station. A Navier-Stokes solver, capable of modeling all the viscous terms, in a Reynolds-averaged formulation, was used. The work had two major goals. The first was to develop a comprehensive understanding of the complex three-dimensional flow fields that exist in the exhaust hood at representative operating conditions. The second was to evaluate the relative benefits of a flow guide modification to optimize performance at a selected operating condition. Also, the influence of simulated turbine discharge characteristics, relative to uniform hood entrance conditions, was evaluated. The calculations show several interesting and possibly unique results. They support use of an integrated approach to the design of turbine exhaust stage blading and hood geometry for optimum efficiency.

  15. Improved operating scenarios of the DIII-D tokamak as a result of the addition of UNIX computer systems

    SciTech Connect

    Henline, P.A.

    1995-10-01

    The increased use of UNIX-based computer systems for machine control, data handling, and analysis has greatly enhanced the operating scenarios and operating efficiency of the DIII-D tokamak. This paper describes some of these UNIX systems and their specific uses. These include the plasma control system, the electron cyclotron heating control system, the analysis of electron temperature and density measurements, and the general data acquisition system (which is collecting over 130 Mbytes of data). The speed and total capability of these systems have dramatically affected the ability to operate DIII-D. The improved operating scenarios include better plasma shape control, due to the more thorough MHD calculations done between shots, and the new ability to see the time dependence of profile data as it relates across different spatial locations in the tokamak. Other analyses that engender improved operating abilities are also described.

  16. Mechanistic and computational studies of the atom transfer radical addition of CCl4 to styrene catalyzed by copper homoscorpionate complexes.

    PubMed

    Muñoz-Molina, José María; Sameera, W M C; Álvarez, Eleuterio; Maseras, Feliu; Belderrain, Tomás R; Pérez, Pedro J

    2011-03-21

    Experimental as well as theoretical studies have been carried out with the aim of elucidating the mechanism of the atom transfer radical addition (ATRA) of styrene and carbon tetrachloride with a Tp(x)Cu(NCMe) complex as the catalyst precursor (Tp(x) = hydrotris(pyrazolyl)borate ligand). The studies shown herein demonstrate the effect of different variables on the kinetic behavior. A mechanistic proposal consistent with the theoretical and experimental data is presented.

  17. Computation of inflationary cosmological perturbations in the power-law inflationary model using the phase-integral method

    SciTech Connect

    Rojas, Clara; Villalba, Victor M.

    2007-03-15

    The phase-integral approximation devised by Fröman and Fröman is used for computing cosmological perturbations in the power-law inflationary model. The phase-integral formulas for the scalar and tensor power spectra are explicitly obtained up to the ninth order of the phase-integral approximation. We show that the phase-integral approximation exactly reproduces the shape of the power spectra for scalar and tensor perturbations as well as the spectral indices. We compare the accuracy of the phase-integral approximation with the results for the power spectrum obtained with the slow-roll and uniform-approximation methods.

  18. Computational Fluid Dynamics Ventilation Study for the Human Powered Centrifuge at the International Space Station

    NASA Technical Reports Server (NTRS)

    Son, Chang H.

    2012-01-01

    The Human Powered Centrifuge (HPC) is a facility planned for installation on board the International Space Station (ISS) to enable crew exercise under artificial gravity conditions. The HPC equipment includes a "bicycle" for long-term exercise of a crewmember that provides power to rotate the HPC at a speed of 30 rpm. A crewmember exercising vigorously on the centrifuge generates about twice as much carbon dioxide as a crewmember under ordinary conditions. The goal of the study is to analyze the airflow and carbon dioxide distribution within the Pressurized Multipurpose Module (PMM) cabin when the HPC is operating. A fully unsteady formulation is used for CFD-based modeling of airflow and CO2 transport with the so-called sliding-mesh concept: the HPC equipment with the adjacent Bay 4 cabin volume is considered in a rotating reference frame, while the rest of the cabin volume is considered in a stationary reference frame. The rotating part of the computational domain also includes a human body model. Localized effects of carbon dioxide dispersion are examined, and the strong influence of the rotating HPC equipment on the detected CO2 distribution is discussed.

  19. SAMPSON Parallel Computation for Sensitivity Analysis of TEPCO's Fukushima Daiichi Nuclear Power Plant Accident

    NASA Astrophysics Data System (ADS)

    Pellegrini, M.; Bautista Gomez, L.; Maruyama, N.; Naitoh, M.; Matsuoka, S.; Cappello, F.

    2014-06-01

    On March 11th, 2011, a high-magnitude earthquake and consequent tsunami struck the east coast of Japan, resulting in a nuclear accident unprecedented in duration and extent. After scram was initiated at all power stations affected by the earthquake, diesel generators began operating as designed until tsunami waves reached the power plants located on the east coast. This had a catastrophic impact on the availability of plant safety systems at TEPCO's Fukushima Daiichi, leading to station blackout in units 1 through 3. In this article the accident scenario is studied with the SAMPSON code. SAMPSON is a severe-accident computer code composed of hierarchical modules that account for the diverse physics involved in the various phases of the accident evolution. A preliminary parallelization analysis of the code was performed using state-of-the-art tools, and we demonstrate how this work can benefit nuclear safety analysis. This paper shows that inter-module parallelization can reduce the time to solution by more than 20%. Furthermore, the parallel code was applied to a sensitivity study of the alternative water injection into TEPCO's Fukushima Daiichi unit 3. Results show that the core melting progression is extremely sensitive to the amount and timing of water injection, resulting in a high probability of partial core melting for unit 3.

  20. Computer image analysis: an additional tool for the identification of processed poultry and mammal protein containing bones.

    PubMed

    Pinotti, L; Fearn, T; Gulalp, S; Campagnoli, A; Ottoboni, M; Baldi, A; Cheli, F; Savoini, G; Dell'Orto, V

    2013-01-01

    The aims of this study were (1) to evaluate the potential of image analysis measurements, in combination with the official analytical methods for the detection of constituents of animal origin in feedstuffs, to distinguish poultry from mammals; and (2) to identify possible markers that can be used in routine analysis. For this purpose, 14 mammal and seven poultry samples, comprising a total of 1081 bone fragment lacunae, were analysed by combining the microscopic methods with computer image analysis. The distributions of 30 different measured size and shape variables of bone lacunae were studied both within and between the two zoological classes. In all cases a considerable overlap between classes meant that classification of individual lacunae was problematic, though a clear separation in the means did allow successful classification of samples on the basis of averages. The variables most useful for classification were those related to size, lacuna area for example. The approach shows considerable promise but will need further study using a larger number of samples with a wider range.

  1. Additional value of computer assisted semen analysis (CASA) compared to conventional motility assessments in pig artificial insemination.

    PubMed

    Broekhuijse, M L W J; Soštarić, E; Feitsma, H; Gadella, B M

    2011-11-01

    In order to obtain a more standardised semen motility evaluation, Varkens KI Nederland has introduced a computer assisted semen analysis (CASA) system in all its pig AI laboratories. The repeatability of CASA was enhanced by standardising for: 1) an optimal sample temperature (39 °C); 2) an optimal dilution factor; 3) optimal mixing of semen and dilution buffer by mechanical mixing; 4) the slide chamber depth; 5) the training of technicians working with the CASA system; and 6) the use of a standard operating procedure (SOP). Once laboratory technicians were trained in using this SOP, they achieved a coefficient of variation of < 5%, which was superior to the variation found when the SOP was not strictly followed. Microscopic semen motility assessments by eye were subjective and not comparable to the data obtained by standardised CASA. CASA results are preferable because accurate continuous motility data are generated, rather than the discrete 10% motility increments produced by technicians' visual estimates. The higher variability of sperm motility found with CASA and the continuous motility values allow better analysis of the relationship between semen motility characteristics and fertilising capacity. The benefits of standardised CASA for AI are discussed, both with respect to estimating the correct dilution factor of the ejaculate for the production of artificial insemination (AI) doses (critical for reducing the number of sperm per AI dose) and for obtaining more reliable fertility data from these AI doses in return.
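
    As a minimal illustration of the dilution arithmetic at stake, the sketch below estimates how many AI doses an ejaculate yields from its volume, CASA concentration, and motile fraction; all numbers, including the sperm-per-dose target, are assumptions for illustration.

```python
# Hedged sketch: AI dose count from CASA-derived concentration and motility.
def ai_doses(volume_ml, conc_million_per_ml, motile_fraction,
             motile_sperm_per_dose_billion=1.5):
    total_motile_billion = volume_ml * conc_million_per_ml * motile_fraction / 1e3
    return int(total_motile_billion / motile_sperm_per_dose_billion)

# e.g. a 250 mL ejaculate at 300 million sperm/mL with 85% motile sperm
print(ai_doses(250, 300, 0.85), "doses")
```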

  2. Computer program for design and performance analysis of navigation-aid power systems. Program documentation. Volume 1: Software requirements document

    NASA Technical Reports Server (NTRS)

    Goltz, G.; Kaiser, L. M.; Weiner, H.

    1977-01-01

    A computer program has been developed for designing and analyzing the performance of solar array/battery power systems for the U.S. Coast Guard Navigational Aids. This program is called the Design Synthesis/Performance Analysis (DSPA) Computer Program. The basic function of the Design Synthesis portion of the DSPA program is to evaluate functional and economic criteria to provide specifications for viable solar array/battery power systems. The basic function of the Performance Analysis portion of the DSPA program is to simulate the operation of solar array/battery power systems under specific loads and environmental conditions. This document establishes the software requirements for the DSPA computer program, discusses the processing that occurs within the program, and defines the necessary interfaces for operation.

  3. A High Performance Computing Platform for Performing High-Volume Studies With Windows-based Power Grid Tools

    SciTech Connect

    Chen, Yousu; Huang, Zhenyu

    2014-08-31

    Serial Windows-based programs are widely used in power utilities. For applications that require high-volume simulations, the single-CPU runtime can be on the order of days or weeks. The lengthy runtime, along with the availability of low-cost hardware, is leading utilities to seriously consider High Performance Computing (HPC) techniques. However, the vast majority of HPC computers are still Linux-based, and many HPC applications have been custom-developed external to the core simulation engine without consideration for ease of use. This has created a technical gap in applying HPC-based tools to today's power grid studies. To fill this gap and accelerate the acceptance and adoption of HPC for power grid applications, this paper presents a prototype of a generic HPC platform for running Windows-based power grid programs in a Linux-based HPC environment. The preliminary results show that the runtime can be reduced from weeks to hours, improving work efficiency.
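
    A minimal sketch of the fan-out idea, under the assumption that the unmodified Windows binary can be launched under Wine on the Linux nodes; the platform's actual launcher, file staging, and scheduler integration are not detailed in the abstract, and the binary and input names below are hypothetical.

```python
import subprocess
from concurrent.futures import ProcessPoolExecutor

CASES = [f"contingency_{i:04d}.inp" for i in range(256)]  # hypothetical inputs

def run_case(case_file):
    # Invoke the serial Windows solver (hypothetical name) under Wine.
    result = subprocess.run(["wine", "powerflow.exe", case_file],
                            capture_output=True, text=True)
    return case_file, result.returncode

if __name__ == "__main__":
    with ProcessPoolExecutor(max_workers=32) as pool:
        for case, rc in pool.map(run_case, CASES):
            print(case, "ok" if rc == 0 else f"failed ({rc})")
```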

  4. Application of computer artificial intelligence techniques to analyzing the status of typical utility electrical power plant systems

    SciTech Connect

    Nilsson, N.E.

    1989-03-01

    The capabilities of the computer have increased from data manipulation and computation to controlling industrial robots and assisting in heuristic consultations through the use of artificial intelligence techniques. This paper describes the application of artificial intelligence (AI) techniques to a mature technology, specifically utility electrical power plant systems. The considerations inherent in proceeding with the deployment of AI techniques in the form of an expert system are presented, and opportunities for improvements in this application are discussed.

  5. Additive Manufacturing/Diagnostics via the High Frequency Induction Heating of Metal Powders: The Determination of the Power Transfer Factor for Fine Metallic Spheres

    SciTech Connect

    Rios, Orlando; Radhakrishnan, Balasubramaniam; Caravias, George; Holcomb, Matthew

    2015-03-11

    Grid Logic Inc. is developing a method for sintering and melting fine metallic powders for additive manufacturing using spatially compact, high-frequency magnetic fields called Micro-Induction Sintering (MIS). One of the challenges in advancing MIS technology for additive manufacturing is understanding the power transfer to the particles in a powder bed. This knowledge is important for achieving the efficient power transfer, control, and selective particle heating during the MIS process needed for commercialization of the technology. The project's work provided a rigorous physics-based model for induction heating of fine spherical particles as a function of frequency and particle size. This simulation improved upon Grid Logic's earlier models and provides guidance that will make the MIS technology more effective. The project model will be incorporated into the power control circuit of Grid Logic's MIS 3D printer product and its diagnostics technology to optimize the sintering process for part quality and energy efficiency.
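
    As a back-of-envelope companion to that model, the sketch below computes the classical skin depth, which sets the particle-size scale at which inductive coupling becomes efficient; the resistivity value is an approximate assumption, and the project's rigorous power transfer factor is not reproduced.

```python
import math

MU0 = 4e-7 * math.pi  # vacuum permeability, H/m

# Classical skin depth: delta = sqrt(rho / (pi * f * mu_r * mu0)).
def skin_depth_m(resistivity_ohm_m, freq_hz, mu_r=1.0):
    return math.sqrt(resistivity_ohm_m / (math.pi * freq_hz * mu_r * MU0))

rho_ti = 4.2e-7  # titanium resistivity, Ohm*m (approximate)
for f in (1e6, 10e6, 100e6):
    print(f"{f:>11.0f} Hz: skin depth = {skin_depth_m(rho_ti, f) * 1e6:6.1f} um")
```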

  6. Optimization of Acetylene Black Conductive Additive and Polyvinylidene Difluoride Composition for High Power Rechargeable Lithium-Ion Cells

    SciTech Connect

    Liu, G.; Zheng, H.; Battaglia, V.S.; Simens, A.S.; Minor, A.M.; Song, X.

    2007-07-01

    Fundamental electrochemical methods were applied to study the effect of the acetylene black (AB) conductive additive and the polyvinylidene difluoride (PVDF) polymer binder on the performance of high-power rechargeable lithium-ion cells. A systematic study of the AB/PVDF long-range electronic conductivity at different weight ratios was performed using four-probe direct-current tests, and the results are reported. There is a wide range of AB/PVDF ratios that satisfy the long-range electronic conductivity requirement of the lithium-ion cathode electrode; however, a significant improvement in cell power performance is observed at small AB/PVDF composition ratios, far from the long-range conductivity optimum of 1 to 1.25. Electrochemical impedance spectroscopy (EIS) tests indicate that the interfacial impedance decreases significantly with increasing binder content. The hybrid pulse power characterization results agree with the EIS tests and also show improvement for cells with a high PVDF content. The AB-to-PVDF composition plays a significant role in the interfacial resistance. We believe the higher binder contents lead to a more cohesive conductive carbon particle network, which results in better overall local electronic conductivity on the active material surface and hence reduced charge-transfer impedance.
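
    For reference, the sketch below applies the standard collinear four-point-probe reduction (geometric factor pi/ln 2) used in such conductivity measurements; the voltage, current, and film thickness values are illustrative, not the study's data.

```python
import math

# Sheet resistance of a thin film from a collinear four-point measurement.
def sheet_resistance_ohm_sq(voltage_v, current_a):
    return (math.pi / math.log(2.0)) * voltage_v / current_a

def conductivity_s_per_cm(voltage_v, current_a, thickness_cm):
    resistivity = sheet_resistance_ohm_sq(voltage_v, current_a) * thickness_cm
    return 1.0 / resistivity

# illustrative numbers: 2 mV drop at 1 mA through a 50 um film
print(f"{conductivity_s_per_cm(2.0e-3, 1.0e-3, 50e-4):.1f} S/cm")
```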

  7. Computational investigations of low-emission burner facilities for char gas burning in a power boiler

    NASA Astrophysics Data System (ADS)

    Roslyakov, P. V.; Morozov, I. V.; Zaychenko, M. N.; Sidorkin, V. T.

    2016-04-01

    Various structural variants of low-emission burner facilities intended for char gas burning in an operating TP-101 boiler of the Estonia power plant are considered. The planned increase in the volume of shale reprocessing and, correspondingly, in char gas volumes makes cocombustion of char gas necessary. Hence, a burner facility of a given capacity had to be developed that yields effective char gas burning while fulfilling reliability and environmental requirements. For this purpose, the burner design was based on staged fuel combustion with gas recirculation. As a result of a preliminary analysis of possible structural variants, three types of proven burner facilities were chosen: a vortex burner with supply of recirculation gases into the secondary air, a vortex burner with baffle supply of recirculation gases between the primary and secondary air flows, and a burner facility with a vortex pilot burner. Optimum structural characteristics and operating parameters were determined using numerical experiments. These experiments, performed with the ANSYS CFX computational fluid dynamics software, simulated the mixing, ignition, and burning of char gas. For every type of burner facility, the numerical experiments determined the structural and operating parameters that gave effective char gas burning and met the required environmental standard on nitrogen oxide emission. The burner facility for char gas burning with a pilot diffusion burner in the central part was developed and built according to the computation results. Preliminary verification tests on the TP-101 boiler showed that the actual content of nitrogen oxides in char gas burner flames did not exceed the claimed concentration of 150 ppm (200 mg/m3).

  8. Microstructure and properties of the low-power-laser clad coatings on magnesium alloy with different amount of rare earth addition

    NASA Astrophysics Data System (ADS)

    Zhu, Rundong; Li, Zhiyong; Li, Xiaoxi; Sun, Qi

    2015-10-01

    Due to the low melting point and high evaporation rate of magnesium at elevated temperature, high-power laser cladding on magnesium always causes subsidence and deterioration of the surface. A low-power laser can reduce the evaporation effect but brings problems such as reduced coating thickness, incomplete fusion, and unsatisfactory performance. Therefore, a low-power laser with selected parameters was used in our research to obtain Al-Cu coatings with Y2O3 addition on AZ91D magnesium alloy. The addition of Y2O3 obviously increases the thickness of the coating and improves the melting efficiency. Furthermore, the effect of Y2O3 addition on the microstructure of the laser-clad Al-Cu coatings was investigated by scanning electron microscopy. Energy-dispersive spectrometry (EDS) and X-ray diffraction (XRD) were used to examine the elemental and phase compositions of the coatings. The properties were investigated by micro-hardness tests, dry wear tests, and electrochemical corrosion. It was found that the addition of Y2O3 refined the microstructure. The micro-hardness, abrasion resistance, and corrosion resistance of the coatings were greatly improved compared with the magnesium matrix, especially for the Al-Cu coating with Y2O3 addition.

  9. 1,4-Addition of bis(iodozincio)methane to α,β-unsaturated ketones: chemical and theoretical/computational studies.

    PubMed

    Sada, Mutsumi; Furuyama, Taniyuki; Komagawa, Shinsuke; Uchiyama, Masanobu; Matsubara, Seijiro

    2010-09-10

    1,4-Addition of bis(iodozincio)methane to simple α,β-unsaturated ketones does not proceed well; the reaction is slightly endothermic according to DFT calculations. In the presence of chlorotrimethylsilane, the reaction proceeded efficiently to afford a silyl enol ether of β-zinciomethyl ketone. The C-Zn bond of the silyl enol ether could be used in a cross-coupling reaction to form another C-C bond in a one-pot reaction. In contrast, 1,4-addition of the dizinc reagent to enones carrying an acyloxy group proceeded very efficiently without any additive. In this case, the product was a 1,3-diketone, which was generated in a novel tandem reaction. A theoretical/computational study indicates that the whole reaction pathway is exothermic, and that two zinc atoms of bis(iodozincio)methane accelerate each step cooperatively as effective Lewis acids. PMID:20645344

  10. User's manual for the Shuttle Electric Power System analysis computer program (SEPS), volume 2 of program documentation

    NASA Technical Reports Server (NTRS)

    Bains, R. W.; Herwig, H. A.; Luedeman, J. K.; Torina, E. M.

    1974-01-01

    The Shuttle Electric Power System analysis (SEPS) computer program is described. It performs detailed load analysis, including prediction of the energy demands and consumables requirements of the shuttle electric power system, along with parametric and special-case studies of that system. The functional flow diagram of the SEPS program is presented along with data base requirements and formats, procedure and activity definitions, and mission timeline input formats. Distribution circuit input and fixed data requirements are included. Run procedures and deck setups are described.

  11. Digital computer study of nuclear reactor thermal transients during startup of 60-kWe Brayton power conversion system

    NASA Technical Reports Server (NTRS)

    Jefferies, K. S.; Tew, R. C.

    1974-01-01

    A digital computer study was made of reactor thermal transients during startup of the Brayton power conversion loop of a 60-kWe reactor Brayton power system. A startup procedure requiring the least Brayton system complication was tried first; this procedure caused violations of design limits on key reactor variables. Several modifications of this procedure were then found which caused no design limit violations. These modifications involved: (1) using a slower rate of increase in gas flow; (2) increasing the initial reactor power level to make the reactor respond faster; and (3) appropriate reactor control drum manipulation during the startup transient.

  12. Requirements for Large Eddy Simulation Computations of Variable-Speed Power Turbine Flows

    NASA Technical Reports Server (NTRS)

    Ameri, Ali A.

    2016-01-01

    Variable-speed power turbines (VSPTs) operate at low Reynolds numbers and over a wide range of incidence angles. Transition, separation, and the physics leading to them are important to VSPT flow. Higher-fidelity tools such as large eddy simulation (LES) may be needed to resolve the flow features necessary for accurate predictive capability and design of such turbines. A survey conducted for this report explores the requirements for such computations. The survey is limited to the simulation of two-dimensional flow cases, and endwalls are not included. It suggests that the grid resolution necessary for this type of simulation to accurately represent the physics may be of the order of Δx+ = 45, Δy+ = 2, and Δz+ = 17. Various subgrid-scale (SGS) models have been used, and except for the Smagorinsky model, all seem to perform well; in some instances the simulations worked well without SGS modeling. A method of specifying the inlet conditions, such as synthetic eddy modeling (SEM), is necessary to correctly represent the inlet conditions.
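
    To make the quoted wall-unit spacings concrete, the sketch below converts them to physical grid spacing via Delta = Delta+ * nu / u_tau; the freestream speed, viscosity, and skin-friction coefficient are assumptions for illustration, not VSPT data.

```python
import math

def grid_spacing_m(delta_plus, nu_m2_s, u_tau_m_s):
    return delta_plus * nu_m2_s / u_tau_m_s

U, nu, cf = 50.0, 1.5e-5, 5e-3        # freestream speed, viscosity, Cf (assumed)
u_tau = U * math.sqrt(cf / 2.0)       # friction velocity from the Cf definition
for name, dplus in (("x", 45.0), ("y", 2.0), ("z", 17.0)):
    dx = grid_spacing_m(dplus, nu, u_tau)
    print(f"Delta {name}+ = {dplus:4.0f} -> {dx * 1e6:6.1f} um")
```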

  13. Technical basis for environmental qualification of computer-based safety systems in nuclear power plants

    SciTech Connect

    Korsah, K.; Wood, R.T.; Tanaka, T.J.; Antonescu, C.E.

    1997-10-01

    This paper summarizes the results of research sponsored by the US Nuclear Regulatory Commission (NRC) to provide the technical basis for environmental qualification of computer-based safety equipment in nuclear power plants. This research was conducted by the Oak Ridge National Laboratory (ORNL) and Sandia National Laboratories (SNL). ORNL investigated potential failure modes and vulnerabilities of microprocessor-based technologies to environmental stressors, including electromagnetic/radio-frequency interference, temperature, humidity, and smoke exposure. An experimental digital safety channel (EDSC) was constructed for the tests. SNL performed smoke exposure tests on digital components and circuit boards to determine failure mechanisms and the effect of different packaging techniques on smoke susceptibility. These studies are expected to provide recommendations for environmental qualification of digital safety systems by addressing the following: (1) adequacy of the present preferred test methods for qualification of digital I and C systems; (2) preferred standards; (3) recommended stressors to be included in the qualification process during type testing; (4) resolution of need for accelerated aging in qualification testing for equipment that is to be located in mild environments; and (5) determination of an appropriate approach to address smoke in a qualification program.

  14. IMES-Ural: the system of the computer programs for operational analysis of power flow distribution using telemetric data

    SciTech Connect

    Bogdanov, V.A.; Bol'shchikov, A.A.; Zifferman, E.O.

    1981-02-01

    A system of computer programs was described which enabled the user to perform real-time calculation and analysis of the current flow in the 500 kV network of the Ural Regional Electric Power Plant for all possible variations of the network, based on teleinformation and correctable equivalent parameters of the 220 to 110 kV network.

  15. Program manual for the Shuttle Electric Power System analysis computer program (SEPS), volume 1 of program documentation

    NASA Technical Reports Server (NTRS)

    Bains, R. W.; Herwig, H. A.; Luedeman, J. K.; Torina, E. M.

    1974-01-01

    The Shuttle Electric Power System (SEPS) computer program is considered in terms of the program manual, programmer guide, and program utilization. The main objective is to provide the information necessary to interpret and use the routines comprising the SEPS program. Subroutine descriptions including the name, purpose, method, variable definitions, and logic flow are presented.

  16. Enhanced computational prediction of polyethylene wear in hip joints by incorporating cross-shear and contact pressure in addition to load and sliding distance: effect of head diameter.

    PubMed

    Kang, Lu; Galvin, Alison L; Fisher, John; Jin, Zhongmin

    2009-05-11

    A new definition of the experimental wear factor was established and reported as a function of cross-shear motion and contact pressure, using a multi-directional pin-on-plate wear testing machine for conventional polyethylene in the present study. An independent computational wear model was developed by incorporating the cross-shear motion and the contact-pressure-dependent wear factor into Archard's law, in addition to load and sliding distance. The computational prediction of wear volume was directly compared with simulator testing of a polyethylene hip joint with a 28 mm diameter. The effect of increasing the femoral head size was subsequently considered and was shown to increase wear, as a result of increased sliding distance and reduced contact pressure. PMID:19261286
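
    A minimal sketch of the kind of Archard-type update the abstract describes, with a wear factor that depends on cross-shear and contact pressure; the functional form of the wear factor and all numbers below are placeholders, not the study's fitted factor.

```python
# Placeholder wear factor: increases with cross-shear, decreases with pressure.
def wear_factor(cross_shear, pressure_mpa):
    return 1.0e-9 * cross_shear / (1.0 + 0.1 * pressure_mpa)

# Archard-type increment summed over a discretized gait cycle.
def wear_volume_mm3_per_cycle(gait_steps):
    return sum(wear_factor(cs, p) * load_n * slide_mm
               for load_n, slide_mm, cs, p in gait_steps)

# toy gait cycle: (load N, sliding distance mm, cross-shear ratio, pressure MPa)
cycle = [(1500.0, 0.8, 0.15, 8.0), (2200.0, 1.1, 0.05, 12.0)]
print(f"{wear_volume_mm3_per_cycle(cycle) * 1e6:.3f} mm^3 per million cycles")
```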

  17. Computer Aided Design of Ka-Band Waveguide Power Combining Architectures for Interplanetary Spacecraft

    NASA Technical Reports Server (NTRS)

    Vaden, Karl R.

    2006-01-01

    Communication systems for future NASA interplanetary spacecraft require transmitter power ranging from several hundred watts to kilowatts. Several hybrid junctions are considered as elements within a corporate combining architecture for high power Ka-band space traveling-wave tube amplifiers (TWTAs). This report presents the simulated transmission characteristics of several hybrid junctions designed for a low loss, high power waveguide based power combiner.

  18. CONC/11: A computer program for calculating the performance of dish-type solar thermal collectors and power systems

    NASA Technical Reports Server (NTRS)

    Jaffe, L. D.

    1984-01-01

    The CONC/11 computer program, designed for calculating the performance of dish-type solar thermal collectors and power systems, is discussed. This program is intended to aid the system or collector designer in evaluating the performance to be expected with possible design alternatives. From design or test data on the characteristics of the various subsystems, CONC/11 calculates the efficiencies of the collector and the overall power system as functions of the receiver temperature for a specified insolation. If desired, CONC/11 will also determine the receiver aperture and the receiver temperature that will provide the highest efficiencies at a given insolation. The program handles both simple and compound concentrators. CONC/11 is written in Athena Extended FORTRAN (similar to FORTRAN 77) to operate primarily in an interactive mode on a Sperry 1100/81 computer. It could also be used on many small computers. A user's manual is also provided for this program.
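
    As a rough illustration of the efficiency bookkeeping such a program performs, the sketch below combines an optical efficiency with a receiver efficiency that accounts for radiative and convective losses referred to the concentrated flux; the equations and every parameter value are generic textbook assumptions, not CONC/11's actual model.

```python
SIGMA = 5.670e-8  # Stefan-Boltzmann constant, W m^-2 K^-4

def collector_efficiency(T_receiver_k, insolation_w_m2=1000.0, C=1500.0,
                         eta_optical=0.85, absorptance=0.95,
                         emittance=0.9, h_conv=10.0, T_amb_k=300.0):
    flux = C * insolation_w_m2  # concentrated flux at the receiver
    losses = (emittance * SIGMA * (T_receiver_k**4 - T_amb_k**4)
              + h_conv * (T_receiver_k - T_amb_k))
    eta_receiver = absorptance - losses / flux
    return eta_optical * max(eta_receiver, 0.0)

for T in (600.0, 900.0, 1200.0):
    print(f"T = {T:4.0f} K: collector efficiency = {collector_efficiency(T):.3f}")
```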

  19. CONC/11: a computer program for calculating the performance of dish-type solar thermal collectors and power systems

    SciTech Connect

    Jaffe, L. D.

    1984-02-15

    CONC/11 is a computer program designed for calculating the performance of dish-type solar thermal collectors and power systems. It is intended to aid the system or collector designer in evaluating the performance to be expected with possible design alternatives. From design or test data on the characteristics of the various subsystems, CONC/11 calculates the efficiencies of the collector and the overall power system as functions of the receiver temperature for a specified insolation. If desired, CONC/11 will also determine the receiver aperture and the receiver temperature that will provide the highest efficiencies at a given insolation. The program handles both simple and compound concentrators. CONC/11 is written in Athena Extended Fortran (similar to Fortran 77) to operate primarily in an interactive mode on a Sperry 1100/81 computer. It could also be used on many small computers.

  20. The role of additional computed tomography in the decision-making process on the secondary prevention in patients after systemic cerebral thrombolysis

    PubMed Central

    Sobolewski, Piotr; Kozera, Grzegorz; Szczuchniak, Wiktor; Nyka, Walenty M

    2016-01-01

    Introduction: Patients with ischemic stroke undergoing intravenous (iv) thrombolysis are routinely monitored with computed tomography on the second day to assess stroke evolution and hemorrhagic transformation (HT). However, the benefits of an additional computed tomography (aCT) performed over the next days after iv-thrombolysis have not been determined. Methods: We retrospectively screened 287 Caucasian patients with ischemic stroke who were consecutively treated with iv-thrombolysis from 2008 to 2012. The results of computed tomography performed on the second (control computed tomography) and seventh (aCT) day after iv-thrombolysis were compared in 274 patients (95.5%); 13 subjects (4.5%) who died before the seventh day from admission were excluded from the analysis. Results: aCTs revealed a higher incidence of HT than control computed tomographies (14.2% vs 6.6%; P=0.003). Patients with HT on aCT showed a higher median National Institutes of Health Stroke Scale (NIHSS) score on admission than those without HT (13.0 vs 10.0; P=0.01) and more often had ischemic changes covering >1/3 of the middle cerebral artery territory (66.7% vs 35.2%; P<0.01). The presence of HT on aCT correlated with NIHSS score on admission (rpbi 0.15; P<0.01) and with ischemic changes covering >1/3 of the middle cerebral artery territory (phi=0.03), and was associated with 3-month mortality (phi=0.03). Conclusion: aCT after iv-thrombolysis enables higher detection of HT, which is related to higher 3-month mortality. Thus, patients with severe middle cerebral artery infarction may benefit from aCT in the decision-making process on secondary prophylaxis. PMID:26730196

  1. Reactivity effects in VVER-1000 of the third unit of the Kalinin Nuclear Power Plant at physical start-up. Computations in ShIPR intellectual code system with library of two-group cross sections generated by UNK code

    SciTech Connect

    Zizin, M. N.; Zimin, V. G.; Zizina, S. N. Kryakvin, L. V.; Pitilimov, V. A.; Tereshonok, V. A.

    2010-12-15

    The ShIPR intellectual code system for mathematical simulation of nuclear reactors includes a set of computing modules implementing the preparation of macro cross sections on the basis of the two-group library of neutron-physics cross sections obtained for the SKETCH-N nodal code. This library is created by using the UNK code for 3D diffusion computation of first VVER-1000 fuel loadings. Computation of neutron fields in the ShIPR system is performed using the DP3 code in the two-group diffusion approximation in 3D triangular geometry. The efficiency of all groups of control rods for the first fuel loading of the third unit of the Kalinin Nuclear Power Plant is computed. The temperature, barometric, and density effects of reactivity as well as the reactivity coefficient due to the concentration of boric acid in the reactor were computed additionally. Results of computations are compared with the experiment.
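
    For orientation, the sketch below shows the standard reactivity arithmetic behind such comparisons, converting two computed k-eff values into a reactivity change and a temperature coefficient; the k-eff values and temperature step are assumed for illustration.

```python
# Reactivity difference between two computed k-eff states, in pcm.
def reactivity_pcm(k_eff_1, k_eff_2):
    return (k_eff_2 - k_eff_1) / (k_eff_1 * k_eff_2) * 1.0e5

k_cold, k_hot = 1.00250, 1.00010  # illustrative k-eff at two temperatures
dT = 50.0                         # temperature step, K (assumed)
rho = reactivity_pcm(k_cold, k_hot)
print(f"reactivity change: {rho:.1f} pcm; coefficient: {rho / dT:.2f} pcm/K")
```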

  3. Accelerating an Ordered-Subset Low-Dose X-Ray Cone Beam Computed Tomography Image Reconstruction with a Power Factor and Total Variation Minimization

    PubMed Central

    Huang, Hsuan-Ming; Hsiao, Ing-Tsung

    2016-01-01

    In recent years, there has been increased interest in low-dose X-ray cone beam computed tomography (CBCT) in many fields, including dentistry, guided radiotherapy and small animal imaging. Despite reducing the radiation dose, low-dose CBCT has not gained widespread acceptance in routine clinical practice. In addition to performing more evaluation studies, developing a fast and high-quality reconstruction algorithm is required. In this work, we propose an iterative reconstruction method that accelerates ordered-subsets (OS) reconstruction using a power factor. Furthermore, we combine it with the total-variation (TV) minimization method. Both simulation and phantom studies were conducted to evaluate the performance of the proposed method. Results show that the proposed method can accelerate conventional OS methods and greatly increase the convergence speed in early iterations. Moreover, applying the TV minimization to the power acceleration scheme can further improve the image quality while preserving the fast convergence rate. PMID:27073853
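
    A minimal sketch of the algorithmic pattern described, not the authors' exact update: an ordered-subsets least-squares iteration whose relaxation is boosted by a power of the subset number in early iterations, interleaved with gradient steps on a smoothed total-variation term; the 1D phantom and all constants are toy assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 64
x_true = np.zeros(n); x_true[20:40] = 1.0   # piecewise-constant phantom
A = rng.standard_normal((256, n)) / 16.0    # toy projection matrix
b = A @ x_true

def tv_grad(x, eps=1e-8):
    d = np.diff(x)
    t = d / np.sqrt(d * d + eps)            # smoothed sign of the jumps
    g = np.zeros_like(x)
    g[:-1] -= t
    g[1:] += t
    return g

x = np.zeros(n)
n_subsets, power = 8, 0.7
subsets = np.array_split(rng.permutation(len(b)), n_subsets)
for k in range(1, 31):
    lam = min(1.9, n_subsets**power / k)    # power-factor-scaled relaxation
    for s in subsets:
        As, bs = A[s], b[s]
        L = np.linalg.norm(As, 2) ** 2      # subset Lipschitz constant
        x += (lam / L) * As.T @ (bs - As @ x)
    for _ in range(3):                      # TV minimization step
        x -= 0.02 * tv_grad(x)
print(f"relative error: {np.linalg.norm(x - x_true) / np.linalg.norm(x_true):.3f}")
```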

  5. Information Power Grid: Distributed High-Performance Computing and Large-Scale Data Management for Science and Engineering

    NASA Technical Reports Server (NTRS)

    Johnston, William E.; Gannon, Dennis; Nitzberg, Bill

    2000-01-01

    We use the term "Grid" to refer to distributed, high performance computing and data handling infrastructure that incorporates geographically and organizationally dispersed, heterogeneous resources that are persistent and supported. This infrastructure includes: (1) Tools for constructing collaborative, application oriented Problem Solving Environments / Frameworks (the primary user interfaces for Grids); (2) Programming environments, tools, and services providing various approaches for building applications that use aggregated computing and storage resources, and federated data sources; (3) A comprehensive and consistent set of location independent tools and services for accessing and managing dynamic collections of widely distributed resources: heterogeneous computing systems, storage systems, real-time data sources and instruments, human collaborators, and communications systems; (4) Operational infrastructure including management tools for distributed systems and distributed resources, user services, accounting and auditing, strong and location independent user authentication and authorization, and overall system security services. The vision for NASA's Information Power Grid - a computing and data Grid - is that it will provide significant new capabilities to scientists and engineers by facilitating routine construction of information based problem solving environments / frameworks. Such Grids will knit together widely distributed computing, data, instrument, and human resources into just-in-time systems that can address complex and large-scale computing and data analysis problems. Examples of these problems include: (1) Coupled, multidisciplinary simulations too large for single systems (e.g., multi-component NPSS turbomachine simulation); (2) Use of widely distributed, federated data archives (e.g., simultaneous access to meteorological, topological, aircraft performance, and flight path scheduling databases supporting a National Air Space Simulation system); (3

  6. First-order electroweak phase transition powered by additional F-term loop effects in an extended supersymmetric Higgs sector

    NASA Astrophysics Data System (ADS)

    Kanemura, Shinya; Senaha, Eibun; Shindou, Tetsuo

    2011-11-01

    We investigate the one-loop effect of new charged scalar bosons on the Higgs potential at finite temperatures in the supersymmetric standard model with four Higgs doublet chiral superfields as well as a pair of charged singlet chiral superfields. In this model, the mass of the lightest Higgs boson h is determined only by the D-term in the Higgs potential at the tree-level, while the triple Higgs boson coupling for hhh can receive a significant radiative correction due to nondecoupling one-loop contributions of the additional charged scalar bosons. We find that the same nondecoupling mechanism can also contribute to realize stronger first order electroweak phase transition than that in the minimal supersymmetric standard model, which is definitely required for a successful scenario of electroweak baryogenesis. Therefore, this model can be a new candidate for a model in which the baryon asymmetry of the Universe is explained at the electroweak scale.

  7. XOQDOQ: computer program for the meteorological evaluation of routine effluent releases at nuclear power stations. Final report

    SciTech Connect

    Sagendorf, J.F.; Goll, J.T.; Sandusky, W.F.

    1982-09-01

    Provided is a user's guide for the US Nuclear Regulatory Commission's (NRC) computer program XOQDOQ, which implements Regulatory Guide 1.111. This NUREG supersedes NUREG-0324, which was published as a draft in September 1977. The program is used by the NRC meteorology staff in their independent meteorological evaluation of routine or anticipated intermittent releases at nuclear power stations. It operates in a batch input mode and has various options a user may select. Relative atmospheric dispersion and deposition factors are computed for 22 specific distances out to 50 miles from the site for each directional sector. From these results, values for 10 distance segments are computed. The user may also select other locations for which atmospheric dispersion and deposition factors are computed. Program features, including required input data and output results, are described. A program listing and test case data input and resulting output are provided.
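
    For orientation, the sketch below evaluates the sector-averaged Gaussian-plume relation on which Regulatory Guide 1.111-type calculations are built; XOQDOQ's full treatment (joint-frequency weighting of meteorological data, recirculation corrections, plume depletion) is not reproduced, and all input values are illustrative.

```python
import math

# Sector-averaged long-term chi/Q for a 22.5-degree sector:
# chi/Q = 2.032 / (sigma_z * u * x) * exp(-h^2 / (2 * sigma_z^2))
def chi_over_q(x_m, wind_m_s, sigma_z_m, release_height_m):
    return (2.032 / (sigma_z_m * wind_m_s * x_m)
            * math.exp(-release_height_m**2 / (2.0 * sigma_z_m**2)))

# illustrative: 800 m downwind, 3 m/s wind, sigma_z = 30 m, 50 m stack
print(f"chi/Q = {chi_over_q(800.0, 3.0, 30.0, 50.0):.2e} s/m^3")
```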

  8. Spin orbit torque driven magnetic switching for low power computing and memory

    NASA Astrophysics Data System (ADS)

    Bhowmik, Debanjan

    Spintronics has rapidly emerged as a highly pursued research area in solid-state physics and devices owing to its potential applications in low-power memory and logic as well as the rich physics associated with it. Traditionally in spintronics, spin transfer torque in magnetic tunnel junctions and spin valves has been used to manipulate ferromagnets. Spin orbit torque has recently emerged as an alternative mechanism for manipulating such ferromagnets, offering advantages such as lower energy consumption and a simpler device structure. For a ferromagnet-heavy metal bilayer, electrons flowing through the heavy metal separate based on the direction of their spin. This results in the accumulation of spin-polarized electrons at the interface, which in turn applies a torque, known as spin orbit torque, on the ferromagnet. A typical such heavy metal is tantalum (Ta), and a typical such ferromagnet is CoFeB. The research presented in this dissertation shows how, in a perpendicularly polarized Ta/CoFeB/MgO heterostructure, spin orbit torque at the interface of the Ta and CoFeB layers can be used to manipulate the magnetic moments of the CoFeB layer for low-power memory and logic applications. The main results presented in this dissertation are fourfold. First, we report experiments showing spin orbit torque driven magnetic switching in a perpendicularly polarized Ta/CoFeB/MgO heterostructure and explain the microscopic mechanism of the switching. Using that microscopic mechanism, we show a new kind of ferromagnetic domain wall motion. Traditionally a ferromagnetic domain wall is known to flow parallel or antiparallel to the direction of the current, but here we show that spin orbit torque, owing to its unique symmetry, can be used to move the domain wall orthogonal to the current direction. Second, we experimentally demonstrate the application of this spin orbit torque driven switching in nanomagnetic logic, which is a low-power alternative to CMOS-based computing. Previous

  9. Assessment of the Annual Additional Effective Doses amongst Minamisoma Children during the Second Year after the Fukushima Daiichi Nuclear Power Plant Disaster

    PubMed Central

    Tsubokura, Masaharu; Kato, Shigeaki; Morita, Tomohiro; Nomura, Shuhei; Kami, Masahiro; Sakaihara, Kikugoro; Hanai, Tatsuo; Oikawa, Tomoyoshi; Kanazawa, Yukio

    2015-01-01

    An assessment of external and internal radiation exposure levels, which includes calculation of effective doses from chronic radiation exposure and assessment of long-term radiation-related health risks, has become mandatory for residents living near the nuclear power plant in Fukushima, Japan. Data for all primary and secondary school children in Minamisoma who participated in both external and internal screening programs were employed to assess the annual additional effective dose acquired due to the Fukushima Daiichi nuclear power plant disaster. In total, 881 children took part in both internal and external radiation exposure screening programs between 1 April 2012 and 31 March 2013. The additional effective doses ranged from 0.025 to 3.49 mSv/year, with a median of 0.70 mSv/year. While 99.7% of the children (n = 878) had no detectable internal contamination, 90.3% of the additional effective dose was the result of external radiation exposure. This finding is relatively consistent with the doses estimated by the United Nations Scientific Committee on the Effects of Atomic Radiation (UNSCEAR). The present study showed that the annual additional effective doses among children in Minamisoma have been low, even after inter-individual differences were taken into account. The dose from internal radiation exposure was negligible, presumably due to the success of contaminated food controls. PMID:26053271
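
    A minimal sketch of the dose bookkeeping involved, summing a background-subtracted external dose and a committed internal dose from a whole-body-counter result; the dose coefficient and all inputs are assumptions for illustration, not the study's child-specific conversion factors.

```python
# Annual additional effective dose = external (above background) + internal.
def annual_additional_dose_msv(dosimeter_msv_y, background_msv_y,
                               cs137_body_burden_bq,
                               dose_coeff_msv_per_bq=1.3e-5):  # assumed value
    external = max(dosimeter_msv_y - background_msv_y, 0.0)
    internal = cs137_body_burden_bq * dose_coeff_msv_per_bq
    return external + internal

# e.g. 1.2 mSv/y measured vs 0.5 mSv/y natural background, no detected Cs-137
print(f"{annual_additional_dose_msv(1.2, 0.5, 0.0):.2f} mSv/year")
```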

  11. Computer Calculations of Eddy-Current Power Loss in Rotating Titanium Wheels and Rims in Localized Axial Magnetic Fields

    SciTech Connect

    Mayhall, D J; Stein, W; Gronberg, J B

    2006-05-15

    We have performed preliminary computer-based, transient, magnetostatic calculations of the eddy-current power loss in rotating titanium-alloy and aluminum wheels and wheel rims in the predominantly axially-directed, steady magnetic fields of two small, solenoidal coils. These calculations have been undertaken to assess the eddy-current power loss in various possible International Linear Collider (ILC) positron target wheels. They have also been done to validate the simulation code module against known results published in the literature. The commercially available software package used in these calculations is the Maxwell 3D, Version 10, Transient Module from the Ansoft Corporation.

  12. Introducing a computer program devoted to renewable integration assessment of multi-field solar photovoltaic power plants

    SciTech Connect

    Gil, M.A.C.; Arroba, J.P.; Ibanez, J.C.; Criado, J.A.R.

    1996-11-01

    The objectives of this paper are to present a computer program devoted to the simulation of solar photovoltaic power plants, namely the assessment of their power generation technical potential. The most general configuration of a former program devoted to single-field photovoltaic generators has been extended and updated to multi-field systems. This program is also intended to provide capabilities in order to assess the integration of renewable energy resources. Mainly solar and wind energy systems will be considered, as well as pumped-storage stations, of which an example is included.

  13. Impact of high microwave power on hydrogen impurity trapping in nanocrystalline diamond films grown with simultaneous nitrogen and oxygen addition into methane/hydrogen plasma

    NASA Astrophysics Data System (ADS)

    Tang, C. J.; Fernandes, A. J. S.; Jiang, X. F.; Pinto, J. L.; Ye, H.

    2016-01-01

    In this work, we study for the first time the influence of microwave power higher than 2.0 kW on bonded hydrogen impurity incorporation (form and content) in nanocrystalline diamond (NCD) films grown in a 5 kW MPCVD reactor. The NCD samples of different thickness ranging from 25 to 205 μm were obtained through a small amount of simultaneous nitrogen and oxygen addition into conventional about 4% methane in hydrogen reactants by keeping the other operating parameters in the same range as that typically used for the growth of large-grained polycrystalline diamond films. Specific hydrogen point defect in the NCD films is analyzed by using Fourier-transform infrared (FTIR) spectroscopy. When the other operating parameters are kept constant (mainly the input gases), with increasing of microwave power from 2.0 to 3.2 kW (the pressure was increased slightly in order to stabilize the plasma ball of the same size), which simultaneously resulting in the rise of substrate temperature more than 100 °C, the growth rate of the NCD films increases one order of magnitude from 0.3 to 3.0 μm/h, while the content of hydrogen impurity trapped in the NCD films during the growth process decreases with power. It has also been found that a new H related infrared absorption peak appears at 2834 cm-1 in the NCD films grown with a small amount of nitrogen and oxygen addition at power higher than 2.0 kW and increases with power higher than 3.0 kW. According to these new experimental results, the role of high microwave power on diamond growth and hydrogen impurity incorporation is discussed based on the standard growth mechanism of CVD diamonds using CH4/H2 gas mixtures. Our current experimental findings shed light into the incorporation mechanism of hydrogen impurity in NCD films grown with a small amount of nitrogen and oxygen addition into methane/hydrogen plasma.

  14. A computer program for estimating the power-density spectrum of advanced continuous simulation language generated time histories

    NASA Technical Reports Server (NTRS)

    Dunn, H. J.

    1981-01-01

    A computer program for performing frequency analysis of time history data is presented. The program uses circular convolution and the fast Fourier transform to calculate power density spectrum (PDS) of time history data. The program interfaces with the advanced continuous simulation language (ACSL) so that a frequency analysis may be performed on ACSL generated simulation variables. An example of the calculation of the PDS of a Van de Pol oscillator is presented.

  15. Impact of the flame retardant additive triphenyl phosphate (TPP) on the performance of graphite/LiFePO4 cells in high power applications

    NASA Astrophysics Data System (ADS)

    Ciosek Högström, Katarzyna; Lundgren, Henrik; Wilken, Susanne; Zavalis, Tommy G.; Behm, Mårten; Edström, Kristina; Jacobsson, Per; Johansson, Patrik; Lindbergh, Göran

    2014-06-01

    This study presents an extensive characterization of a standard Li-ion battery (LiB) electrolyte containing different concentrations of the flame retardant triphenyl phosphate (TPP) in the context of high power applications. Electrolyte characterization shows only a minor decrease in the electrolyte flammability for low TPP concentrations. The addition of TPP to the electrolyte leads to increased viscosity and decreased conductivity. The solvation of the lithium ion charge carriers seem to be directly affected by the TPP addition - as evidenced by Raman spectroscopy and increased mass-transport resistivity. Graphite/LiFePO4 full cell tests show the energy efficiency to decrease with the addition of TPP. Specifically, diffusion resistivity is observed to be the main source of increased losses. Furthermore, TPP influences the interface chemistry on both the positive and the negative electrode. Higher concentrations of TPP lead to thicker interface layers on LiFePO4. Even though TPP is not electrochemically reduced on graphite, it does participate in SEI formation. TPP cannot be considered a suitable flame retardant for high power applications as there is only a minor impact of TPP on the flammability of the electrolyte for low concentrations of TPP, and a significant increase in polarization is observed for higher concentrations of TPP.

  16. High SO{sub 2} removal efficiency testing: Results of DBA and sodium formate additive tests at Southwestern Electric Power company`s Pirkey Station

    SciTech Connect

    1996-05-30

    Tests were conducted at Southwestern Electric Power Company`s (SWEPCo) Henry W. Pirkey Station wet limestone flue gas desulfurization (FGD) system to evaluate options for achieving high sulfur dioxide removal efficiency. The Pirkey FGD system includes four absorber modules, each with dual slurry recirculation loops and with a perforated plate tray in the upper loop. The options tested involved the use of dibasic acid (DBA) or sodium formate as a performance additive. The effectiveness of other potential options was simulated with the Electric Power Research Institute`s (EPRI) FGD PRocess Integration and Simulation Model (FGDPRISM) after it was calibrated to the system. An economic analysis was done to determine the cost effectiveness of the high-efficiency options. Results are-summarized below.

  17. Synthesis of Bridged Heterocycles via Sequential 1,4- and 1,2-Addition Reactions to α,β-Unsaturated N-Acyliminium Ions: Mechanistic and Computational Studies.

    PubMed

    Yazici, Arife; Wille, Uta; Pyne, Stephen G

    2016-02-19

    Novel tricyclic bridged heterocyclic systems can be readily prepared from sequential 1,4- and 1,2-addition reactions of allyl and 3-substituted allylsilanes to indolizidine and quinolizidine α,β-unsaturated N-acyliminium ions. These reactions involve a novel N-assisted, transannular 1,5-hydride shift. Such a mechanism was supported by examining the reaction of a dideuterated indolizidine, α,β-unsaturated N-acyliminium ion precursor, which provided specifically dideuterated tricyclic bridged heterocyclic products, and from computational studies. In contrast, the corresponding pyrrolo[1,2-a]azepine system did not provide the corresponding tricyclic bridged heterocyclic product and gave only a bis-allyl adduct, while more substituted versions gave novel furo[3,2-d]pyrrolo[1,2-a]azepine products. Such heterocyclic systems would be expected to be useful scaffolds for the preparation of libraries of novel compounds for new drug discovery programs. PMID:26816207

  18. Synthesis of Bridged Heterocycles via Sequential 1,4- and 1,2-Addition Reactions to α,β-Unsaturated N-Acyliminium Ions: Mechanistic and Computational Studies.

    PubMed

    Yazici, Arife; Wille, Uta; Pyne, Stephen G

    2016-02-19

    Novel tricyclic bridged heterocyclic systems can be readily prepared from sequential 1,4- and 1,2-addition reactions of allyl and 3-substituted allylsilanes to indolizidine and quinolizidine α,β-unsaturated N-acyliminium ions. These reactions involve a novel N-assisted, transannular 1,5-hydride shift. Such a mechanism was supported by examining the reaction of a dideuterated indolizidine, α,β-unsaturated N-acyliminium ion precursor, which provided specifically dideuterated tricyclic bridged heterocyclic products, and from computational studies. In contrast, the corresponding pyrrolo[1,2-a]azepine system did not provide the corresponding tricyclic bridged heterocyclic product and gave only a bis-allyl adduct, while more substituted versions gave novel furo[3,2-d]pyrrolo[1,2-a]azepine products. Such heterocyclic systems would be expected to be useful scaffolds for the preparation of libraries of novel compounds for new drug discovery programs.

  19. Neuro-Fuzzy Computational Technique to Control Load Frequency in Hydro-Thermal Interconnected Power System

    NASA Astrophysics Data System (ADS)

    Prakash, S.; Sinha, S. K.

    2015-09-01

    In this research work, two areas hydro-thermal power system connected through tie-lines is considered. The perturbation of frequencies at the areas and resulting tie line power flows arise due to unpredictable load variations that cause mismatch between the generated and demanded powers. Due to rising and falling power demand, the real and reactive power balance is harmed; hence frequency and voltage get deviated from nominal value. This necessitates designing of an accurate and fast controller to maintain the system parameters at nominal value. The main purpose of system generation control is to balance the system generation against the load and losses so that the desired frequency and power interchange between neighboring systems are maintained. The intelligent controllers like fuzzy logic, artificial neural network (ANN) and hybrid fuzzy neural network approaches are used for automatic generation control for the two area interconnected power systems. Area 1 consists of thermal reheat power plant whereas area 2 consists of hydro power plant with electric governor. Performance evaluation is carried out by using intelligent (ANFIS, ANN and fuzzy) control and conventional PI and PID control approaches. To enhance the performance of controller sliding surface i.e. variable structure control is included. The model of interconnected power system has been developed with all five types of said controllers and simulated using MATLAB/SIMULINK package. The performance of the intelligent controllers has been compared with the conventional PI and PID controllers for the interconnected power system. A comparison of ANFIS, ANN, Fuzzy and PI, PID based approaches shows the superiority of proposed ANFIS over ANN, fuzzy and PI, PID. Thus the hybrid fuzzy neural network controller has better dynamic response i.e., quick in operation, reduced error magnitude and minimized frequency transients.

  20. The power of an ontology-driven developmental toxicity database for data mining and computational modeling

    EPA Science Inventory

    Modeling of developmental toxicology presents a significant challenge to computational toxicology due to endpoint complexity and lack of data coverage. These challenges largely account for the relatively few modeling successes using the structure–activity relationship (SAR) parad...

  1. FINITE ELEMENT MODELS FOR COMPUTING SEISMIC INDUCED SOIL PRESSURES ON DEEPLY EMBEDDED NUCLEAR POWER PLANT STRUCTURES.

    SciTech Connect

    XU, J.; COSTANTINO, C.; HOFMAYER, C.

    2006-06-26

    PAPER DISCUSSES COMPUTATIONS OF SEISMIC INDUCED SOIL PRESSURES USING FINITE ELEMENT MODELS FOR DEEPLY EMBEDDED AND OR BURIED STIFF STRUCTURES SUCH AS THOSE APPEARING IN THE CONCEPTUAL DESIGNS OF STRUCTURES FOR ADVANCED REACTORS.

  2. Auditory Power-Law Activation Avalanches Exhibit a Fundamental Computational Ground State

    NASA Astrophysics Data System (ADS)

    Stoop, Ruedi; Gomez, Florian

    2016-07-01

    The cochlea provides a biological information-processing paradigm that we are only beginning to understand in its full complexity. Our work reveals an interacting network of strongly nonlinear dynamical nodes, on which even a simple sound input triggers subnetworks of activated elements that follow power-law size statistics ("avalanches"). From dynamical systems theory, power-law size distributions relate to a fundamental ground state of biological information processing. Learning destroys these power laws. These results strongly modify the models of mammalian sound processing and provide a novel methodological perspective for understanding how the brain processes information.

  3. Auditory Power-Law Activation Avalanches Exhibit a Fundamental Computational Ground State.

    PubMed

    Stoop, Ruedi; Gomez, Florian

    2016-07-15

    The cochlea provides a biological information-processing paradigm that we are only beginning to understand in its full complexity. Our work reveals an interacting network of strongly nonlinear dynamical nodes, on which even a simple sound input triggers subnetworks of activated elements that follow power-law size statistics ("avalanches"). From dynamical systems theory, power-law size distributions relate to a fundamental ground state of biological information processing. Learning destroys these power laws. These results strongly modify the models of mammalian sound processing and provide a novel methodological perspective for understanding how the brain processes information.

  4. Auditory Power-Law Activation Avalanches Exhibit a Fundamental Computational Ground State.

    PubMed

    Stoop, Ruedi; Gomez, Florian

    2016-07-15

    The cochlea provides a biological information-processing paradigm that we are only beginning to understand in its full complexity. Our work reveals an interacting network of strongly nonlinear dynamical nodes, on which even a simple sound input triggers subnetworks of activated elements that follow power-law size statistics ("avalanches"). From dynamical systems theory, power-law size distributions relate to a fundamental ground state of biological information processing. Learning destroys these power laws. These results strongly modify the models of mammalian sound processing and provide a novel methodological perspective for understanding how the brain processes information. PMID:27472144

  5. Light-weight single trial EEG signal processing algorithms: computational profiling for low power design.

    PubMed

    Ahmadi, Ali; Jafari, Roozbeh; Hart, John

    2011-01-01

    Brain Computer Interface (BCI) systems translate brain rhythms into signals comprehensible by computers. BCI has numerous applications in the clinical domain, the computer gaming, and the military. Real-time analysis of single trial brain signals is a challenging task, due to the low SNR of the incoming signals, added noise due to muscle artifacts, and trial-to-trial variability. In this work we present a computationally lightweight classification method based on several time and frequency domain features. After preprocessing and filtering, wavelet transform and Short Time Fourier Transform (STFT) are used for feature extraction. Feature vectors which are extracted from θ and α frequency bands are classified using a Support Vector Machine (SVM) classifier. EEG data were recorded from 64 electrodes during three different Go/NoGo tasks. We achieved 91% classification accuracy for two-class discrimination. The high recognition rate and low computational complexity makes this approach a promising method for a BCI system running on wearable and mobile devices. Computational profiling shows that this method is suitable for real time signal processing implementation. PMID:22255321

  6. Light-weight single trial EEG signal processing algorithms: computational profiling for low power design.

    PubMed

    Ahmadi, Ali; Jafari, Roozbeh; Hart, John

    2011-01-01

    Brain Computer Interface (BCI) systems translate brain rhythms into signals comprehensible by computers. BCI has numerous applications in the clinical domain, the computer gaming, and the military. Real-time analysis of single trial brain signals is a challenging task, due to the low SNR of the incoming signals, added noise due to muscle artifacts, and trial-to-trial variability. In this work we present a computationally lightweight classification method based on several time and frequency domain features. After preprocessing and filtering, wavelet transform and Short Time Fourier Transform (STFT) are used for feature extraction. Feature vectors which are extracted from θ and α frequency bands are classified using a Support Vector Machine (SVM) classifier. EEG data were recorded from 64 electrodes during three different Go/NoGo tasks. We achieved 91% classification accuracy for two-class discrimination. The high recognition rate and low computational complexity makes this approach a promising method for a BCI system running on wearable and mobile devices. Computational profiling shows that this method is suitable for real time signal processing implementation.

  7. Information Power Grid: Distributed High-Performance Computing and Large-Scale Data Management for Science and Engineering

    NASA Technical Reports Server (NTRS)

    Johnston, William E.; Gannon, Dennis; Nitzberg, Bill; Feiereisen, William (Technical Monitor)

    2000-01-01

    The term "Grid" refers to distributed, high performance computing and data handling infrastructure that incorporates geographically and organizationally dispersed, heterogeneous resources that are persistent and supported. The vision for NASN's Information Power Grid - a computing and data Grid - is that it will provide significant new capabilities to scientists and engineers by facilitating routine construction of information based problem solving environments / frameworks that will knit together widely distributed computing, data, instrument, and human resources into just-in-time systems that can address complex and large-scale computing and data analysis problems. IPG development and deployment is addressing requirements obtained by analyzing a number of different application areas, in particular from the NASA Aero-Space Technology Enterprise. This analysis has focussed primarily on two types of users: The scientist / design engineer whose primary interest is problem solving (e.g., determining wing aerodynamic characteristics in many different operating environments), and whose primary interface to IPG will be through various sorts of problem solving frameworks. The second type of user if the tool designer: The computational scientists who convert physics and mathematics into code that can simulate the physical world. These are the two primary users of IPG, and they have rather different requirements. This paper describes the current state of IPG (the operational testbed), the set of capabilities being put into place for the operational prototype IPG, as well as some of the longer term R&D tasks.

  8. A power-efficient communication system between brain-implantable devices and external computers.

    PubMed

    Yao, Ning; Lee, Heung-No; Chang, Cheng-Chun; Sclabassi, Robert J; Sun, Mingui

    2007-01-01

    In this paper, we propose a power efficient communication system for linking a brain-implantable device to an external system. For battery powered implantable devices, the processor and the transmitter power should be reduced in order to both conserve battery power and reduce the health risks associated with transmission. To accomplish this, a joint source-channel coding/decoding system is devised. Low-density generator matrix (LDGM) codes are used in our system due to their low encoding complexity. The power cost for signal processing within the implantable device is greatly reduced by avoiding explicit source encoding. Raw data which is highly correlated is transmitted. At the receiver, a Markov chain source correlation model is utilized to approximate and capture the correlation of raw data. A turbo iterative receiver algorithm is designed which connects the Markov chain source model to the LDGM decoder in a turbo-iterative way. Simulation results show that the proposed system can save up to 1 to 2.5 dB on transmission power.

  9. Computational fluid dynamics study on mixing mode and power consumption in anaerobic mono- and co-digestion.

    PubMed

    Zhang, Yuan; Yu, Guangren; Yu, Liang; Siddhu, Muhammad Abdul Hanan; Gao, Mengjiao; Abdeltawab, Ahmed A; Al-Deyab, Salem S; Chen, Xiaochun

    2016-03-01

    Computational fluid dynamics (CFD) was applied to investigate mixing mode and power consumption in anaerobic mono- and co-digestion. Cattle manure (CM) and corn stover (CS) were used as feedstock and stirred tank reactor (STR) was used as digester. Power numbers obtained by the CFD simulation were compared with those from the experimental correlation. Results showed that the standard k-ε model was more appropriate than other turbulence models. A new index, net power production instead of gas production, was proposed to optimize feedstock ratio for anaerobic co-digestion. Results showed that flow field and power consumption were significantly changed in co-digestion of CM and CS compared with those in mono-digestion of either CM or CS. For different mixing modes, the optimum feedstock ratio for co-digestion changed with net power production. The best option of CM/CS ratio for continuous mixing, intermittent mixing I, and intermittent mixing II were 1:1, 1:1 and 1:3, respectively. PMID:26722816

  10. Computational Assessment of the Aerodynamic Performance of a Variable-Speed Power Turbine for Large Civil Tilt-Rotor Application

    NASA Technical Reports Server (NTRS)

    Welch, Gerard E.

    2011-01-01

    The main rotors of the NASA Large Civil Tilt-Rotor notional vehicle operate over a wide speed-range, from 100% at take-off to 54% at cruise. The variable-speed power turbine offers one approach by which to effect this speed variation. Key aero-challenges include high work factors at cruise and wide (40 to 60 deg.) incidence variations in blade and vane rows over the speed range. The turbine design approach must optimize cruise efficiency and minimize off-design penalties at take-off. The accuracy of the off-design incidence loss model is therefore critical to the turbine design. In this effort, 3-D computational analyses are used to assess the variation of turbine efficiency with speed change. The conceptual design of a 4-stage variable-speed power turbine for the Large Civil Tilt-Rotor application is first established at the meanline level. The design of 2-D airfoil sections and resulting 3-D blade and vane rows is documented. Three-dimensional Reynolds Averaged Navier-Stokes computations are used to assess the design and off-design performance of an embedded 1.5-stage portion-Rotor 1, Stator 2, and Rotor 2-of the turbine. The 3-D computational results yield the same efficiency versus speed trends predicted by meanline analyses, supporting the design choice to execute the turbine design at the cruise operating speed.

  11. NASA's Information Power Grid: Large Scale Distributed Computing and Data Management

    NASA Technical Reports Server (NTRS)

    Johnston, William E.; Vaziri, Arsi; Hinke, Tom; Tanner, Leigh Ann; Feiereisen, William J.; Thigpen, William; Tang, Harry (Technical Monitor)

    2001-01-01

    Large-scale science and engineering are done through the interaction of people, heterogeneous computing resources, information systems, and instruments, all of which are geographically and organizationally dispersed. The overall motivation for Grids is to facilitate the routine interactions of these resources in order to support large-scale science and engineering. Multi-disciplinary simulations provide a good example of a class of applications that are very likely to require aggregation of widely distributed computing, data, and intellectual resources. Such simulations - e.g. whole system aircraft simulation and whole system living cell simulation - require integrating applications and data that are developed by different teams of researchers frequently in different locations. The research team's are the only ones that have the expertise to maintain and improve the simulation code and/or the body of experimental data that drives the simulations. This results in an inherently distributed computing and data management environment.

  12. Reliability improvements of the Guri Hydroelectric Power Plant computer control system AGC and AVC

    SciTech Connect

    Castro, F.; Pescina, M. ); Llort, G. )

    1992-09-01

    This paper describes the computer control system of a large hydroelectric powerplant and the reliability improvements made to the automatic generation control (AGC) and automatic voltage control (AVC) programs. hardware and software modifications were required to improve the interface between the powerplant and the regional load dispatch office. These modifications, and their impact on the AGC and AVC reliability, are also discussed. The changes that have been implemented are recommended for inclusion in new powerplant computer control systems, and as an upgrade feature for existing control systems.

  13. A computer controlled power tool for the servicing of the Hubble Space Telescope

    NASA Technical Reports Server (NTRS)

    Richards, Paul W.; Konkel, Carl; Smith, Chris; Brown, Lee; Wagner, Ken

    1996-01-01

    The Hubble Space Telescope (HST) Pistol Grip Tool (PGT) is a self-contained, microprocessor controlled, battery-powered, 3/8-inch-drive hand-held tool. The PGT is also a non-powered ratchet wrench. This tool will be used by astronauts during Extravehicular Activity (EVA) to apply torque to the HST and HST Servicing Support Equipment mechanical interfaces and fasteners. Numerous torque, speed, and turn or angle limits are programmed into the PGT for use during various missions. Batteries are replaceable during ground operations, Intravehicular Activities, and EVA's.

  14. Computer-Aided Modeling and Analysis of Power Processing Systems (CAMAPPS). Phase 1: Users handbook

    NASA Technical Reports Server (NTRS)

    Kim, S.; Lee, J.; Cho, B. H.; Lee, F. C.

    1986-01-01

    The EASY5 macro component models developed for the spacecraft power system simulation are described. A brief explanation about how to use the macro components with the EASY5 Standard Components to build a specific system is given through an example. The macro components are ordered according to the following functional group: converter power stage models, compensator models, current-feedback models, constant frequency control models, load models, solar array models, and shunt regulator models. Major equations, a circuit model, and a program listing are provided for each macro component.

  15. Control algorithms and computer simulation of a stand-alone photovoltaic village power system

    NASA Technical Reports Server (NTRS)

    Groumpos, P. P.; Culler, J. E.; Delombard, R.; Ratajczak, A. F.; Cull, R.

    1984-01-01

    At Stand-Alone Photovoltaic (SAPV) power systems increase in size and load diversity, the design and simulation of control subsystems takes on added importance. These SAPV systems represent 'mini utilities' with commensurate controls requirements, albeit with the added complexity of the energy source (sunlight received) being an uncontrollable variable. This paper briefly describes a stand-alone photovoltaic power/load system computerized simulation model. The model was tested against operational data from the Schuchuli stand-alone village photovoltaic system and has achieved acceptable levels of simulation accuracy. The model can be used to simulate system designs although with probable battery modification.

  16. The ALL-OUT Library; A Design for Computer-Powered, Multidimensional Services.

    ERIC Educational Resources Information Center

    Sleeth, Jim; LaRue, James

    1983-01-01

    Preliminary description of design of electronic library and home information delivery system highlights potentials of personal computer interface program (applying for service, assuring that users are valid, checking for measures, searching, locating titles) and incorporation of concepts used in other information systems (security checks,…

  17. A computational study of the addition of ReO3L (L = Cl(-), CH3, OCH3 and Cp) to ethenone.

    PubMed

    Aniagyei, Albert; Tia, Richard; Adei, Evans

    2016-01-01

    The periselectivity and chemoselectivity of the addition of transition metal oxides of the type ReO3L (L = Cl, CH3, OCH3 and Cp) to ethenone have been explored at the MO6 and B3LYP/LACVP* levels of theory. The activation barriers and reaction energies for the stepwise and concerted addition pathways involving multiple spin states have been computed. In the reaction of ReO3L (L = Cl(-), OCH3, CH3 and Cp) with ethenone, the concerted [2 + 2] addition of the metal oxide across the C=C and C=O double bond to form either metalla-2-oxetane-3-one or metalla-2,4-dioxolane is the most kinetically favored over the formation of metalla-2,5-dioxolane-3-one from the direct [3 + 2] addition pathway. The trends in activation and reaction energies for the formation of metalla-2-oxetane-3-one and metalla-2,4-dioxolane are Cp < Cl(-) < OCH3 < CH3 and Cp < OCH3 < CH3 < Cl(-) and for the reaction energies are Cp < OCH3 < Cl(-) < CH3 and Cp < CH3 < OCH3 < Cl CH3. The concerted [3 + 2] addition of the metal oxide across the C=C double of the ethenone to form species metalla-2,5-dioxolane-3-one is thermodynamically the most favored for the ligand L = Cp. The direct [2 + 2] addition pathways leading to the formations of metalla-2-oxetane-3-one and metalla-2,4-dioxolane is thermodynamically the most favored for the ligands L = OCH3 and Cl(-). The difference between the calculated [2 + 2] activation barriers for the addition of the metal oxide LReO3 across the C=C and C=O functionalities of ethenone are small except for the case of L = Cl(-) and OCH3. The rearrangement of the metalla-2-oxetane-3-one-metalla-2,5-dioxolane-3-one even though feasible, are unfavorable due to high activation energies of their rate-determining steps. For the rearrangement of the metalla-2-oxetane-3-one to metalla-2,5-dioxolane-3-one, the trends in activation barriers is found to follow the order OCH3 < Cl(-) < CH3 < Cp. The trends in the activation energies for

  18. Additive Manufacturing of Single-Crystal Superalloy CMSX-4 Through Scanning Laser Epitaxy: Computational Modeling, Experimental Process Development, and Process Parameter Optimization

    NASA Astrophysics Data System (ADS)

    Basak, Amrita; Acharya, Ranadip; Das, Suman

    2016-08-01

    This paper focuses on additive manufacturing (AM) of single-crystal (SX) nickel-based superalloy CMSX-4 through scanning laser epitaxy (SLE). SLE, a powder bed fusion-based AM process was explored for the purpose of producing crack-free, dense deposits of CMSX-4 on top of similar chemistry investment-cast substrates. Optical microscopy and scanning electron microscopy (SEM) investigations revealed the presence of dendritic microstructures that consisted of fine γ' precipitates within the γ matrix in the deposit region. Computational fluid dynamics (CFD)-based process modeling, statistical design of experiments (DoE), and microstructural characterization techniques were combined to produce metallurgically bonded single-crystal deposits of more than 500 μm height in a single pass along the entire length of the substrate. A customized quantitative metallography based image analysis technique was employed for automatic extraction of various deposit quality metrics from the digital cross-sectional micrographs. The processing parameters were varied, and optimal processing windows were identified to obtain good quality deposits. The results reported here represent one of the few successes obtained in producing single-crystal epitaxial deposits through a powder bed fusion-based metal AM process and thus demonstrate the potential of SLE to repair and manufacture single-crystal hot section components of gas turbine systems from nickel-based superalloy powders.

  19. ECG-Based Detection of Early Myocardial Ischemia in a Computational Model: Impact of Additional Electrodes, Optimal Placement, and a New Feature for ST Deviation

    PubMed Central

    Loewe, Axel; Schulze, Walther H. W.; Jiang, Yuan; Wilhelms, Mathias; Luik, Armin; Dössel, Olaf; Seemann, Gunnar

    2015-01-01

    In case of chest pain, immediate diagnosis of myocardial ischemia is required to respond with an appropriate treatment. The diagnostic capability of the electrocardiogram (ECG), however, is strongly limited for ischemic events that do not lead to ST elevation. This computational study investigates the potential of different electrode setups in detecting early ischemia at 10 minutes after onset: standard 3-channel and 12-lead ECG as well as body surface potential maps (BSPMs). Further, it was assessed if an additional ECG electrode with optimized position or the right-sided Wilson leads can improve sensitivity of the standard 12-lead ECG. To this end, a simulation study was performed for 765 different locations and sizes of ischemia in the left ventricle. Improvements by adding a single, subject specifically optimized electrode were similar to those of the BSPM: 2–11% increased detection rate depending on the desired specificity. Adding right-sided Wilson leads had negligible effect. Absence of ST deviation could not be related to specific locations of the ischemic region or its transmurality. As alternative to the ST time integral as a feature of ST deviation, the K point deviation was introduced: the baseline deviation at the minimum of the ST-segment envelope signal, which increased 12-lead detection rate by 7% for a reasonable threshold. PMID:26587538

  20. ECG-Based Detection of Early Myocardial Ischemia in a Computational Model: Impact of Additional Electrodes, Optimal Placement, and a New Feature for ST Deviation.

    PubMed

    Loewe, Axel; Schulze, Walther H W; Jiang, Yuan; Wilhelms, Mathias; Luik, Armin; Dössel, Olaf; Seemann, Gunnar

    2015-01-01

    In case of chest pain, immediate diagnosis of myocardial ischemia is required to respond with an appropriate treatment. The diagnostic capability of the electrocardiogram (ECG), however, is strongly limited for ischemic events that do not lead to ST elevation. This computational study investigates the potential of different electrode setups in detecting early ischemia at 10 minutes after onset: standard 3-channel and 12-lead ECG as well as body surface potential maps (BSPMs). Further, it was assessed if an additional ECG electrode with optimized position or the right-sided Wilson leads can improve sensitivity of the standard 12-lead ECG. To this end, a simulation study was performed for 765 different locations and sizes of ischemia in the left ventricle. Improvements by adding a single, subject specifically optimized electrode were similar to those of the BSPM: 2-11% increased detection rate depending on the desired specificity. Adding right-sided Wilson leads had negligible effect. Absence of ST deviation could not be related to specific locations of the ischemic region or its transmurality. As alternative to the ST time integral as a feature of ST deviation, the K point deviation was introduced: the baseline deviation at the minimum of the ST-segment envelope signal, which increased 12-lead detection rate by 7% for a reasonable threshold.

  1. Tandem β-elimination/hetero-michael addition rearrangement of an N-alkylated pyridinium oxime to an O-alkylated pyridine oxime ether: an experimental and computational study.

    PubMed

    Picek, Igor; Vianello, Robert; Šket, Primož; Plavec, Janez; Foretić, Blaženka

    2015-02-20

    A novel OH(-)-promoted tandem reaction involving C(β)-N(+)(pyridinium) cleavage and ether C(β)-O(oxime) bond formation in aqueous media has been presented. The study fully elucidates the fascinating reaction behavior of N-benzoylethylpyridinium-4-oxime chloride in aqueous media under mild reaction conditions. The reaction journey begins with the exclusive β-elimination and formation of pyridine-4-oxime and phenyl vinyl ketone and ends with the formation of O-alkylated pyridine oxime ether. A combination of experimental and computational studies enabled the introduction of a new type of rearrangement process that involves a unique tandem reaction sequence. We showed that (E)-O-benzoylethylpyridine-4-oxime is formed in aqueous solution by a base-induced tandem β-elimination/hetero-Michael addition rearrangement of (E)-N-benzoylethylpyridinium-4-oximate, the novel synthetic route to this engaging target class of compounds. The complete mechanistic picture of this rearrangement process was presented and discussed in terms of the E1cb reaction scheme within the rate-limiting β-elimination step.

  2. Selecting an Architecture for a Safety-Critical Distributed Computer System with Power, Weight and Cost Considerations

    NASA Technical Reports Server (NTRS)

    Torres-Pomales, Wilfredo

    2014-01-01

    This report presents an example of the application of multi-criteria decision analysis to the selection of an architecture for a safety-critical distributed computer system. The design problem includes constraints on minimum system availability and integrity, and the decision is based on the optimal balance of power, weight and cost. The analysis process includes the generation of alternative architectures, evaluation of individual decision criteria, and the selection of an alternative based on overall value. In this example presented here, iterative application of the quantitative evaluation process made it possible to deliberately generate an alternative architecture that is superior to all others regardless of the relative importance of cost.

  3. Design analysis and computer-aided performance evaluation of shuttle orbiter electrical power system. Volume 1: Summary

    NASA Technical Reports Server (NTRS)

    1974-01-01

    Studies were conducted to develop appropriate space shuttle electrical power distribution and control (EPDC) subsystem simulation models and to apply the computer simulations to systems analysis of the EPDC. A previously developed software program (SYSTID) was adapted for this purpose. The following objectives were attained: (1) significant enhancement of the SYSTID time domain simulation software, (2) generation of functionally useful shuttle EPDC element models, and (3) illustrative simulation results in the analysis of EPDC performance, under the conditions of fault, current pulse injection due to lightning, and circuit protection sizing and reaction times.

  4. Definitions of non-stationary vibration power for time-frequency analysis and computational algorithms based upon harmonic wavelet transform

    NASA Astrophysics Data System (ADS)

    Heo, YongHwa; Kim, Kwang-joon

    2015-02-01

    While the vibration power for a set of harmonic force and velocity signals is well defined and known, it is not as popular yet for a set of stationary random force and velocity processes, although it can be found in some literatures. In this paper, the definition of the vibration power for a set of non-stationary random force and velocity signals will be derived for the purpose of a time-frequency analysis based on the definitions of the vibration power for the harmonic and stationary random signals. The non-stationary vibration power, defined as the short-time average of the product of the force and velocity over a given frequency range of interest, can be calculated by three methods: the Wigner-Ville distribution, the short-time Fourier transform, and the harmonic wavelet transform. The latter method is selected in this paper because band-pass filtering can be done without phase distortions, and the frequency ranges can be chosen very flexibly for the time-frequency analysis. Three algorithms for the time-frequency analysis of the non-stationary vibration power using the harmonic wavelet transform are discussed. The first is an algorithm for computation according to the full definition, while the others are approximate. Noting that the force and velocity decomposed into frequency ranges of interest by the harmonic wavelet transform are constructed with coefficients and basis functions, for the second algorithm, it is suggested to prepare a table of time integrals of the product of the basis functions in advance, which are independent of the signals under analysis. How to prepare and utilize the integral table are presented. The third algorithm is based on an evolutionary spectrum. Applications of the algorithms to the time-frequency analysis of the vibration power transmitted from an excitation source to a receiver structure in a simple mechanical system consisting of a cantilever beam and a reaction wheel are presented for illustration.

  5. WE-D-BRF-05: Quantitative Dual-Energy CT Imaging for Proton Stopping Power Computation

    SciTech Connect

    Han, D; Williamson, J; Siebers, J

    2014-06-15

    Purpose: To extend the two-parameter separable basis-vector model (BVM) to estimation of proton stopping power from dual-energy CT (DECT) imaging. Methods: BVM assumes that the photon cross sections of any unknown material can be represented as a linear combination of the corresponding quantities for two bracketing basis materials. We show that both the electron density (ρe) and mean excitation energy (Iex) can be modeled by BVM, enabling stopping power to be estimated from the Bethe-Bloch equation. We have implemented an idealized post-processing dual energy imaging (pDECT) simulation consisting of monogenetic 45 keV and 80 keV scanning beams with polystyrene-water and water-CaCl2 solution basis pairs for soft tissues and bony tissues, respectively. The coefficients of 24 standard ICRU tissue compositions were estimated by pDECT. The corresponding ρe, Iex, and stopping power tables were evaluated via BVM and compared to tabulated ICRU 44 reference values. Results: BVM-based pDECT was found to estimate ρe and Iex with average and maximum errors of 0.5% and 2%, respectively, for the 24 tissues. Proton stopping power values at 175 MeV, show average/maximum errors of 0.8%/1.4%. For adipose, muscle and bone, these errors result range prediction accuracies less than 1%. Conclusion: A new two-parameter separable DECT model (BVM) for estimating proton stopping power was developed. Compared to competing parametric fit DECT models, BVM has the comparable prediction accuracy without necessitating iterative solution of nonlinear equations or a sample-dependent empirical relationship between effective atomic number and Iex. Based on the proton BVM, an efficient iterative statistical DECT reconstruction model is under development.

  6. Life extensions of existing power plants -- Assessment and monitoring with personal computer

    SciTech Connect

    Koch, J.; Kaminski, T.

    1998-07-01

    In many industrial countries of the World the marginal conditions for the construction and use of fossil-fired power plants changed. Location problems as well as lengthy and protracted permitting procedures for new plants revealed the need for a method for the extension of the service life of existing plants. Owner of power plants show interest in operating their power plants units for 15--25 years exceeding their design period. Thus a service life assessment including the investigation of the steps necessary for NO{sub x} reduction and desulfurization certainly shows service life extension is an alternative to the construction of a new power plant. Moreover, this investigation can increase the plant reliability. For the life extension of a power plant at first the assessment of the actual condition is of importance. The components relevant for the service life are selected from such parts which are subjected e.g. to weathering, wear, temperatures above 450 C and alternating expansion stresses taking into account the present operating loads and experiences. Generally, the components to be considered critical have to be subjected to a special test. Their actual condition is determined in detail by means of check calculation and tests. For the inspection of creep-stressed components first a check calculation of the components is carried out. It has to be admitted that these calculations, however, can have a certain degree of inaccuracy due to various criteria which can be determined partly by tests. To minimize the above mentioned uncertainties to a large extent one is forced to thread other paths to obtain information about the actual condition of the component. An important measure is the visual check of the components and non destructive tests.

  7. The analysis of diagnostics possibilities of the Dual- Drive electric power steering system using diagnostics scanner and computer method

    NASA Astrophysics Data System (ADS)

    Szczypiński-Sala, W.; Dobaj, K.

    2016-09-01

    The article presents the analysis of diagnostics possibilities of electric power steering system using computer diagnostics scanner. Several testing attempts were performed. There were analyzed the changes of torque moment exerted on steering wheel by the driver and the changes of the angle of rotation steering wheel accompanying them. The tests were conducted in variable conditions comprising wheel load and the friction coefficient of tyre road interaction. Obtained results enabled the analysis of the influence of changeable operations conditions, possible to acquire in diagnostics scanners of chosen parameters of electric power steering system. Moreover, simulation model of operation, electric drive power steering system with the use of the Matlab simulation software was created. The results of the measurements obtained in road conditions served to verify this model. Subsequently, model response to inputs change of the device was analyzed and its reaction to various constructional and exploitative parameters was checked. The entirety of conducted work constitutes a step to create a diagnostic monitor possible to use in self-diagnosis of electric power steering system.

  8. Systematic Computation of Nonlinear Cellular and Molecular Dynamics with Low-Power CytoMimetic Circuits: A Simulation Study

    PubMed Central

    Papadimitriou, Konstantinos I.; Stan, Guy-Bart V.; Drakakis, Emmanuel M.

    2013-01-01

    This paper presents a novel method for the systematic implementation of low-power microelectronic circuits aimed at computing nonlinear cellular and molecular dynamics. The method proposed is based on the Nonlinear Bernoulli Cell Formalism (NBCF), an advanced mathematical framework stemming from the Bernoulli Cell Formalism (BCF) originally exploited for the modular synthesis and analysis of linear, time-invariant, high dynamic range, logarithmic filters. Our approach identifies and exploits the striking similarities existing between the NBCF and coupled nonlinear ordinary differential equations (ODEs) typically appearing in models of naturally encountered biochemical systems. The resulting continuous-time, continuous-value, low-power CytoMimetic electronic circuits succeed in simulating fast and with good accuracy cellular and molecular dynamics. The application of the method is illustrated by synthesising for the first time microelectronic CytoMimetic topologies which simulate successfully: 1) a nonlinear intracellular calcium oscillations model for several Hill coefficient values and 2) a gene-protein regulatory system model. The dynamic behaviours generated by the proposed CytoMimetic circuits are compared and found to be in very good agreement with their biological counterparts. The circuits exploit the exponential law codifying the low-power subthreshold operation regime and have been simulated with realistic parameters from a commercially available CMOS process. They occupy an area of a fraction of a square-millimetre, while consuming between 1 and 12 microwatts of power. Simulations of fabrication-related variability results are also presented. PMID:23393550

  9. High gamma-power predicts performance in sensorimotor-rhythm brain-computer interfaces

    NASA Astrophysics Data System (ADS)

    Grosse-Wentrup, Moritz; Schölkopf, Bernhard

    2012-08-01

    Subjects operating a brain-computer interface (BCI) based on sensorimotor rhythms exhibit large variations in performance over the course of an experimental session. Here, we show that high-frequency γ-oscillations, originating in fronto-parietal networks, predict such variations on a trial-to-trial basis. We interpret this finding as empirical support for an influence of attentional networks on BCI performance via modulation of the sensorimotor rhythm.

  10. High γ-power predicts performance in sensorimotor-rhythm brain-computer interfaces.

    PubMed

    Grosse-Wentrup, Moritz; Schölkopf, Bernhard

    2012-08-01

    Subjects operating a brain-computer interface (BCI) based on sensorimotor rhythms exhibit large variations in performance over the course of an experimental session. Here, we show that high-frequency γ-oscillations, originating in fronto-parietal networks, predict such variations on a trial-to-trial basis. We interpret this finding as empirical support for an influence of attentional networks on BCI performance via modulation of the sensorimotor rhythm.

  11. A computational analysis of natural convection in a vertical channel with a modified power law non-Newtonian fluid

    SciTech Connect

    Lee, S.R.; Irvine, T.F. Jr.; Greene, G.A.

    1998-04-01

    An implicit finite difference method was applied to analyze laminar natural convection in a vertical channel with a modified power law fluid. This fluid model was chosen because it describes the viscous properties of a pseudoplastic fluid over the entire shear rate range likely to be found in natural convection flows since it covers the shear rate range from Newtonian through transition to simple power law behavior. In addition, a dimensionless similarity parameter is identified which specifies in which of the three regions a particular system is operating. The results for the average channel velocity and average Nusselt number in the asymptotic Newtonian and power law regions are compared with numerical data in the literature. Also, graphical results are presented for the velocity and temperature fields and entrance lengths. The results of average channel velocity and Nusselt number are given in the three regions including developing and fully developed flows. As an example, a pseudoplastic fluid (carboxymethyl cellulose) was chosen to compare the different results of average channel velocity and Nusselt number between a modified power law fluid and the conventional power law model. The results show, depending upon the operating conditions, that if the correct model is not used, gross errors can result.

  12. A linear, separable two-parameter model for dual energy CT imaging of proton stopping power computation

    PubMed Central

    Han, Dong; Siebers, Jeffrey V.; Williamson, Jeffrey F.

    2016-01-01

    Purpose: To evaluate the accuracy and robustness of a simple, linear, separable, two-parameter model (basis vector model, BVM) in mapping proton stopping powers via dual energy computed tomography (DECT) imaging. Methods: The BVM assumes that photon cross sections (attenuation coefficients) of unknown materials are linear combinations of the corresponding radiological quantities of dissimilar basis substances (i.e., polystyrene, CaCl2 aqueous solution, and water). The authors have extended this approach to the estimation of electron density and mean excitation energy, which are required parameters for computing proton stopping powers via the Bethe–Bloch equation. The authors compared the stopping power estimation accuracy of the BVM with that of a nonlinear, nonseparable photon cross section Torikoshi parametric fit model (VCU tPFM) as implemented by the authors and by Yang et al. [“Theoretical variance analysis of single- and dual-energy computed tomography methods for calculating proton stopping power ratios of biological tissues,” Phys. Med. Biol. 55, 1343–1362 (2010)]. Using an idealized monoenergetic DECT imaging model, proton ranges estimated by the BVM, VCU tPFM, and Yang tPFM were compared to International Commission on Radiation Units and Measurements (ICRU) published reference values. The robustness of the stopping power prediction accuracy of tissue composition variations was assessed for both of the BVM and VCU tPFM. The sensitivity of accuracy to CT image uncertainty was also evaluated. Results: Based on the authors’ idealized, error-free DECT imaging model, the root-mean-square error of BVM proton stopping power estimation for 175 MeV protons relative to ICRU reference values for 34 ICRU standard tissues is 0.20%, compared to 0.23% and 0.68% for the Yang and VCU tPFM models, respectively. The range estimation errors were less than 1 mm for the BVM and Yang tPFM models, respectively. The BVM estimation accuracy is not dependent on tissue type

  13. The Meaning and Computation of Causal Power: Comment on Cheng (1997) and Novick and Cheng (2004)

    ERIC Educational Resources Information Center

    Luhmann, Christian C.; Ahn, Woo-kyoung

    2005-01-01

    D. Hume (1739/1987) argued that causality is not observable. P. W. Cheng claimed to present "a theoretical solution to the problem of causal induction first posed by Hume more than two and a half centuries ago" (p. 398) in the form of the power PC theory (L. R. Novick & P. W. Cheng). This theory claims that people's goal in causal induction is to…

  14. Logarithmic divergences in the k-inflationary power spectra computed through the uniform approximation

    NASA Astrophysics Data System (ADS)

    Alinea, Allan L.; Kubota, Takahiro; Naylor, Wade

    2016-02-01

    We investigate a calculation method for solving the Mukhanov-Sasaki equation in slow-roll k-inflation based on the uniform approximation (UA) in conjunction with an expansion scheme for slow-roll parameters with respect to the number of e-folds about the so-called turning point. Earlier works on this method have so far gained some promising results derived from the approximating expressions for the power spectra among others, up to second order with respect to the Hubble and sound flow parameters, when compared to other semi-analytical approaches (e.g., Green's function and WKB methods). However, a closer inspection is suggestive that there is a problem when higher-order parts of the power spectra are considered; residual logarithmic divergences may come out that can render the prediction physically inconsistent. Looking at this possibility, we map out up to what order with respect to the mentioned parameters several physical quantities can be calculated before hitting a logarithmically divergent result. It turns out that the power spectra are limited up to second order, the tensor-to-scalar ratio up to third order, and the spectral indices and running converge to all orders. This indicates that the expansion scheme is incompatible with the working equations derived from UA for the power spectra but compatible with that of the spectral indices. For those quantities that involve logarithmically divergent terms in the higher-order parts, existing results in the literature for the convergent lower-order parts calculated in the equivalent fashion should be viewed with some caution; they do not rest on solid mathematical ground.

  15. Computation of transient dynamics of energy power for a dissipative two state system

    NASA Astrophysics Data System (ADS)

    Carrega, M.; Solinas, P.; Braggio, A.; Sassetti, M.

    2016-05-01

    We consider a two-level system coupled to a thermal bath and we investigate the variation of energy transferred to the reservoir as a function of time. The physical quantity under investigation is the time-dependent quantum average power. We compare quantum master equation approaches with the functional influence method. Differences and similarities between the methods are analysed, showing deviations at low temperature between the functional integral approach and the predictions based on master equations.

  16. COMMIX-PPC: A three-dimensional transient multicomponent computer program for analyzing performance of power plant condensers

    SciTech Connect

    Chien, T.H.; Domanus, H.M.; Sha, W.T.

    1993-02-01

    The COMMIX-PPC computer program is an extended and improved version of earlier COMMIX codes and is specifically designed for evaluating the thermal performance of power plant condensers. The COMMIX codes are general-purpose computer programs for the analysis of fluid flow and heat transfer in complex industrial systems. In COMMIX-PPC, two major features have been added to previously published COMMIX codes. One feature is the incorporation of one-dimensional equations of conservation of mass, momentum, and energy on the tube side and the proper accounting for the thermal interaction between shell and tube side through the porous-medium approach. The other added feature is the extension of the three-dimensional conservation equations for shell-side flow to treat the flow of a multicomponent medium. COMMIX-PPC is designed to perform steady-state and transient, three-dimensional analysis of fluid flow with heat transfer in a power plant condenser. However, the code is designed in a generalized fashion so that, with some modification, it can be used to analyze processes in any heat exchanger or other single-phase engineering applications. Volume I (Equations and Numerics) of this report describes in detail the basic equations, formulation, solution procedures, and models for the phenomena. Volume II (User's Guide and Manual) contains the input instructions, flow charts, sample problems, and descriptions of available options and boundary conditions.

  17. COMMIX-PPC: A three-dimensional transient multicomponent computer program for analyzing performance of power plant condensers

    SciTech Connect

    Chien, T.H.; Domanus, H.M.; Sha, W.T.

    1993-02-01

    The COMMIX-PPC computer program is an extended and improved version of earlier COMMIX codes and is specifically designed for evaluating the thermal performance of power plant condensers. The COMMIX codes are general-purpose computer programs for the analysis of fluid flow and heat transfer in complex industrial systems. In COMMIX-PPC, two major features have been added to previously published COMMIX codes. One feature is the incorporation of one-dimensional conservation of mass, momentum, and energy equations on the tube side, and the proper accounting for the thermal interaction between shell and tube side through the porous-medium approach. The other added feature is the extension of the three-dimensional conservation equations for shell-side flow to treat the flow of a multicomponent medium. COMMIX-PPC is designed to perform steady-state and transient three-dimensional analysis of fluid flow with heat transfer in a power plant condenser. However, the code is designed in a generalized fashion so that, with some modification, it can be used to analyze processes in any heat exchanger or other single-phase engineering applications.

  18. Confidence Intervals, Power Calculation, and Sample Size Estimation for the Squared Multiple Correlation Coefficient under the Fixed and Random Regression Models: A Computer Program and Useful Standard Tables.

    ERIC Educational Resources Information Center

    Mendoza, Jorge L.; Stafford, Karen L.

    2001-01-01

    Introduces a computer package written for Mathematica, the purpose of which is to perform a number of difficult iterative functions with respect to the squared multiple correlation coefficient under the fixed and random models. These functions include computation of the confidence interval upper and lower bounds, power calculation, calculation of…
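
    The iterative functions described belong to the cited Mathematica package, but the fixed-model power calculation can be sketched with standard noncentral-F machinery; the formulation below (Cohen's f² with noncentrality λ = f²(u+v+1)) is a common textbook approach, not necessarily the package's exact algorithm:

        from scipy.stats import f as f_dist, ncf

        def r2_power(rho2, n, u, alpha=0.05):
            """Power for testing H0: rho^2 = 0 in fixed-model multiple
            regression with u predictors and n observations."""
            v = n - u - 1                      # denominator df
            f2 = rho2 / (1.0 - rho2)           # Cohen's effect size f^2
            lam = f2 * (u + v + 1)             # noncentrality parameter
            f_crit = f_dist.ppf(1.0 - alpha, u, v)
            return 1.0 - ncf.cdf(f_crit, u, v, lam)

        print(r2_power(rho2=0.13, n=100, u=5))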

  19. Emergent Power-Law Phase in the 2D Heisenberg Windmill Antiferromagnet: A Computational Experiment

    NASA Astrophysics Data System (ADS)

    Jeevanesan, Bhilahari; Chandra, Premala; Coleman, Piers; Orth, Peter P.

    2015-10-01

    In an extensive computational experiment, we test Polyakov's conjecture that under certain circumstances an isotropic Heisenberg model can develop algebraic spin correlations. We demonstrate the emergence of a multispin U(1) order parameter in a Heisenberg antiferromagnet on interpenetrating honeycomb and triangular lattices. The correlations of this relative phase angle are observed to decay algebraically at intermediate temperatures in an extended critical phase. Using finite-size scaling we show that both phase transitions are of the Berezinskii-Kosterlitz-Thouless type, and at lower temperatures we find long-range Z6 order.

  20. Emergent Power-Law Phase in the 2D Heisenberg Windmill Antiferromagnet: A Computational Experiment.

    PubMed

    Jeevanesan, Bhilahari; Chandra, Premala; Coleman, Piers; Orth, Peter P

    2015-10-23

    In an extensive computational experiment, we test Polyakov's conjecture that under certain circumstances an isotropic Heisenberg model can develop algebraic spin correlations. We demonstrate the emergence of a multispin U(1) order parameter in a Heisenberg antiferromagnet on interpenetrating honeycomb and triangular lattices. The correlations of this relative phase angle are observed to decay algebraically at intermediate temperatures in an extended critical phase. Using finite-size scaling we show that both phase transitions are of the Berezinskii-Kosterlitz-Thouless type, and at lower temperatures we find long-range Z(6) order.

  1. Computational prediction of tube erosion in coal fired power utility boilers

    SciTech Connect

    Lee, B.E.; Fletcher, C.A.J.; Behnia, M.

    1999-10-01

    Erosion of boiler tubes causes serious operational problems in many pulverized coal-fired utility boilers. A new erosion model has been developed in the present study for the prediction of boiler tube erosion. The Lagrangian approach is employed to predict the behavior of the particulate phase. The results of computational prediction of boiler tube erosion and the various parameters causing erosion are discussed in this paper. Comparison of the numerical predictions for a single tube erosion with experimental data shows very good agreement.
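
    The abstract does not spell out its erosion correlation; as a generic illustration of how per-impact erosion is accumulated from Lagrangian particle tracks, the sketch below uses Finnie's classical ductile-erosion angle function with placeholder constants (k, n), which are not the paper's values:

        import math

        def erosion_per_impact(v, alpha, k=2.0e-9, n=2.3):
            """Finnie-type erosion (mass removed per unit particle mass) for
            impact speed v [m/s] and impact angle alpha [rad]; k and n are
            material-dependent placeholders."""
            if math.tan(alpha) <= 1.0 / 3.0:
                f = math.sin(2.0 * alpha) - 3.0 * math.sin(alpha) ** 2
            else:
                f = math.cos(alpha) ** 2 / 3.0
            return k * v ** n * f

        # Accumulate over a stream of tracked impacts: (mass [kg], speed, angle)
        impacts = [(1e-6, 25.0, math.radians(20)), (1e-6, 18.0, math.radians(35))]
        print(sum(m * erosion_per_impact(v, a) for m, v, a in impacts))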

  2. High-power graphic computers for visual simulation: a real-time--rendering revolution

    NASA Technical Reports Server (NTRS)

    Kaiser, M. K.

    1996-01-01

    Advances in high-end graphics computers in the past decade have made it possible to render visual scenes of incredible complexity and realism in real time. These new capabilities make it possible to manipulate and investigate the interactions of observers with their visual world in ways once only dreamed of. This paper reviews how these developments have affected two preexisting domains of behavioral research (flight simulation and motion perception) and have created a new domain (virtual environment research) which provides tools and challenges for the perceptual psychologist. Finally, the current limitations of these technologies are considered, with an eye toward how perceptual psychologists might shape future developments.

  3. Fossil-fuel power plants: Computer systems for power plant control, maintenance, and operation. October 1976-December 1989 (A Bibliography from the COMPENDEX data base). Report for October 1976-December 1989

    SciTech Connect

    Not Available

    1990-02-01

    This bibliography contains citations concerning fossil-fuel power plant computer systems. Minicomputer and microcomputer systems used for monitoring, process control, performance calculations, alarming, and administrative applications are discussed. Topics emphasize power plant control, maintenance and operation. (Contains 240 citations fully indexed and including a title list.)

  4. A computational study of the aerodynamic forces and power requirements of dragonfly (Aeschna juncea) hovering.

    PubMed

    Sun, Mao; Lan, Shi Long

    2004-05-01

    Aerodynamic force generation and mechanical power requirements of a dragonfly (Aeschna juncea) in hovering flight are studied. The method of numerically solving the Navier-Stokes equations in moving overset grids is used. When the midstroke angles of attack in the downstroke and the upstroke are set to 52 degrees and 8 degrees, respectively (these values are close to those observed), the mean vertical force equals the insect weight, and the mean thrust is approximately zero. There are two large vertical force peaks in one flapping cycle. One is in the first half of the cycle, which is mainly due to the hindwings in their downstroke; the other is in the second half of the cycle, which is mainly due to the forewings in their downstroke. Hovering with a large stroke plane angle (52 degrees), the dragonfly uses drag as a major source for its weight-supporting force (approximately 65% of the total vertical force is contributed by the drag and 35% by the lift of the wings). The vertical force coefficient of a wing is twice as large as the quasi-steady value. The interaction between the fore- and hindwings is not very strong and is detrimental to the vertical force generation. Compared with the case of a single wing in the same motion, the interaction effect reduces the vertical forces on the fore- and hindwings by 14% and 16%, respectively, of that of the corresponding single wing. The large vertical force is due to the unsteady flow effects. The mechanism of the unsteady force is that in each downstroke of the hindwing or the forewing, a new vortex ring containing downward momentum is generated, giving an upward force. The body-mass-specific power is 37 W kg(-1), which is mainly contributed by the aerodynamic power.

  5. The power of virtual integration: an interview with Dell Computer's Michael Dell. Interview by Joan Magretta.

    PubMed

    Dell, M

    1998-01-01

    Michael Dell started his computer company in 1984 with a simple business insight. He could bypass the dealer channel through which personal computers were then being sold and sell directly to customers, building products to order. Dell's direct model eliminated the dealer's markup and the risks associated with carrying large inventories of finished goods. In this interview, Michael Dell provides a detailed description of how his company is pushing that business model one step further, toward what he calls virtual integration. Dell is using technology and information to blur the traditional boundaries in the value chain between suppliers, manufacturers, and customers. The individual pieces of Dell's strategy--customer focus, supplier partnerships, mass customization, just-in-time manufacturing--may all be familiar. But Michael Dell's business insight into how to combine them is highly innovative. Direct relationships with customers create valuable information, which in turn allows the company to coordinate its entire value chain back through manufacturing to product design. Dell describes how his company has come to achieve this tight coordination without the "drag effect" of ownership. Dell reaps the advantages of being vertically integrated without incurring the costs, all the while achieving the focus, agility, and speed of a virtual organization. As envisioned by Michael Dell, virtual integration may well become a new organizational model for the information age. PMID:10177868

  6. The power of virtual integration: an interview with Dell Computer's Michael Dell. Interview by Joan Magretta.

    PubMed

    Dell, M

    1998-01-01

    Michael Dell started his computer company in 1984 with a simple business insight. He could bypass the dealer channel through which personal computers were then being sold and sell directly to customers, building products to order. Dell's direct model eliminated the dealer's markup and the risks associated with carrying large inventories of finished goods. In this interview, Michael Dell provides a detailed description of how his company is pushing that business model one step further, toward what he calls virtual integration. Dell is using technology and information to blur the traditional boundaries in the value chain between suppliers, manufacturers, and customers. The individual pieces of Dell's strategy--customer focus, supplier partnerships, mass customization, just-in-time manufacturing--may all be familiar. But Michael Dell's business insight into how to combine them is highly innovative. Direct relationships with customers create valuable information, which in turn allows the company to coordinate its entire value chain back through manufacturing to product design. Dell describes how his company has come to achieve this tight coordination without the "drag effect" of ownership. Dell reaps the advantages of being vertically integrated without incurring the costs, all the while achieving the focus, agility, and speed of a virtual organization. As envisioned by Michael Dell, virtual integration may well become a new organizational model for the information age.

  7. Kerman Photovoltaic Power Plant R&D data collection computer system operations and maintenance

    SciTech Connect

    Rosen, P.B.

    1994-06-01

    The Supervisory Control and Data Acquisition (SCADA) system at the Kerman PV Plant monitors 52 analog, 44 status, 13 control, and 4 accumulator data points in real-time. A Remote Terminal Unit (RTU) polls 7 peripheral data acquisition units that are distributed throughout the plant once every second, and stores all analog, status, and accumulator points that have changed since the last scan. The R&D Computer, which is connected to the SCADA RTU via an RS-232 serial link, polls the RTU once every 5-7 seconds and records any values that have changed since the last scan. A SCADA software package called RealFlex runs on the R&D computer and stores all updated data values taken from the RTU, along with a time-stamp for each, in a historical real-time database. From this database, averages of all analog data points and snapshots of all status points are generated every 10 minutes and appended to a daily file. These files are downloaded via modem by PVUSA/Davis staff every day, and the data is placed into the PVUSA database.
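
    The report-by-exception polling described above reduces to a simple change-detection loop; a minimal sketch, where read_rtu_points and record_sample are hypothetical stand-ins for the RTU query and the RealFlex historical-database write:

        import time

        def poll_loop(read_rtu_points, record_sample, interval_s=6.0):
            """Poll the RTU every few seconds and store only points whose
            values changed since the last scan, with a time-stamp."""
            last = {}
            while True:
                now = time.time()
                for point, value in read_rtu_points().items():
                    if last.get(point) != value:
                        record_sample(point, value, now)
                        last[point] = value
                time.sleep(interval_s)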

  8. SUNBURN: A computer code for evaluating the economic viability of hybrid solar central receiver electric power plants

    SciTech Connect

    Chiang, C.J.

    1987-06-01

    The computer program SUNBURN simulates the annual performance of solar-only, solar-hybrid, and fuel-only electric power plants. SUNBURN calculates the levelized value of electricity generated by, and the levelized cost of, these plants. Central receiver solar technology is represented, with molten salt as the receiver coolant and thermal storage medium. For each hour of a year, the thermal energy use, or dispatch, strategy of SUNBURN maximizes the value of electricity by operating the turbine when the demand for electricity is greatest and by minimizing overflow of thermal storage. Fuel is burned to augment solar energy if the value of electricity generated by using fuel is greater than the cost of the fuel consumed. SUNBURN was used to determine the optimal power plant configuration, based on value-to-cost ratio, for dates of initial plant operation from 1990 to 1998. The turbine size for all plants was 80 MWe net. Before 1994, fuel-only was found to be the preferred plant configuration. After 1994, a solar-only plant was found to have the greatest value-to-cost ratio. A hybrid configuration was never found to be better than both fuel-only and solar-only configurations. The value of electricity was calculated as the Southern California Edison Company's avoided generation costs of electricity. These costs vary with time of day. Utility ownership of the power plants was assumed. The simulation was performed using weather data recorded in Barstow, California, in 1984.
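
    SUNBURN's hour-by-hour dispatch logic is described only qualitatively above; a toy sketch of such a strategy (run the turbine in high-value hours, force-run to avoid storage overflow) might look as follows, with all names and units illustrative rather than taken from the code:

        def dispatch_day(value, solar, cap, turbine_rate, threshold):
            """One-day thermal dispatch: charge storage from the solar field,
            discharge through the turbine when the hourly value is high or
            storage would overflow. Energies in MWh-thermal (illustrative)."""
            stored, revenue = 0.0, 0.0
            for h in range(24):
                stored += solar[h]                      # collect solar energy
                if stored > 0.0 and (value[h] >= threshold or stored > cap):
                    e = min(turbine_rate, stored)       # run the turbine
                    stored -= e
                    revenue += e * value[h]
                stored = min(stored, cap)               # excess heat is spilled
            return revenue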

  9. Piezoelectronics: a novel, high-performance, low-power computer switching technology

    NASA Astrophysics Data System (ADS)

    Newns, D. M.; Martyna, G. J.; Elmegreen, B. G.; Liu, X.-H.; Theis, T. N.; Trolier-McKinstry, S.

    2012-06-01

    Current switching speeds in CMOS technology have saturated since 2003 due to power constraints arising from the inability of line voltage to be further lowered in CMOS below about 1V. We are developing a novel switching technology based on piezoelectrically transducing the input or gate voltage into an acoustic wave which compresses a piezoresistive (PR) material forming the device channel. Under pressure the PR undergoes an insulator-to-metal transition which makes the channel conducting, turning on the device. A piezoelectric (PE) transducer material with a high piezoelectric coefficient, e.g. a domain-engineered relaxor piezoelectric, is needed to achieve low voltage operation. Suitable channel materials manifesting a pressure-induced metal-insulator transition can be found amongst rare earth chalcogenides, transition metal oxides, etc. Mechanical requirements include a high PE/PR area ratio to step up pressure, a rigid surround material to constrain the PE and PR external boundaries normal to the strain axis, and a void space to enable free motion of the component side walls. Using static mechanical modeling and dynamic electroacoustic simulations, we optimize device structure and materials and predict performance. The device, termed a PiezoElectronic Transistor (PET), can be used to build complete logic circuits including inverters, flip-flops, and gates. This "Piezotronic" logic is predicted to have a combination of low power and high speed operation.

  10. New approach for precise computation of Lyman-α forest power spectrum with hydrodynamical simulations

    SciTech Connect

    Borde, Arnaud; Palanque-Delabrouille, Nathalie; Rossi, Graziano; Yèche, Christophe; LeGoff, Jean-Marc; Rich, Jim; Bolton, James S.

    2014-07-01

    Current experiments are providing measurements of the flux power spectrum from the Lyman-α forests observed in quasar spectra with unprecedented accuracy. Their interpretation in terms of cosmological constraints requires specific simulations of at least equivalent precision. In this paper, we present a suite of cosmological N-body simulations with cold dark matter and baryons, specifically aiming at modeling the low-density regions of the inter-galactic medium as probed by the Lyman-α forests at high redshift. The simulations were run using the GADGET-3 code and were designed to match the requirements imposed by the quality of the current SDSS-III/BOSS or forthcoming SDSS-IV/eBOSS data. They are made using either 2 × 768³ ≅ 1 billion or 2 × 192³ ≅ 14 million particles, spanning volumes ranging from (25 Mpc h⁻¹)³ for high-resolution simulations to (100 Mpc h⁻¹)³ for large-volume ones. Using a splicing technique, the resolution is further enhanced to reach the equivalent of simulations with 2 × 3072³ ≅ 58 billion particles in a (100 Mpc h⁻¹)³ box size, i.e. a mean mass per gas particle of 1.2 × 10⁵ M⊙ h⁻¹. We show that the resulting power spectrum is accurate at the 2% level over the full range from a few Mpc to several tens of Mpc. We explore the effect on the one-dimensional transmitted-flux power spectrum of four cosmological parameters (n_s, σ_8, Ω_m and H_0) and two astrophysical parameters (T_0 and γ) that are related to the heating rate of the intergalactic medium. By varying the input parameters around a central model chosen to be in agreement with the latest Planck results, we built a grid of simulations that allows the study of the impact on the flux power spectrum of these six relevant parameters. We improve upon previous studies by not only measuring the effect of each parameter individually, but also probing the impact of the
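
    The observable being matched is the one-dimensional transmitted-flux power spectrum; a standard FFT estimator for a single sightline is sketched below (contrast about the mean flux, then normalize so P(k) carries units of length). This is the textbook definition, not the authors' production pipeline:

        import numpy as np

        def flux_power_1d(flux, sightline_length):
            """1D flux power spectrum of F(x) sampled on n uniform pixels
            along a sightline of the given comoving length."""
            n = flux.size
            delta = flux / flux.mean() - 1.0            # flux contrast
            dk = np.fft.rfft(delta)
            k = 2.0 * np.pi * np.fft.rfftfreq(n, d=sightline_length / n)
            pk = sightline_length * np.abs(dk) ** 2 / n ** 2
            return k, pk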

  11. Processing power limits social group size: computational evidence for the cognitive costs of sociality.

    PubMed

    Dávid-Barrett, T; Dunbar, R I M

    2013-08-22

    Sociality is primarily a coordination problem. However, the social (or communication) complexity hypothesis suggests that the kinds of information that can be acquired and processed may limit the size and/or complexity of social groups that a species can maintain. We use an agent-based model to test the hypothesis that the complexity of information processed influences the computational demands involved. We show that successive increases in the kinds of information processed allow organisms to break through the glass ceilings that otherwise limit the size of social groups: larger groups can only be achieved at the cost of more sophisticated kinds of information processing that are disadvantageous when optimal group size is small. These results simultaneously support both the social brain and the social complexity hypotheses.

  12. Processing power limits social group size: computational evidence for the cognitive costs of sociality

    PubMed Central

    Dávid-Barrett, T.; Dunbar, R. I. M.

    2013-01-01

    Sociality is primarily a coordination problem. However, the social (or communication) complexity hypothesis suggests that the kinds of information that can be acquired and processed may limit the size and/or complexity of social groups that a species can maintain. We use an agent-based model to test the hypothesis that the complexity of information processed influences the computational demands involved. We show that successive increases in the kinds of information processed allow organisms to break through the glass ceilings that otherwise limit the size of social groups: larger groups can only be achieved at the cost of more sophisticated kinds of information processing that are disadvantageous when optimal group size is small. These results simultaneously support both the social brain and the social complexity hypotheses. PMID:23804623

  13. A computer study of radionuclide production in high power accelerators for medical and industrial applications

    NASA Astrophysics Data System (ADS)

    Van Riper, K. A.; Mashnik, S. G.; Wilson, W. B.

    2001-05-01

    Methods for radionuclide production calculation in a high power proton accelerator have been developed and applied to study production of 22 isotopes by high-energy protons and neutrons. These methods are readily applicable to accelerator, and reactor, environments other than the particular model we considered and to the production of other radioactive and stable isotopes. We have also developed methods for evaluating cross sections from a wide variety of sources into a single cross section set and have produced an evaluated library covering about a third of all natural elements. These methods also are applicable to an expanded set of reactions. A detailed 684-page report on this study, with 37 tables and 264 color figures, is available on the Web at http://t2.lanl.gov/publications/publications.html, or, if not accessible, in hard copy from the authors.

  14. Electronic stopping power calculation for water under the Lindhard formalism for application in proton computed tomography

    NASA Astrophysics Data System (ADS)

    Guerrero, A. F.; Mesa, J.

    2016-07-01

    Because of the behavior that charged particles have when they interact with biological material, proton therapy is shaping the future of radiation therapy in cancer treatment. The planning of radiation therapy is made up of several stages. The first one is the diagnostic image, in which you have an idea of the density, size and type of tumor being treated; to understand this it is important to know how the particle beam interacts with the tissue. In this work, by using the Lindhard formalism and the Y.R. Waghmare model for the charge distribution of the proton, the electronic stopping power (SP) for a proton beam interacting with a liquid water target in the range of proton energies 10¹ eV - 10¹⁰ eV, taking into account all the charge states, is calculated.

  15. Computational chemistry

    NASA Technical Reports Server (NTRS)

    Arnold, J. O.

    1987-01-01

    With the advent of supercomputers and modern computational chemistry algorithms and codes, a powerful tool was created to help fill NASA's continuing need for information on the properties of matter in hostile or unusual environments. Computational resources provided under the National Aerodynamics Simulator (NAS) program were a cornerstone for recent advancements in this field. Properties of gases, materials, and their interactions can be determined from solutions of the governing equations. In the case of gases, for example, radiative transition probabilities per particle, bond-dissociation energies, and rates of simple chemical reactions can be determined computationally as reliably as from experiment. The data are proving to be quite valuable in providing inputs to real-gas flow simulation codes used to compute aerothermodynamic loads on NASA's aeroassist orbital transfer vehicles and a host of problems related to the National Aerospace Plane Program. Although more approximate, similar solutions can be obtained for ensembles of atoms simulating small particles of materials with and without the presence of gases. Computational chemistry has application in studying catalysis and the properties of polymers, both of interest to various NASA missions, including those previously mentioned. In addition to discussing these applications of computational chemistry within NASA, the governing equations and the need for supercomputers for their solution are outlined.

  16. Computed tomography: a powerful imaging technique in the fields of dimensional metrology and quality control

    NASA Astrophysics Data System (ADS)

    Probst, Gabriel; Boeckmans, Bart; Dewulf, Wim; Kruth, Jean-Pierre

    2016-05-01

    X-ray computed tomography (CT) is slowly conquering its space in the manufacturing industry for dimensional metrology and quality control purposes. The main advantage is its non-invasive and non-destructive character. Currently, CT is the only measurement technique that allows full 3D visualization of both inner and outer features of an object through a contactless probing system. Using hundreds of radiographs, acquired while rotating the object, a 3D representation is generated and dimensions can be verified. In this research, this non-contact technique was used for the inspection of assembled components: a dental cast model with 8 implants, connected by a screw-retained bar made of titanium. The retained bar includes a mating interface connection that should ensure a perfect fit without residual stresses when the connection is fixed with screws. CT was used to inspect the mating interfaces between these two components. Gaps at the connections can lead to bacterial growth and potential inconvenience for the patient, who would have to face a new surgery to replace his/her prosthesis. With the aid of CT, flaws in the design or manufacturing process that could lead to gaps at the connections could be assessed.

  17. Information Power Grid: Distributed High-Performance Computing and Large-Scale Data Management for Science and Engineering

    NASA Technical Reports Server (NTRS)

    Johnston, William E.; Gannon, Dennis; Nitzberg, Bill

    2000-01-01

    We use the term "Grid" to refer to distributed, high performance computing and data handling infrastructure that incorporates geographically and organizationally dispersed, heterogeneous resources that are persistent and supported. This infrastructure includes: (1) Tools for constructing collaborative, application oriented Problem Solving Environments / Frameworks (the primary user interfaces for Grids); (2) Programming environments, tools, and services providing various approaches for building applications that use aggregated computing and storage resources, and federated data sources; (3) Comprehensive and consistent set of location independent tools and services for accessing and managing dynamic collections of widely distributed resources: heterogeneous computing systems, storage systems, real-time data sources and instruments, human collaborators, and communications systems; (4) Operational infrastructure including management tools for distributed systems and distributed resources, user services, accounting and auditing, strong and location independent user authentication and authorization, and overall system security services. The vision for NASA's Information Power Grid - a computing and data Grid - is that it will provide significant new capabilities to scientists and engineers by facilitating routine construction of information based problem solving environments / frameworks. Such Grids will knit together widely distributed computing, data, instrument, and human resources into just-in-time systems that can address complex and large-scale computing and data analysis problems. Examples of these problems include: (1) Coupled, multidisciplinary simulations too large for single systems (e.g., multi-component NPSS turbomachine simulation); (2) Use of widely distributed, federated data archives (e.g., simultaneous access to meteorological, topological, aircraft performance, and flight path scheduling databases supporting a National Air Space Simulation system); (3

  18. Computational Work to Support FAP/SRW Variable-Speed Power-Turbine Development

    NASA Technical Reports Server (NTRS)

    Ameri, Ali A.

    2012-01-01

    The purpose of this report is to document the work done to enable a NASA CFD code to model transition on a blade. The goal of the present work is to down-select a transition model that would allow the flow simulation of a Variable-Speed Power-Turbine (VSPT) to be accurately performed. The modeling is ultimately to be performed so as to also account for blade row interactions and their effect on transition, and therefore to account accurately for losses. The present work is limited to steady flows. The low Reynolds number k-omega model of Wilcox and a modified version of it will be used for modeling transition against experimentally measured blade pressure and heat transfer. It will be shown that the k-omega model and its modified variant fail to simulate the transition with any degree of accuracy. A case is therefore made for more accurate transition models. Three-equation models based on the work of Mayle on Laminar Kinetic Energy were explored, and the Walters and Leylek model, which was thought to be in a more mature state of development, is introduced and implemented in the Glenn-HT code. Two-dimensional flat plate results and three-dimensional results for flow over turbine blades and the resulting heat transfer and its transitional behavior are reported. It is shown that the transition simulation is much improved over the baseline k-omega model.

  19. Optical computing for application to reducing the thickness of high-power-composite lenses.

    PubMed

    Wu, Bo-Wen

    2014-10-10

    With the adoption of polycarbonate lens material for injection molding of greater accuracy and at lower costs, polycarbonate has become very suitable for mass production of more economical products, such as diving goggles. However, with increasing requirements for visual quality, lenses need more than refractive function alone; thickness and spherical aberration are gradually being taken more seriously. For a high-power-composite lens, meanwhile, the thickness cannot be substantially reduced, and there is also the issue of severe spherical aberration at the lens edges. In order to increase the added value of the product without changing the material, the present research applied the eye model and the Taguchi experiment method, combined with design optimization for a hyperbolic-aspherical lens, to reduce the lens thickness by more than 30%, outperforming the average thickness reduction of general aspherical lenses. The spherical aberration at the lens edges was also reduced effectively during the optimization process for the nonspherical lens. Prototypes made by super-finishing machines were among the results of the experiment. This new application can be used in making a large number of injection molds to substantially increase the economic value of the product. PMID:25322434

  20. Adaptive on-line classification for EEG-based brain computer interfaces with AAR parameters and band power estimates.

    PubMed

    Vidaurre, C; Schlögl, A; Cabeza, R; Scherer, R; Pfurtscheller, G

    2005-11-01

    We present the results of on-line feedback Brain Computer Interface experiments using adaptive and non-adaptive feature extraction methods with an on-line adaptive classifier based on Quadratic Discriminant Analysis. Experiments were performed with 12 naïve subjects; feedback was provided from the first moment and no training sessions were needed. Experiments were run on three different days with each subject. Six of them received feedback with Adaptive Autoregressive (AAR) parameters and the rest with logarithmic Band Power (BP) estimates. The study was done using single trial analysis of each of the sessions, and the Error Rate and the Mutual Information of the classification were used to discuss the results. Finally, it was shown that even subjects starting with a low performance were able to control the system in a few hours; contrary to previous results, no differences between AAR and BP estimates were found.
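
    As an illustration of the logarithmic Band Power feature named above, a common construction band-pass filters the EEG, squares it, averages over a window, and takes the log; the band edges, filter order, and window below are illustrative choices, not the study's exact settings:

        import numpy as np
        from scipy.signal import butter, filtfilt

        def log_band_power(eeg, fs, band=(8.0, 12.0), win_s=1.0):
            """Logarithmic band-power feature from one EEG channel."""
            b, a = butter(4, band, btype="bandpass", fs=fs)
            x = filtfilt(b, a, eeg)                     # zero-phase band-pass
            w = int(win_s * fs)
            power = np.convolve(x ** 2, np.ones(w) / w, mode="same")
            return np.log(power + 1e-12)

        feat = log_band_power(np.random.randn(2500), fs=250.0)  # 10 s of fake EEG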

  1. Quasi-optical converters for high-power gyrotrons: a brief review of physical models, numerical methods and computer codes

    NASA Astrophysics Data System (ADS)

    Sabchevski, S.; Zhelyazkov, I.; Benova, E.; Atanassov, V.; Dankov, P.; Thumm, M.; Arnold, A.; Jin, J.; Rzesnicki, T.

    2006-07-01

    Quasi-optical (QO) mode converters are used to transform electromagnetic waves of complex structure and polarization generated in gyrotron cavities into a linearly polarized, Gaussian-like beam suitable for transmission. The efficiency of this conversion, as well as the maintenance of a low level of diffraction losses, is crucial for the implementation of powerful gyrotrons as radiation sources for electron-cyclotron-resonance heating of fusion plasmas. The use of adequate physical models, efficient numerical schemes and up-to-date computer codes may provide the high accuracy necessary for the design and analysis of these devices. In this review, we briefly sketch the most commonly used QO converters, the mathematical basis on which they have been treated, and the basic features of the numerical schemes used. Further on, we discuss the applicability of several commercially available and free software packages, their advantages and drawbacks, for solving QO related problems.

  2. Power Series Approximation for the Correlation Kernel Leading to Kohn-Sham Methods Combining Accuracy, Computational Efficiency, and General Applicability

    NASA Astrophysics Data System (ADS)

    Erhard, Jannis; Bleiziffer, Patrick; Görling, Andreas

    2016-09-01

    A power series approximation for the correlation kernel of time-dependent density-functional theory is presented. Using this approximation in the adiabatic-connection fluctuation-dissipation (ACFD) theorem leads to a new family of Kohn-Sham methods. The new methods yield reaction energies and barriers of unprecedented accuracy and enable a treatment of static (strong) correlation with an accuracy of high-level multireference configuration interaction methods but are single-reference methods allowing for a black-box-like handling of static correlation. The new methods exhibit a better scaling of the computational effort with the system size than rivaling wave-function-based electronic structure methods. Moreover, the new methods do not suffer from the problem of singularities in response functions plaguing previous ACFD methods and therefore are applicable to any type of electronic system.

  3. Computational mechanics

    SciTech Connect

    Goudreau, G.L.

    1993-03-01

    The Computational Mechanics thrust area sponsors research into the underlying solid, structural and fluid mechanics and heat transfer necessary for the development of state-of-the-art general purpose computational software. The scale of computational capability spans office workstations, departmental computer servers, and Cray-class supercomputers. The DYNA, NIKE, and TOPAZ codes have achieved world fame through our broad collaborators program, in addition to their strong support of on-going Lawrence Livermore National Laboratory (LLNL) programs. Several technology transfer initiatives have been based on these established codes, teaming LLNL analysts and researchers with counterparts in industry, extending code capability to specific industrial interests of casting, metalforming, and automobile crash dynamics. The next-generation solid/structural mechanics code, ParaDyn, is targeted toward massively parallel computers, which will extend performance from gigaflop to teraflop power. Our work for FY-92 is described in the following eight articles: (1) Solution Strategies: New Approaches for Strongly Nonlinear Quasistatic Problems Using DYNA3D; (2) Enhanced Enforcement of Mechanical Contact: The Method of Augmented Lagrangians; (3) ParaDyn: New Generation Solid/Structural Mechanics Codes for Massively Parallel Processors; (4) Composite Damage Modeling; (5) HYDRA: A Parallel/Vector Flow Solver for Three-Dimensional, Transient, Incompressible Viscous Flow; (6) Development and Testing of the TRIM3D Radiation Heat Transfer Code; (7) A Methodology for Calculating the Seismic Response of Critical Structures; and (8) Reinforced Concrete Damage Modeling.

  4. Food additives

    PubMed Central

    Spencer, Michael

    1974-01-01

    Food additives are discussed from the food technology point of view. The reasons for their use are summarized: (1) to protect food from chemical and microbiological attack; (2) to even out seasonal supplies; (3) to improve their eating quality; (4) to improve their nutritional value. The various types of food additives are considered, e.g. colours, flavours, emulsifiers, bread and flour additives, preservatives, and nutritional additives. The paper concludes with consideration of those circumstances in which the use of additives is (a) justified and (b) unjustified. PMID:4467857

  5. Light Water Reactor Sustainability Program: Computer-Based Procedures for Field Activities: Results from Three Evaluations at Nuclear Power Plants

    SciTech Connect

    Oxstrand, Johanna; Le Blanc, Katya; Bly, Aaron

    2014-09-01

    The Computer-Based Procedure (CBP) research effort is a part of the Light-Water Reactor Sustainability (LWRS) Program, which is a research and development (R&D) program sponsored by Department of Energy (DOE) and performed in close collaboration with industry R&D programs that provides the technical foundations for licensing and managing the long-term, safe, and economical operation of current nuclear power plants. One of the primary missions of the LWRS program is to help the U.S. nuclear industry adopt new technologies and engineering solutions that facilitate the continued safe operation of the plants and extension of the current operating licenses. One area that could yield tremendous savings in increased efficiency and safety is in improving procedure use. Nearly all activities in the nuclear power industry are guided by procedures, which today are printed and executed on paper. This paper-based procedure process has proven to ensure safety; however, there are improvements to be gained. Due to its inherent dynamic nature, a CBP provides the opportunity to incorporate context driven job aids, such as drawings, photos, and just-in-time training. Compared to the static state of paper-based procedures (PBPs), the presentation of information in CBPs can be much more flexible and tailored to the task, actual plant condition, and operation mode. The dynamic presentation of the procedure will guide the user down the path of relevant steps, thus minimizing time spent by the field worker to evaluate plant conditions and decisions related to the applicability of each step. This dynamic presentation of the procedure also minimizes the risk of conducting steps out of order and/or incorrectly assessed applicability of steps.

  6. A Computational Fluid Dynamic and Heat Transfer Model for Gaseous Core and Gas Cooled Space Power and Propulsion Reactors

    NASA Technical Reports Server (NTRS)

    Anghaie, S.; Chen, G.

    1996-01-01

    A computational model based on the axisymmetric, thin-layer Navier-Stokes equations is developed to predict the convective, radiation and conductive heat transfer in high temperature space nuclear reactors. An implicit-explicit, finite volume, MacCormack method in conjunction with the Gauss-Seidel line iteration procedure is utilized to solve the thermal and fluid governing equations. Simulation of coolant and propellant flows in these reactors involves the subsonic and supersonic flows of hydrogen, helium and uranium tetrafluoride under variable boundary conditions. An enthalpy-rebalancing scheme is developed and implemented to enhance and accelerate the rate of convergence when a wall heat flux boundary condition is used. The model also incorporated the Baldwin and Lomax two-layer algebraic turbulence scheme for the calculation of the turbulent kinetic energy and eddy diffusivity of energy. The Rosseland diffusion approximation is used to simulate the radiative energy transfer in the optically thick environment of gas core reactors. The computational model is benchmarked with experimental data on flow separation angle and drag force acting on a suspended sphere in a cylindrical tube. The heat transfer is validated by comparing the computed results with the standard heat transfer correlations predictions. The model is used to simulate flow and heat transfer under a variety of design conditions. The effect of internal heat generation on the heat transfer in the gas core reactors is examined for a variety of power densities, 100 W/cc, 500 W/cc and 1000 W/cc. The maximum temperatures corresponding to these heat generation rates are 2150 K, 2750 K and 3550 K, respectively. This analysis shows that the maximum temperature is strongly dependent on the value of heat generation rate. It also indicates that a heat generation rate higher than 1000 W/cc is necessary to maintain the gas temperature at about 3500 K, which is the typical design temperature required to achieve high

  7. Thermal noise informatics: totally secure communication via a wire, zero-power communication, and thermal noise driven computing

    NASA Astrophysics Data System (ADS)

    Kish, Laszlo B.; Mingesz, Robert; Gingl, Zoltan

    2007-06-01

    Very recently, it has been shown that Gaussian thermal noise and its artificial versions (Johnson-like noises) can be utilized as an information carrier with peculiar properties; therefore, it may be proper to call this topic Thermal Noise Informatics. Zero Power (Stealth) Communication, Thermal Noise Driven Computing, and Totally Secure Classical Communication are relevant examples. In this paper, while we will briefly describe the first and the second subjects, we shall focus on the third subject, the secure classical communication via wire. This way of secure telecommunication utilizes the properties of Johnson(-like) noise and those of a simple Kirchhoff's loop. The communicator is unconditionally secure at the conceptual (circuit theoretical) level and this property is (so far) unique in communication systems based on classical physics. The communicator is superior to quantum alternatives in all known aspects, except the need of using a wire. In the idealized system, the eavesdropper can extract zero bit of information without getting uncovered. The scheme is naturally protected against the man-in-the-middle attack. The communication can take place also via currently used power lines or phone (wire) lines and it is not only a point-to-point communication like quantum channels but network-ready. We report that a pair of Kirchhoff-Loop-Johnson(-like)-Noise communicators, which is able to work over variable ranges, was designed and built. Tests have been carried out on a model-line with ranges beyond the ranges of any known direct quantum communication channel and they indicate unrivalled signal fidelity and security performance. This simple device has single-wire secure key generation/sharing rates of 0.1, 1, 10, and 100 bit/second for copper wires with diameters/ranges of 21 mm / 2000 km, 7 mm / 200 km, 2.3 mm / 20 km, and 0.7 mm / 2 km, respectively and it performs with 0.02% raw-bit error rate (99.98% fidelity). The raw-bit security of this practical system
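
    The security argument rests on the two mixed resistor states being indistinguishable from the wire: the Johnson-noise voltage and current spectral densities an eavesdropper can measure depend only on symmetric combinations of the two resistances. A minimal numerical check of this idealized property (resistor values are illustrative):

        K_B = 1.380649e-23  # Boltzmann constant [J/K]

        def wire_noise_psd(r_a, r_b, t=300.0):
            """Idealized Kirchhoff-loop PSDs seen on the wire: voltage PSD
            4kT*Ra*Rb/(Ra+Rb) and current PSD 4kT/(Ra+Rb)."""
            s_v = 4.0 * K_B * t * r_a * r_b / (r_a + r_b)
            s_i = 4.0 * K_B * t / (r_a + r_b)
            return s_v, s_i

        R_L, R_H = 1e3, 1e5
        print(wire_noise_psd(R_L, R_H) == wire_noise_psd(R_H, R_L))  # True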

  8. Additivity of Factor Effects in Reading Tasks Is Still a Challenge for Computational Models: Reply to Ziegler, Perry, and Zorzi (2009)

    ERIC Educational Resources Information Center

    Besner, Derek; O'Malley, Shannon

    2009-01-01

    J. C. Ziegler, C. Perry, and M. Zorzi (2009) have claimed that their connectionist dual process model (CDP+) can simulate the data reported by S. O'Malley and D. Besner. Most centrally, they have claimed that the model simulates additive effects of stimulus quality and word frequency on the time to read aloud when words and nonwords are randomly…

  9. Computer modelling integrated with micro-CT and material testing provides additional insight to evaluate bone treatments: Application to a beta-glycan derived whey protein mice model.

    PubMed

    Sreenivasan, D; Tu, P T; Dickinson, M; Watson, M; Blais, A; Das, R; Cornish, J; Fernandez, J

    2016-01-01

    The primary aim of this study was to evaluate the influence of a whey protein diet on computationally predicted mechanical strength of murine bones in both trabecular and cortical regions of the femur. There was no significant influence on mechanical strength in cortical bone observed with increasing whey protein treatment, consistent with cortical tissue mineral density (TMD) and bone volume changes observed. Trabecular bone showed a significant decline in strength with increasing whey protein treatment when nanoindentation derived Young's moduli were used in the model. When microindentation, micro-CT phantom density or normalised Young's moduli were included in the model a non-significant decline in strength was exhibited. These results for trabecular bone were consistent with both trabecular bone mineral density (BMD) and micro-CT indices obtained independently. The secondary aim of this study was to characterise the influence of different sources of Young's moduli on computational prediction. This study aimed to quantify the predicted mechanical strength in 3D from these sources and evaluate if trends and conclusions remained consistent. For cortical bone, predicted mechanical strength behaviour was consistent across all sources of Young's moduli. There was no difference in treatment trend observed when Young's moduli were normalised. In contrast, trabecular strength due to whey protein treatment significantly reduced when material properties from nanoindentation were introduced. Other material property sources were not significant but emphasised the strength trend over normalised material properties. This shows strength at the trabecular level was attributed to both changes in bone architecture and material properties.

  10. An Evaluation of the Additional Acoustic Power Needed to Overcome the Effects of a Test-Article's Absorption During Reverberant Chamber Acoustic Testing of Spaceflight Hardware

    NASA Technical Reports Server (NTRS)

    Hozman, Aron D.; Hughes, William O.

    2014-01-01

    It is important to realize that some test-articles may have significant sound absorption that may challenge the acoustic power capabilities of a test facility. Therefore, to mitigate the risk of not being able to meet the customer's target spectrum, it is prudent to demonstrate early on an increased acoustic power capability which compensates for this test-article absorption. This paper describes a concise method to reduce this risk when testing aerospace test-articles which have significant absorption. This method was successfully applied during the SpaceX Falcon 9 Payload Fairing acoustic test program at the NASA Glenn Research Center Plum Brook Station's RATF.

  11. Computer model for electrochemical cell performance loss over time in terms of capacity, power, and conductance (CPC)

    SciTech Connect

    Gering, Kevin L.

    2015-09-01

    Available capacity, power, and cell conductance figure centrally into performance characterization of electrochemical cells (such as Li-ion cells) over their service life. For example, capacity loss in Li-ion cells is due to a combination of mechanisms, including loss of free available lithium, loss of active host sites, shifts in the potential-capacity curve, etc. Further distinctions can be made regarding irreversible and reversible capacity loss mechanisms. There are tandem needs for accurate interpretation of capacity at characterization conditions (cycling rate, temperature, etc.) and for robust self-consistent modeling techniques that can be used for diagnostic analysis of cell data as well as forecasting of future performance. Analogous issues exist for aging effects on cell conductance and available power. To address these needs, a modeling capability was developed that provides a systematic analysis of the contributing factors to battery performance loss over aging and to act as a regression/prediction platform for cell performance. The modeling basis is a summation of self-consistent chemical kinetics rate expressions, which as individual expressions each covers a distinct mechanism (e.g., loss of active host sites, lithium loss), but collectively account for the net loss of premier metrics (e.g., capacity) over time for a particular characterization condition. Specifically, sigmoid-based rate expressions are utilized to describe each contribution to performance loss. Through additional mathematical development another tier of expressions is derived and used to perform differential analyses and segregate irreversible versus reversible contributions, as well as to determine concentration profiles over cell aging for affected Li+ ion inventory and fraction of active sites that remain at each time step. Reversible fade components are surmised by comparing fade rates at fast versus slow cycling conditions. The model is easily utilized for predictive
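
    A toy version of the sigmoid-sum structure described above is sketched below; the specific sigmoid form and all parameter values are placeholders for illustration, not the published CPC parameterization:

        import math

        def capacity_loss(t, mechanisms):
            """Sum of sigmoid fade terms, one per mechanism; each term starts
            at zero and saturates at its magnitude M with rate a and shape b."""
            return sum(2.0 * M * (0.5 - 1.0 / (1.0 + math.exp((a * t) ** b)))
                       for M, a, b in mechanisms)

        # Two mechanisms, e.g., lithium loss (fast) and site loss (slow)
        mechs = [(0.08, 0.02, 0.9), (0.15, 0.004, 1.2)]     # (M, a, b) placeholders
        print([round(capacity_loss(week, mechs), 4) for week in (0, 26, 52, 104)])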

  12. Computer model for electrochemical cell performance loss over time in terms of capacity, power, and conductance (CPC)

    2015-09-01

    Available capacity, power, and cell conductance figure centrally into performance characterization of electrochemical cells (such as Li-ion cells) over their service life. For example, capacity loss in Li-ion cells is due to a combination of mechanisms, including loss of free available lithium, loss of active host sites, shifts in the potential-capacity curve, etc. Further distinctions can be made regarding irreversible and reversible capacity loss mechanisms. There are tandem needs for accurate interpretation of capacity at characterization conditions (cycling rate, temperature, etc.) and for robust self-consistent modeling techniques that can be used for diagnostic analysis of cell data as well as forecasting of future performance. Analogous issues exist for aging effects on cell conductance and available power. To address these needs, a modeling capability was developed that provides a systematic analysis of the contributing factors to battery performance loss over aging and to act as a regression/prediction platform for cell performance. The modeling basis is a summation of self-consistent chemical kinetics rate expressions, which as individual expressions each covers a distinct mechanism (e.g., loss of active host sites, lithium loss), but collectively account for the net loss of premier metrics (e.g., capacity) over time for a particular characterization condition. Specifically, sigmoid-based rate expressions are utilized to describe each contribution to performance loss. Through additional mathematical development another tier of expressions is derived and used to perform differential analyses and segregate irreversible versus reversible contributions, as well as to determine concentration profiles over cell aging for affected Li+ ion inventory and fraction of active sites that remain at each time step. Reversible fade components are surmised by comparing fade rates at fast versus slow cycling conditions. The model is easily utilized for predictive

  13. Do We Really Need Additional Contrast-Enhanced Abdominal Computed Tomography for Differential Diagnosis in Triage of Middle-Aged Subjects With Suspected Biliary Pain

    PubMed Central

    Hwang, In Kyeom; Lee, Yoon Suk; Kim, Jaihwan; Lee, Yoon Jin; Park, Ji Hoon; Hwang, Jin-Hyeok

    2015-01-01

    Enhanced computed tomography (CT) is widely used for evaluating acute biliary pain in the emergency department (ED). However, concern about radiation exposure from CT has also increased. We investigated the usefulness of pre-contrast CT for differential diagnosis in middle-aged subjects with suspected biliary pain. A total of 183 subjects, who visited the ED for suspected biliary pain from January 2011 to December 2012, were included. Retrospectively, pre-contrast phase and multiphase CT findings were reviewed and the detection rate of findings suggesting disease requiring significant treatment by noncontrast CT (NCCT) was compared with cases detected by multiphase CT. Approximately 70% of total subjects had a significant condition, including 1 case of gallbladder cancer and 126 (68.8%) cases requiring intervention (122 biliary stone-related diseases, 3 liver abscesses, and 1 liver hemangioma). The rate of overlooking malignancy without contrast enhancement was calculated to be 0% to 1.5%. Biliary stones and liver space-occupying lesions were found equally on NCCT and multiphase CT. Calculated probable rates of overlooking acute cholecystitis and biliary obstruction were maximally 6.8% and 4.2%, respectively. An incidental significant finding unrelated to the pain consisted of 1 case of adrenal incidentaloma, which was also observed in NCCT. NCCT might be sufficient to detect life-threatening or significant disease requiring early treatment in young adults with biliary pain. PMID:25700321

  14. Computer Recreations.

    ERIC Educational Resources Information Center

    Dewdney, A. K.

    1989-01-01

    Reviews the performance of computer programs for writing poetry and prose, including MARK V. SHANEY, MELL, POETRY GENERATOR, THUNDER THOUGHT, and ORPHEUS. Discusses the writing principles of the programs. Provides additional information on computer magnification techniques. (YP)

  15. Phosphazene additives

    DOEpatents

    Harrup, Mason K; Rollins, Harry W

    2013-11-26

    An additive comprising a phosphazene compound that has at least two reactive functional groups and at least one capping functional group bonded to phosphorus atoms of the phosphazene compound. One of the at least two reactive functional groups is configured to react with cellulose and the other of the at least two reactive functional groups is configured to react with a resin, such as an amine resin or a polycarboxylic acid resin. The at least one capping functional group is selected from the group consisting of a short chain ether group, an alkoxy group, or an aryloxy group. Also disclosed are an additive-resin admixture, a method of treating a wood product, and a wood product.

  16. Potlining Additives

    SciTech Connect

    Rudolf Keller

    2004-08-10

    In this project, a concept to improve the performance of aluminum production cells by introducing potlining additives was examined and tested. Boron oxide was added to cathode blocks, and titanium was dissolved in the metal pool; this resulted in the formation of titanium diboride and caused the molten aluminum to wet the carbonaceous cathode surface. Such wetting reportedly leads to operational improvements and extended cell life. In addition, boron oxide suppresses cyanide formation. This final report presents and discusses the results of this project. Substantial economic benefits for the practical implementation of the technology are projected, especially for modern cells with graphitized blocks. For example, with an energy savings of about 5% and an increase in pot life from 1500 to 2500 days, a cost savings of $ 0.023 per pound of aluminum produced is projected for a 200 kA pot.

  17. Computer simulation for the growing probability of additional offspring with an advantageous reversal allele in the decoupled continuous-time mutation-selection model

    NASA Astrophysics Data System (ADS)

    Gill, Wonpyong

    2016-01-01

    This study calculated the growing probability of additional offspring with the advantageous reversal allele in an asymmetric sharply-peaked landscape using the decoupled continuous-time mutation-selection model. The growing probability was calculated for various population sizes, N, sequence lengths, L, selective advantages, s, fitness parameters, k, and measuring parameters, C. The saturated growing probability in the stochastic region was approximately the effective selective advantage, s*, when C≫1/Ns* and s*≪1. The present study suggests that the growing probability in the stochastic region in the decoupled continuous-time mutation-selection model can be described using the theoretical formula for the growing probability in the Moran two-allele model. The selective advantage ratio, which represents the ratio of the effective selective advantage to the selective advantage, does not depend on the population size, selective advantage, measuring parameter, or fitness parameter; instead, the selective advantage ratio decreases with increasing sequence length.
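
    For reference, the growing (fixation) probability in the two-allele Moran model that the abstract invokes has a standard closed form. A minimal sketch, assuming the textbook result for a single mutant of relative fitness r = 1 + s in a population of size N; in the paper's regime the effective selective advantage s* would play the role of s.

      def moran_fixation_probability(s, N):
          # Probability that a single copy of an allele with relative fitness
          # r = 1 + s takes over a population of size N (two-allele Moran model).
          if s == 0:
              return 1.0 / N                     # neutral limit
          r = 1.0 + s
          return (1.0 - 1.0 / r) / (1.0 - r ** (-N))

      for N in (100, 1000):
          for s in (0.001, 0.01, 0.1):
              print(f"N={N:5d}  s={s:5.3f}  P_fix={moran_fixation_probability(s, N):.4f}")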

  18. KS-FSOPS: A computer-aided simulation system for the in-core fuel shuffling operation for Taipower's Kuosheng nuclear power plant

    SciTech Connect

    Kuo, W.S.; Song, T.C.

    1996-08-01

    A computer-aided simulation system for the in-core refueling shuffle operation was developed for the Kuosheng nuclear power plant of Taiwan Power Company. With this specially designed system (KS-FSOPS), the complete and complex fuel shuffling sequences can be clearly and vividly displayed with color graphics on a personal computer. Nuclear engineers can use KS-FSOPS to simulate the process of the fuel shuffling operation, identify potential safety problems which cannot be easily found manually, and simultaneously monitor the shuffling sequences along with the on-site operation in the refueling building. In effect, the traditional but inefficient take-board display can be replaced with this system. Developed in the Windows 3.1 environment and implemented on an 80486 personal computer, KS-FSOPS is a handy and stable tool to assist nuclear engineers in the refueling operation. Potential safety issues, such as the constraint of cold shutdown margin, the falling of control rods, the restriction of control rod withdrawal, and the correctness of shuffling positions, are continuously checked during the refueling operation. KS-FSOPS has been used in the most recent refueling outage for the Kuosheng nuclear power plant. In the near future, the system will be extended to Taipower's other nuclear power plants.

  19. An Evaluation of the Additional Acoustic Power Needed to Overcome the Effects of a Test-Article's Absorption During Reverberant Chamber Acoustic Testing of Spaceflight Hardware

    NASA Technical Reports Server (NTRS)

    Hozman, Aron D.; Hughes, William O.

    2014-01-01

    The exposure of a customer's aerospace test-article to a simulated acoustic launch environment is typically performed in a reverberant acoustic test chamber. The acoustic pre-test runs that will ensure that the sound pressure levels of this environment can indeed be met by a test facility are normally performed without a test-article dynamic simulator of representative acoustic absorption and size. If an acoustic test facility's available acoustic power capability becomes maximized with the test-article installed during the actual test then the customer's environment requirement may become compromised. In order to understand the risk of not achieving the customer's in-tolerance spectrum requirement with the test-article installed, an acoustic power margin evaluation as a function of frequency may be performed by the test facility. The method for this evaluation of acoustic power will be discussed in this paper. This method was recently applied at the NASA Glenn Research Center Plum Brook Station's Reverberant Acoustic Test Facility for the SpaceX Falcon 9 Payload Fairing acoustic test program.
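
    In a reverberant chamber the sound power required to hold a given sound pressure level scales with the total absorption, so a test-article that adds absorption raises the required drive level by roughly 10*log10(1 + A_article/A_chamber) in each band. The sketch below illustrates that band-by-band margin bookkeeping; every number (band levels, absorption areas) is a hypothetical placeholder, not RATF or Falcon 9 data.

      import math

      bands = [31.5, 63, 125, 250, 500, 1000, 2000]                # 1/3-octave centers, Hz
      capability_spl = {31.5: 154, 63: 156, 125: 157, 250: 156,
                        500: 154, 1000: 152, 2000: 149}            # dB at full drive, empty chamber
      target_spl = {31.5: 148, 63: 151, 125: 153, 250: 152,
                    500: 149, 1000: 146, 2000: 142}                # customer spectrum, dB
      chamber_absorption = 40.0                                    # m^2 sabins, empty (assumed)
      article_absorption = {31.5: 2, 63: 3, 125: 5, 250: 8,
                            500: 10, 1000: 12, 2000: 14}           # added by article (assumed)

      for f in bands:
          # Extra level needed because the article adds absorption (diffuse-field estimate).
          delta = 10 * math.log10(1 + article_absorption[f] / chamber_absorption)
          margin = capability_spl[f] - (target_spl[f] + delta)
          print(f"{f:7.1f} Hz: article costs {delta:4.2f} dB, remaining margin {margin:5.2f} dB")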

  1. Thread Group Multithreading: Accelerating the Computation of an Agent-Based Power System Modeling and Simulation Tool -- GridLAB-D

    SciTech Connect

    Jin, Shuangshuang; Chassin, David P.

    2014-01-06

    GridLAB-D (TM) is an open source next generation agent-based smart-grid simulator that provides unprecedented capability to model the performance of smart grid technologies. Over the past few years, GridLAB-D has been used to conduct important analyses of smart grid concepts, but it is still quite limited by its computational performance. In order to break through the performance bottleneck to meet the need for large-scale power grid simulations, we develop a thread group mechanism to implement highly granular multithreaded computation in GridLAB-D. For a benchmark simple house model, we achieve close to linear speedup with the multithreaded version compared against the single-threaded version of the same code running on general-purpose multi-core commodity hardware. The performance of the multithreaded code shows favorable scalability properties and resource utilization, and much shorter execution times for large-scale power grid simulations.
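
    The thread-group idea can be pictured as partitioning the agent population into fixed groups and giving each group its own worker for every synchronization step. Below is a minimal Python sketch of that pattern, with two caveats: GridLAB-D itself is C/C++ (where threads scale across cores, unlike under CPython's GIL), and the "House" agent is a toy stand-in, not the GridLAB-D house model.

      from concurrent.futures import ThreadPoolExecutor

      class House:
          # Toy agent with a local state update (stand-in for a simulator object).
          def __init__(self, temp=20.0):
              self.temp = temp
          def sync(self, dt):
              self.temp += 0.01 * dt * (20.0 - self.temp)   # relax toward a setpoint

      def run_step(agents, n_groups=4, dt=1.0):
          # Partition the agents into n_groups thread groups and sync each group
          # on its own worker, mirroring the highly granular multithreading idea.
          groups = [agents[i::n_groups] for i in range(n_groups)]
          def sync_group(group):
              for agent in group:
                  agent.sync(dt)
          with ThreadPoolExecutor(max_workers=n_groups) as pool:
              list(pool.map(sync_group, groups))

      agents = [House() for _ in range(10000)]
      for _ in range(10):                                   # ten simulation steps
          run_step(agents)
      print(agents[0].temp)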

  2. Comparison of x ray computed tomography number to proton relative linear stopping power conversion functions using a standard phantom

    SciTech Connect

    Moyers, M. F.

    2014-06-15

    Purpose: Adequate evaluation of the results from multi-institutional trials involving light ion beam treatments requires consideration of the planning margins applied to both targets and organs at risk. A major uncertainty that affects the size of these margins is the conversion of x ray computed tomography numbers (XCTNs) to relative linear stopping powers (RLSPs). Various facilities engaged in multi-institutional clinical trials involving proton beams have been applying significantly different margins in their patient planning. This study was performed to determine the variance in the conversion functions used at proton facilities in the U.S.A. wishing to participate in National Cancer Institute sponsored clinical trials. Methods: A simplified method of determining the conversion function was developed using a standard phantom containing only water and aluminum. The new method was based on the premise that all scanners have their XCTNs for air and water calibrated daily to constant values but that the XCTNs for high density/high atomic number materials are variable with different scanning conditions. The standard phantom was taken to 10 different proton facilities and scanned with the local protocols resulting in 14 derived conversion functions which were compared to the conversion functions used at the local facilities. Results: For tissues within ±300 XCTN of water, all facility functions produced converted RLSP values within ±6% of the values produced by the standard function and within 8% of the values from any other facility's function. For XCTNs corresponding to lung tissue, converted RLSP values differed by as great as ±8% from the standard and up to 16% from the values of other facilities. For XCTNs corresponding to low-density immobilization foam, the maximum to minimum values differed by as much as 40%. Conclusions: The new method greatly simplifies determination of the conversion function, reduces ambiguity, and in the future could promote
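
    In practice the conversion function described here reduces to piecewise-linear interpolation through a few calibration points, with air and water pinned by the daily scanner calibration and the high-density (aluminum) point carrying the protocol dependence. A minimal sketch follows; the anchor values are hypothetical stand-ins, not the study's measurements.

      import numpy as np

      # (XCTN, RLSP) anchors: air and water are held constant by daily scanner
      # calibration; the high-density (aluminum) point varies with scan protocol.
      xctn_pts = np.array([-1000.0, 0.0, 2200.0])
      rlsp_pts = np.array([0.001, 1.000, 2.100])

      def xctn_to_rlsp(xctn):
          # Piecewise-linear interpolation; values beyond the last anchor are
          # clamped, which is where the measured aluminum point matters most.
          return np.interp(np.asarray(xctn, dtype=float), xctn_pts, rlsp_pts)

      print(xctn_to_rlsp([-800.0, -50.0, 40.0, 1100.0]))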

  3. User's manual: Computer-aided design programs for inductor-energy-storage dc-to-dc electronic power converters

    NASA Technical Reports Server (NTRS)

    Huffman, S.

    1977-01-01

    Detailed instructions on the use of two computer-aided-design programs for designing the energy storage inductor for single winding and two winding dc to dc converters are provided. Step by step procedures are given to illustrate the formatting of user input data. The procedures are illustrated by eight sample design problems which include the user input and the computer program output.

  4. Computational electronics and electromagnetics

    SciTech Connect

    Shang, C. C.

    1997-02-01

    The Computational Electronics and Electromagnetics thrust area at Lawrence Livermore National Laboratory serves as the focal point for engineering R&D activities for developing computer-based design, analysis, and tools for theory. Key representative applications include design of particle accelerator cells and beamline components; engineering analysis and design of high-power components, photonics, and optoelectronics circuit design; EMI susceptibility analysis; and antenna synthesis. The FY-96 technology-base effort focused code development on (1) accelerator design codes; (2) 3-D massively parallel, object-oriented time-domain EM codes; (3) material models; (4) coupling and application of engineering tools for analysis and design of high-power components; (5) 3-D spectral-domain CEM tools; and (6) enhancement of laser drilling codes. Joint efforts with the Power Conversion Technologies thrust area include development of antenna systems for compact, high-performance radar, in addition to novel, compact Marx generators. 18 refs., 25 figs., 1 tab.

  5. Final report for "High performance computing for advanced national electric power grid modeling and integration of solar generation resources", LDRD Project No. 149016.

    SciTech Connect

    Reno, Matthew J.; Riehm, Andrew Charles; Hoekstra, Robert John; Munoz-Ramirez, Karina; Stamp, Jason Edwin; Phillips, Laurence R.; Adams, Brian M.; Russo, Thomas V.; Oldfield, Ron A.; McLendon, William Clarence, III; Nelson, Jeffrey Scott; Hansen, Clifford W.; Richardson, Bryan T.; Stein, Joshua S.; Schoenwald, David Alan; Wolfenbarger, Paul R.

    2011-02-01

    Design and operation of the electric power grid (EPG) relies heavily on computational models. High-fidelity, full-order models are used to study transient phenomena on only a small part of the network. Reduced-order dynamic and power flow models are used when analyses involving thousands of nodes are required, due to the computational demands of simulating large numbers of nodes. The level of complexity of the future EPG will dramatically increase due to large-scale deployment of variable renewable generation, active load and distributed generation resources, adaptive protection and control systems, and price-responsive demand. High-fidelity modeling of this future grid will require significant advances in coupled, multi-scale tools and their use on high performance computing (HPC) platforms. This LDRD report demonstrates SNL's capability to apply HPC resources to three tasks: (1) High-fidelity, large-scale modeling of power system dynamics; (2) Statistical assessment of grid security via Monte-Carlo simulations of cyber attacks; and (3) Development of models to predict variability of solar resources at locations where little or no ground-based measurements are available.

  6. An Improved Computational Technique for Calculating Electromagnetic Forces and Power Absorptions Generated in Spherical and Deformed Body in Levitation Melting Devices

    NASA Technical Reports Server (NTRS)

    Zong, Jin-Ho; Szekely, Julian; Schwartz, Elliot

    1992-01-01

    An improved computational technique for calculating the electromagnetic force field, the power absorption and the deformation of an electromagnetically levitated metal sample is described. The technique is based on the volume integral method, but represents a substantial refinement; the coordinate transformation employed allows the efficient treatment of a broad class of rotationally symmetrical bodies. Computed results are presented to represent the behavior of levitation melted metal samples in a multi-coil, multi-frequency levitation unit to be used in microgravity experiments. The theoretical predictions are compared with both analytical solutions and with the results of previous computational efforts for the spherical samples and the agreement has been very good. The treatment of problems involving deformed surfaces and actually predicting the deformed shape of the specimens breaks new ground and should be the major usefulness of the proposed method.

  8. Ti-direct, powerful, stereoselective aldol-type additions of esters and thioesters to carbonyl compounds: application to the synthesis and evaluation of lactone analogs of jasmone perfumes.

    PubMed

    Nagase, Ryohei; Matsumoto, Noriaki; Hosomi, Kohei; Higashi, Takahiro; Funakoshi, Syunsuke; Misaki, Tomonori; Tanabe, Yoo

    2007-01-01

    An efficient TiCl(4)-Et(3)N or Bu(3)N-promoted aldol-type addition of phenyl and thiophenyl esters or thioaryl esters to aldehydes and ketones was performed (46 examples in total). The present method is advantageous from atom-economical and cost-effective viewpoints: good to excellent yields, moderate to good syn-selectivity, substrate variations, reagent availability, and simple procedures. Utilizing the present reaction as the key step, an efficient short synthesis of three lactone [2(5H)-furanone] analogs of jasmine perfumes was performed. Among them, the lactone analog of cis-jasmone had a unique perfume property (tabac).

  9. The Next Step in Deployment of Computer Based Procedures For Field Workers: Insights And Results From Field Evaluations at Nuclear Power Plants

    SciTech Connect

    Oxstrand, Johanna; Le Blanc, Katya L.; Bly, Aaron

    2015-02-01

    The paper-based procedures currently used for nearly all activities in the commercial nuclear power industry have a long history of ensuring safe operation of the plants. However, there is potential to greatly increase efficiency and safety by improving how the human operator interacts with the procedures. One way to achieve these improvements is through the use of computer-based procedures (CBPs). A CBP system offers a vast variety of improvements, such as context-driven job aids, integrated human performance tools (e.g., placekeeping, correct component verification, etc.), and dynamic step presentation. The latter means that the CBP system displays only the relevant steps based on operating mode, plant status, and the task at hand. A dynamic presentation of the procedure (also known as context-sensitive procedures) will guide the operator down the path of relevant steps based on the current conditions. This feature will reduce the operator's workload and inherently reduce the risk of incorrectly marking a step as not applicable and the risk of incorrectly performing a step that should be marked as not applicable. The research team at the Idaho National Laboratory has developed a prototype CBP system for field workers, which has been evaluated from a human factors and usability perspective in four laboratory studies. Based on the results from each study, revisions were made to the CBP system. However, a crucial step to gain the end users' (e.g., auxiliary operators, maintenance technicians, etc.) acceptance is to put the system in their hands and let them use it as a part of their everyday work activities. In the spring of 2014, the first field evaluation of the INL CBP system was conducted at a nuclear power plant. Auxiliary operators conduct a functional test of one out of three backup air compressors each week. During the field evaluation activity, one auxiliary operator conducted the test with the paper-based procedure while a second auxiliary operator followed

  10. Computations of longitudinal electron dynamics in the recirculating cw RF accelerator-recuperator for the high average power FEL

    NASA Astrophysics Data System (ADS)

    Sokolov, A. S.; Vinokurov, N. A.

    1994-03-01

    The use of optimal longitudinal phase-energy motion conditions for bunched electrons in a recirculating RF accelerator makes it possible to increase the final electron peak current and, correspondingly, the FEL gain. The computer code RECFEL, developed for simulations of the longitudinal compression of electron bunches with high average current, essentially loading the cw RF cavities of the recirculator-recuperator, is briefly described and illustrated by some computational results.
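
    The single-particle bookkeeping behind such simulations is compact: each linac pass adds an energy kick that depends on the particle's phase within the bunch, and each recirculation arc converts relative energy deviation into a longitudinal shift, which is what compresses the bunch. The sketch below illustrates the mechanism only; it is not the RECFEL code, and every machine parameter is an assumed placeholder.

      import numpy as np

      c = 299792458.0                          # speed of light, m/s
      f_rf, V = 1.3e9, 50e6                    # RF frequency (Hz), volts per pass (assumed)
      R56, phi0 = 0.2, np.deg2rad(15.0)        # arc compaction (m), off-crest phase (assumed)
      k_rf = 2 * np.pi * f_rf / c

      rng = np.random.default_rng(0)
      z = rng.normal(0.0, 1e-3, 5000)          # longitudinal position in bunch, m
      E = 10e6 + rng.normal(0.0, 1e4, 5000)    # particle energy, eV

      for _ in range(2):                       # two passes through the linac
          E = E + V * np.cos(phi0 + k_rf * z)          # phase-dependent RF kick (chirp)
          z = z + R56 * (E - E.mean()) / E.mean()      # arc turns energy spread into compression

      print(f"rms bunch length after recirculation: {z.std():.3e} m")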

  11. Making classical and quantum canonical general relativity computable through a power series expansion in the inverse cosmological constant.

    PubMed

    Gambini, R; Pullin, J

    2000-12-18

    We consider general relativity with a cosmological constant as a perturbative expansion around a completely solvable diffeomorphism invariant field theory. This theory is the lambda --> infinity limit of general relativity. This allows an explicit perturbative computational setup in which the quantum states of the theory and the classical observables can be explicitly computed. An unexpected relationship arises at a quantum level between the discrete spectrum of the volume operator and the allowed values of the cosmological constant.

  12. A novel material detection algorithm based on 2D GMM-based power density function and image detail addition scheme in dual energy X-ray images.

    PubMed

    Pourghassem, Hossein

    2012-01-01

    Material detection is a vital need in dual energy X-ray luggage inspection systems used for security at airports and strategic places. In this paper, a novel material detection algorithm based on statistical trainable models using the 2-dimensional power density function (PDF) of three material categories in dual energy X-ray images is proposed. In this algorithm, the PDF of each material category, as a statistical model, is estimated from transmission measurement values of low and high energy X-ray images by Gaussian mixture models (GMMs). The material label of each pixel of an object is determined based on the probability of its low- and high-energy transmission measurement values under the PDFs of the three material categories (metallic, organic, and mixed materials). The performance of the material detection algorithm is improved by a maximum voting scheme in a neighborhood of the image as a post-processing stage. Using background removal and denoising stages, the high and low energy X-ray images are enhanced in a pre-processing procedure. To improve the discrimination capability of the proposed material detection algorithm, the details of the low and high energy X-ray images are added to the constructed color image, which uses three colors (orange, blue, and green) to represent the organic, metallic, and mixed materials. The proposed algorithm is evaluated on real images captured from a commercial dual energy X-ray luggage inspection system. The obtained results show that the proposed algorithm is effective in detecting metallic, organic, and mixed materials with acceptable accuracy.
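
    The classification core of such an algorithm is small: fit one GMM per material class to (low, high) transmission pairs, label each pixel by the class with the highest likelihood, then smooth the label image with a neighborhood majority vote. A minimal sketch using scikit-learn follows; the synthetic training data and all parameters are hypothetical, not the paper's.

      import numpy as np
      from scipy.ndimage import generic_filter
      from sklearn.mixture import GaussianMixture

      rng = np.random.default_rng(1)
      # Synthetic (low, high) transmission training pairs per class; stand-ins
      # for labeled calibration scans.
      train = {
          "organic":  rng.normal([0.70, 0.80], 0.05, (500, 2)),
          "mixed":    rng.normal([0.50, 0.60], 0.05, (500, 2)),
          "metallic": rng.normal([0.20, 0.40], 0.05, (500, 2)),
      }
      labels = list(train)
      models = {m: GaussianMixture(n_components=3, random_state=0).fit(x)
                for m, x in train.items()}

      def classify(pixels):
          # Label each (low, high) pair by the class whose GMM gives it the
          # highest log-likelihood.
          scores = np.stack([models[m].score_samples(pixels) for m in labels])
          return scores.argmax(axis=0)

      def majority_vote(label_img, k=3):
          # Post-processing: most frequent label in each k-by-k neighborhood.
          return generic_filter(label_img.astype(float),
                                lambda w: np.bincount(w.astype(int)).argmax(),
                                size=k, mode="nearest")

      pixels = rng.uniform(0.1, 0.9, (16 * 16, 2))
      label_img = classify(pixels).reshape(16, 16)
      print(majority_vote(label_img))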

  13. Computational Design and Prototype Evaluation of Aluminide-Strengthened Ferritic Superalloys for Power-Generating Turbine Applications up to 1,033 K

    SciTech Connect

    Peter Liaw; Gautam Ghosh; Mark Asta; Morris Fine; Chain Liu

    2010-04-30

    prototype Fe-Ni-Cr-Al-Mo alloys. Three-point-bending experiments show that alloys containing more than 5 wt.% Al exhibit poor ductility (< 2%) at room temperature, and their fracture mode is predominantly of a cleavage type. Two major factors governing the poor ductility are (1) the volume fraction of NiAl-type precipitates, and (2) the Al content in the {alpha}-Fe matrix. A bend ductility of more than 5% can be achieved by lowering the Al concentration to 3 wt.% in the alloy. The alloy containing about 6.5 wt.% Al is found to have an optimal combination of hardness, ductility, and minimal creep rate at 973 K. A high volume fraction of precipitates is responsible for the good creep resistance by effectively resisting the dislocation motion through Orowan-bowing and dislocation-climb mechanisms. The effects of stress on the creep rate have been studied. With the threshold-stress compensation, the stress exponent is determined to be 4, indicating power-law dislocation creep. The threshold stress is in the range of 40-53 MPa. The addition of W can significantly reduce the secondary creep rates. Compared to other candidates for steam-turbine applications, FBB-8 does not show superior creep resistance at high stresses (> 100 MPa), but exhibit superior creep resistance at low stresses (< 60 MPa).

  14. Additive attacks on speaker recognition

    NASA Astrophysics Data System (ADS)

    Farrokh Baroughi, Alireza; Craver, Scott

    2014-02-01

    Speaker recognition is used to identify a speaker's voice from among a group of known speakers. A common method of speaker recognition is classification based on cepstral coefficients of the speaker's voice, using a Gaussian mixture model (GMM) to model each speaker. In this paper we try to fool a speaker recognition system using additive noise such that an intruder is recognized as a target user. Our attack uses a mixture selected from a target user's GMM model, inverting the cepstral transformation to produce noise samples. In our five-speaker database, we achieve an attack success rate of 50% with a noise signal at 10 dB SNR, and 95% by increasing the noise power to 0 dB SNR. The importance of this attack is its simplicity and flexibility: it can be employed in real time with no processing of an attacker's voice, and little computation is needed at the moment of detection, allowing the attack to be performed by a small portable device. For any target user, knowing that user's model or a voice sample is sufficient to compute the attack signal, and it is enough for the intruder to play it while uttering to be classified as the victim.
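
    A rough sketch of the attack pipeline described above: draw cepstral vectors from (a stand-in for) the victim's GMM, invert the cepstral transform to log-magnitude spectra, synthesize a time-domain signal with random phase, and scale it to the desired SNR. Everything below, including the mixture parameters and frame sizes, is a hypothetical illustration rather than the authors' implementation.

      import numpy as np
      from scipy.fftpack import idct

      rng = np.random.default_rng(0)
      # Stand-in for a victim's trained GMM in cepstral space: 8 mixtures of
      # 20 cepstral coefficients with diagonal covariances (all values invented).
      means = rng.normal(0.0, 1.0, (8, 20))
      stds = np.full((8, 20), 0.3)
      weights = np.full(8, 1.0 / 8.0)

      def sample_cepstra(n_frames):
          comp = rng.choice(len(weights), size=n_frames, p=weights)
          return rng.normal(means[comp], stds[comp])

      def cepstra_to_noise(cep, n_fft=256, hop=128):
          # Invert cepstra to log-magnitude spectra, attach random phase, and
          # overlap-add windowed frames into a time-domain attack signal.
          logmag = idct(cep, n=n_fft // 2 + 1, axis=1, norm="ortho")
          phase = rng.uniform(0.0, 2 * np.pi, logmag.shape)
          frames = np.fft.irfft(np.exp(logmag) * np.exp(1j * phase), n=n_fft, axis=1)
          frames *= np.hanning(n_fft)
          sig = np.zeros(hop * (len(frames) - 1) + n_fft)
          for i, frame in enumerate(frames):
              sig[i * hop:i * hop + n_fft] += frame
          return sig

      def scale_to_snr(noise, speech_power, snr_db):
          target_power = speech_power / 10 ** (snr_db / 10)
          return noise * np.sqrt(target_power / np.mean(noise ** 2))

      attack = scale_to_snr(cepstra_to_noise(sample_cepstra(200)), speech_power=1.0, snr_db=10)
      print(attack.shape, float(np.mean(attack ** 2)))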

  15. Computation of full energy peak efficiency for nuclear power plant radioactive plume using remote scintillation gamma-ray spectrometry.

    PubMed

    Grozdov, D S; Kolotov, V P; Lavrukhin, Yu E

    2016-04-01

    A method for estimating full energy peak efficiency in the space around a scintillation detector, including the presence of a collimator, has been developed. It is based on a mathematical convolution of the experimental results followed by data extrapolation. The efficiency data showed an average uncertainty of less than 10%. Software to calculate the integral efficiency for a nuclear power plant plume was developed. The paper also provides results of estimating nuclear power plant plume height by analysis of the spectral data.

  16. Evaluation of computer-aided foundation design techniques for fossil fuel power plants. Final report. [Includes list of firms involved, equipment, software, etc.]

    SciTech Connect

    Kulhawy, F.H.; Dill, J.C.; Trautmann, C.H.

    1984-11-01

    The use of an integrated computer-aided drafting and design system for fossil fuel power plant foundations would offer utilities considerable savings in engineering costs and design time. The technology is available, but research is needed to develop software, a common data base, and data management procedures. An integrated CADD system suitable for designing power plant foundations should include the ability to input, display, and evaluate geologic, geophysical, geotechnical, and survey field data; methods for designing piles, mats, footings, drilled shafts, and other foundation types; and the capability of evaluating various load configurations, soil-structure interactions, and other construction factors that influence design. Although no such integrated system exists, the survey of CADD techniques showed that the technology is available to computerize the whole foundation design process, from single-foundation analysis under single loads to three-dimensional analysis under earthquake loads. The practices of design firms using CADD technology in nonutility applications vary widely. Although all the firms surveyed used computer-aided drafting, only two used computer graphics in routine design procedures, and none had an integrated approach to using CADD for geotechnical engineering. All the firms had developed corporate policies related to system security, supervision, overhead allocation, training, and personnel compensation. A related EPRI project, RP2514, is developing guidelines for applying CADD systems to entire generating-plant construction projects. 4 references, 6 figures, 6 tables.

  17. ADVANCED COMPUTATIONAL METHODS IN DOSE MODELING: APPLICATION OF COMPUTATIONAL BIOPHYSICAL TRANSPORT, COMPUTATIONAL CHEMISTRY, AND COMPUTATIONAL BIOLOGY

    EPA Science Inventory

    Computational toxicology (CompTox) leverages the significant gains in computing power and computational techniques (e.g., numerical approaches, structure-activity relationships, bioinformatics) realized over the last few years, thereby reducing costs and increasing efficiency i...

  18. 78 FR 47805 - Test Documentation for Digital Computer Software Used in Safety Systems of Nuclear Power Plants

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-08-06

    ... quality assurance processes, and the requirements extend throughout the life cycle of the protection.... Revision 1 of RG 1.173, ``Developing Software Life Cycle Processes for Digital Computer Software used in... issued with a temporary identification as Draft Regulatory Guide, DG-1207 on August 22, 2012 (77 FR...

  19. Computer-aided modeling and prediction of performance of the modified Lundell class of alternators in space station solar dynamic power systems

    NASA Technical Reports Server (NTRS)

    Demerdash, Nabeel A. O.; Wang, Ren-Hong

    1988-01-01

    The main purpose of this project is the development of computer-aided models for purposes of studying the effects of various design changes on the parameters and performance characteristics of the modified Lundell class of alternators (MLA) as components of a solar dynamic power system supplying electric energy needs in the forthcoming space station. Key to this modeling effort is the computation of magnetic field distribution in MLAs. Since the nature of the magnetic field is three-dimensional, the first step in the investigation was to apply the finite element method to discretize volume, using the tetrahedron as the basic 3-D element. Details of the stator 3-D finite element grid are given. A preliminary look at the early stage of a 3-D rotor grid is presented.

  20. Energy and cost analysis of a solar-hydrogen combined heat and power system for remote power supply using a computer simulation

    SciTech Connect

    Shabani, Bahman; Andrews, John; Watkins, Simon

    2010-01-15

    A simulation program, based on Visual Pascal, for sizing and techno-economic analysis of the performance of solar-hydrogen combined heat and power systems for remote applications is described. The accuracy of the submodels is checked by comparing the real performances of the system's components obtained from experimental measurements with model outputs. The use of the heat generated by the PEM fuel cell, and any unused excess hydrogen, is investigated for hot water production or space heating while the solar-hydrogen system is supplying electricity. A 5 kWh daily demand profile and the solar radiation profile of Melbourne have been used in a case study to investigate the typical techno-economic characteristics of the system to supply a remote household. The simulation shows that by harnessing both the thermal load and excess hydrogen it is possible to increase the average yearly energy efficiency of the fuel cell in the solar-hydrogen system from just below 40% up to about 80% in both heat and power generation (based on the higher heating value of hydrogen). The fuel cell in the system is conventionally sized to meet the peak of the demand profile. However, an economic optimisation analysis illustrates that installing a larger fuel cell could lead to up to a 15% reduction in the unit cost of the electricity, to an average of just below 90 c/kWh over the assessment period of 30 years. Further, for an economically optimal size of the fuel cell, nearly half of the yearly energy demand for hot water of the remote household could be supplied by heat recovery from the fuel cell and by utilising unused hydrogen in the exit stream. Such a system could then complement a conventional solar water heating system by providing the boosting energy (usually on the order of 40% of the total) normally obtained from gas or electricity. (author)

  1. Comparison of Computational and Experimental Results for a Transonic Variable-speed Power-Turbine Blade Operating with Low Inlet Turbulence Levels

    NASA Technical Reports Server (NTRS)

    Booth, David T.; Flegel, Ashlie B.

    2015-01-01

    A computational assessment of the aerodynamic performance of the midspan section of a variable-speed power-turbine blade is described. The computation comprises a periodic single blade that represents the 2-D midspan section of the VSPT blade that was tested in the NASA Glenn Research Center Transonic Turbine Blade Cascade Facility. Commercial, off-the-shelf (COTS) software packages, Pointwise and CFD++, were used for the grid generation and the RANS and URANS computations. The CFD code, which offers flexibility in terms of turbulence and transition modeling options, was assessed in terms of blade loading, loss, and turning against test data from the transonic tunnel. Simulations were assessed at positive and negative incidence angles that represent the turbine cruise and take-off design conditions. The results indicate that the secondary flow induced at the positive incidence cruise condition results in a highly loaded case, and transitional flow on the blade is observed. The negative incidence take-off condition is unloaded and the flow is very two-dimensional. The computational results demonstrate the predictive capability of the gridding technique and COTS software for a linear transonic turbine blade cascade with large incidence angle variation.

  3. Comparison of Analytical Predictions and Experimental Results for a Dual Brayton Power System (Discussion on Test Hardware and Computer Model for a Dual Brayton System)

    NASA Technical Reports Server (NTRS)

    Johnson, Paul K.

    2007-01-01

    NASA Glenn Research Center (GRC) contracted Barber-Nichols (Arvada, CO) to construct a dual Brayton power conversion system for use as a hardware proof of concept and to validate results from a computational code known as the Closed Cycle System Simulation (CCSS). Initial checkout tests were performed at Barber-Nichols to ready the system for delivery to GRC. This presentation describes the system hardware components and lists the types of checkout tests performed, along with a couple of issues encountered while conducting the tests. A description of the CCSS model is also presented. The checkout tests did not focus on generating data; therefore, no test data or model analyses are presented.

  4. PLANETSYS, a Computer Program for the Steady State and Transient Thermal Analysis of a Planetary Power Transmission System: User's Manual

    NASA Technical Reports Server (NTRS)

    Hadden, G. B.; Kleckner, R. J.; Ragen, M. A.; Dyba, G. J.; Sheynin, L.

    1981-01-01

    The material presented is structured to guide the user in the practical and correct implementation of PLANETSYS which is capable of simulating the thermomechanical performance of a multistage planetary power transmission. In this version of PLANETSYS, the user can select either SKF or NASA models in calculating lubricant film thickness and traction forces.

  5. A simple algorithm to compute the peak power output of GaAs/Ge solar cells on the Martian surface

    SciTech Connect

    Glueck, P.R.; Bahrami, K.A.

    1995-12-31

    The Jet Propulsion Laboratory's (JPL's) Mars Pathfinder Project will deploy a robotic "microrover" on the surface of Mars in the summer of 1997. This vehicle will derive primary power from a GaAs/Ge solar array during the day and will "sleep" at night. This strategy requires that the rover be able to (1) determine when it is necessary to save the contents of volatile memory late in the afternoon and (2) determine when sufficient power is available to resume operations in the morning. An algorithm was developed that estimates the peak power point of the solar array from the solar array short-circuit current and temperature telemetry, and provides functional redundancy for both measurements using the open-circuit voltage telemetry. The algorithm minimizes vehicle processing and memory utilization by using linear equations instead of look-up tables to estimate peak power with very little loss in accuracy. This paper describes the method used to obtain the algorithm and presents the detailed algorithm design.
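
    The heart of such a lookup-free design is a pair of linear fits: the maximum-power current tracks the measured short-circuit current (and hence irradiance), while the maximum-power voltage falls roughly linearly with cell temperature. A minimal sketch with hypothetical fit coefficients, not the Pathfinder flight values:

      # I_mp ~= A1*Isc + B1*Isc*(T - 25); V_mp ~= A2 + B2*(T - 25).
      # All four coefficients below are invented for illustration.
      A1, B1 = 0.92, -0.0005        # current fit (dimensionless, 1/degC)
      A2, B2 = 16.0, -0.065         # voltage fit (V, V/degC)

      def peak_power(isc_amps, temp_c):
          i_mp = A1 * isc_amps + B1 * isc_amps * (temp_c - 25.0)
          v_mp = A2 + B2 * (temp_c - 25.0)
          return i_mp * v_mp

      print(peak_power(1.2, -40.0))  # a cold Martian morning, illustrative numbers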

  6. Theoretical effect of modifications to the upper surface of two NACA airfoils using smooth polynomial additional thickness distributions which emphasize leading edge profile and which vary quadratically at the trailing edge. [using flow equations and a CDC 7600 computer

    NASA Technical Reports Server (NTRS)

    Merz, A. W.; Hague, D. S.

    1975-01-01

    An investigation was conducted on a CDC 7600 digital computer to determine the effects of additional thickness distributions to the upper surface of the NACA 64-206 and 64 sub 1 - 212 airfoils. The additional thickness distribution had the form of a continuous mathematical function which disappears at both the leading edge and the trailing edge. The function behaves as a polynomial of order epsilon sub 1 at the leading edge, and a polynomial of order epsilon sub 2 at the trailing edge. Epsilon sub 2 is a constant and epsilon sub 1 is varied over a range of practical interest. The magnitude of the additional thickness, y, is a second input parameter, and the effect of varying epsilon sub 1 and y on the aerodynamic performance of the airfoil was investigated. Results were obtained at a Mach number of 0.2 with an angle-of-attack of 6 degrees on the basic airfoils, and all calculations employ the full potential flow equations for two dimensional flow. The relaxation method of Jameson was employed for solution of the potential flow equations.
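
    One convenient family with exactly the stated behavior (vanishing at both edges, behaving as a polynomial of order epsilon sub 1 at the leading edge and quadratically at the trailing edge) is a normalized product of powers. The sketch below is an assumption consistent with that description, not necessarily the report's exact function.

      import numpy as np

      def additional_thickness(x, y_max, eps1, eps2=2.0):
          # x runs from 0 (leading edge) to 1 (trailing edge). The shape behaves
          # like x**eps1 near x=0 and (1-x)**eps2 near x=1, and is normalized so
          # its peak equals y_max; the peak sits at x = eps1 / (eps1 + eps2).
          shape = x ** eps1 * (1.0 - x) ** eps2
          x_pk = eps1 / (eps1 + eps2)
          return y_max * shape / (x_pk ** eps1 * (1.0 - x_pk) ** eps2)

      x = np.linspace(0.0, 1.0, 101)
      dy = additional_thickness(x, y_max=0.01, eps1=0.5)   # blunt leading-edge bias
      print(dy.max(), dy[0], dy[-1])                       # peak equals y_max; edges are zero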

  7. A computational modeling approach of the jet-like acoustic streaming and heat generation induced by low frequency high power ultrasonic horn reactors.

    PubMed

    Trujillo, Francisco Javier; Knoerzer, Kai

    2011-11-01

    High power ultrasound reactors have gained a lot of interest in the food industry given the effects that can arise from ultrasonic-induced cavitation in liquid foods. However, most of the new food processing developments have been based on empirical approaches. Thus, there is a need for mathematical models which help to understand, optimize, and scale up ultrasonic reactors. In this work, a computational fluid dynamics (CFD) model was developed to predict the acoustic streaming and induced heat generated by an ultrasonic horn reactor. In the model it is assumed that the horn tip is a fluid inlet, where a turbulent jet flow is injected into the vessel. The hydrodynamic momentum rate of the incoming jet is assumed to be equal to the total acoustic momentum rate emitted by the acoustic power source. CFD velocity predictions show excellent agreement with the experimental data for power densities W(0)/V ≥ 25 kW m(-3). This model successfully describes the hydrodynamic fields (streaming) generated by low-frequency, high-power ultrasound.
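
    The inlet condition described above reduces to one line of algebra: equating the jet's hydrodynamic momentum rate, rho*U^2*A, to the acoustic momentum rate emitted by the horn, P/c, gives U = sqrt(P/(rho*c*A)). A minimal sketch with illustrative values (the power and tip diameter are assumptions, not the paper's settings):

      import math

      P = 50.0              # acoustic power delivered to the liquid, W (assumed)
      c = 1482.0            # speed of sound in water, m/s
      rho = 998.0           # density of water, kg/m^3
      d = 0.013             # horn tip diameter, m (assumed)
      A = math.pi * d ** 2 / 4

      U = math.sqrt(P / (rho * c * A))   # equivalent inlet jet velocity for the CFD model
      print(f"inlet velocity: {U:.3f} m/s")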

  8. Computer sciences

    NASA Technical Reports Server (NTRS)

    Smith, Paul H.

    1988-01-01

    The Computer Science Program provides advanced concepts, techniques, system architectures, algorithms, and software for both space and aeronautics information sciences and computer systems. The overall goal is to provide the technical foundation within NASA for the advancement of computing technology in aerospace applications. The research program is improving the state of knowledge of fundamental aerospace computing principles and advancing computing technology in space applications such as software engineering and information extraction from data collected by scientific instruments in space. The program includes the development of special algorithms and techniques to exploit the computing power provided by high performance parallel processors and special purpose architectures. Research is being conducted in the fundamentals of data base logic and improvement techniques for producing reliable computing systems.

  9. Computational Study of the Structure, the Flexibility, and the Electronic Circular Dichroism of Staurosporine - a Powerful Protein Kinase Inhibitor

    NASA Astrophysics Data System (ADS)

    Karabencheva-Christova, Tatyana G.; Singh, Warispreet; Christov, Christo Z.

    2014-07-01

    Staurosporine (STU) is a microbial alkaloid that is a universal kinase inhibitor. In order to understand its mechanism of action it is important to explore its structure-property relationships. In this paper we provide the results of a computational study of the structure, the chiroptical properties, and the conformational flexibility of STU, as well as the correlation between the electronic circular dichroism (ECD) spectra and the structure of its complex with anaplastic lymphoma kinase.

  10. The PVM (Parallel Virtual Machine) system: Supercomputer level concurrent computation on a network of IBM RS/6000 power stations

    SciTech Connect

    Sunderam, V.S. . Dept. of Mathematics and Computer Science); Geist, G.A. )

    1991-01-01

    The PVM (Parallel Virtual Machine) system enables supercomputer level concurrent computations to be performed on interconnected networks of heterogeneous computer systems. Specifically, a network of 13 IBM RS/6000 powerstations has been successfully used to execute production quality runs of superconductor modeling codes at more than 250 Mflops. This work demonstrates the effectiveness of cooperative concurrent processing for high performance applications, and shows that supercomputer level computations may be attained at a fraction of the cost on distributed computing platforms. This paper describes the PVM programming environment and user facilities, as they apply to hardware platforms comprising a network of IBM RS/6000 powerstations. The salient design features of PVM will be discussed, including heterogeneity, scalability, multilanguage support, provisions for fault tolerance, the use of multiprocessors and scalar machines, an interactive graphical front end, and support for profiling, tracing, and visual analysis. The PVM system has been used extensively, and a range of production quality concurrent applications have been successfully executed using PVM on a variety of networked platforms. The paper will mention representative examples, and discuss two in detail. The first is a material sciences problem that was originally developed on a Cray 2. This application code calculates the electronic structure of metallic alloys from first principles and is based on the KKR-CPA algorithm. The second is a molecular dynamics simulation for calculating materials properties. Performance results for both applications on networks of RS/6000 powerstations will be presented, accompanied by discussions of the other advantages of PVM and its potential as a complement or alternative to conventional supercomputers.

  11. A Summary Description of a Computer Program Concept for the Design and Simulation of Solar Pond Electric Power Generation Systems

    NASA Technical Reports Server (NTRS)

    1984-01-01

    The plant described comprises a solar pond electric power generation subsystem, an electric power transformer and switch yard, a large solar pond, a water treatment plant, and numerous storage and evaporation ponds. Because a solar pond stores thermal energy over a long period of time, plant operation at any point in time is dependent upon past operation and future perceived generation plans. This time or past-history factor introduces a new dimension in the design process. The design optimization of a plant must go beyond examination of operational state points and consider the seasonal variations in solar input, solar pond energy storage, and the desired plant annual duty-cycle profile. Models or design tools will be required to optimize a plant design. These models should be developed so as to include a proper but not excessive level of detail. The model should be targeted to a specific objective and not conceived as a do-everything analysis tool, i.e., system design and not gradient-zone stability.

  12. Coping with distributed computing

    SciTech Connect

    Cormell, L.

    1992-09-01

    The rapid increase in the availability of high performance, cost-effective RISC/UNIX workstations has been both a blessing and a curse. The blessing of having extremely powerful computing engines available on the desktop is well known to many users. The user has tremendous freedom, flexibility, and control of his environment. That freedom can, however, become the curse of distributed computing. The user must become a system manager to some extent: he must worry about backups, maintenance, upgrades, etc. Traditionally these activities have been the responsibility of a central computing group. The central computing group, however, may find that it can no longer provide all of the traditional services. With the plethora of workstations now found on so many desktops throughout the entire campus or lab, the central computing group may be swamped by support requests. This talk will address several of these computer support and management issues by providing some examples of the approaches taken at various HEP institutions. In addition, a brief review of commercial directions or products for distributed computing and management will be given.

  13. COMMIX-PPC: A three-dimensional transient multicomponent computer program for analyzing performance of power plant condensers. Volume 1, Equations and numerics

    SciTech Connect

    Chien, T.H.; Domanus, H.M.; Sha, W.T.

    1993-02-01

    The COMMIX-PPC computer program is an extended and improved version of earlier COMMIX codes and is specifically designed for evaluating the thermal performance of power plant condensers. The COMMIX codes are general-purpose computer programs for the analysis of fluid flow and heat transfer in complex industrial systems. In COMMIX-PPC, two major features have been added to previously published COMMIX codes. One feature is the incorporation of one-dimensional equations of conservation of mass, momentum, and energy on the tube side and the proper accounting for the thermal interaction between shell and tube side through the porous-medium approach. The other added feature is the extension of the three-dimensional conservation equations for shell-side flow to treat the flow of a multicomponent medium. COMMIX-PPC is designed to perform steady-state and transient, three-dimensional analysis of fluid flow with heat transfer in a power plant condenser. However, the code is designed in a generalized fashion so that, with some modification, it can be used to analyze processes in any heat exchanger or other single-phase engineering applications. Volume I (Equations and Numerics) of this report describes in detail the basic equations, formulation, solution procedures, and models for the physical phenomena. Volume II (User's Guide and Manual) contains the input instructions, flow charts, sample problems, and descriptions of available options and boundary conditions.

  14. COMMIX-PPC: A three-dimensional transient multicomponent computer program for analyzing performance of power plant condensers. Volume 2, User`s guide and manual

    SciTech Connect

    Chien, T.H.; Domanus, H.M.; Sha, W.T.

    1993-02-01

    The COMMIX-PPC computer program is an extended and improved version of earlier COMMIX codes and is specifically designed for evaluating the thermal performance of power plant condensers. The COMMIX codes are general-purpose computer programs for the analysis of fluid flow and heat transfer in complex industrial systems. In COMMIX-PPC, two major features have been added to previously published COMMIX codes. One feature is the incorporation of one-dimensional conservation of mass, momentum, and energy equations on the tube side, and the proper accounting for the thermal interaction between shell and tube side through the porous medium approach. The other added feature is the extension of the three-dimensional conservation equations for shell-side flow to treat the flow of a multicomponent medium. COMMIX-PPC is designed to perform steady-state and transient three-dimensional analysis of fluid flow with heat transfer in a power plant condenser. However, the code is designed in a generalized fashion so that, with some modification, it can be used to analyze processes in any heat exchanger or other single-phase engineering applications.

  15. Leveraging the Power of High Performance Computing for Next Generation Sequencing Data Analysis: Tricks and Twists from a High Throughput Exome Workflow

    PubMed Central

    Kawalia, Amit; Motameny, Susanne; Wonczak, Stephan; Thiele, Holger; Nieroda, Lech; Jabbari, Kamel; Borowski, Stefan; Sinha, Vishal; Gunia, Wilfried; Lang, Ulrich; Achter, Viktor; Nürnberg, Peter

    2015-01-01

    Next generation sequencing (NGS) has been a great success and is now a standard method of research in the life sciences. With this technology, dozens of whole genomes or hundreds of exomes can be sequenced in rather short time, producing huge amounts of data. Complex bioinformatics analyses are required to turn these data into scientific findings. In order to run these analyses fast, automated workflows implemented on high performance computers are state of the art. While providing sufficient compute power and storage to meet the NGS data challenge, high performance computing (HPC) systems require special care when utilized for high throughput processing. This is especially true if the HPC system is shared by different users. Here, stability, robustness and maintainability are as important for automated workflows as speed and throughput. To achieve all of these aims, dedicated solutions have to be developed. In this paper, we present the tricks and twists that we utilized in the implementation of our exome data processing workflow. It may serve as a guideline for other high throughput data analysis projects using a similar infrastructure. The code implementing our solutions is provided in the supporting information files. PMID:25942438
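
    The authors' actual code ships in the paper's supporting information; as a generic illustration of the stability-and-robustness theme, one widespread trick on shared HPC systems is to write each step's output to a temporary file, rename it atomically on success, and retry transient failures with backoff. The sketch below is such a wrapper, with a hypothetical alignment command in the usage comment.

      import os
      import subprocess
      import time

      def run_step(cmd, out_path, retries=3, wait=60):
          # Run one workflow step robustly: stream stdout to a temp file, make
          # the output visible only on success (os.replace is atomic on POSIX),
          # and back off before retrying transient failures.
          tmp = out_path + ".partial"
          for attempt in range(1, retries + 1):
              try:
                  with open(tmp, "wb") as fh:
                      subprocess.run(cmd, stdout=fh, check=True)
                  os.replace(tmp, out_path)
                  return
              except subprocess.CalledProcessError:
                  time.sleep(wait * attempt)
          raise RuntimeError(f"step failed after {retries} attempts: {cmd}")

      # Hypothetical usage for one alignment step of an exome pipeline:
      # run_step(["bwa", "mem", "ref.fa", "sample_R1.fq", "sample_R2.fq"], "sample.sam")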

  18. High Power, Computer-Controlled, LED-Based Light Sources for Fluorescence Imaging and Image-Guided Surgery

    PubMed Central

    Gioux, Sylvain; Kianzad, Vida; Ciocan, Razvan; Gupta, Sunil; Oketokoun, Rafiou; Frangioni, John V.

    2009-01-01

    Optical imaging requires appropriate light sources. For image-guided surgery, and in particular fluorescence-guided surgery, high fluence rate, long working distance, computer control, and precise control of wavelength are required. In this study, we describe the development of light emitting diode (LED)-based light sources that meet these criteria. These light sources are enabled by a compact LED module that includes an integrated linear driver, heat-dissipation technology, and real-time temperature monitoring. Measuring only 27 mm W by 29 mm H, and weighing only 14.7 g, each module provides up to 6500 lx of white (400-650 nm) light and up to 157 mW of filtered fluorescence excitation light, while maintaining an operating temperature ≤ 50°C. We also describe software that can be used to design multi-module light housings, and an embedded processor that permits computer control and temperature monitoring. With these tools, we constructed a 76-module, sterilizable, 3-wavelength surgical light source capable of providing up to 40,000 lx of white light, 4.0 mW/cm2 of 670 nm near-infrared (NIR) fluorescence excitation light, and 14.0 mW/cm2 of 760 nm NIR fluorescence excitation light over a 15-cm diameter field-of-view. Using this light source, we demonstrate NIR fluorescence-guided surgery in a large animal model. PMID:19723473

  19. An integrated experimental and computational approach to material selection for sound proof thermally insulated enclosure of a power generation system

    NASA Astrophysics Data System (ADS)

    Waheed, R.; Tarar, W.; Saeed, H. A.

    2016-08-01

    Soundproof canopies for diesel power generators are fabricated with a layer of sound-absorbing material applied to all the inner walls. The physical properties of the majority of commercially available soundproofing materials reveal that a material with a high sound absorption coefficient has very low thermal conductivity. Consequently, a good sound-absorbing material is also a good heat insulator. In this research it has been found through various experiments that ordinary soundproofing materials tend to raise the inside temperature of the soundproof enclosure of certain turbo engines by capturing the heat produced by the engine and not allowing it to be transferred to the atmosphere. The same phenomenon is studied by creating a finite element model of the soundproof enclosure and performing steady-state and transient thermal analyses. The prospects of using aluminium foam as a soundproofing material have been studied, and it is found that the inside temperature of the soundproof enclosure can be brought down to the safe working temperature of the power generator engine without compromising on soundproofing.

  20. Health effects models for nuclear power plant accident consequence analysis. Modification of models resulting from addition of effects of exposure to alpha-emitting radionuclides: Revision 1, Part 2, Scientific bases for health effects models, Addendum 2

    SciTech Connect

    Abrahamson, S.; Bender, M.A.; Boecker, B.B.; Scott, B.R.; Gilbert, E.S.

    1993-05-01

    The Nuclear Regulatory Commission (NRC) has sponsored several studies to identify and quantify, through the use of models, the potential health effects of accidental releases of radionuclides from nuclear power plants. The Reactor Safety Study provided the basis for most of the earlier estimates related to these health effects. Subsequent efforts by NRC-supported groups resulted in improved health effects models that were published in the report entitled "Health Effects Models for Nuclear Power Plant Consequence Analysis", NUREG/CR-4214, 1985 and revised further in the 1989 report NUREG/CR-4214, Rev. 1, Part 2. The health effects models presented in the 1989 NUREG/CR-4214 report were developed for exposure to low-linear energy transfer (LET) (beta and gamma) radiation based on the best scientific information available at that time. Since the 1989 report was published, two addenda to that report have been prepared to (1) incorporate other scientific information related to low-LET health effects models and (2) extend the models to consider the possible health consequences of the addition of alpha-emitting radionuclides to the exposure source term. The first addendum report, entitled "Health Effects Models for Nuclear Power Plant Accident Consequence Analysis, Modifications of Models Resulting from Recent Reports on Health Effects of Ionizing Radiation, Low LET Radiation, Part 2: Scientific Bases for Health Effects Models," was published in 1991 as NUREG/CR-4214, Rev. 1, Part 2, Addendum 1. This second addendum addresses the possibility that some fraction of the accident source term from an operating nuclear power plant comprises alpha-emitting radionuclides. Consideration of chronic high-LET exposure from alpha radiation as well as acute and chronic exposure to low-LET beta and gamma radiations is a reasonable extension of the health effects model.

  1. Is breathing rate a confounding variable in brain-computer interfaces (BCIs) based on EEG spectral power?

    PubMed

    Ibarra Chaoul, Andrea; Grosse-Wentrup, Moritz

    2015-08-01

    Brain-computer interfaces (BCIs) enable paralyzed patients to interact with the world by directly decoding brain activity. We investigated whether systematic changes in breathing rate affect the EEG bandpower features that are commonly used in BCIs. This is of particular interest for the development of cognitive BCIs for patients with artificial ventilation, e.g. those in late stages of amyotrophic lateral sclerosis (ALS). If subjects can alter the spectrum of the EEG by changing their breathing rate, decoding results obtained with healthy subjects may not generalize to this patient population. We recorded high-density EEG from twelve healthy subjects, who were instructed to alternate between fast and slow breathing. We found no statistically significant modulation of EEG bandpower. As such, changes in breathing rate are unlikely to substantially bias the performance of BCIs based on EEG bandpower features.
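
    Bandpower features of the kind tested here are typically computed from a power spectral density estimate; a minimal sketch (not the authors' pipeline) using SciPy's Welch estimator might look like:

      import numpy as np
      from scipy.signal import welch

      def bandpower(eeg, fs, fmin, fmax):
          """Integrated PSD of one EEG channel within [fmin, fmax] Hz."""
          freqs, psd = welch(eeg, fs=fs, nperseg=2 * fs)
          band = (freqs >= fmin) & (freqs <= fmax)
          return np.trapz(psd[band], freqs[band])

      fs = 500                              # sampling rate in Hz (assumed)
      eeg = np.random.randn(60 * fs)        # stand-in for one recorded channel
      alpha = bandpower(eeg, fs, 8, 12)     # alpha-band power feature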

  2. Application of computational neural networks in predicting atmospheric pollutant concentrations due to fossil-fired electric power generation

    SciTech Connect

    El-Hawary, F.

    1995-12-31

    The ability to accurately predict the behavior of a dynamic system is of essential importance in monitoring and control of complex processes. In this regard, recent advances in neural-net-based system identification represent a significant step toward the development and design of a new generation of control tools for increased system performance and reliability. The enabling functionality is the accurate representation of a model of a nonlinear and nonstationary dynamic system. This functionality provides valuable new opportunities, including: (1) the ability to predict future system behavior on the basis of actual system observations, (2) on-line evaluation and display of system performance and design of early warning systems, and (3) controller optimization for improved system performance. In this presentation, we discuss the issues involved in the definition and design of learning control systems and their impact on power system control. Several numerical examples are provided for illustrative purposes.

  3. Evaluating the Discriminatory Power of a Computer-based System for Assessing Penetrating Trauma on Retrospective Multi-Center Data

    PubMed Central

    Matheny, Michael E.; Ogunyemi, Omolola I.; Rice, Phillip L.; Clarke, John R.

    2005-01-01

    Objective To evaluate the discriminatory power of TraumaSCAN-Web, a system for assessing penetrating trauma, using retrospective multi-center case data for gunshot and stab wounds to the thorax and abdomen. Methods 80 gunshot and 114 stab cases were evaluated using TraumaSCAN-Web. Areas under the Receiver Operating Characteristic curves (AUC) were calculated for each condition modeled in TraumaSCAN-Web. Results Of the 23 conditions modeled by TraumaSCAN-Web, 19 were present in either the gunshot or stab case data. The gunshot AUCs ranged from 0.519 (pericardial tamponade) to 0.975 (right renal injury). The stab AUCs ranged from 0.701 (intestinal injury) to 1.000 (tracheal injury). PMID:16779090
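
    An AUC of this kind can be computed from per-case outcomes and model probabilities; a minimal sketch with scikit-learn, using invented data purely for illustration:

      from sklearn.metrics import roc_auc_score

      # Hypothetical per-case data: 1 = condition present, 0 = absent,
      # paired with TraumaSCAN-Web-style posterior probabilities.
      y_true = [1, 0, 1, 1, 0, 0, 1, 0]
      y_score = [0.91, 0.20, 0.75, 0.62, 0.33, 0.48, 0.85, 0.10]

      print(roc_auc_score(y_true, y_score))   # area under the ROC curve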

  4. Transition Metal Diborides as Electrode Material for MHD Direct Power Extraction: High-temperature Oxidation of ZrB2-HfB2 Solid Solution with LaB6 Addition

    NASA Astrophysics Data System (ADS)

    Sitler, Steven; Hill, Cody; Raja, Krishnan S.; Charit, Indrajit

    2016-06-01

    Transition metal borides are being considered for use as potential electrode coating materials in magnetohydrodynamic direct power extraction plants from coal-fired plasma. These electrode materials will be exposed to aggressive service conditions at high temperatures. Therefore, high-temperature oxidation resistance is an important property. Consolidated samples containing an equimolar solid solution of ZrB2-HfB2 with and without the addition of 1.8 mol pct LaB6 were prepared by ball milling of commercial boride material followed by spark plasma sintering. These samples were oxidized at 1773 K (1500 °C) in two different conditions: (1) as-sintered and (2) anodized (10 V in 0.1 M KOH electrolyte). Oxidation studies were carried out in 0.3 × 10⁵ Pa and 0.1 Pa oxygen partial pressures. The anodic oxide layers showed hafnium enrichment on the surface of the samples, whereas the high-temperature oxides showed zirconium enrichment. The anodized samples without LaB6 addition showed about 2.5 times higher oxidation resistance in high-oxygen partial pressures than the as-sintered samples. Addition of LaB6 improved the oxidation resistance in the as-sintered condition by about 30 pct in the high-oxygen partial pressure tests.

  5. Requirements for Computer Based-Procedures for Nuclear Power Plant Field Operators Results from a Qualitative Study

    SciTech Connect

    Katya Le Blanc; Johanna Oxstrand

    2012-05-01

    Although computer-based procedures (CBPs) have been investigated as a way to enhance operator performance on procedural tasks in the nuclear industry for almost thirty years, they are not currently widely deployed at United States utilities. One of the barriers to the wide-scale deployment of CBPs is the lack of operational experience with CBPs that could serve as a sound basis for justifying their use at nuclear utilities. Utilities are hesitant to adopt CBPs because of concern over the potential costs of implementation and over regulatory approval. Regulators require a sound technical basis for the use of any procedure at the utilities; without operating experience to support the use of CBPs, it is difficult to establish such a technical basis. In an effort to begin developing a technical basis for CBPs, researchers at Idaho National Laboratory are partnering with industry to explore CBPs, with the objective of defining requirements for CBPs and developing an industry-wide vision and path forward for their use. This paper describes the results from a qualitative study aimed at defining requirements for CBPs to be used by field operators and maintenance technicians.

  6. Computational Study of the Impact of Unsteadiness on the Aerodynamic Performance of a Variable- Speed Power Turbine

    NASA Technical Reports Server (NTRS)

    Welch, Gerard E.

    2012-01-01

    The design-point and off-design performance of an embedded 1.5-stage portion of a variable-speed power turbine (VSPT) was assessed using Reynolds-Averaged Navier-Stokes (RANS) analyses with mixing-planes and sector-periodic, unsteady RANS analyses. The VSPT provides one means by which to effect the nearly 50 percent main-rotor speed change required for the NASA Large Civil Tilt-Rotor (LCTR) application. The change in VSPT shaft-speed during the LCTR mission results in blade-row incidence angle changes of as high as 55°. Negative incidence levels of this magnitude at takeoff operation give rise to a vortical flow structure in the pressure-side cove of a high-turn rotor that transports low-momentum flow toward the casing endwall. The intent of the effort was to assess the impact of unsteadiness of blade-row interaction on the time-mean flow and, specifically, to identify potential departure from the predicted trend of efficiency with shaft-speed change of meanline and 3-D RANS/mixing-plane analyses used for design.

  7. Effects of mental workload and fatigue on the P300, alpha and theta band power during operation of an ERP (P300) brain-computer interface.

    PubMed

    Käthner, Ivo; Wriessnegger, Selina C; Müller-Putz, Gernot R; Kübler, Andrea; Halder, Sebastian

    2014-10-01

    The study aimed at revealing electrophysiological indicators of mental workload and fatigue during prolonged usage of a P300 brain-computer interface (BCI). Mental workload was experimentally manipulated with dichotic listening tasks. Medium and high workload conditions alternated. Behavioral measures confirmed that the manipulation of mental workload was successful. Reduced P300 amplitude was found for the high workload condition. Along with lower performance and an increase in the subjective level of fatigue, an increase of power in the alpha band was found for the last as compared to the first run of both conditions. The study confirms that a combination of signals derived from the time and frequency domain of the electroencephalogram is promising for the online detection of workload and fatigue. It also demonstrates that satisfactory accuracies can be achieved by healthy participants with the P300 speller, despite constant distraction and when pursuing the task for a long time.

  8. CROSS-DISCIPLINARY PHYSICS AND RELATED AREAS OF SCIENCE AND TECHNOLOGY: Simulation of SET Operation in Phase-Change Random Access Memories with Heater Addition and Ring-Type Contactor for Low-Power Consumption by Finite Element Modeling

    NASA Astrophysics Data System (ADS)

    Gong, Yue-Feng; Song, Zhi-Tang; Ling, Yun; Liu, Yan; Feng, Song-Lin

    2009-11-01

    A three-dimensional finite element model for phase change random access memory (PCRAM) is established for comprehensive electrical and thermal analysis during the SET operation. The SET behaviours of the heater addition structure (HS) and the ring-type contact in bottom electrode (RIB) structure are compared with each other. There are two ways to reduce the RESET current: applying a high-resistivity interfacial layer, or building a new device structure. The simulation results indicate that the SET current varies little between these power-reduction approaches. Taking both the RESET and SET operation currents into consideration, this study shows that the RIB-structure PCRAM cell is suitable for future high-density devices, owing to its high heat efficiency in the RESET operation.

  9. Fermi Observations of GRB 090510: A Short-Hard Gamma-ray Burst with an Additional, Hard Power-law Component from 10 keV TO GeV Energies

    NASA Astrophysics Data System (ADS)

    Ackermann, M.; Asano, K.; Atwood, W. B.; Axelsson, M.; Baldini, L.; Ballet, J.; Barbiellini, G.; Baring, M. G.; Bastieri, D.; Bechtol, K.; Bellazzini, R.; Berenji, B.; Bhat, P. N.; Bissaldi, E.; Blandford, R. D.; Bloom, E. D.; Bonamente, E.; Borgland, A. W.; Bouvier, A.; Bregeon, J.; Brez, A.; Briggs, M. S.; Brigida, M.; Bruel, P.; Buson, S.; Caliandro, G. A.; Cameron, R. A.; Caraveo, P. A.; Carrigan, S.; Casandjian, J. M.; Cecchi, C.; Çelik, Ö.; Charles, E.; Chiang, J.; Ciprini, S.; Claus, R.; Cohen-Tanugi, J.; Connaughton, V.; Conrad, J.; Dermer, C. D.; de Palma, F.; Dingus, B. L.; Silva, E. do Couto e.; Drell, P. S.; Dubois, R.; Dumora, D.; Farnier, C.; Favuzzi, C.; Fegan, S. J.; Finke, J.; Focke, W. B.; Frailis, M.; Fukazawa, Y.; Fusco, P.; Gargano, F.; Gasparrini, D.; Gehrels, N.; Germani, S.; Giglietto, N.; Giordano, F.; Glanzman, T.; Godfrey, G.; Granot, J.; Grenier, I. A.; Grondin, M.-H.; Grove, J. E.; Guiriec, S.; Hadasch, D.; Harding, A. K.; Hays, E.; Horan, D.; Hughes, R. E.; Jóhannesson, G.; Johnson, W. N.; Kamae, T.; Katagiri, H.; Kataoka, J.; Kawai, N.; Kippen, R. M.; Knödlseder, J.; Kocevski, D.; Kouveliotou, C.; Kuss, M.; Lande, J.; Latronico, L.; Lemoine-Goumard, M.; Llena Garde, M.; Longo, F.; Loparco, F.; Lott, B.; Lovellette, M. N.; Lubrano, P.; Makeev, A.; Mazziotta, M. N.; McEnery, J. E.; McGlynn, S.; Meegan, C.; Mészáros, P.; Michelson, P. F.; Mitthumsiri, W.; Mizuno, T.; Moiseev, A. A.; Monte, C.; Monzani, M. E.; Moretti, E.; Morselli, A.; Moskalenko, I. V.; Murgia, S.; Nakajima, H.; Nakamori, T.; Nolan, P. L.; Norris, J. P.; Nuss, E.; Ohno, M.; Ohsugi, T.; Omodei, N.; Orlando, E.; Ormes, J. F.; Ozaki, M.; Paciesas, W. S.; Paneque, D.; Panetta, J. H.; Parent, D.; Pelassa, V.; Pepe, M.; Pesce-Rollins, M.; Piron, F.; Preece, R.; Rainò, S.; Rando, R.; Razzano, M.; Razzaque, S.; Reimer, A.; Ritz, S.; Rodriguez, A. Y.; Roth, M.; Ryde, F.; Sadrozinski, H. F.-W.; Sander, A.; Scargle, J. D.; Schalk, T. L.; Sgrò, C.; Siskind, E. J.; Smith, P. D.; Spandre, G.; Spinelli, P.; Stamatikos, M.; Stecker, F. W.; Strickman, M. S.; Suson, D. J.; Tajima, H.; Takahashi, H.; Takahashi, T.; Tanaka, T.; Thayer, J. B.; Thayer, J. G.; Thompson, D. J.; Tibaldo, L.; Toma, K.; Torres, D. F.; Tosti, G.; Tramacere, A.; Uchiyama, Y.; Uehara, T.; Usher, T. L.; van der Horst, A. J.; Vasileiou, V.; Vilchez, N.; Vitale, V.; von Kienlin, A.; Waite, A. P.; Wang, P.; Wilson-Hodge, C.; Winer, B. L.; Wu, X. F.; Yamazaki, R.; Yang, Z.; Ylinen, T.; Ziegler, M.

    2010-06-01

    We present detailed observations of the bright short-hard gamma-ray burst GRB 090510 made with the Gamma-ray Burst Monitor (GBM) and Large Area Telescope (LAT) on board the Fermi observatory. GRB 090510 is the first burst detected by the LAT that shows strong evidence for a deviation from a Band spectral fitting function during the prompt emission phase. The time-integrated spectrum is fit by the sum of a Band function with E_peak = 3.9 ± 0.3 MeV, which is the highest yet measured, and a hard power-law component with photon index -1.62 ± 0.03 that dominates the emission below ≈20 keV and above ≈100 MeV. The onset of the high-energy spectral component appears to be delayed by ~0.1 s with respect to the onset of a component well fit with a single Band function. A faint GBM pulse and a LAT photon are detected 0.5 s before the main pulse. During the prompt phase, the LAT detected a photon with energy 30.5 (+5.8/−2.6) GeV, the highest ever measured from a short GRB. Observation of this photon sets a minimum bulk outflow Lorentz factor, Γ ≳ 1200, using simple γγ opacity arguments for this GRB at redshift z = 0.903 and a variability timescale on the order of tens of ms for the ≈100 keV-few MeV flux. Stricter high confidence estimates imply Γ ≳ 1000 and still require that the outflows powering short GRBs are at least as highly relativistic as those of long-duration GRBs. Implications of the temporal behavior and power-law shape of the additional component on synchrotron/synchrotron self-Compton, external-shock synchrotron, and hadronic models are considered.

  10. Computer-assisted assignment of functional domains in the nonstructural polyprotein of hepatitis E virus: delineation of an additional group of positive-strand RNA plant and animal viruses.

    PubMed

    Koonin, E V; Gorbalenya, A E; Purdy, M A; Rozanov, M N; Reyes, G R; Bradley, D W

    1992-09-01

    Computer-assisted comparison of the nonstructural polyprotein of hepatitis E virus (HEV) with proteins of other positive-strand RNA viruses allowed the identification of the following putative functional domains: (i) RNA-dependent RNA polymerase, (ii) RNA helicase, (iii) methyltransferase, (iv) a domain of unknown function ("X" domain) flanking the papain-like protease domains in the polyproteins of animal positive-strand RNA viruses, and (v) papain-like cysteine protease domain distantly related to the putative papain-like protease of rubella virus (RubV). Comparative analysis of the polymerase and helicase sequences of positive-strand RNA viruses belonging to the so-called "alpha-like" supergroup revealed grouping between HEV, RubV, and beet necrotic yellow vein virus (BNYVV), a plant furovirus. Two additional domains have been identified: one showed significant conservation between HEV, RubV, and BNYVV, and the other showed conservation specifically between HEV and RubV. The large nonstructural proteins of HEV, RubV, and BNYVV retained similar domain organization, with the exceptions of relocation of the putative protease domain in HEV as compared to RubV and the absence of the protease and X domains in BNYVV. These observations show that HEV, RubV, and BNYVV encompass partially conserved arrays of distinctive putative functional domains, suggesting that these viruses constitute a distinct monophyletic group within the alpha-like supergroup of positive-strand RNA viruses. PMID:1518855

  11. A fast technique for computing syndromes of BCH and RS codes. [deep space network

    NASA Technical Reports Server (NTRS)

    Reed, I. S.; Truong, T. K.; Miller, R. L.

    1979-01-01

    A combination of the Chinese Remainder Theorem and Winograd's algorithm is used to compute transforms of odd length over GF(2^m). Such transforms are used to compute the syndromes needed for decoding BCH and RS codes. The present scheme requires substantially fewer multiplications and additions than the conventional method of computing the syndromes directly.
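
    The transform-based scheme itself is beyond a short sketch, but the conventional direct computation it improves on - evaluating S_j = r(α^j) over GF(2^m) - can be illustrated for GF(2^4) as follows (a sketch, not the paper's code):

      # GF(2^4) arithmetic via log/antilog tables (primitive polynomial
      # x^4 + x + 1), then direct syndrome evaluation S_j = r(alpha^j).
      EXP, LOG = [0] * 30, [0] * 16
      x = 1
      for i in range(15):
          EXP[i] = x
          LOG[x] = i
          x <<= 1
          if x & 0x10:
              x ^= 0x13            # reduce modulo the primitive polynomial
      for i in range(15, 30):
          EXP[i] = EXP[i - 15]     # doubled table avoids a modulo in gf_mul

      def gf_mul(a, b):
          if a == 0 or b == 0:
              return 0
          return EXP[LOG[a] + LOG[b]]

      def syndromes(received, count):
          # Conventional direct method: S_j = sum_i r_i * alpha^(i*j).
          out = []
          for j in range(1, count + 1):
              s = 0
              for i, r_i in enumerate(received):
                  s ^= gf_mul(r_i, EXP[(i * j) % 15])  # XOR = GF(2^m) addition
              out.append(s)
          return out

      print(syndromes([1, 0, 3, 0, 7, 2, 0, 0, 0, 0, 0, 0, 0, 0, 0], 4))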

  12. Influence of signals length and noise in power spectral densities computation using Hilbert-Huang Transform in synthetic HRV

    NASA Astrophysics Data System (ADS)

    Rodríguez, María. G.; Altuve, Miguel; Lollett, Carlos; Wong, Sara

    2013-11-01

    Among non-invasive techniques, heart rate variability (HRV) analysis has become widely used for assessing the balance of the autonomic nervous system. Research in this area has not stopped, and alternative tools for the study and interpretation of HRV are still being proposed. Nevertheless, frequency-domain analysis of HRV is controversial when the heartbeat sequence is non-stationary. The Hilbert-Huang Transform (HHT) is a relatively new technique for time-frequency analysis of non-linear and non-stationary signals. The main purpose of this work is to investigate the influence of time series length and noise on HRV estimates from synthetic signals using HHT, and to compare it with the Welch method. Synthetic heartbeat time series with different lengths and levels of signal-to-noise ratio (SNR) were investigated. Results show that (i) sequence length did not affect the estimation of HRV spectral parameters, and (ii) HHT performed favorably at different SNRs. Additionally, HHT can be applied to non-stationary signals from nonlinear systems, and it will be useful in HRV analysis for interpreting autonomic activity when acute and transient phenomena are assessed.
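
    A Hilbert-Huang analysis of this kind is typically built from empirical mode decomposition followed by the Hilbert transform; a minimal sketch, assuming the third-party PyEMD (EMD-signal) package rather than the authors' code:

      import numpy as np
      from scipy.signal import hilbert
      from PyEMD import EMD            # assumed: the EMD-signal package

      fs = 4.0                                     # resampled RR series, Hz
      t = np.arange(0, 300, 1 / fs)
      # Synthetic HRV-like series: LF (0.1 Hz) and HF (0.25 Hz) components.
      rr = 0.05 * np.sin(2 * np.pi * 0.10 * t)
      rr += 0.03 * np.sin(2 * np.pi * 0.25 * t)
      rr += 0.01 * np.random.randn(t.size)         # noise level sets the SNR

      imfs = EMD()(rr)                             # intrinsic mode functions
      analytic = hilbert(imfs)                     # analytic signal per IMF
      amplitude = np.abs(analytic)                 # instantaneous amplitude
      phase = np.unwrap(np.angle(analytic))
      inst_freq = np.diff(phase) * fs / (2 * np.pi)   # instantaneous frequency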

  13. The Glass Computer

    ERIC Educational Resources Information Center

    Paesler, M. A.

    2009-01-01

    Digital computers use different kinds of memory, each of which is either volatile or nonvolatile. On most computers only the hard drive memory is nonvolatile, i.e., it retains all information stored on it when the power is off. When a computer is turned on, an operating system stored on the hard drive is loaded into the computer's memory cache and…

  14. Power management system

    DOEpatents

    Algrain, Marcelo C.; Johnson, Kris W.; Akasam, Sivaprasad; Hoff, Brian D.

    2007-10-02

    A method of managing power resources for an electrical system of a vehicle may include identifying enabled power sources from among a plurality of power sources in electrical communication with the electrical system and calculating a threshold power value for the enabled power sources. A total power load placed on the electrical system by one or more power consumers may be measured. If the total power load exceeds the threshold power value, then a determination may be made as to whether one or more additional power sources is available from among the plurality of power sources. At least one of the one or more additional power sources may be enabled, if available.
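
    The claimed control method amounts to a threshold check with escalation to spare sources; a rough sketch (class and field names invented for illustration):

      from dataclasses import dataclass

      @dataclass
      class Source:
          capacity: float            # rated output, watts
          enabled: bool = False
          def enable(self):
              self.enabled = True

      def manage_power(sources, total_load, threshold_fraction=0.9):
          # Threshold power value computed over the enabled sources.
          threshold = threshold_fraction * sum(
              s.capacity for s in sources if s.enabled)
          if total_load > threshold:
              # Enable an additional power source, if one is available.
              spare = [s for s in sources if not s.enabled]
              if spare:
                  spare[0].enable()
          return [s.enabled for s in sources]

      sources = [Source(1200.0, enabled=True), Source(800.0)]
      print(manage_power(sources, total_load=1150.0))   # -> [True, True]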

  15. Teaching Physics with Computers

    NASA Astrophysics Data System (ADS)

    Botet, R.; Trizac, E.

    2005-09-01

    Computers are now so common in our everyday life that it is difficult to imagine the computer-free scientific life of the years before the 1980s. And yet, in spite of an unquestionable rise, the use of computers in the realm of education is still in its infancy. This is not a problem with students: for the new generation, the pre-computer age seems as far in the past as the age of the dinosaurs. It may instead be more a question of teacher attitude. Traditional education is based on centuries of polished concepts and equations, while computers require us to think differently about our method of teaching, and to revise the content accordingly. Our brains do not work in terms of numbers, but use abstract and visual concepts; hence, communication between computer and man boomed when computers escaped the world of numbers to reach a visual interface. From this time on, computers have generated new knowledge and, more importantly for teaching, new ways to grasp concepts. Therefore, just as real experiments were the starting point for theory, virtual experiments can be used to understand theoretical concepts. But there are important differences. Some of them are fundamental: a virtual experiment may allow for the exploration of length and time scales together with a level of microscopic complexity not directly accessible to conventional experiments. Others are practical: numerical experiments are completely safe, unlike some dangerous but essential laboratory experiments, and are often less expensive. Finally, some numerical approaches are suited only to teaching, as the concept necessary for the physical problem, or its solution, lies beyond the scope of traditional methods. For all these reasons, computers open physics courses to novel concepts, bringing education and research closer. In addition, and this is not a minor point, they respond naturally to the basic pedagogical needs of interactivity, feedback, and individualization of instruction. This is why one can

  16. Pulsar discovery by global volunteer computing.

    PubMed

    Knispel, B; Allen, B; Cordes, J M; Deneva, J S; Anderson, D; Aulbert, C; Bhat, N D R; Bock, O; Bogdanov, S; Brazier, A; Camilo, F; Champion, D J; Chatterjee, S; Crawford, F; Demorest, P B; Fehrmann, H; Freire, P C C; Gonzalez, M E; Hammer, D; Hessels, J W T; Jenet, F A; Kasian, L; Kaspi, V M; Kramer, M; Lazarus, P; van Leeuwen, J; Lorimer, D R; Lyne, A G; Machenschalk, B; McLaughlin, M A; Messenger, C; Nice, D J; Papa, M A; Pletsch, H J; Prix, R; Ransom, S M; Siemens, X; Stairs, I H; Stappers, B W; Stovall, K; Venkataraman, A

    2010-09-10

    Einstein@Home aggregates the computer power of hundreds of thousands of volunteers from 192 countries to mine large data sets. It has now found a 40.8-hertz isolated pulsar in radio survey data from the Arecibo Observatory taken in February 2007. Additional timing observations indicate that this pulsar is likely a disrupted recycled pulsar. PSR J2007+2722's pulse profile is remarkably wide with emission over almost the entire spin period; the pulsar likely has closely aligned magnetic and spin axes. The massive computing power provided by volunteers should enable many more such discoveries.

  17. SETI@home, BOINC, and Volunteer Distributed Computing

    NASA Astrophysics Data System (ADS)

    Korpela, Eric J.

    2012-05-01

    Volunteer computing, also known as public-resource computing, is a form of distributed computing that relies on members of the public donating the processing power, Internet connection, and storage capabilities of their home computers. Projects that utilize this mode of distributed computation can potentially access millions of Internet-attached central processing units (CPUs) that provide PFLOPS (thousands of trillions of floating-point operations per second) of processing power. In addition, these projects can access the talents of the volunteers themselves. Projects span a wide variety of domains including astronomy, biochemistry, climatology, physics, and mathematics. This review provides an introduction to volunteer computing and some of the difficulties involved in its implementation. I describe the dominant infrastructure for volunteer computing in some depth and provide descriptions of a small number of projects as an illustration of the variety of projects that can be undertaken.

  18. Parallel Analysis and Visualization on Cray Compute Node Linux

    SciTech Connect

    Pugmire, Dave; Ahern, Sean

    2008-01-01

    Capability computer systems are deployed to give researchers the computational power required to investigate and solve key challenges facing the scientific community. As the power of these computer systems increases, the computational problem domain typically increases in size, complexity and scope. These increases strain the ability of commodity analysis and visualization clusters to effectively perform post-processing tasks and provide critical insight and understanding to the computed results. An alternative to purchasing increasingly larger, separate analysis and visualization commodity clusters is to use the computational system itself to perform post-processing tasks. In this paper, the recent successful port of VisIt, a parallel, open source analysis and visualization tool, to Compute Node Linux running on the Cray is detailed. Additionally, the unprecedented ability of this resource for analysis and visualization is discussed and a report on obtained results is presented.

  19. Demographic inferences using short-read genomic data in an approximate Bayesian computation framework: in silico evaluation of power, biases and proof of concept in Atlantic walrus.

    PubMed

    Shafer, Aaron B A; Gattepaille, Lucie M; Stewart, Robert E A; Wolf, Jochen B W

    2015-01-01

    Approximate Bayesian computation (ABC) is a powerful tool for model-based inference of demographic histories from large genetic data sets. For most organisms, its implementation has been hampered by the lack of sufficient genetic data. Genotyping-by-sequencing (GBS) provides cheap genome-scale data to fill this gap, but its potential has not fully been exploited. Here, we explored power, precision and biases of a coalescent-based ABC approach where GBS data were modelled with either a population mutation parameter (θ) or a fixed site (FS) approach, allowing single or several segregating sites per locus. With simulated data ranging from 500 to 50 000 loci, a variety of demographic models could be reliably inferred across a range of timescales and migration scenarios. Posterior estimates were informative with 1000 loci for migration and split time in simple population divergence models. In more complex models, posterior distributions were wide and almost reverted to the uninformative prior even with 50 000 loci. ABC parameter estimates, however, were generally more accurate than an alternative composite-likelihood method. Bottleneck scenarios proved particularly difficult, and only recent bottlenecks without recovery could be reliably detected and dated. Notably, minor-allele-frequency filters - usual practice for GBS data - negatively affected nearly all estimates. With this in mind, we used a combination of FS and θ approaches on empirical GBS data generated from the Atlantic walrus (Odobenus rosmarus rosmarus), collectively providing support for a population split before the last glacial maximum followed by asymmetrical migration and a high Arctic bottleneck. Overall, this study evaluates the potential and limitations of GBS data in an ABC-coalescence framework and proposes a best-practice approach. PMID:25482153
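
    The coalescent simulations behind such a study are computationally heavy, but the ABC core is plain rejection sampling; a toy sketch with an invented one-parameter model and summary statistic:

      import numpy as np

      rng = np.random.default_rng(0)

      def simulate_summary(theta, n=1000):
          """Stand-in for a coalescent/GBS simulator: one summary statistic."""
          return rng.normal(loc=theta, scale=1.0, size=n).mean()

      observed = 2.3                     # summary statistic of the real data
      tolerance = 0.05

      accepted = []
      for _ in range(20000):
          theta = rng.uniform(0.0, 5.0)  # draw a parameter from the prior
          if abs(simulate_summary(theta) - observed) < tolerance:
              accepted.append(theta)     # keep draws whose simulations match

      posterior = np.array(accepted)     # approximate posterior sample
      print(posterior.mean(), posterior.std())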

  1. Computation Directorate 2008 Annual Report

    SciTech Connect

    Crawford, D L

    2009-03-25

    Whether a computer is simulating the aging and performance of a nuclear weapon, the folding of a protein, or the probability of rainfall over a particular mountain range, the necessary calculations can be enormous. Our computers help researchers answer these and other complex problems, and each new generation of system hardware and software widens the realm of possibilities. Building on Livermore's historical excellence and leadership in high-performance computing, Computation added more than 331 trillion floating-point operations per second (teraFLOPS) of power to LLNL's computer room floors in 2008. In addition, Livermore's next big supercomputer, Sequoia, advanced ever closer to its 2011-2012 delivery date, as architecture plans and the procurement contract were finalized. Hyperion, an advanced technology cluster test bed that teams Livermore with 10 industry leaders, made a big splash when it was announced during Michael Dell's keynote speech at the 2008 Supercomputing Conference. The Wall Street Journal touted Hyperion as a 'bright spot amid turmoil' in the computer industry. Computation continues to measure and improve the costs of operating LLNL's high-performance computing systems by moving hardware support in-house, by measuring causes of outages to apply resources asymmetrically, and by automating most of the account and access authorization and management processes. These improvements enable more dollars to go toward fielding the best supercomputers for science, while operating them at less cost and greater responsiveness to the customers.

  2. The Ames Power Monitoring System

    NASA Technical Reports Server (NTRS)

    Osetinsky, Leonid; Wang, David

    2003-01-01

    The Ames Power Monitoring System (APMS) is a centralized system of power meters, computer hardware, and special-purpose software that collects and stores electrical power data from various facilities at Ames Research Center (ARC). This system is needed because of the large and varying nature of the overall ARC power demand, which has been observed to range from 20 to 200 MW. Large portions of peak demand can be attributed to only three wind tunnels (60, 180, and 100 MW, respectively). The APMS helps ARC avoid or minimize costly demand charges by enabling wind-tunnel operators, test engineers, and the power manager to monitor total demand for the center in real time. These persons receive the information they need to manage and schedule energy-intensive research in advance and to adjust loads in real time to ensure that the overall maximum allowable demand is not exceeded. The APMS includes a server computer running the Windows NT operating system and can, in principle, include an unlimited number of power meters and client computers. As configured at the time of reporting the information for this article, the APMS includes more than 40 power meters monitoring all the major research facilities, plus 15 Windows-based client personal computers that display real-time and historical data to users via graphical user interfaces (GUIs). The power meters and client computers communicate with the server using Transmission Control Protocol/Internet Protocol (TCP/IP) on Ethernet networks, variously through dedicated fiber-optic cables or through the pre-existing ARC local-area network (ARCLAN). The APMS has enabled ARC to achieve significant savings ($1.2 million in 2001) in the cost of power and electric energy by helping personnel to maintain total demand below monthly allowable levels, to manage the overall power factor to avoid low-power-factor penalties, and to use historical system data to identify opportunities for additional energy savings. The APMS also

  3. The computational-and-experimental investigation into the head-flow characteristic of the two-stage ejector for the emergency core cooling system of the NPP with a water-moderated water-cooled power reactor

    NASA Astrophysics Data System (ADS)

    Parfenov, Yu. V.

    2013-09-01

    This paper presents the results of a computational-and-experimental investigation into the two-stage ejector for the emergency core cooling system of the NPP with a water-moderated, water-cooled power reactor. The results of experimental investigations performed on the ejector model at the JSC "EREC" and the results of calculations performed using the REMIX CFD code are presented.

  4. Design of microstrip components by computer

    NASA Technical Reports Server (NTRS)

    Cisco, T. C.

    1972-01-01

    A number of computer programs are presented for use in the synthesis of microwave components in microstrip geometries. The programs compute the electrical and dimensional parameters required to synthesize couplers, filters, circulators, transformers, power splitters, diode switches, multipliers, diode attenuators and phase shifters. Additional programs are included to analyze and optimize cascaded transmission lines and lumped element networks, to analyze and synthesize Chebyshev and Butterworth filter prototypes, and to compute mixer intermodulation products. The programs are written in FORTRAN and the emphasis of the study is placed on the use of these programs and not on the theoretical aspects of the structures.
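
    The original programs are FORTRAN; as an illustration of the kind of closed-form relation such synthesis programs encode, here is the standard Hammerstad approximation for microstrip characteristic impedance (a sketch, not the NASA code):

      import math

      def microstrip_z0(w_h, er):
          """Characteristic impedance (ohms) for trace-width/substrate-height
          ratio w_h and relative permittivity er (Hammerstad approximation)."""
          if w_h <= 1.0:
              eeff = (er + 1) / 2 + (er - 1) / 2 * (
                  (1 + 12 / w_h) ** -0.5 + 0.04 * (1 - w_h) ** 2)
              return 60.0 / math.sqrt(eeff) * math.log(8.0 / w_h + w_h / 4.0)
          eeff = (er + 1) / 2 + (er - 1) / 2 * (1 + 12 / w_h) ** -0.5
          return (120 * math.pi / math.sqrt(eeff)
                  / (w_h + 1.393 + 0.667 * math.log(w_h + 1.444)))

      print(microstrip_z0(2.0, 4.4))   # ~49 ohms on an FR-4-like substrate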

  5. Carbon nanotube computer.

    PubMed

    Shulaker, Max M; Hills, Gage; Patil, Nishant; Wei, Hai; Chen, Hong-Yu; Wong, H-S Philip; Mitra, Subhasish

    2013-09-26

    The miniaturization of electronic devices has been the principal driving force behind the semiconductor industry, and has brought about major improvements in computational power and energy efficiency. Although advances with silicon-based electronics continue to be made, alternative technologies are being explored. Digital circuits based on transistors fabricated from carbon nanotubes (CNTs) have the potential to outperform silicon by improving the energy-delay product, a metric of energy efficiency, by more than an order of magnitude. Hence, CNTs are an exciting complement to existing semiconductor technologies. Owing to substantial fundamental imperfections inherent in CNTs, however, only very basic circuit blocks have been demonstrated. Here we show how these imperfections can be overcome, and demonstrate the first computer built entirely using CNT-based transistors. The CNT computer runs an operating system that is capable of multitasking: as a demonstration, we perform counting and integer-sorting simultaneously. In addition, we implement 20 different instructions from the commercial MIPS instruction set to demonstrate the generality of our CNT computer. This experimental demonstration is the most complex carbon-based electronic system yet realized. It is a considerable advance because CNTs are prominent among a variety of emerging technologies that are being considered for the next generation of highly energy-efficient electronic systems.

  6. Computer Vision Systems

    NASA Astrophysics Data System (ADS)

    Gunasekaran, Sundaram

    Food quality is of paramount consideration for all consumers, and its importance is perhaps only second to food safety. By some definition, food safety is also incorporated into the broad categorization of food quality. Hence, the need for careful and accurate evaluation of food quality is at the forefront of research and development both in academia and industry. Among the many available methods for food quality evaluation, computer vision has proven to be the most powerful, especially for nondestructively extracting and quantifying many features that have direct relevance to food quality assessment and control. Furthermore, computer vision systems serve to rapidly evaluate the most readily observable food quality attributes - the external characteristics such as color, shape, size, surface texture, etc. In addition, it is now possible, using advanced computer vision technologies, to “see” inside a food product and/or package to examine important quality attributes ordinarily unavailable to human evaluators. With rapid advances in electronic hardware and other associated imaging technologies, the cost-effectiveness and speed of computer vision systems have greatly improved and many practical systems are already in place in the food industry.

  7. Chromatin Computation

    PubMed Central

    Bryant, Barbara

    2012-01-01

    In living cells, DNA is packaged along with protein and RNA into chromatin. Chemical modifications to nucleotides and histone proteins are added, removed and recognized by multi-functional molecular complexes. Here I define a new computational model, in which chromatin modifications are information units that can be written onto a one-dimensional string of nucleosomes, analogous to the symbols written onto cells of a Turing machine tape, and chromatin-modifying complexes are modeled as read-write rules that operate on a finite set of adjacent nucleosomes. I illustrate the use of this “chromatin computer” to solve an instance of the Hamiltonian path problem. I prove that chromatin computers are computationally universal – and therefore more powerful than the logic circuits often used to model transcription factor control of gene expression. Features of biological chromatin provide a rich instruction set for efficient computation of nontrivial algorithms in biological time scales. Modeling chromatin as a computer shifts how we think about chromatin function, suggests new approaches to medical intervention, and lays the groundwork for the engineering of a new class of biological computing machines. PMID:22567109
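
    A toy rendering of the model - nucleosome marks as tape symbols, chromatin-modifying complexes as read-write rules over adjacent positions - might look like this (symbols and rules invented for illustration, not taken from the paper):

      # Toy chromatin computer: a string of nucleosome "marks" is rewritten
      # by rules that read a window of adjacent positions.
      RULES = {
          ("me", "ac"): ("me", "me"),   # one complex spreads methylation
          ("ac", "me"): ("ac", "ac"),   # a competing complex spreads acetylation
      }

      def step(tape):
          """Apply the first matching read-write rule once, left to right."""
          for i in range(len(tape) - 1):
              window = (tape[i], tape[i + 1])
              if window in RULES:
                  tape[i], tape[i + 1] = RULES[window]
                  return tape
          return tape

      tape = ["me", "ac", "ac", "ac"]
      for _ in range(3):
          tape = step(tape)
      print(tape)   # methylation has spread along the nucleosome string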

  8. 18 CFR 1314.10 - Additional provisions.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... 18 Conservation of Power and Water Resources 2 2010-04-01 2010-04-01 false Additional provisions. 1314.10 Section 1314.10 Conservation of Power and Water Resources TENNESSEE VALLEY AUTHORITY BOOK-ENTRY... attachment for TVA Power Securities in Book-entry System. The interest of a debtor in a Security...

  9. 18 CFR 1314.10 - Additional provisions.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... 18 Conservation of Power and Water Resources 2 2011-04-01 2011-04-01 false Additional provisions. 1314.10 Section 1314.10 Conservation of Power and Water Resources TENNESSEE VALLEY AUTHORITY BOOK-ENTRY... attachment for TVA Power Securities in Book-entry System. The interest of a debtor in a Security...

  10. 18 CFR 1314.10 - Additional provisions.

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    ... 18 Conservation of Power and Water Resources 2 2014-04-01 2014-04-01 false Additional provisions. 1314.10 Section 1314.10 Conservation of Power and Water Resources TENNESSEE VALLEY AUTHORITY BOOK-ENTRY... attachment for TVA Power Securities in Book-entry System. The interest of a debtor in a Security...

  11. 18 CFR 1314.10 - Additional provisions.

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ... 18 Conservation of Power and Water Resources 2 2012-04-01 2012-04-01 false Additional provisions. 1314.10 Section 1314.10 Conservation of Power and Water Resources TENNESSEE VALLEY AUTHORITY BOOK-ENTRY... attachment for TVA Power Securities in Book-entry System. The interest of a debtor in a Security...

  12. 18 CFR 1314.10 - Additional provisions.

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    ... 18 Conservation of Power and Water Resources 2 2013-04-01 2012-04-01 true Additional provisions. 1314.10 Section 1314.10 Conservation of Power and Water Resources TENNESSEE VALLEY AUTHORITY BOOK-ENTRY... attachment for TVA Power Securities in Book-entry System. The interest of a debtor in a Security...

  13. Distributed computing at the SSCL

    SciTech Connect

    Cormell, L.; White, R.

    1993-05-01

    The rapid increase in the availability of high performance, cost-effective RISC/UNIX workstations has been both a blessing and a curse. The blessing of having extremely powerful computing engines available on the desk top is well-known to many users. The user has tremendous freedom, flexibility, and control of his environment. That freedom can, however, become the curse of distributed computing. The user must become a system manager to some extent: he must worry about backups, maintenance, upgrades, etc. Traditionally these activities have been the responsibility of a central computing group. The central computing group, however, may find that it can no longer provide all of the traditional services. With the plethora of workstations now found on so many desktops throughout the entire campus or lab, the central computing group may be swamped by support requests. This talk will address several of these computer support and management issues by discussing the approach taken at the Superconducting Super Collider Laboratory. In addition, a brief review of the future directions of commercial products for distributed computing and management will be given.

  14. Computational Psychiatry

    PubMed Central

    Wang, Xiao-Jing; Krystal, John H.

    2014-01-01

    Psychiatric disorders such as autism and schizophrenia arise from abnormalities in brain systems that underlie cognitive, emotional and social functions. The brain is enormously complex and its abundant feedback loops on multiple scales preclude intuitive explication of circuit functions. In close interplay with experiments, theory and computational modeling are essential for understanding how, precisely, neural circuits generate flexible behaviors and how their impairments give rise to psychiatric symptoms. This Perspective highlights recent progress in applying computational neuroscience to the study of mental disorders. We outline basic approaches, including identification of core deficits that cut across disease categories, biologically realistic modeling bridging cellular and synaptic mechanisms with behavior, and model-aided diagnosis. The need for new research strategies in psychiatry is urgent. Computational psychiatry potentially provides powerful tools for elucidating pathophysiology that may inform both diagnosis and treatment. To achieve this promise will require investment in cross-disciplinary training and research in this nascent field. PMID:25442941

  15. An iron–oxygen intermediate formed during the catalytic cycle of cysteine dioxygenase

    PubMed Central

    Tchesnokov, E. P.; Faponle, A. S.; Davies, C. G.; Quesne, M. G.; Turner, R.; Fellner, M.; Souness, R. J.; Wilbanks, S. M.

    2016-01-01

    Cysteine dioxygenase is a key enzyme in the breakdown of cysteine, but its mechanism remains controversial. A combination of spectroscopic and computational studies provides the first evidence of a short-lived intermediate in the catalytic cycle. The intermediate decays within 20 ms and has absorption maxima at 500 and 640 nm. PMID:27297454

  16. Heterotic computing: exploiting hybrid computational devices.

    PubMed

    Kendon, Viv; Sebald, Angelika; Stepney, Susan

    2015-07-28

    Current computational theory deals almost exclusively with single models: classical, neural, analogue, quantum, etc. In practice, researchers use ad hoc combinations, realizing only recently that they can be fundamentally more powerful than the individual parts. A Theo Murphy meeting brought together theorists and practitioners of various types of computing, to engage in combining the individual strengths to produce powerful new heterotic devices. 'Heterotic computing' is defined as a combination of two or more computational systems such that they provide an advantage over either substrate used separately. This post-meeting collection of articles provides a wide-ranging survey of the state of the art in diverse computational paradigms, together with reflections on their future combination into powerful and practical applications.

  17. A low power Multi-Channel Analyzer

    SciTech Connect

    Anderson, G.A.; Brackenbush, L.W.

    1993-06-01

    The instrumentation used in nuclear spectroscopy is generally large, is not portable, and requires a lot of power. Key components of these counting systems are the computer and the Multi-Channel Analyzer (MCA). To assist in performing measurements requiring portable systems, a small, very low power MCA has been developed at Pacific Northwest Laboratory (PNL). This MCA is interfaced with a Hewlett Packard palm top computer for portable applications. The MCA can also be connected to an IBM/PC for data storage and analysis. In addition, a real-time display mode allows the user to view the spectra as they are collected.
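
    Functionally, an MCA bins detector pulse heights into channels; a minimal software analogue (illustrative parameters only, not the PNL design):

      import numpy as np

      def mca_histogram(pulse_heights, n_channels=1024, full_scale=5.0):
          """Bin pulse amplitudes (volts) into MCA channels."""
          channels = np.clip(
              (pulse_heights / full_scale * n_channels).astype(int),
              0, n_channels - 1)
          return np.bincount(channels, minlength=n_channels)

      # Fake detector pulses standing in for digitized amplitudes.
      pulses = np.random.gamma(shape=3.0, scale=0.5, size=10000)
      spectrum = mca_histogram(pulses)
      print(spectrum.argmax())   # channel of the spectral peak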

  18. Comparison of Matching Pursuit Algorithm with Other Signal Processing Techniques for Computation of the Time-Frequency Power Spectrum of Brain Signals.

    PubMed

    Chandran K S, Subhash; Mishra, Ashutosh; Shirhatti, Vinay; Ray, Supratim

    2016-03-23

    Signals recorded from the brain often show rhythmic patterns at different frequencies, which are tightly coupled to the external stimuli as well as the internal state of the subject. In addition, these signals have very transient structures related to spiking or sudden onset of a stimulus, which have durations not exceeding tens of milliseconds. Further, brain signals are highly nonstationary because both behavioral state and external stimuli can change on a short time scale. It is therefore essential to study brain signals using techniques that can represent both rhythmic and transient components of the signal, something not always possible using standard signal processing techniques such as the short-time Fourier transform, multitaper method, wavelet transform, or Hilbert transform. In this review, we describe a multiscale decomposition technique based on an over-complete dictionary called matching pursuit (MP), and show that it is able to capture both a sharp stimulus-onset transient and a sustained gamma rhythm in local field potential recorded from the primary visual cortex. We compare the performance of MP with other techniques and discuss its advantages and limitations. Data and codes for generating all time-frequency power spectra are provided. PMID:27013668
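
    The core matching pursuit iteration is compact; a minimal sketch with a generic unit-norm dictionary (the review uses Gabor atoms; this toy version is purely illustrative):

      import numpy as np

      def matching_pursuit(signal, dictionary, n_iter=10):
          # Greedy MP: repeatedly subtract the best-matching unit-norm atom.
          residual = signal.astype(float).copy()
          atoms = []
          for _ in range(n_iter):
              scores = dictionary @ residual         # correlation with atoms
              k = int(np.argmax(np.abs(scores)))     # index of the best atom
              atoms.append((k, scores[k]))
              residual -= scores[k] * dictionary[k]  # peel the atom off
          return atoms, residual

      rng = np.random.default_rng(1)
      D = rng.standard_normal((256, 128))
      D /= np.linalg.norm(D, axis=1, keepdims=True)  # unit-norm atom per row
      x = 3.0 * D[7] - 2.0 * D[42]                   # a two-atom "signal"
      atoms, res = matching_pursuit(x, D)
      print(atoms[:2], float(np.linalg.norm(res)))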

  1. Power system

    DOEpatents

    Hickam, Christopher Dale

    2008-03-18

    A power system includes a prime mover, a transmission, and a fluid coupler having a selectively engageable lockup clutch. The fluid coupler may be drivingly connected between the prime mover and the transmission. Additionally, the power system may include a motor/generator drivingly connected to at least one of the prime mover and the transmission. The power system may also include power-system controls configured to execute a control method. The control method may include selecting one of a plurality of modes of operation of the power system. Additionally, the control method may include controlling the operating state of the lockup clutch dependent upon the mode of operation selected. The control method may also include controlling the operating state of the motor/generator dependent upon the mode of operation selected.

  2. Electric power exchanges with sensitivity matrices: an experimental analysis

    SciTech Connect

    Drozdal, Martin

    2001-01-01

    We describe a fast and incremental method for power flow computation: fast in the sense that it can be used for real-time power flow computation, and incremental in the sense that it computes the additional increase or decrease in line congestion caused by a particular contract. This is, to the best of our knowledge, the only suitable method for real-time power flow computation that at the same time offers a powerful way of dealing with congestion contingency. Many methods for this purpose have been designed or proposed, but they either lack speed or incrementality, or have never been coded and tested. The author is in the process of obtaining a patent on the methods, algorithms, and procedures described in this paper.
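
    The described incremental computation reduces to a matrix-vector product with precomputed sensitivities; a hedged sketch using a small invented sensitivity (PTDF-style) matrix:

      import numpy as np

      # Precomputed sensitivity matrix: row = line, column = bus;
      # entry = MW of extra line flow per MW injected at that bus.
      sensitivity = np.array([
          [0.60, -0.20, 0.10],
          [0.25,  0.55, -0.05],
      ])

      base_flow = np.array([80.0, 40.0])    # MW currently on each line
      limits = np.array([100.0, 60.0])      # thermal limits

      # A new bilateral contract: inject 50 MW at bus 0, withdraw at bus 1.
      delta_injection = np.array([50.0, -50.0, 0.0])
      delta_flow = sensitivity @ delta_injection   # incremental line flows

      print(base_flow + delta_flow)                # flows with the contract
      print(base_flow + delta_flow > limits)       # congestion check per line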

  3. Computer viruses

    NASA Technical Reports Server (NTRS)

    Denning, Peter J.

    1988-01-01

    The worm, Trojan horse, bacterium, and virus are destructive programs that attack information stored in a computer's memory. Virus programs, which propagate by incorporating copies of themselves into other programs, are a growing menace in the late-1980s world of unprotected, networked workstations and personal computers. Limited immunity is offered by memory protection hardware, digitally authenticated object programs, and antibody programs that kill specific viruses. Additional immunity can be gained from the practice of digital hygiene, primarily the refusal to use software from untrusted sources. Full immunity requires attention in a social dimension, the accountability of programmers.

  4. Computer systems

    NASA Technical Reports Server (NTRS)

    Olsen, Lola

    1992-01-01

    In addition to the discussions, Ocean Climate Data Workshop hosts gave participants an opportunity to hear about, see, and test for themselves some of the latest computer tools now available for those studying climate change and the oceans. Six speakers described computer systems and their functions. The introductory talks were followed by demonstrations to small groups of participants and some opportunities for participants to get hands-on experience. After this familiarization period, attendees were invited to return during the course of the Workshop and have one-on-one discussions and further hands-on experience with these systems. Brief summaries or abstracts of introductory presentations are addressed.

  5. Argonne's Laboratory computing center - 2007 annual report.

    SciTech Connect

    Bair, R.; Pieper, G. W.

    2008-05-28

    Argonne National Laboratory founded the Laboratory Computing Resource Center (LCRC) in the spring of 2002 to help meet pressing program needs for computational modeling, simulation, and analysis. The guiding mission is to provide critical computing resources that accelerate the development of high-performance computing expertise, applications, and computations to meet the Laboratory's challenging science and engineering missions. In September 2002 the LCRC deployed a 350-node computing cluster from Linux NetworX to address Laboratory needs for mid-range supercomputing. This cluster, named 'Jazz', achieved over a teraflop of computing power (10¹² floating-point calculations per second) on standard tests, making it the Laboratory's first terascale computing system and one of the 50 fastest computers in the world at the time. Jazz was made available to early users in November 2002 while the system was undergoing development and configuration. In April 2003, Jazz was officially made available for production operation. Since then, the Jazz user community has grown steadily. By the end of fiscal year 2007, there were over 60 active projects representing a wide cross-section of Laboratory expertise, including work in biosciences, chemistry, climate, computer science, engineering applications, environmental science, geoscience, information science, materials science, mathematics, nanoscience, nuclear engineering, and physics. Most important, many projects have achieved results that would have been unobtainable without such a computing resource. The LCRC continues to foster growth in the computational science and engineering capability and quality at the Laboratory. Specific goals include expansion of the use of Jazz to new disciplines and Laboratory initiatives, teaming with Laboratory infrastructure providers to offer more scientific data management capabilities, expanding Argonne staff use of national computing facilities, and improving the scientific reach and

  6. Computational capabilities of physical systems.

    PubMed

    Wolpert, David H

    2002-01-01

    In this paper strong limits on the accuracy of real-world physical computation are established. To derive these results a non-Turing machine formulation of physical computation is used. First it is proven that there cannot be a physical computer C to which one can pose any and all computational tasks concerning the physical universe. Next it is proven that no physical computer C can correctly carry out every computational task in the subset of such tasks that could potentially be posed to C. This means in particular that there cannot be a physical computer that can be assured of correctly "processing information faster than the universe does." Because this result holds independent of how or if the computer is physically coupled to the rest of the universe, it also means that there cannot exist an infallible, general-purpose observation apparatus, nor an infallible, general-purpose control apparatus. These results do not rely on systems that are infinite, and/or nonclassical, and/or obey chaotic dynamics. They also hold even if one could use an infinitely fast, infinitely dense computer, with computational powers greater than that of a Turing machine (TM). After deriving these results analogs of the TM Halting theorem are derived for the novel kind of computer considered in this paper, as are results concerning the (im)possibility of certain kinds of error-correcting codes. In addition, an analog of algorithmic information complexity, "prediction complexity," is elaborated. A task-independent bound is derived on how much the prediction complexity of a computational task can differ for two different reference universal physical computers used to solve that task. This is analogous to the "encoding" bound governing how much the algorithmic information complexity of a TM calculation can differ for two reference universal TMs. It is proven that either the Hamiltonian of our universe proscribes a certain type of computation, or prediction complexity is unique (unlike
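
    For reference, the classical "encoding" bound invoked in the analogy is the invariance theorem of algorithmic information theory; stated in LaTeX, it reads (this is the standard TM-side result, not the paper's new bound on prediction complexity):

      \[
        \forall x:\quad \lvert K_U(x) - K_V(x) \rvert \;\le\; c_{U,V},
      \]

    where K_U and K_V are the algorithmic (Kolmogorov) complexities relative to two reference universal Turing machines U and V, and the constant c_{U,V} depends only on the machine pair, not on the string x. The paper's task-independent bound plays the analogous role for prediction complexity across two reference universal physical computers.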

  7. Introduction to Quantum Computation

    NASA Astrophysics Data System (ADS)

    Ekert, A.

    A computation is a physical process. It may be performed by a piece of electronics or on an abacus, or in your brain, but it is a process that takes place in nature and as such it is subject to the laws of physics. Quantum computers are machines that rely on characteristically quantum phenomena, such as quantum interference and quantum entanglement in order to perform computation. In this series of lectures I want to elaborate on the computational power of such machines.
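
    As a minimal illustration of the two phenomena named above, the following state-vector sketch in Python/NumPy (an illustration, not material from the lectures) shows interference, where two Hadamard gates cancel amplitudes and return the qubit exactly to |0>, and entanglement, where a Hadamard followed by a CNOT produces a Bell state:

      import numpy as np

      # Hadamard gate and CNOT gate (control = qubit 0) in matrix form.
      H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
      CNOT = np.array([[1, 0, 0, 0],
                       [0, 1, 0, 0],
                       [0, 0, 0, 1],
                       [0, 0, 1, 0]])
      ket0 = np.array([1.0, 0.0])

      # Interference: the two paths through H recombine, since H @ H = identity.
      print(H @ H @ ket0)                      # [1, 0] up to rounding

      # Entanglement: H on qubit 0, then CNOT, gives (|00> + |11>)/sqrt(2).
      bell = CNOT @ np.kron(H @ ket0, ket0)
      print(bell)                              # [0.707, 0, 0, 0.707]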

  8. Measured energy savings of an energy-efficient office computer system

    SciTech Connect

    Lapujade, P.G.

    1995-12-01

    Recent surveys have shown that the use of personal computer systems in commercial office buildings is expanding rapidly. In warmer climates, office equipment energy use also has important implications for building cooling loads as well as those directly associated with computing tasks. The U.S. Environmental Protection Agency (EPA) has developed the Energy Star (ES) rating system, intended to endorse more efficient machines. To research the comparative performance of conventional and low-energy computer systems, a test was conducted with the substitution of an ES computer system for the main clerical computer used at a research institution. Separate data on power demand (watts), power factor for the computer/monitor, and power demand for the dedicated laser printer were recorded every 15 minutes to a multichannel datalogger. The current system, a 486DX, 66 MHz computer (8 MB of RAM and a 340 MB hard disk) with a laser printer, was monitored for an 86-day period. An ES computer and an ES printer with virtually identical capabilities were then substituted and the changes to power demand and power factor were recorded for an additional 86 days. Computer and printer usage patterns remained essentially constant over the entire monitoring period. The computer user was also interviewed to learn of any perceived shortcomings of the more energy-efficient system. Based on the monitoring, the ES computer system is calculated to produce energy savings of 25.8% (121 kWh) over one year.
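
    The abstract's two reported figures determine the underlying totals; the Python sketch below reconstructs them (the inferred baseline is arithmetic from the abstract's own numbers, while the flat 40 W demand profile is purely hypothetical) and shows how energy follows from 15-minute demand samples:

      import numpy as np

      # Reconstruct the annual totals implied by the reported 25.8% / 121 kWh.
      saved_kwh = 121.0
      saved_frac = 0.258
      baseline_kwh = saved_kwh / saved_frac        # ~469 kWh/yr, conventional
      es_kwh = baseline_kwh - saved_kwh            # ~348 kWh/yr, Energy Star
      print(f"baseline ~ {baseline_kwh:.0f} kWh/yr, ES ~ {es_kwh:.0f} kWh/yr")

      # Energy from 15-minute average-demand samples, as the datalogger records:
      # kWh = sum(watts per interval) * 0.25 h / 1000.
      demand_w = np.full(4 * 24, 40.0)             # hypothetical day at 40 W
      print("one day:", demand_w.sum() * 0.25 / 1000, "kWh")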

  9. Low-Power Public Key Cryptography

    SciTech Connect

    BEAVER,CHERYL L.; DRAELOS,TIMOTHY J.; HAMILTON,VICTORIA A.; SCHROEPPEL,RICHARD C.; GONZALES,RITA A.; MILLER,RUSSELL D.; THOMAS,EDWARD V.

    2000-11-01

    This report presents research on public key, digital signature algorithms for cryptographic authentication in low-powered, low-computation environments. We assessed algorithms for suitability based on their signature size and their computation and storage requirements. We evaluated a variety of general purpose and special purpose computing platforms to address issues such as memory, voltage requirements, and special functionality for low-powered applications. In addition, we examined custom design platforms. We found that a custom design offers the most flexibility and can be optimized for specific algorithms. Furthermore, the entire platform can exist on a single Application Specific Integrated Circuit (ASIC) or can be integrated with commercially available components to produce the desired computing platform.
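
    The report predates today's off-the-shelf libraries, so the sketch below only illustrates its signature-size criterion, using the modern Python 'cryptography' package (not a tool from the report): at roughly comparable security levels, an elliptic-curve signature is a fraction of the size of an RSA signature, which matters for low-powered, low-computation environments:

      from cryptography.hazmat.primitives import hashes
      from cryptography.hazmat.primitives.asymmetric import ec, padding, rsa

      message = b"authenticate me"

      # ECDSA over P-256: small keys and signatures, modest computation.
      ec_key = ec.generate_private_key(ec.SECP256R1())
      ec_sig = ec_key.sign(message, ec.ECDSA(hashes.SHA256()))

      # RSA-2048: the signature is as large as the modulus.
      rsa_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
      rsa_sig = rsa_key.sign(message, padding.PKCS1v15(), hashes.SHA256())

      print(f"ECDSA P-256 signature: {len(ec_sig)} bytes")   # ~70-72 (DER)
      print(f"RSA-2048 signature:    {len(rsa_sig)} bytes")  # 256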

  10. Fast algorithm for computing a primitive (2^(p+1))p-th root of unity in GF(q^2)

    NASA Technical Reports Server (NTRS)

    Reed, I. S.; Truong, T. K.; Miller, R. L.

    1978-01-01

    A quick method is described for finding the primitive (2^(p+1))p-th root of unity in the Galois field GF(q^2), where q = 2^p - 1 is a Mersenne prime. Determination of this root is necessary to implement complex integer transforms of length (2^k)p over the Galois field, with k varying between 3 and p + 1.
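
    The paper's fast algorithm is not reproduced in the abstract; the brute-force Python sketch below merely exhibits the object being computed, for the small case p = 5, q = 31, n = 2^(p+1) * p = 320. GF(q^2) is represented as complex integers Z_q[i], which works because q = 3 (mod 4) makes -1 a quadratic non-residue:

      p = 5
      q = 2**p - 1                 # 31, a Mersenne prime
      n = 2**(p + 1) * p           # 320, the desired multiplicative order

      def cmul(x, y):
          """Multiply complex integers x = (a, b), y = (c, d) modulo q."""
          a, b = x
          c, d = y
          return ((a * c - b * d) % q, (a * d + b * c) % q)

      def cpow(x, e):
          """Square-and-multiply exponentiation in Z_q[i]."""
          result = (1, 0)
          while e:
              if e & 1:
                  result = cmul(result, x)
              x = cmul(x, x)
              e >>= 1
          return result

      def order(x):
          """Multiplicative order of x, by brute force (fine for q = 31)."""
          y, k = x, 1
          while y != (1, 0):
              y = cmul(y, x)
              k += 1
          return k

      def find_primitive_root():
          group_order = q * q - 1                  # 960 = 3 * 320
          for a in range(q):
              for b in range(q):
                  if (a, b) != (0, 0) and order((a, b)) == group_order:
                      # any generator of GF(q^2)*, powered down, works
                      return cpow((a, b), group_order // n)

      root = find_primitive_root()
      assert order(root) == n      # a primitive 320th root of unity
      print("primitive root of unity:", root)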

  11. Multichannel Phase and Power Detector

    NASA Technical Reports Server (NTRS)

    Li, Samuel; Lux, James; McMaster, Robert; Boas, Amy

    2006-01-01

    An electronic signal-processing system determines the phases of input signals arriving in multiple channels, relative to the phase of a reference signal with which the input signals are known to be coherent in both phase and frequency. The system also gives an estimate of the power levels of the input signals. A prototype of the system has four input channels that handle signals at a frequency of 9.5 MHz, but the basic principles of design and operation are extensible to other signal frequencies and greater numbers of channels. The prototype system consists mostly of three parts: an analog-to-digital-converter (ADC) board, which coherently digitizes the input signals in synchronism with the reference signal and performs some simple processing; a digital signal processor (DSP) in the form of a field-programmable gate array (FPGA) board, which performs most of the phase- and power-measurement computations on the digital samples generated by the ADC board; and a carrier board, which allows a personal computer to retrieve the phase and power data. The DSP contains four independent phase-only tracking loops, each of which tracks the phase of one of the preprocessed input signals relative to that of the reference signal (see figure). The phase values computed by these loops are averaged over intervals, the length of which is chosen to obtain output from the DSP at a desired rate. In addition, a simple sum of squares is computed for each channel as an estimate of the power of the signal in that channel. The relative phases and the power level estimates computed by the DSP could be used for diverse purposes in different settings. For example, if the input signals come from different elements of a phased-array antenna, the phases could be used as indications of the direction of arrival of a received signal and/or as feedback for electronic or mechanical beam steering. The power levels could be used as feedback for automatic gain control in preprocessing of incoming signals.
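
    The sketch below (Python/NumPy, with a hypothetical sample rate, amplitude, and noise level) mimics the two per-channel outputs just described, a phase estimate relative to a coherent reference and a sum-of-squares power estimate; the prototype's DSP uses phase-only tracking loops, whereas this is a one-shot batch estimate:

      import numpy as np

      fs = 40e6                    # sample rate (hypothetical)
      f0 = 9.5e6                   # signal frequency, per the abstract
      n = 4096
      t = np.arange(n) / fs

      ref = np.exp(-2j * np.pi * f0 * t)        # complex reference tone
      true_phase = 0.7                          # radians (hypothetical)
      sig = 0.5 * np.cos(2 * np.pi * f0 * t + true_phase)
      sig = sig + 0.01 * np.random.randn(n)     # ADC noise

      iq = sig * ref                            # mix down against the reference
      est_phase = np.angle(iq.sum())            # relative phase estimate
      est_power = np.mean(sig ** 2)             # sum-of-squares power estimate

      print(f"phase ~ {est_phase:.3f} rad, power ~ {est_power:.4f}")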

  12. Computing technology in the 1980's. [computers]

    NASA Technical Reports Server (NTRS)

    Stone, H. S.

    1978-01-01

    Advances in computing technology have been led by consistently improving semiconductor technology. The semiconductor industry has turned out ever faster, smaller, and less expensive devices since transistorized computers were first introduced 20 years ago. For the next decade, there appear to be new advances possible, with the rate of introduction of improved devices at least equal to the historic trends. The implication of these projections is that computers will enter new markets and will truly be pervasive in business, home, and factory as their cost diminishes and their computational power expands to new levels. The computer industry as we know it today will be greatly altered in the next decade, primarily because the raw computer system will give way to computer-based turn-key information and control systems.

  13. Additive synthesis with DIASS-M4C on Argonne National Laboratory's IBM POWERparallel System (SP)

    SciTech Connect

    Kaper, H.; Ralley, D.; Restrepo, J.; Tiepei, S.

    1995-12-31

    DIASS-M4C, a digital additive instrument, was implemented on Argonne National Laboratory's IBM POWERparallel System (SP). This paper discusses the need for a massively parallel supercomputer and shows how the code was parallelized. The resulting sounds and the degree of control the user can have justify the effort and the use of such a large computer.
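
    As a reminder of what an additive instrument computes, the toy Python/NumPy sketch below sums a handful of enveloped sinusoidal partials (the partial list and envelopes are invented for illustration). Each partial adds another full pass over the output samples, and DIASS-M4C exposes many control parameters per partial, which is why scaling the technique up motivates a massively parallel machine:

      import numpy as np

      fs = 44100                                   # sample rate
      t = np.linspace(0, 1.0, fs, endpoint=False)  # one second of audio
      f0 = 220.0                                   # fundamental (hypothetical)

      # (harmonic number, amplitude) pairs for a few partials
      partials = [(1, 1.0), (2, 0.5), (3, 0.33), (5, 0.1)]

      tone = np.zeros_like(t)
      for k, amp in partials:
          env = np.exp(-3.0 * k * t)               # higher partials decay faster
          tone += amp * env * np.sin(2 * np.pi * k * f0 * t)
      tone /= np.max(np.abs(tone))                 # normalize to [-1, 1]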

  14. Cloud Computing for radiologists.

    PubMed

    Kharat, Amit T; Safvi, Amjad; Thind, Ss; Singh, Amarjit

    2012-07-01

    Cloud computing is a concept wherein a computer grid is created using the Internet with the sole purpose of utilizing shared resources such as computer software and hardware on a pay-per-use model. Using Cloud computing, radiology users can efficiently manage multimodality imaging units by using the latest software and hardware without paying huge upfront costs. Cloud computing systems usually work on public, private, hybrid, or community models. Using the various components of a Cloud, such as applications, client, infrastructure, storage, services, and processing power, Cloud computing can help imaging units rapidly scale and descale operations and avoid huge spending on maintenance of costly applications and storage. Cloud computing allows flexibility in imaging. It sets radiology free from the confines of a hospital and creates a virtual mobile office. The downsides to Cloud computing involve security and privacy issues, which need to be addressed to ensure the success of Cloud computing in the future.

  15. Proposal for grid computing for nuclear applications

    SciTech Connect

    Idris, Faridah Mohamad; Ismail, Saaidi; Haris, Mohd Fauzi B.; Sulaiman, Mohamad Safuan B.; Aslan, Mohd Dzul Aiman Bin.; Samsudin, Nursuliza Bt.; Ibrahim, Maizura Bt.; Ahmad, Megat Harun Al Rashid B. Megat; Yazid, Hafizal B.; Jamro, Rafhayudi B.; Azman, Azraf B.; Rahman, Anwar B. Abdul; Ibrahim, Mohd Rizal B. Mamat; Muhamad, Shalina Bt. Sheik; Hassan, Hasni; Abdullah, Wan Ahmad Tajuddin Wan; Ibrahim, Zainol Abidin; Zolkapli, Zukhaimira; Anuar, Afiq Aizuddin; Norjoharuddeen, Nurfikri; and others

    2014-02-12

    The use of computer clusters for computational sciences, including computational physics, is vital as it provides the computing power to crunch big numbers at a faster rate. In compute-intensive applications that require high resolution, such as Monte Carlo simulation, the use of computer clusters in a grid form, which supplies computational power to any node within the grid that needs it, has become a necessity. In this paper, we describe how clusters running a specific application can use resources within the grid to speed up the computing process.

  16. Proposal for grid computing for nuclear applications

    NASA Astrophysics Data System (ADS)

    Idris, Faridah Mohamad; Abdullah, Wan Ahmad Tajuddin Wan; Ibrahim, Zainol Abidin; Zolkapli, Zukhaimira; Anuar, Afiq Aizuddin; Norjoharuddeen, Nurfikri; Ali, Mohd Adli bin Md; Mohamed, Abdul Aziz; Ismail, Roslan; Ahmad, Abdul Rahim; Ismail, Saaidi; Haris, Mohd Fauzi B.; Sulaiman, Mohamad Safuan B.; Aslan, Mohd Dzul Aiman Bin.; Samsudin, Nursuliza Bt.; Ibrahim, Maizura Bt.; Ahmad, Megat Harun Al Rashid B. Megat; Yazid, Hafizal B.; Jamro, Rafhayudi B.; Azman, Azraf B.; Rahman, Anwar B. Abdul; Ibrahim, Mohd Rizal B. Mamat @; Muhamad, Shalina Bt. Sheik; Hassan, Hasni; Sjaugi, Farhan

    2014-02-01

    The use of computer clusters for computational sciences, including computational physics, is vital as it provides the computing power to crunch big numbers at a faster rate. In compute-intensive applications that require high resolution, such as Monte Carlo simulation, the use of computer clusters in a grid form, which supplies computational power to any node within the grid that needs it, has become a necessity. In this paper, we describe how clusters running a specific application can use resources within the grid to speed up the computing process.
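
    As a toy stand-in for the grid idea in both records above, the Python sketch below splits a Monte Carlo estimate of pi across worker processes; real grid middleware schedules jobs across machines and sites, whereas multiprocessing only spreads work over local cores:

      import multiprocessing as mp
      import random

      def count_hits(n_samples: int) -> int:
          """Count random points falling inside the unit quarter-circle."""
          rng = random.Random()
          hits = 0
          for _ in range(n_samples):
              x, y = rng.random(), rng.random()
              if x * x + y * y <= 1.0:
                  hits += 1
          return hits

      if __name__ == "__main__":
          total = 10_000_000
          workers = mp.cpu_count()                 # one chunk per "grid node"
          chunk = total // workers
          with mp.Pool(workers) as pool:
              hits = sum(pool.map(count_hits, [chunk] * workers))
          print("pi ~", 4 * hits / (chunk * workers))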

  17. Impact of Classroom Computer Use on Computer Anxiety.

    ERIC Educational Resources Information Center

    Lambert, Matthew E.; And Others

    Increasing use of computer programs for undergraduate psychology education has raised concern over the impact of computer anxiety on educational performance. Additionally, some researchers have indicated that classroom computer use can exacerbate pre-existing computer anxiety. To evaluate the relationship between in-class computer use and computer…

  18. Computational performance of a smoothed particle hydrodynamics simulation for shared-memory parallel computing

    NASA Astrophysics Data System (ADS)

    Nishiura, Daisuke; Furuichi, Mikito; Sakaguchi, Hide

    2015-09-01

    The computational performance of a smoothed particle hydrodynamics (SPH) simulation is investigated for three types of current shared-memory parallel computer devices: many integrated core (MIC) processors, graphics processing units (GPUs), and multi-core CPUs. We are especially interested in efficient shared-memory allocation methods for each chipset, because the efficient data access patterns differ between compute unified device architecture (CUDA) programming for GPUs and OpenMP programming for MIC processors and multi-core CPUs. We first introduce several parallel implementation techniques for the SPH code, and then examine these on our target computer architectures to determine the most effective algorithms for each processor unit. In addition, we evaluate the effective computing performance and power efficiency of the SPH simulation on each architecture, as these are critical metrics for overall performance in a multi-device environment. In our benchmark test, the GPU is found to produce the best arithmetic performance as a standalone device unit, and gives the most efficient power consumption. The multi-core CPU obtains the most effective computing performance. The computational speed of the MIC processor on Xeon Phi approached that of two Xeon CPUs. This indicates that using MICs is an attractive choice for existing SPH codes on multi-core CPUs parallelized by OpenMP, as it gains computational acceleration without the need for significant changes to the source code.
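
    To make the parallelization target concrete, the Python/NumPy sketch below computes a toy SPH density summation. It uses an all-pairs O(N^2) loop and a Gaussian kernel for brevity, whereas production codes use neighbor search and compactly supported kernels; this per-particle pair loop is what the OpenMP and CUDA implementations in the study parallelize:

      import numpy as np

      def sph_density(pos, mass, h):
          """pos: (N, 3) particle positions; returns the (N,) density field."""
          diff = pos[:, None, :] - pos[None, :, :]      # (N, N, 3) pair offsets
          r2 = np.einsum("ijk,ijk->ij", diff, diff)     # squared pair distances
          w = np.exp(-r2 / h**2) / (np.pi**1.5 * h**3)  # 3-D Gaussian kernel
          return mass * w.sum(axis=1)

      pos = np.random.rand(500, 3)                      # particles in a unit box
      rho = sph_density(pos, mass=1.0 / 500, h=0.1)
      print("mean density:", rho.mean())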

  19. A simple model for neural computation with firing rates and firing correlations.

    PubMed

    Maass, W

    1998-08-01

    A simple extension of standard neural network models is introduced which provides a model for neural computations that involve both firing rates and firing correlations. Such an extension appears to be useful since it has been shown that firing correlations play a significant computational role in many biological neural systems. Standard neural network models are only suitable for describing neural computations in terms of firing rates. The resulting extended neural network models are still relatively simple, so that their computational power can be analysed theoretically. We prove rigorous separation results, which show that the use of firing correlations in addition to firing rates can drastically increase the computational power of a neural network. Furthermore, one of our separation results also throws new light on a question that involves just standard neural network models: we prove that the gap between the computational power of high-order and first-order neural nets is substantially larger than shown previously.
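
    The two ingredients the extended model combines can be made concrete with a toy Python/NumPy estimate from binned spike trains (the spike statistics below are invented for illustration): per-neuron firing rates and pairwise firing correlations:

      import numpy as np

      rng = np.random.default_rng(seed=1)
      # 3 neurons, 2000 time bins of independent background firing
      spikes = (rng.random((3, 2000)) < 0.05).astype(float)
      # make neuron 1 fire together with neuron 0 most of the time
      spikes[1] = np.maximum(spikes[1], spikes[0] * (rng.random(2000) < 0.8))

      rates = spikes.mean(axis=1)      # firing rate, spikes per bin
      corr = np.corrcoef(spikes)       # pairwise firing correlations

      print("rates:", rates)
      print("corr(0,1) =", round(corr[0, 1], 2), " corr(0,2) =", round(corr[0, 2], 2))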