Science.gov

Sample records for additional computing power

  1. Calculators and Computers: Graphical Addition.

    ERIC Educational Resources Information Center

    Spero, Samuel W.

    1978-01-01

    A computer program is presented that generates problem sets involving sketching graphs of trigonometric functions using graphical addition. The students use calculators to sketch the graphs, and a computer solution is used to check them. (MP)
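
As an illustrative sketch (not the program from this record, which generated problem sets for calculator use), "graphical addition" just means summing two component functions pointwise to obtain the composite curve:

```python
import math

def graphical_addition(f, g, xs):
    # pointwise sum of two functions: the idea behind graphical addition
    return [f(x) + g(x) for x in xs]

# sample points of y = sin x + cos x over one period, 0 .. 2*pi
xs = [i * math.pi / 6 for i in range(13)]
ys = graphical_addition(math.sin, math.cos, xs)
```

A student would plot `xs` against `ys` and compare the sketch against the computed values.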

  2. Computational Process Modeling for Additive Manufacturing

    NASA Technical Reports Server (NTRS)

    Bagg, Stacey; Zhang, Wei

    2014-01-01

    Computational process and material modeling of powder-bed additive manufacturing of IN 718. Goals: optimize material build parameters with reduced time and cost through modeling; increase understanding of build properties; increase the reliability of builds; decrease time to adoption of the process for critical hardware; potentially decrease post-build heat treatments. Approach: conduct single-track and coupon builds at various build parameters; record build-parameter information and QM Meltpool data; refine the Applied Optimization powder-bed AM process model using the data; report thermal modeling results; conduct metallography of build samples; calibrate STK models using the metallography findings; run STK models using AO thermal profiles and report STK modeling results; validate the modeling with an additional build. Findings: photodiode intensity measurements are highly linear with power input; melt-pool intensity is highly correlated with melt-pool size; melt-pool size and intensity increase with power. Applied Optimization will use the data to develop a powder-bed additive manufacturing process model.

  3. Power consumption monitoring using additional monitoring device

    SciTech Connect

    Truşcă, M. R. C.; Albert, Ş.; Tudoran, C.; Soran, M. L.; Fărcaş, F.; Abrudean, M.

    2013-11-13

    Today, emphasis is placed on reducing power consumption. Computers are large consumers; therefore it is important to know the total consumption of computing systems. Since their optimal functioning requires quite strict environmental conditions, without much variation in temperature and humidity, reducing energy consumption cannot be done without monitoring environmental parameters. Thus, the present work uses a multifunctional electric meter, the UPT 210, for power consumption monitoring. Two applications were developed: software that collects the meter readings and facilitates remote programming of the device, and a device for temperature monitoring and control. By following the temperature variations that occur both in the cooling system and in the ambient environment, energy consumption can be reduced. For this purpose, some air conditioning units or some computers are stopped in different time slots. These intervals were set so that the savings are high but the datacenter's operation is not disturbed.

  4. World's Most Powerful Computer

    NASA Technical Reports Server (NTRS)

    1986-01-01

    The use of the Cray 2 supercomputer, the fastest computer in the world, at ARC is detailed. The Cray 2 can perform 250 million calculations per second and has 10 times the memory of any other computer. Ames researchers are shown creating computer simulations of aircraft airflow, waterflow around a submarine, and fuel flow inside of the Space Shuttle's engines. The video also details the Cray 2's use in calculating airflow around the Shuttle and its external rockets during liftoff for the first time and in the development of the National Aero Space Plane.

  5. Computer Maintenance Operations Center (CMOC), additional computer support equipment ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    Computer Maintenance Operations Center (CMOC), additional computer support equipment - Beale Air Force Base, Perimeter Acquisition Vehicle Entry Phased-Array Warning System, Technical Equipment Building, End of Spencer Paul Road, north of Warren Shingle Road (14th Street), Marysville, Yuba County, CA

  6. Power throttling of collections of computing elements

    DOEpatents

    Bellofatto, Ralph E.; Coteus, Paul W.; Crumley, Paul G.; Gara, Alan G.; Giampapa, Mark E.; Gooding, Thomas M.; Haring, Rudolf A.; Megerian, Mark G.; Ohmacht, Martin; Reed, Don D.; Swetz, Richard A.; Takken, Todd

    2011-08-16

    An apparatus and method for controlling power usage in a computer includes a plurality of computers communicating with a local control device, and a power source supplying power to the local control device and the computer. A plurality of sensors communicate with the computer for ascertaining power usage of the computer, and a system control device communicates with the computer for controlling power usage of the computer.

  7. Changing computing paradigms towards power efficiency

    PubMed Central

    Klavík, Pavel; Malossi, A. Cristiano I.; Bekas, Costas; Curioni, Alessandro

    2014-01-01

    Power awareness is fast becoming immensely important in computing, ranging from traditional high-performance computing applications to the new generation of data-centric workloads. In this work, we describe our efforts towards a power-efficient computing paradigm that combines low- and high-precision arithmetic. We showcase our ideas for the widely used kernel of solving systems of linear equations, which finds numerous applications in scientific and engineering disciplines as well as in large-scale data analytics, statistics and machine learning. Towards this goal, we developed tools for the seamless power profiling of applications at a fine-grain level. In addition, we verify here previous work on post-FLOPS/W metrics and show that these can shed much more light on the power/energy profile of important applications. PMID:24842033
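
The classic technique for combining low- and high-precision arithmetic in a linear solver is iterative refinement: factor and solve cheaply in single precision, then correct with double-precision residuals. This is an illustrative sketch of that general idea, not the paper's actual solver or energy instrumentation:

```python
import numpy as np

def mixed_precision_solve(A, b, iters=10):
    # solve the system in cheap float32 arithmetic ...
    A32 = A.astype(np.float32)
    x = np.linalg.solve(A32, b.astype(np.float32)).astype(np.float64)
    # ... then refine using float64 residuals
    for _ in range(iters):
        r = b - A @ x                                    # high-precision residual
        d = np.linalg.solve(A32, r.astype(np.float32))   # low-precision correction
        x += d.astype(np.float64)
    return x
```

For well-conditioned systems the refined solution reaches double-precision accuracy while the expensive solves stay in single precision, which is where the power savings come from.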

  8. Computational Process Modeling for Additive Manufacturing (OSU)

    NASA Technical Reports Server (NTRS)

    Bagg, Stacey; Zhang, Wei

    2015-01-01

    Powder-Bed Additive Manufacturing (AM) through Direct Metal Laser Sintering (DMLS) or Selective Laser Melting (SLM) is being used by NASA and the Aerospace industry to "print" parts that traditionally are very complex, high cost, or long schedule lead items. The process spreads a thin layer of metal powder over a build platform, then melts the powder in a series of welds in a desired shape. The next layer of powder is applied, and the process is repeated until layer-by-layer, a very complex part can be built. This reduces cost and schedule by eliminating very complex tooling and processes traditionally used in aerospace component manufacturing. To use the process to print end-use items, NASA seeks to understand SLM material well enough to develop a method of qualifying parts for space flight operation. Traditionally, a new material process takes many years and high investment to generate statistical databases and experiential knowledge, but computational modeling can truncate the schedule and cost; many experiments can be run quickly in a model that would take years and a high material cost to run empirically. This project seeks to optimize material build parameters with reduced time and cost through modeling.

  9. Computed tomography characterisation of additive manufacturing materials.

    PubMed

    Bibb, Richard; Thompson, Darren; Winder, John

    2011-06-01

    Additive manufacturing, covering processes frequently referred to as rapid prototyping and rapid manufacturing, provides new opportunities in the manufacture of highly complex and custom-fitting medical devices and products. Whilst many medical applications of AM have been explored and physical properties of the resulting parts have been studied, the characterisation of AM materials in computed tomography has not been explored. The aim of this study was to determine the CT number of commonly used AM materials. There are many potential applications of the information resulting from this study in the design and manufacture of wearable medical devices, implants, prostheses and medical imaging test phantoms. A selection of 19 AM material samples were CT scanned and the resultant images analysed to ascertain the materials' CT number and appearance in the images. It was found that some AM materials have CT numbers very similar to human tissues; that FDM, SLA and SLS produce samples that appear uniform on CT images; and that 3D-printed materials show a variation in internal structure.

  10. Shifted power method for computing tensor eigenpairs.

    SciTech Connect

    Mayo, Jackson R.; Kolda, Tamara Gibson

    2010-10-01

    Recent work on eigenvalues and eigenvectors for tensors of order m ≥ 3 has been motivated by applications in blind source separation, magnetic resonance imaging, molecular conformation, and more. In this paper, we consider methods for computing real symmetric-tensor eigenpairs of the form Ax^(m-1) = λx subject to ||x|| = 1, which is closely related to optimal rank-1 approximation of a symmetric tensor. Our contribution is a novel shifted symmetric higher-order power method (SS-HOPM), which we show is guaranteed to converge to a tensor eigenpair. SS-HOPM can be viewed as a generalization of the power iteration method for matrices or of the symmetric higher-order power method. Additionally, using fixed point analysis, we can characterize exactly which eigenpairs can and cannot be found by the method. Numerical examples are presented, including examples from an extension of the method to finding complex eigenpairs.
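
For an order-3 symmetric tensor, the shifted iteration described in this abstract can be sketched as follows. This is an illustrative version only: the shift `alpha` and iteration count here are ad hoc constants, not the paper's adaptive choices.

```python
import numpy as np
from itertools import permutations

def symmetrize(T):
    # average an order-3 tensor over all index permutations
    return sum(np.transpose(T, p) for p in permutations(range(3))) / 6.0

def ss_hopm(A, alpha=4.0, iters=10000, seed=0):
    # shifted symmetric higher-order power method for m = 3:
    #   x <- normalize(A x^{m-1} + alpha * x)
    rng = np.random.default_rng(seed)
    x = rng.standard_normal(A.shape[0])
    x /= np.linalg.norm(x)
    for _ in range(iters):
        Ax2 = np.einsum('ijk,j,k->i', A, x, x)   # (A x^{m-1})_i
        y = Ax2 + alpha * x                      # shifted update
        x = y / np.linalg.norm(y)
    lam = x @ np.einsum('ijk,j,k->i', A, x, x)   # eigenvalue estimate
    return lam, x
```

At a fixed point, A x^(m-1) is parallel to x, so (λ, x) is a tensor eigenpair in the sense defined above; the shift makes the iteration a monotone ascent, which is what guarantees convergence.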

  11. Computer Power: Part 1: Distribution of Power (and Communications).

    ERIC Educational Resources Information Center

    Price, Bennett J.

    1988-01-01

    Discussion of the distribution of power to personal computers and computer terminals addresses options such as extension cords, perimeter raceways, and interior raceways. Sidebars explain: (1) the National Electrical Code; (2) volts, amps, and watts; (3) transformers, circuit breakers, and circuits; and (4) power vs. data wiring. (MES)

  12. 50 CFR 453.06 - Additional Committee powers.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... ADMINISTRATION, DEPARTMENT OF COMMERCE); ENDANGERED SPECIES COMMITTEE REGULATIONS ENDANGERED SPECIES EXEMPTION PROCESS ENDANGERED SPECIES COMMITTEE § 453.06 Additional Committee powers. (a) Secure information....

  13. 50 CFR 453.06 - Additional Committee powers.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... ADMINISTRATION, DEPARTMENT OF COMMERCE); ENDANGERED SPECIES COMMITTEE REGULATIONS ENDANGERED SPECIES EXEMPTION PROCESS ENDANGERED SPECIES COMMITTEE § 453.06 Additional Committee powers. (a) Secure information....

  14. 50 CFR 453.06 - Additional Committee powers.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... ADMINISTRATION, DEPARTMENT OF COMMERCE); ENDANGERED SPECIES COMMITTEE REGULATIONS ENDANGERED SPECIES EXEMPTION PROCESS ENDANGERED SPECIES COMMITTEE § 453.06 Additional Committee powers. (a) Secure information....

  15. 50 CFR 453.06 - Additional Committee powers.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... ADMINISTRATION, DEPARTMENT OF COMMERCE); ENDANGERED SPECIES COMMITTEE REGULATIONS ENDANGERED SPECIES EXEMPTION PROCESS ENDANGERED SPECIES COMMITTEE § 453.06 Additional Committee powers. (a) Secure information....

  16. 50 CFR 453.06 - Additional Committee powers.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... ADMINISTRATION, DEPARTMENT OF COMMERCE); ENDANGERED SPECIES COMMITTEE REGULATIONS ENDANGERED SPECIES EXEMPTION PROCESS ENDANGERED SPECIES COMMITTEE § 453.06 Additional Committee powers. (a) Secure information....

  17. Framework Resources Multiply Computing Power

    NASA Technical Reports Server (NTRS)

    2010-01-01

    As an early proponent of grid computing, Ames Research Center awarded Small Business Innovation Research (SBIR) funding to 3DGeo Development Inc., of Santa Clara, California, (now FusionGeo Inc., of The Woodlands, Texas) to demonstrate a virtual computer environment that linked geographically dispersed computer systems over the Internet to help solve large computational problems. By adding to an existing product, FusionGeo enabled access to resources for calculation- or data-intensive applications whenever and wherever they were needed. Commercially available as Accelerated Imaging and Modeling, the product is used by oil companies and seismic service companies, which require large processing and data storage capacities.

  18. Children, Computers, and Powerful Ideas

    ERIC Educational Resources Information Center

    Bull, Glen

    2005-01-01

    Today it is commonplace that computers and technology permeate almost every aspect of education. In the late 1960s, though, the idea that computers could serve as a catalyst for thinking about the way children learn was a radical concept. In the early 1960s, Seymour Papert joined the faculty of MIT and founded the Artificial Intelligence Lab with…

  19. Exploring human inactivity in computer power consumption

    NASA Astrophysics Data System (ADS)

    Candrawati, Ria; Hashim, Nor Laily Binti

    2016-08-01

    Managing computer power consumption has become an important challenge for the computing community, consistent with a trend in which computer systems are ever more central to modern life, accompanied by continuously growing demands for computing power and functionality. Unfortunately, previous approaches are still inadequately designed to handle the power consumption problem, because the workload of a system is made unpredictable by unpredictable human behavior. This happens due to a lack of knowledge within the software system, and software self-adaptation is one approach to dealing with this source of uncertainty. Human inactivity is handled by adapting to the behavioral changes of the users. This paper observes human inactivity during computer usage and finds that computer power usage can be reduced if idle periods can be intelligently sensed from user activities. This study introduces the Control, Learn and Knowledge model, which adapts the Monitor, Analyze, Plan, Execute control loop integrated with a Q-learning algorithm to learn human inactivity periods and minimize computer power consumption. An experiment to evaluate this model was conducted using three case studies with the same activities. The results show that under the proposed model, 5 out of 12 activities exhibited decreased power consumption compared to the others.
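
The learning component can be illustrated with a toy one-step Q-learning sketch: learn whether to sleep the machine in each observed idle state. The states, actions, and rewards below are hypothetical stand-ins, not the paper's actual model.

```python
import random

def train(episodes=2000, alpha=0.1, eps=0.1, seed=0):
    rng = random.Random(seed)
    states = ['short_idle', 'long_idle']
    Q = {s: [0.0, 0.0] for s in states}      # actions: 0 = stay awake, 1 = sleep
    for _ in range(episodes):
        s = rng.choice(states)
        if rng.random() < eps:               # epsilon-greedy exploration
            a = rng.randrange(2)
        else:
            a = 0 if Q[s][0] >= Q[s][1] else 1
        # reward: sleeping saves power during a long idle, but incurs a
        # wake-up penalty if the user returns quickly (short idle)
        r = (1.0 if s == 'long_idle' else -1.0) if a == 1 else 0.0
        Q[s][a] += alpha * (r - Q[s][a])     # one-step (bandit-style) update
    return Q
```

After training, the greedy policy sleeps the machine only in the long-idle state, which is the behavior the abstract describes: sensing idle periods and cutting power when it is safe to do so.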

  20. Computing aspects of power for multiple regression.

    PubMed

    Dunlap, William P; Xin, Xue; Myers, Leann

    2004-11-01

    Rules of thumb for power in multiple regression research abound. Most such rules dictate the necessary sample size, but they are based only upon the number of predictor variables, usually ignoring other critical factors necessary to compute power accurately. Other guides to power in multiple regression typically use approximate rather than precise equations for the underlying distribution; entail complex preparatory computations; require interpolation with tabular presentation formats; run only under software such as Mathematica or SAS that may not be immediately available to the user; or are sold to the user as parts of power computation packages. In contrast, the program we offer herein is immediately downloadable at no charge, runs under Windows, is interactive, self-explanatory, flexible to fit the user's own regression problems, and is as accurate as single precision computation ordinarily permits.
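
As a point of comparison, power for the overall F test in multiple regression can also be estimated by Monte Carlo simulation. The sketch below is illustrative only; the program in this record evaluates the exact noncentral-F distribution rather than simulating.

```python
import numpy as np

def regression_power(n, k, r2, alpha=0.05, sims=1000, seed=0):
    # Monte Carlo power of the overall F test: n observations, k
    # predictors, population squared multiple correlation r2.
    rng = np.random.default_rng(seed)

    def f_stat(b):
        X = rng.standard_normal((n, k))
        y = X @ np.full(k, b) + rng.standard_normal(n)
        Xd = np.column_stack([np.ones(n), X])
        bhat, *_ = np.linalg.lstsq(Xd, y, rcond=None)
        yhat = Xd @ bhat
        ssr = np.sum((yhat - y.mean()) ** 2)   # regression sum of squares
        sse = np.sum((y - yhat) ** 2)          # residual sum of squares
        return (ssr / k) / (sse / (n - k - 1))

    # critical value from the simulated null distribution (b = 0)
    null = np.sort([f_stat(0.0) for _ in range(sims)])
    crit = null[int((1 - alpha) * sims) - 1]
    # per-predictor slope that yields the target population R^2 for
    # independent unit-variance predictors and unit error variance
    b = np.sqrt(r2 / ((1 - r2) * k))
    return float(np.mean([f_stat(b) > crit for _ in range(sims)]))
```

This makes the abstract's point concrete: power depends on the effect size and error degrees of freedom, not just on the number of predictors.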

  1. Computer Power. Part 2: Electrical Power Problems and Their Amelioration.

    ERIC Educational Resources Information Center

    Price, Bennett J.

    1989-01-01

    Describes electrical power problems that affect computer users, including spikes, sags, outages, noise, frequency variations, and static electricity. Ways in which these problems may be diagnosed and cured are discussed. Sidebars consider transformers; power distribution units; surge currents/linear and non-linear loads; and sizing the power…

  2. 12. POWER PLANT PART OF BUILDING SHOWING RELATION TO ADDITION ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    12. POWER PLANT PART OF BUILDING SHOWING RELATION TO ADDITION AND EQUIPMENT PART OF BUILDING - Boswell Bay White Alice Site, Radio Relay Building, Chugach National Forest, Cordova, Valdez-Cordova Census Area, AK

  3. Software Support for Transiently Powered Computers

    SciTech Connect

    Van Der Woude, Joel Matthew

    2015-06-01

    With the continued reduction in size and cost of computing, power becomes an increasingly heavy burden on system designers for embedded applications. While energy harvesting techniques are an increasingly desirable solution for many deeply embedded applications where size and lifetime are a priority, previous work has shown that energy harvesting provides insufficient power for long running computation. We present Ratchet, which to the authors' knowledge is the first automatic, software-only checkpointing system for energy harvesting platforms. We show that Ratchet provides a means to extend computation across power cycles, consistent with those experienced by energy harvesting devices. We demonstrate the correctness of our system under frequent failures and show that it has an average overhead of 58.9% across a suite of benchmarks representative of embedded applications.
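
The core idea of extending computation across power cycles can be illustrated with a hand-rolled checkpointing loop. Ratchet itself inserts checkpoints automatically at compile time; the file name, state layout, and failure injection below are invented for the demo.

```python
import json
import os
import tempfile

# hypothetical checkpoint file standing in for nonvolatile storage
CKPT = os.path.join(tempfile.gettempdir(), 'ratchet_demo.ckpt')

def checkpointed_sum(n, fail_at=None):
    # resume from the last committed checkpoint, if any
    if os.path.exists(CKPT):
        with open(CKPT) as f:
            state = json.load(f)
    else:
        state = {'i': 1, 'total': 0}
    for i in range(state['i'], n + 1):
        state['total'] += i
        state['i'] = i + 1
        with open(CKPT, 'w') as f:       # commit state before continuing
            json.dump(state, f)
        if fail_at is not None and i == fail_at:
            raise RuntimeError('simulated power failure')
    os.remove(CKPT)
    return state['total']
```

A run that "loses power" partway through can be restarted and still produce the correct result, because every committed checkpoint is a consistent point to resume from.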

  4. Computational Power of Quantum Machines, Quantum Grammars and Feasible Computation

    NASA Astrophysics Data System (ADS)

    Krishnamurthy, E. V.

    This paper studies the computational power of quantum computers to explore whether they can recognize properties which are in the nondeterministic polynomial-time class (NP) and beyond. To study the computational power, we use the Feynman path integral (FPI) formulation of quantum mechanics. From a computational point of view, the Feynman path integral computes a quantum dynamical analogue of the k-ary relation computed by an Alternating Turing machine (ATM) using AND-OR parallelism. Hence, if we can find a suitable mapping function between an instance of a mathematical problem and the corresponding interference problem, using suitable potential functions for which FPI can be integrated exactly, the computational power of a quantum computer can be bounded to that of an alternating Turing machine that can solve problems in NP (e.g., the factorization problem) and in polynomial space. Unfortunately, FPI is exactly integrable only for a few problems (e.g., the harmonic oscillator) involving quadratic potentials; otherwise, they may be only approximately computable or noncomputable. This means we cannot in general solve all quantum dynamical problems exactly except for those special cases of quadratic potentials, e.g., the harmonic oscillator. Since there is a one-to-one correspondence between the quantum mechanical problems that can be analytically solved and the path integrals that can be exactly evaluated, we can say that the noncomputability of FPI implies quantum unsolvability. This is the analogue of classical unsolvability. The Feynman path graph can be considered as a semantic parse graph for the quantum mechanical sentence. It provides a semantic valuation function of the terminal sentence based on probability amplitudes to disambiguate a given quantum description and obtain an interpretation in linear time. In Feynman's path integral, the kernels are partially ordered over time (different alternate paths acting concurrently at the same time) and multiplied

  5. Computed Tomography Inspection and Analysis for Additive Manufacturing Components

    NASA Technical Reports Server (NTRS)

    Beshears, Ronald D.

    2016-01-01

    Computed tomography (CT) inspection was performed on test articles additively manufactured from metallic materials. Metallic AM and machined wrought alloy test articles with programmed flaws were inspected using a 2MeV linear accelerator based CT system. Performance of CT inspection on identically configured wrought and AM components and programmed flaws was assessed using standard image analysis techniques to determine the impact of additive manufacturing on inspectability of objects with complex geometries.

  6. Cloud Computing and the Power to Choose

    ERIC Educational Resources Information Center

    Bristow, Rob; Dodds, Ted; Northam, Richard; Plugge, Leo

    2010-01-01

    Some of the most significant changes in information technology are those that have given the individual user greater power to choose. The first of these changes was the development of the personal computer. The PC liberated the individual user from the limitations of the mainframe and minicomputers and from the rules and regulations of centralized…

  7. Reducing power consumption during execution of an application on a plurality of compute nodes

    DOEpatents

    Archer, Charles J.; Blocksome, Michael A.; Peters, Amanda E.; Ratterman, Joseph D.; Smith, Brian E.

    2013-09-10

    Methods, apparatus, and products are disclosed for reducing power consumption during execution of an application on a plurality of compute nodes that include: powering up, during compute node initialization, only a portion of computer memory of the compute node, including configuring an operating system for the compute node in the powered up portion of computer memory; receiving, by the operating system, an instruction to load an application for execution; allocating, by the operating system, additional portions of computer memory to the application for use during execution; powering up the additional portions of computer memory allocated for use by the application during execution; and loading, by the operating system, the application into the powered up additional portions of computer memory.

  8. Advanced Computational Techniques for Power Tube Design.

    DTIC Science & Technology

    1986-07-01

    fixturing applications, in addition to the existing computer-aided engineering capabilities. Helix TWT Manufacturing has implemented a tooling and fixturing...illustrates the major features of this computer network. The backbone of our system is a Sytek Broadband Network (LAN) which interconnects terminals and...automatic network analyzer (FANA) which electrically characterizes the slow-wave helices of traveling-wave tubes (TWTs) -- both for engineering design

  9. 3. ELEVATIONS, ADDITION TO POWER HOUSE. United Engineering Company Ltd., ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    3. ELEVATIONS, ADDITION TO POWER HOUSE. United Engineering Company Ltd., Alameda Shipyard. John Hudspeth, architect, foot of Main Street, Alameda, California. Sheet 4. Plan no. 10,548. Scale 1/4 inch to the foot, elevations, and one inch to the foot, sections and details. April 30, 1945, last revised 6/19/45. pencil on vellum - United Engineering Company Shipyard, Boiler House, 2900 Main Street, Alameda, Alameda County, CA

  10. 4. FLOOR PLAN AND SECTIONS, ADDITION TO POWER HOUSE. United ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    4. FLOOR PLAN AND SECTIONS, ADDITION TO POWER HOUSE. United Engineering Company Ltd., Alameda Shipyard. Also includes plot plan at 1 inch to 100 feet. John Hudspeth, architect, foot of Main Street, Alameda, California. Sheet 3. Plan no. 10,548. Scale 1/4 inch and 1/2 inch to the foot. April 30, 1945, last revised 6/22/45. pencil on vellum - United Engineering Company Shipyard, Boiler House, 2900 Main Street, Alameda, Alameda County, CA

  11. Power of surface-based DNA computation

    SciTech Connect

    Cai, Weiping; Condon, A.E.; Corn, R.M.

    1997-12-01

    A new model of DNA computation that is based on surface chemistry is studied. Such computations involve the manipulation of DNA strands that are immobilized on a surface, rather than in solution as in the work of Adleman. Surface-based chemistry has been a critical technology in many recent advances in biochemistry and offers several advantages over solution-based chemistry, including simplified handling of samples and elimination of loss of strands, which reduce error in the computation. The main contribution of this paper is in showing that in principle, surface-based DNA chemistry can efficiently support general circuit computation on many inputs in parallel. To do this, an abstract model of computation that allows parallel manipulation of binary inputs is described. It is then shown that this model can be implemented by encoding inputs as DNA strands and repeatedly modifying the strands in parallel on a surface, using the chemical processes of hybridization, exonuclease degradation, polymerase extension, and ligation. Thirdly, it is shown that the model supports efficient circuit simulation in the following sense: exactly those inputs that satisfy a circuit can be isolated and the number of parallel operations needed to do this is proportional to the size of the circuit. Finally, results are presented on the power of the model when another resource of DNA computation is limited, namely strand length. 12 refs.

  12. Additional extensions to the NASCAP computer code, volume 1

    NASA Technical Reports Server (NTRS)

    Mandell, M. J.; Katz, I.; Stannard, P. R.

    1981-01-01

    Extensions and revisions to a computer code that comprehensively analyzes problems of spacecraft charging (NASCAP) are documented. Using a fully three dimensional approach, it can accurately predict spacecraft potentials under a variety of conditions. Among the extensions are a multiple electron/ion gun test tank capability, and the ability to model anisotropic and time dependent space environments. Also documented are a greatly extended MATCHG program and the preliminary version of NASCAP/LEO. The interactive MATCHG code was developed into an extremely powerful tool for the study of material-environment interactions. The NASCAP/LEO, a three dimensional code to study current collection under conditions of high voltages and short Debye lengths, was distributed for preliminary testing.

  13. Lithium Dinitramide as an Additive in Lithium Power Cells

    NASA Technical Reports Server (NTRS)

    Gorkovenko, Alexander A.

    2007-01-01

    Lithium dinitramide, LiN(NO2)2 has shown promise as an additive to nonaqueous electrolytes in rechargeable and non-rechargeable lithium-ion-based electrochemical power cells. Such non-aqueous electrolytes consist of lithium salts dissolved in mixtures of organic ethers, esters, carbonates, or acetals. The benefits of adding lithium dinitramide (which is also a lithium salt) include lower irreversible loss of capacity on the first charge/discharge cycle, higher cycle life, lower self-discharge, greater flexibility in selection of electrolyte solvents, and greater charge capacity. The need for a suitable electrolyte additive arises as follows: The metallic lithium in the anode of a lithium-ion-based power cell is so highly reactive that in addition to the desired main electrochemical reaction, it engages in side reactions that cause formation of resistive films and dendrites, which degrade performance as quantified in terms of charge capacity, cycle life, shelf life, first-cycle irreversible capacity loss, specific power, and specific energy. The incidence of side reactions can be reduced through the formation of a solid-electrolyte interface (SEI) a thin film that prevents direct contact between the lithium anode material and the electrolyte. Ideally, an SEI should chemically protect the anode and the electrolyte from each other while exhibiting high conductivity for lithium ions and little or no conductivity for electrons. A suitable additive can act as an SEI promoter. Heretofore, most SEI promotion was thought to derive from organic molecules in electrolyte solutions. In contrast, lithium dinitramide is inorganic. Dinitramide compounds are known as oxidizers in rocket-fuel chemistry and until now, were not known as SEI promoters in battery chemistry. 
Although the exact reason for the improvement afforded by the addition of lithium dinitramide is not clear, it has been hypothesized that lithium dinitramide competes with other electrolyte constituents to react with

  14. Computer Simulation of Auxiliary Power Systems.

    DTIC Science & Technology

    1980-03-01

    Keywords: gas turbine engine; turbine engine computer programs; auxiliary power unit; aircraft engine starter. ...There are three choices for the turbine configuration (see Figure 2): 1) a one-stage turbine, 2) a two-stage turbine...

  15. Power-law distributions from additive preferential redistributions

    NASA Astrophysics Data System (ADS)

    Ree, Suhan

    2006-02-01

    We introduce a nongrowth model that generates the power-law distribution with the Zipf exponent. There are N elements, each of which is characterized by a quantity, and at each time step these quantities are redistributed through binary random interactions with a simple additive preferential rule, while the sum of the quantities is conserved. The situation described by this model is similar to those of closed N-particle systems when only conservative two-body collisions are allowed. We obtain stationary distributions of these quantities both analytically and numerically while varying parameters of the model, and find that the model exhibits scaling behavior for some parameter ranges. Unlike well-known growth models, this alternative mechanism generates the power-law distribution when growth is not expected and the dynamics of the system is based on interactions between elements. This model can be applied to examples such as personal wealth, city sizes, and the generation of scale-free networks when only rewiring is allowed.
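
A toy conservative redistribution along these lines can be simulated directly. This sketch is illustrative only: the transfer amount, the parameter `a`, and the exact preferential rule in the paper differ, but the ingredients are the same, pairwise interactions, an additive preferential choice of who receives, and a conserved total.

```python
import random

def redistribute(x, steps=20000, eps=0.01, a=1.0, seed=1):
    # at each step, pick a random pair; the receiver is chosen with
    # probability proportional to (quantity + a), i.e. an additive
    # preferential rule; an amount eps moves from payer to receiver
    rng = random.Random(seed)
    x = list(x)
    for _ in range(steps):
        i, j = rng.sample(range(len(x)), 2)
        p_i = (x[i] + a) / (x[i] + x[j] + 2 * a)
        recv, pay = (i, j) if rng.random() < p_i else (j, i)
        give = min(eps, x[pay])          # quantities never go negative
        x[pay] -= give
        x[recv] += give                  # total is conserved
    return x
```

Starting from an equal allocation, repeated runs spread the quantities out into a heavy-tailed stationary distribution while the total stays fixed, which is the qualitative behavior the abstract describes.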

  16. X-ray computed tomography for additive manufacturing: a review

    NASA Astrophysics Data System (ADS)

    Thompson, A.; Maskery, I.; Leach, R. K.

    2016-07-01

    In this review, the use of x-ray computed tomography (XCT) is examined, identifying the requirement for volumetric dimensional measurements in industrial verification of additively manufactured (AM) parts. The XCT technology and AM processes are summarised, and their historical use is documented. The use of XCT and AM as tools for medical reverse engineering is discussed, and the transition of XCT from a tool used solely for imaging to a vital metrological instrument is documented. The current states of the combined technologies are then examined in detail, separated into porosity measurements and general dimensional measurements. In the conclusions of this review, the limitation of resolution on improvement of porosity measurements and the lack of research regarding the measurement of surface texture are identified as the primary barriers to ongoing adoption of XCT in AM. The limitations of both AM and XCT regarding slow speeds and high costs, when compared to other manufacturing and measurement techniques, are also noted as general barriers to continued adoption of XCT and AM.

  17. Additional support for the TDK/MABL computer program

    NASA Technical Reports Server (NTRS)

    Nickerson, G. R.; Dunn, Stuart S.

    1993-01-01

    An advanced version of the Two-Dimensional Kinetics (TDK) computer program was developed under contract and released to the propulsion community in early 1989. Exposure of the code to this community indicated a need for improvements in certain areas. In particular, the TDK code needed to be adapted to the special requirements imposed by the Space Transportation Main Engine (STME) development program. This engine utilizes injection of the gas generator exhaust into the primary nozzle by means of a set of slots. The subsequent mixing of this secondary stream with the primary stream with finite rate chemical reaction can have a major impact on the engine performance and the thermal protection of the nozzle wall. In attempting to calculate this reacting boundary layer problem, the Mass Addition Boundary Layer (MABL) module of TDK was found to be deficient in several respects. For example, when finite rate chemistry was used to determine gas properties, (MABL-K option) the program run times became excessive because extremely small step sizes were required to maintain numerical stability. A robust solution algorithm was required so that the MABL-K option could be viable as a rocket propulsion industry design tool. Solving this problem was a primary goal of the phase 1 work effort.

  18. Computer program analyzes and monitors electrical power systems (POSIMO)

    NASA Technical Reports Server (NTRS)

    Jaeger, K.

    1972-01-01

    Requirements to monitor and/or simulate electric power distribution, power balance, and charge budget are discussed. Computer program to analyze power system and generate set of characteristic power system data is described. Application to status indicators to denote different exclusive conditions is presented.

  19. Computer memory power control for the Galileo spacecraft

    NASA Technical Reports Server (NTRS)

    Detwiler, R. C.

    1983-01-01

    The developmental history, major design drivers, and final topology of the computer memory power system on the Galileo spacecraft are described. A unique method of generating memory backup power directly from the fault current drawn during a spacecraft power overload or fault condition allows this system to provide continuous memory power. This approach solves the problem of volatile memory loss without the battery or other large energy storage elements usually associated with uninterruptible power supply designs.

  20. 18 CFR 385.705 - Additional powers of presiding officer with respect to briefs (Rule 705).

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... 18 Conservation of Power and Water Resources 1 2011-04-01 2011-04-01 false Additional powers of presiding officer with respect to briefs (Rule 705). 385.705 Section 385.705 Conservation of Power and Water... PROCEDURE Decisions § 385.705 Additional powers of presiding officer with respect to briefs (Rule 705)....

  1. 18 CFR 385.705 - Additional powers of presiding officer with respect to briefs (Rule 705).

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... 18 Conservation of Power and Water Resources 1 2010-04-01 2010-04-01 false Additional powers of presiding officer with respect to briefs (Rule 705). 385.705 Section 385.705 Conservation of Power and Water... PROCEDURE Decisions § 385.705 Additional powers of presiding officer with respect to briefs (Rule 705)....

  2. A COMPUTATIONAL WORKBENCH ENVIRONMENT FOR VIRTUAL POWER PLANT SIMULATION

    SciTech Connect

    Mike Bockelie; Dave Swensen; Martin Denison; Zumao Chen; Temi Linjewile; Mike Maguire; Adel Sarofim; Connie Senior; Changguan Yang; Hong-Shig Shim

    2004-04-28

    This is the fourteenth Quarterly Technical Report for DOE Cooperative Agreement No: DE-FC26-00NT41047. The goal of the project is to develop and demonstrate a Virtual Engineering-based framework for simulating the performance of Advanced Power Systems. Within the last quarter, good progress has been made on all aspects of the project. Software development efforts have focused primarily on completing a prototype detachable user interface for the framework and on integrating Carnegie Mellon University's IECM model core with the computational engine. In addition to this work, progress has been made on several other development and modeling tasks for the program. These include: (1) improvements to the infrastructure code of the computational engine, (2) enhancements to the model interfacing specifications, (3) additional development to increase the robustness of all framework components, (4) enhanced coupling of the computational and visualization engine components, (5) a series of detailed simulations studying the effects of gasifier inlet conditions on the heat flux to the gasifier injector, and (6) creation of detailed plans for implementing mercury capture models for both warm and cold gas cleanup.

  3. Computational Simulation of Explosively Generated Pulsed Power Devices

    DTIC Science & Technology

    2013-03-21

    Thesis: Computational Simulation of Explosively Generated Pulsed Power Devices, by Mollie C. Drumm, Captain, USAF (AFIT-ENY-13-M-11), Air Force Institute of Technology, Department of the Air Force. Approved by Dr. Robert B. Greendyke (Chairman) and Capt. David Liu. The work is not subject to copyright protection in the United States.

  4. Computing and cognition in future power-plant operations

    SciTech Connect

    Kisner, R.A.; Sheridan, T.B.

    1983-01-01

    The intent of this paper is to speculate on the nature of future interactions between people and computers in the operation of power plants. In particular, the authors offer a taxonomy for examining the differing functions of operators in interacting with the plant and its computers, and the differing functions of the computers in interacting with the plant and its operators.

  5. Additional extensions to the NASCAP computer code, volume 3

    NASA Technical Reports Server (NTRS)

    Mandell, M. J.; Cooke, D. L.

    1981-01-01

    The ION computer code is designed to calculate charge exchange ion densities, electric potentials, plasma temperatures, and current densities external to a neutralized ion engine in R-Z geometry. The present version assumes the beam ion current and density to be known and specified, and the neutralizing electrons to originate from a hot-wire ring surrounding the beam orifice. The plasma is treated as being resistive, with an electron relaxation time comparable to the plasma frequency. Together with the thermal and electrical boundary conditions described below and other straightforward engine parameters, these assumptions suffice to determine the required quantities. The ION code, written in ASCII FORTRAN for UNIVAC 1100 series computers, is designed to be run interactively, although it can also be run in batch mode. The input is free-format, and the output is mainly graphical, using the machine-independent graphics developed for the NASCAP code. The executive routine calls the code's major subroutines in user-specified order, and the code allows great latitude for restart and parameter change.

  6. Additive Manufacturing of Anatomical Models from Computed Tomography Scan Data.

    PubMed

    Gür, Y

    2014-12-01

    The purpose of the study presented here was to investigate the manufacturability of human anatomical models from Computed Tomography (CT) scan data via a 3D desktop printer which uses fused deposition modelling (FDM) technology. First, Digital Imaging and Communications in Medicine (DICOM) CT scan data were converted to 3D Standard Triangle Language (STL) format by using the InVesalius digital imaging program. Once this STL file is obtained, a 3D physical version of the anatomical model can be fabricated by a desktop 3D FDM printer. As a case study, a patient's skull CT scan data was considered, and a tangible version of the skull was manufactured by a 3D FDM desktop printer. During the 3D printing process, the skull was built using acrylonitrile-butadiene-styrene (ABS) co-polymer plastic. The printed model showed that 3D FDM printing technology is able to fabricate anatomical models with high accuracy. As a result, the skull model can be used for preoperative surgical planning, medical training activities, and implant design and simulation, showing the potential of FDM technology in the medical field. It will also improve communication between medical staff and patients. Current results indicate that a 3D desktop printer which uses FDM technology can be used to obtain accurate anatomical models.
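
    The DICOM-to-STL conversion described above rests on segmenting the anatomy of interest from the CT volume before surface extraction. A minimal numpy sketch of the thresholding step, with a synthetic volume and an illustrative bone threshold (not the settings used in the study):

```python
import numpy as np

# Synthetic 3-slice "CT volume" in Hounsfield units (HU): air ~ -1000,
# soft tissue ~ 40, bone > 300. A real pipeline would read DICOM slices
# (e.g. with pydicom) and then run surface extraction to produce the STL.
volume = np.full((3, 8, 8), -1000.0)      # air background
volume[:, 2:6, 2:6] = 40.0                # soft-tissue block
volume[:, 3:5, 3:5] = 700.0               # embedded "bone" region

BONE_HU = 300.0                           # illustrative bone threshold
mask = volume > BONE_HU                   # binary segmentation mask

bone_voxels = int(mask.sum())
print(bone_voxels)                        # 3 slices x 2x2 region = 12
```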

  7. Computational calculation of equilibrium constants: addition to carbonyl compounds.

    PubMed

    Gómez-Bombarelli, Rafael; González-Pérez, Marina; Pérez-Prior, María Teresa; Calle, Emilio; Casado, Julio

    2009-10-22

    Hydration reactions are relevant for understanding many organic mechanisms. Since the experimental determination of hydration and hemiacetalization equilibrium constants is fairly complex, computational calculations now offer a useful alternative to experimental measurements. In this work, carbonyl hydration and hemiacetalization constants were calculated from the free energy differences between compounds in solution, using absolute and relative approaches. The following conclusions can be drawn: (i) The use of a relative approach in the calculation of hydration and hemiacetalization constants allows compensation of systematic errors in the solvation energies. (ii) On average, the methodology proposed here can predict hydration constants within +/- 0.5 log K(hyd) units for aldehydes. (iii) Hydration constants can be calculated for ketones and carboxylic acid derivatives within less than +/- 1.0 log K(hyd), on average, at the CBS-Q level of theory. (iv) The proposed methodology can predict hemiacetal formation constants accurately at the MP2 6-31++G(d,p) level using a common reference. If group references are used, the results obtained using the much cheaper DFT-B3LYP 6-31++G(d,p) level are almost as accurate. (v) In general, the best results are obtained if a common reference for all compounds is used. The use of group references improves the results at the lower levels of theory, but at higher levels, this becomes unnecessary.
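
    The link between computed free energy differences and equilibrium constants is K = exp(-ΔG°/RT). A small sketch of that conversion, using a hypothetical hydration free energy rather than any value from the paper:

```python
import math

R = 8.314462618e-3   # gas constant, kJ/(mol*K)
T = 298.15           # temperature, K

def log10_K(delta_G_kJ_per_mol: float) -> float:
    """log10 of the equilibrium constant from the reaction free energy,
    via K = exp(-dG/RT)."""
    return -delta_G_kJ_per_mol / (R * T * math.log(10))

# Illustrative only: a hydration free energy of -5 kJ/mol (a hypothetical
# value, not one of the paper's computed energies).
print(log10_K(-5.0))
```

    A 0.5-unit error in log K(hyd), the paper's reported accuracy for aldehydes, corresponds to roughly 2.9 kJ/mol in the free energy at 298 K.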

  8. System and method for high power diode based additive manufacturing

    DOEpatents

    El-Dasher, Bassem S.; Bayramian, Andrew; Demuth, James A.; Farmer, Joseph C.; Torres, Sharon G.

    2016-04-12

    A system is disclosed for performing an Additive Manufacturing (AM) fabrication process on a powdered material forming a substrate. The system may make use of a diode array for generating an optical signal sufficient to melt a powdered material of the substrate. A mask may be used for preventing a first predetermined portion of the optical signal from reaching the substrate, while allowing a second predetermined portion to reach the substrate. At least one processor may be used for controlling an output of the diode array.

  9. On source radiation. [power output computation

    NASA Technical Reports Server (NTRS)

    Levine, H.

    1980-01-01

    The power output from given sources is usually ascertained via an energy flux integral over the normal directions to a remote (farfield) surface; an alternative procedure, which utilizes an integral that specifies the direct rate of working by the source on the resultant field, is described and illustrated for both point and continuous source distributions. A comparison between the respective procedures is made in the analysis of sound radiated from a periodic dipole source whose axis rotates in a plane, on a full or partial angular range, with prescribed frequency. Thus, adopting a conventional approach, Sretenskii (1956) characterizes the rotating dipole in terms of an infinite number of stationary ones along a pair of orthogonal directions in the plane and, through the farfield representation of the latter, arrives at a series development for the instantaneous radiated power, whereas the local manner of power calculation dispenses with the equivalent infinite aggregate of sources and yields a compact analytical result.

  10. DETAIL VIEW OF THE POWER CONNECTIONS (FRONT) AND COMPUTER PANELS ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    DETAIL VIEW OF THE POWER CONNECTIONS (FRONT) AND COMPUTER PANELS (REAR), ROOM 8A - Cape Canaveral Air Force Station, Launch Complex 39, Mobile Launcher Platforms, Launcher Road, East of Kennedy Parkway North, Cape Canaveral, Brevard County, FL

  11. Computer Controlled MHD Power Consolidation and Pulse Generation System

    DTIC Science & Technology

    2007-11-02

    Computer Controlled MHD Power Consolidation and Pulse Generation System, final technical progress report by R. Johnson; publication date: Aug 01, 1990. Figures include a four-pulse CI system for a diagonally connected MHD generator and the diagonal output voltage for Rsource = 10 ohms, Rload = 1 ohm.

  12. Saving Energy and Money: A Lesson in Computer Power Management

    ERIC Educational Resources Information Center

    Lazaros, Edward J.; Hua, David

    2012-01-01

    In this activity, students will develop an understanding of the economic impact of technology by estimating the cost savings of power management strategies in the classroom. Students will learn how to adjust computer display settings to influence the impact that the computer has on the financial burden to the school. They will use mathematics to…
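
    The cost estimate at the heart of the activity is simple arithmetic: watts saved times idle hours times the electricity rate. A sketch with illustrative classroom numbers (all assumptions, not figures from the article):

```python
# Estimate annual savings from display power management in one classroom.
# Every figure below is an illustrative assumption, not a measurement.
n_computers = 30
display_active_W = 35.0        # display awake
display_sleep_W = 1.0          # display in sleep mode
idle_hours_per_day = 4.0       # hours/day the displays sit idle
school_days = 180
rate_per_kWh = 0.12            # dollars per kWh

saved_W = display_active_W - display_sleep_W
saved_kWh = n_computers * saved_W * idle_hours_per_day * school_days / 1000.0
savings = saved_kWh * rate_per_kWh
print(f"{saved_kWh:.1f} kWh -> ${savings:.2f} per year")
```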

  13. Controlling High Power Devices with Computers or TTL Logic Circuits

    ERIC Educational Resources Information Center

    Carlton, Kevin

    2002-01-01

    Computers are routinely used to control experiments in modern science laboratories. This should be reflected in laboratories in an educational setting. There is a mismatch between the power that can be delivered by a computer interfacing card or a TTL logic circuit and that required by many practical pieces of laboratory equipment. One common way…

  14. The computational power of astrocyte mediated synaptic plasticity

    PubMed Central

    Min, Rogier; Santello, Mirko; Nevian, Thomas

    2012-01-01

    Research in the last two decades has made clear that astrocytes play a crucial role in the brain beyond their functions in energy metabolism and homeostasis. Many studies have shown that astrocytes can dynamically modulate neuronal excitability and synaptic plasticity, and might participate in higher brain functions like learning and memory. With the plethora of astrocyte-mediated signaling processes described in the literature today, the current challenge is to identify which of these processes happen under which physiological conditions, and how this shapes information processing and, ultimately, behavior. Answering these questions will require a combination of advanced physiological, genetic, and behavioral experiments. Additionally, mathematical modeling will prove crucial for testing predictions on the possible functions of astrocytes in neuronal networks, and for generating novel ideas as to how astrocytes can contribute to the complexity of the brain. Here, we aim to provide an outline of how astrocytes can interact with neurons. We do this by reviewing recent experimental literature on astrocyte-neuron interactions, discussing the dynamic effects of astrocytes on neuronal excitability and short- and long-term synaptic plasticity. Finally, we outline the potential computational functions that astrocyte-neuron interactions can serve in the brain. We discuss how astrocytes could govern metaplasticity in the brain, how they might organize the clustering of synaptic inputs, and how they could function as memory elements for neuronal activity. We conclude that astrocytes can enhance the computational power of neuronal networks in previously unexpected ways. PMID:23125832
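
    As a toy illustration of the kind of mathematical model the authors call for (not a model from the review), one can sketch a Hebbian synapse whose learning rate is gated by a slow astrocyte variable, a metaplasticity-like mechanism:

```python
import numpy as np

# Toy tripartite synapse: a slow astrocyte variable a(t) integrates synaptic
# activity and scales the Hebbian learning rate (metaplasticity-like gating).
# Entirely illustrative; all parameter values are arbitrary.
rng = np.random.default_rng(0)
dt, tau_a = 1.0, 50.0
w, a = 0.5, 0.0               # synaptic weight, astrocyte activity
eta0 = 0.01                   # base learning rate

for t in range(500):
    pre = rng.random() < 0.2          # Bernoulli presynaptic spike
    post = rng.random() < 0.2 * w     # postsynaptic firing scales with weight
    a += dt / tau_a * (-a + float(pre))          # astrocyte integrates activity
    w += eta0 * a * float(pre) * float(post)     # astrocyte-gated Hebbian term
    w = min(w, 1.0)

print(round(w, 3), round(a, 3))
```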

  16. Additives

    NASA Technical Reports Server (NTRS)

    Smalheer, C. V.

    1973-01-01

    The chemistry of lubricant additives is discussed to show what the additives are chemically and what functions they perform in the lubrication of various kinds of equipment. Current theories regarding the mode of action of lubricant additives are presented. The additive groups discussed include the following: (1) detergents and dispersants, (2) corrosion inhibitors, (3) antioxidants, (4) viscosity index improvers, (5) pour point depressants, and (6) antifouling agents.

  17. A Computational Workbench Environment For Virtual Power Plant Simulation

    SciTech Connect

    Bockelie, Michael J.; Swensen, David A.; Denison, Martin K.; Sarofim, Adel F.

    2001-11-06

    In this paper we describe our progress toward creating a computational workbench for performing virtual simulations of Vision 21 power plants. The workbench provides a framework for incorporating a full complement of models, ranging from simple heat/mass balance reactor models that run in minutes to detailed models that can require several hours to execute. The workbench is being developed using the SCIRun software system. To leverage a broad range of visualization tools the OpenDX visualization package has been interfaced to the workbench. In Year One our efforts have focused on developing a prototype workbench for a conventional pulverized coal fired power plant. The prototype workbench uses a CFD model for the radiant furnace box and reactor models for downstream equipment. In Year Two and Year Three, the focus of the project will be on creating models for gasifier based systems and implementing these models into an improved workbench. In this paper we describe our work effort for Year One and outline our plans for future work. We discuss the models included in the prototype workbench and the software design issues that have been addressed to incorporate such a diverse range of models into a single software environment. In addition, we highlight our plans for developing the energyplex based workbench that will be developed in Year Two and Year Three.

  18. A COMPUTATIONAL WORKBENCH ENVIRONMENT FOR VIRTUAL POWER PLANT SIMULATION

    SciTech Connect

    Mike Bockelie; Dave Swensen; Martin Denison; Zumao Chen; Mike Maguire; Adel Sarofim; Changguan Yang; Hong-Shig Shim

    2004-01-28

    This is the thirteenth Quarterly Technical Report for DOE Cooperative Agreement No: DE-FC26-00NT41047. The goal of the project is to develop and demonstrate a Virtual Engineering-based framework for simulating the performance of Advanced Power Systems. Within the last quarter, good progress has been made on all aspects of the project. Software development efforts have focused on a preliminary detailed software design for the enhanced framework. Given the complexity of the individual software tools from each team (i.e., Reaction Engineering International, Carnegie Mellon University, Iowa State University), a robust, extensible design is required for the success of the project. In addition to achieving a preliminary software design, significant progress has been made on several development tasks for the program. These include: (1) the enhancement of the controller user interface to support detachment from the Computational Engine and support for multiple computer platforms, (2) modification of the Iowa State University interface-to-kernel communication mechanisms to meet the requirements of the new software design, (3) decoupling of the Carnegie Mellon University computational models from their parent IECM (Integrated Environmental Control Model) user interface for integration with the new framework and (4) development of a new CORBA-based model interfacing specification. A benchmarking exercise to compare process and CFD based models for entrained flow gasifiers was completed. A summary of our work on intrinsic kinetics for modeling coal gasification has been completed. Plans for implementing soot and tar models into our entrained flow gasifier models are outlined. Plans for implementing a model for mercury capture based on conventional capture technology, but applied to an IGCC system, are outlined.

  19. Future computing platforms for science in a power constrained era

    DOE PAGES

    Abdurachmanov, David; Elmer, Peter; Eulisse, Giulio; ...

    2015-12-23

    Power consumption will be a key constraint on the future growth of Distributed High Throughput Computing (DHTC) as used by High Energy Physics (HEP). This makes performance-per-watt a crucial metric for selecting cost-efficient computing solutions. For this paper, we have done a wide survey of current and emerging architectures becoming available on the market including x86-64 variants, ARMv7 32-bit, ARMv8 64-bit, Many-Core and GPU solutions, as well as newer System-on-Chip (SoC) solutions. We compare performance and energy efficiency using an evolving set of standardized HEP-related benchmarks and power measurement techniques we have been developing. In conclusion, we evaluate the potential for use of such computing solutions in the context of DHTC systems, such as the Worldwide LHC Computing Grid (WLCG).
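
    The performance-per-watt metric at the center of the comparison is straightforward once benchmark scores and power measurements are in hand; a sketch with hypothetical placeholder numbers (not the paper's measurements):

```python
# Rank platforms by performance-per-watt. Benchmark scores and power draws
# below are hypothetical placeholders, not the paper's measured results.
platforms = {
    "x86-64": {"events_per_s": 520.0, "watts": 180.0},
    "ARMv8":  {"events_per_s": 210.0, "watts": 45.0},
    "SoC":    {"events_per_s": 95.0,  "watts": 12.0},
}

perf_per_watt = {name: p["events_per_s"] / p["watts"]
                 for name, p in platforms.items()}
ranking = sorted(perf_per_watt, key=perf_per_watt.get, reverse=True)
print(ranking)   # most energy-efficient platform first
```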

  2. Harmonic analysis of spacecraft power systems using a personal computer

    NASA Technical Reports Server (NTRS)

    Williamson, Frank; Sheble, Gerald B.

    1989-01-01

    The effects that nonlinear devices such as ac/dc converters, HVDC transmission links, and motor drives have on spacecraft power systems are discussed. The nonsinusoidal currents, along with the corresponding voltages, are calculated by a harmonic power flow which decouples and solves for each harmonic component individually using an iterative Newton-Raphson algorithm. The sparsity of the harmonic equations and the overall Jacobian matrix is used to advantage, saving computer memory and reducing computation time. The algorithm could also be modified to analyze each harmonic separately instead of all at the same time.
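
    The harmonic power flow solves each harmonic's nonlinear equations by Newton-Raphson iteration. A generic sketch of the method on a toy two-variable system (illustrative mismatch equations, not the paper's power flow equations):

```python
import numpy as np

def newton_raphson(f, jac, x0, tol=1e-10, max_iter=50):
    """Generic Newton-Raphson: solve f(x) = 0 via x <- x - J(x)^-1 f(x)."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        dx = np.linalg.solve(jac(x), -f(x))   # linear solve per iteration
        x = x + dx
        if np.max(np.abs(dx)) < tol:
            break
    return x

# Toy mismatch equations (illustrative, not a real power flow):
#   f1 = x0**2 + x1**2 - 4 = 0
#   f2 = x0 - x1 = 0          -> solution x0 = x1 = sqrt(2)
f = lambda x: np.array([x[0] ** 2 + x[1] ** 2 - 4.0, x[0] - x[1]])
jac = lambda x: np.array([[2 * x[0], 2 * x[1]], [1.0, -1.0]])

sol = newton_raphson(f, jac, [1.0, 0.5])
print(sol)   # approx [1.41421356, 1.41421356]
```

    In a sparse formulation, the dense `np.linalg.solve` would be replaced by a sparse factorization, which is where the memory and time savings noted above come from.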

  3. Computer controlled MHD power consolidation and pulse generation system

    SciTech Connect

    Johnson, R.; Marcotte, K.; Donnelly, M.

    1990-01-01

    The major goal of this research project is to establish the feasibility of a power conversion technology which will permit the direct synthesis of computer programmable pulse power. Feasibility has been established in this project by demonstration of direct synthesis of commercial frequency power by means of computer control. The power input to the conversion system is assumed to be a Faraday connected MHD generator, which may be viewed as a multi-terminal dc source and is simulated for the purpose of this demonstration by a set of dc power supplies. This consolidation/inversion (CI) process will be referred to subsequently as Pulse Amplitude Synthesis and Control (PASC). A secondary goal is to deliver a controller subsystem consisting of a computer, software, and computer interface board which can serve as one of the building blocks for a possible phase II prototype system. This report summarizes the accomplishments and covers the high points of the two-year project. 6 refs., 41 figs.
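
    The pulse amplitude synthesis idea, building a programmable waveform by switching among consolidated dc source levels, can be sketched as nearest-level synthesis (purely illustrative; not the project's PASC controller):

```python
import numpy as np

# Pulse-amplitude synthesis sketch: approximate a 60 Hz sine by selecting,
# at each control step, the nearest level obtainable from a stack of dc
# sources (here, levels spaced 25 V apart from -200 V to +200 V).
# Purely illustrative; this is not the project's control algorithm.
levels = np.arange(-200.0, 201.0, 25.0)       # attainable consolidated levels
t = np.linspace(0.0, 1 / 60, 360, endpoint=False)
target = 170.0 * np.sin(2 * np.pi * 60 * t)   # desired instantaneous voltage

synth = levels[np.argmin(np.abs(target[:, None] - levels[None, :]), axis=1)]
max_err = float(np.max(np.abs(synth - target)))
print(max_err)   # bounded by half a level step, i.e. at most 12.5 V
```

    Finer level spacing (more consolidated sources) shrinks the worst-case error proportionally, which is the appeal of multi-terminal consolidation.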

  4. Modeling and Analysis of Power Processing Systems. [use of a digital computer for designing power plants

    NASA Technical Reports Server (NTRS)

    Fegley, K. A.; Hayden, J. H.; Rehmann, D. W.

    1974-01-01

    The feasibility of formulating a methodology for the modeling and analysis of aerospace electrical power processing systems is investigated. It is shown that a digital computer may be used in an interactive mode for the design, modeling, analysis, and comparison of power processing systems.

  5. Computed lateral rate and acceleration power spectral response of conventional and STOL airplanes to atmospheric turbulence

    NASA Technical Reports Server (NTRS)

    Lichtenstein, J. H.

    1975-01-01

    Power-spectral-density calculations were made of the lateral responses to atmospheric turbulence for several conventional and short take-off and landing (STOL) airplanes. The turbulence was modeled as three orthogonal velocity components, which were uncorrelated, and each was represented with a one-dimensional power spectrum. Power spectral densities were computed for displacements, rates, and accelerations in roll, yaw, and sideslip. In addition, the power spectral density of the transverse acceleration was computed. Evaluation of ride quality based on a specific ride quality criterion was also made. The results show that the STOL airplanes generally had larger values for the rate and acceleration power spectra (and, consequently, larger corresponding root-mean-square values) than the conventional airplanes. The ride quality criterion gave poorer ratings to the STOL airplanes than to the conventional airplanes.
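
    The root-mean-square values quoted above follow from integrating each response power spectral density over frequency, sigma^2 = integral of Phi(Omega). A sketch of that step using the standard one-sided Dryden transverse gust spectrum with illustrative parameters (not values from the report):

```python
import numpy as np

# One-sided Dryden transverse gust PSD Phi_v(Omega); integrating it over
# 0..infinity recovers the gust variance sigma**2 -- the same operation
# that turns each response PSD into a root-mean-square value.
# sigma (rms gust) and L (scale length) are illustrative, not the report's.
sigma, L = 3.0, 533.0

def phi_v(Omega):
    u2 = (L * Omega) ** 2
    return sigma ** 2 * L / np.pi * (1.0 + 3.0 * u2) / (1.0 + u2) ** 2

Omega = np.linspace(1e-6, 2.0, 400_000)   # effectively covers 0..inf here
y = phi_v(Omega)
variance = float(np.sum((y[1:] + y[:-1]) * np.diff(Omega)) / 2.0)  # trapezoid
rms = float(np.sqrt(variance))
print(rms)   # close to sigma = 3.0
```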

  6. Limits on the Power of Some Models of Quantum Computation

    NASA Astrophysics Data System (ADS)

    Ortiz, Gerardo; Somma, Rolando; Barnum, Howard; Knill, Emanuel

    2006-09-01

    We consider quantum computational models defined via a Lie-algebraic theory. In these models, specified initial states are acted on by Lie-algebraic quantum gates and the expectation values of Lie algebra elements are measured at the end. We show that these models can be efficiently simulated on a classical computer in time polynomial in the dimension of the algebra, regardless of the dimension of the Hilbert space where the algebra acts. Similar results hold for the computation of the expectation value of operators implemented by a gate-sequence. We introduce a Lie-algebraic notion of generalized mean-field Hamiltonians and show that they are efficiently (exactly) solvable by means of a Jacobi-like diagonalization method. Our results generalize earlier ones on fermionic linear optics computation and provide insight into the source of the power of the conventional model of quantum computation.
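
    The efficient-simulation result can be illustrated in the smallest case, the algebra su(2): evolving the expectation values of the Pauli operators under H = (theta/2) n.sigma is a 3x3 rotation in the adjoint representation, so the cost scales with the algebra dimension (3) rather than the Hilbert-space dimension. A sketch checking that the two pictures agree:

```python
import numpy as np

# State |0> has Bloch vector (0, 0, 1). Evolution under H = (theta/2) n.sigma
# rotates the Bloch vector by angle theta about axis n (Rodrigues formula),
# a purely 3x3 "adjoint" computation. We verify against the 2x2 Hilbert-space
# evolution. A toy su(2) instance of the paper's general result.
theta = 0.7
n = np.array([0.0, 1.0, 0.0])          # rotation axis: y

# --- Lie-algebraic (adjoint) side: rotate the Bloch vector directly ---
K = np.array([[0, -n[2], n[1]], [n[2], 0, -n[0]], [-n[1], n[0], 0]])
R = np.eye(3) + np.sin(theta) * K + (1 - np.cos(theta)) * (K @ K)
bloch = R @ np.array([0.0, 0.0, 1.0])

# --- Hilbert-space side: evolve |0> with U = exp(-i theta n.sigma / 2) ---
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.array([[1, 0], [0, -1]], dtype=complex)
nsig = n[0] * sx + n[1] * sy + n[2] * sz
U = np.cos(theta / 2) * np.eye(2) - 1j * np.sin(theta / 2) * nsig
psi = U @ np.array([1.0, 0.0], dtype=complex)
expect = np.real([psi.conj() @ P @ psi for P in (sx, sy, sz)])

print(np.round(bloch, 6), np.round(expect, 6))
```

    For an n-qubit generalized mean-field model the same idea applies: the adjoint computation stays polynomial in the algebra dimension while the Hilbert space grows exponentially.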

  8. Jaguar: The World's Most Powerful Computer

    SciTech Connect

    Bland, Arthur S Buddy; Rogers, James H; Kendall, Ricky A; Kothe, Douglas B; Shipman, Galen M

    2009-01-01

    The Cray XT system at ORNL is the world's most powerful computer with several applications exceeding one-petaflops performance. This paper describes the architecture of Jaguar with combined XT4 and XT5 nodes along with an external Lustre file system and external login nodes. We also present some early results from Jaguar.

  9. GridPACK Toolkit for Developing Power Grid Simulations on High Performance Computing Platforms

    SciTech Connect

    Palmer, Bruce J.; Perkins, William A.; Glass, Kevin A.; Chen, Yousu; Jin, Shuangshuang; Callahan, Charles D.

    2013-11-30

    This paper describes the GridPACK™ framework, which is designed to help power grid engineers develop modeling software capable of running on today's high performance computers. The framework contains modules for setting up distributed power grid networks, assigning buses and branches with arbitrary behaviors to the network, creating distributed matrices and vectors, using parallel linear and non-linear solvers to solve algebraic equations, and mapping functionality to create matrices and vectors based on properties of the network. In addition, the framework contains functionality to support IO and to manage errors.
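
    The "map network properties to matrices" step has a classic instance: assembling the bus admittance matrix Ybus from a branch list. A generic sketch of that construction (standard power-systems practice, not GridPACK's actual API):

```python
import numpy as np

# Build the bus admittance matrix Ybus from a branch list: each branch
# (from-bus, to-bus, series admittance y) adds y to both diagonal entries
# and -y to the two off-diagonal entries. Generic construction; the bus
# indices and admittance values below are illustrative.
n_bus = 3
branches = [(0, 1, 1.0 - 4.0j), (1, 2, 0.5 - 2.0j), (0, 2, 1.0 - 3.0j)]

Y = np.zeros((n_bus, n_bus), dtype=complex)
for i, j, y in branches:
    Y[i, i] += y
    Y[j, j] += y
    Y[i, j] -= y
    Y[j, i] -= y

print(Y)
```

    In a distributed framework, each process would assemble only the rows for the buses it owns; the mapping from network components to matrix entries is the same.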

  10. High Performance Computing - Power Application Programming Interface Specification.

    SciTech Connect

    Laros, James H.,; Kelly, Suzanne M.; Pedretti, Kevin; Grant, Ryan; Olivier, Stephen Lecler; Levenhagen, Michael J.; DeBonis, David

    2014-08-01

    Measuring and controlling the power and energy consumption of high performance computing systems by various components in the software stack is an active research area [13, 3, 5, 10, 4, 21, 19, 16, 7, 17, 20, 18, 11, 1, 6, 14, 12]. Implementations in lower level software layers are beginning to emerge in some production systems, which is very welcome. To be most effective, a portable interface to measurement and control features would significantly facilitate participation by all levels of the software stack. We present a proposal for a standard power Application Programming Interface (API) that endeavors to cover the entire software space, from generic hardware interfaces to the input from the computer facility manager.
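
    The shape of such a portable measurement/control layer can be sketched as a small abstract interface with a stub backend. All names below are invented for illustration; they are not identifiers from the actual Power API specification:

```python
# Hypothetical sketch of a portable power measurement/control interface, in
# the spirit of a layered power API. Every name here is invented for
# illustration and is not part of the real specification.
from abc import ABC, abstractmethod

class PowerObject(ABC):
    """One measurable/controllable component (node, board, socket, ...)."""
    @abstractmethod
    def read_power_watts(self) -> float: ...
    @abstractmethod
    def set_power_cap_watts(self, cap: float) -> None: ...

class FakeNode(PowerObject):
    """Stand-in backend so the sketch is runnable without hardware."""
    def __init__(self):
        self._cap = float("inf")
    def read_power_watts(self) -> float:
        return min(210.0, self._cap)    # pretend draw, clipped by the cap
    def set_power_cap_watts(self, cap: float) -> None:
        self._cap = cap

node = FakeNode()
before = node.read_power_watts()
node.set_power_cap_watts(150.0)         # e.g. a facility manager enforces a cap
after = node.read_power_watts()
print(before, after)
```

    The point of such a layer is that schedulers, runtimes, and facility tools all program against the same interface while vendors supply the backends.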

  11. Design of Power Quality Monitor Based on Embedded Industrial Computer

    NASA Astrophysics Data System (ADS)

    Junfeng, Huang; Hao, Sun; Xiaolin, Wei

    A design for an electric power quality monitoring device based on an embedded industrial computer is proposed, and the framework and algorithms of the device are introduced. Because harmonic disturbances are present, a windowing scheme combined with interpolation is used to improve detection accuracy. A graphical interface was also developed with the Delphi programming tool. Experiments confirm that the device is reliable and practical.

  12. Li-ion synaptic transistor for low power analog computing

    SciTech Connect

    Fuller, Elliot J.; Gabaly, Farid El; Leonard, Francois; Agarwal, Sapan; Plimpton, Steven J.; Jacobs-Gedrim, Robin B.; James, Conrad D.; Marinella, Matthew J.; Talin, Albert Alec

    2016-11-22

    Nonvolatile redox transistors (NVRTs) based upon Li-ion battery materials are demonstrated as memory elements for neuromorphic computer architectures with multi-level analog states, “write” linearity, low-voltage switching, and low power dissipation. Simulations of back propagation using the device properties reach ideal classification accuracy. Finally, physics-based simulations predict energy costs per “write” operation of <10 aJ when scaled to 200 nm × 200 nm.

  13. Power/energy use cases for high performance computing.

    SciTech Connect

    Laros, James H.,; Kelly, Suzanne M; Hammond, Steven; Elmore, Ryan; Munch, Kristin

    2013-12-01

    Power and energy have been identified as a first order challenge for future extreme scale high performance computing (HPC) systems. In practice the breakthroughs will need to be provided by the hardware vendors. But to make the best use of the solutions in an HPC environment, it will likely require periodic tuning by facility operators and software components. This document describes the actions and interactions needed to maximize power resources. It strives to cover the entire operational space that an HPC system occupies. The descriptions are presented as formal use cases, as documented in the Unified Modeling Language Specification [1]. The document is intended to provide a common understanding to the HPC community of the necessary management and control capabilities. Assuming a common understanding can be achieved, the next step will be to develop a set of Application Programming Interfaces (APIs) that hardware vendors and software developers could use to steer power consumption.

  14. Rotating Detonation Combustion: A Computational Study for Stationary Power Generation

    NASA Astrophysics Data System (ADS)

    Escobar, Sergio

    The increased availability of gaseous fossil fuels in the US has led to the substantial growth of stationary Gas Turbine (GT) usage for electrical power generation. In fact, from 2013 to 2014, of the 11 terawatt-hours per day produced from fossil fuels, approximately 27% was generated through the combustion of natural gas in stationary GTs. The thermodynamic efficiency of simple-cycle GTs has increased from 20% to 40% during the last six decades, mainly due to research and development in the fields of combustion science, material science and machine design. However, additional improvements have become more costly and more difficult to obtain as technology is further refined. An alternative way to improve GT thermal efficiency is the implementation of a combustion regime leading to pressure gain, rather than pressure loss, across the combustor. One concept being considered for this purpose is Rotating Detonation Combustion (RDC). RDC refers to a combustion regime in which a detonation wave propagates continuously in the azimuthal direction of a cylindrical annular chamber. In RDC, the fuel and oxidizer, injected from separate streams, are mixed near the injection plane and are then consumed by the detonation front traveling inside the annular gap of the combustion chamber. The detonation products then expand in the azimuthal and axial directions away from the detonation front and exit through the combustion chamber outlet. In the present study, Computational Fluid Dynamics (CFD) is used to predict the performance of RDC at operating conditions relevant to GT applications. As part of this study, a modeling strategy for RDC simulations was developed. The validation of the model was performed using benchmark cases with different levels of complexity. First, 2D simulations of non-reactive shock tubes and detonation tubes were performed. The numerical predictions that were obtained using different modeling parameters were compared with

  15. Influence of edge additions on the synchronizability of oscillatory power networks

    NASA Astrophysics Data System (ADS)

    Yang, Li-xin; Jiang, Jun; Liu, Xiao-jun

    2016-12-01

    The influence of the number and distance of added edges on the synchronization of oscillatory power networks is investigated. We study how the addition of new links affects the emergence of synchrony in oscillatory power networks, focusing on ring and tree-like topologies. Numerical simulations show that the distance of the added edges, whether homogeneous (generator-to-generator or consumer-to-consumer) or heterogeneous (generator-to-consumer and vice versa), has no obvious impact on the synchronizability of the network. For the number of added edges, however, it is observed that the larger the number of heterogeneous added edges, the stronger the synchronizability of the power network, whereas the number of homogeneous added edges does not affect synchronizability.
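
    The effect of edge additions on synchronizability can be illustrated with a common structural proxy: the algebraic connectivity (the second-smallest Laplacian eigenvalue), which generally grows as links are added. This is a sketch using that proxy on the topologies the abstract mentions (a tree-like chain closed into a ring), not the paper's full second-order oscillator model:

```python
import numpy as np

def laplacian(n, edges):
    """Graph Laplacian L = D - A of an undirected graph on n nodes."""
    A = np.zeros((n, n))
    for i, j in edges:
        A[i, j] = A[j, i] = 1.0
    return np.diag(A.sum(axis=1)) - A

def algebraic_connectivity(n, edges):
    """lambda_2 of the Laplacian: larger values indicate a network
    that is structurally easier to synchronize."""
    eig = np.linalg.eigvalsh(laplacian(n, edges))  # sorted ascending
    return eig[1]

n = 10
chain = [(i, i + 1) for i in range(n - 1)]       # tree-like topology
chain_l2 = algebraic_connectivity(n, chain)
ring_l2 = algebraic_connectivity(n, chain + [(0, n - 1)])  # add one edge
print(chain_l2, ring_l2)  # closing the chain into a ring raises lambda_2
```

    A full study in the spirit of the paper would instead integrate swing-equation dynamics on each topology and measure the order parameter, but the Laplacian spectrum already shows why added edges can strengthen synchrony.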

  16. Reducing power consumption during execution of an application on a plurality of compute nodes

    DOEpatents

    Archer, Charles J [Rochester, MN; Blocksome, Michael A [Rochester, MN; Peters, Amanda E [Rochester, MN; Ratterman, Joseph D [Rochester, MN; Smith, Brian E [Rochester, MN

    2012-06-05

    Methods, apparatus, and products are disclosed for reducing power consumption during execution of an application on a plurality of compute nodes that include: executing, by each compute node, an application, the application including power consumption directives corresponding to one or more portions of the application; identifying, by each compute node, the power consumption directives included within the application during execution of the portions of the application corresponding to those identified power consumption directives; and reducing power, by each compute node, to one or more components of that compute node according to the identified power consumption directives during execution of the portions of the application corresponding to those identified power consumption directives.

  17. Profiling an application for power consumption during execution on a compute node

    DOEpatents

    Archer, Charles J; Blocksome, Michael A; Peters, Amanda E; Ratterman, Joseph D; Smith, Brian E

    2013-09-17

    Methods, apparatus, and products are disclosed for profiling an application for power consumption during execution on a compute node that include: receiving an application for execution on a compute node; identifying a hardware power consumption profile for the compute node, the hardware power consumption profile specifying power consumption for compute node hardware during performance of various processing operations; determining a power consumption profile for the application in dependence upon the application and the hardware power consumption profile for the compute node; and reporting the power consumption profile for the application.

  18. Profiling an application for power consumption during execution on a plurality of compute nodes

    DOEpatents

    Archer, Charles J.; Blocksome, Michael A.; Peters, Amanda E.; Ratterman, Joseph D.; Smith, Brian E.

    2012-08-21

    Methods, apparatus, and products are disclosed for profiling an application for power consumption during execution on a compute node that include: receiving an application for execution on a compute node; identifying a hardware power consumption profile for the compute node, the hardware power consumption profile specifying power consumption for compute node hardware during performance of various processing operations; determining a power consumption profile for the application in dependence upon the application and the hardware power consumption profile for the compute node; and reporting the power consumption profile for the application.

  19. CHARMM additive and polarizable force fields for biophysics and computer-aided drug design

    PubMed Central

    Vanommeslaeghe, K.

    2014-01-01

    Background Molecular Mechanics (MM) is the method of choice for computational studies of biomolecular systems owing to its modest computational cost, which makes it possible to routinely perform molecular dynamics (MD) simulations on chemical systems of biophysical and biomedical relevance. Scope of Review As one of the main factors limiting the accuracy of MD results is the empirical force field used, the present paper offers a review of recent developments in the CHARMM additive force field, one of the most popular biomolecular force fields. Additionally, we present a detailed discussion of the CHARMM Drude polarizable force field, anticipating a growth in the importance and utilization of polarizable force fields in the near future. Throughout the discussion emphasis is placed on the force fields’ parametrization philosophy and methodology. Major Conclusions Recent improvements in the CHARMM additive force field are mostly related to newly found weaknesses in the previous generation of additive force fields. Beyond the additive approximation is the newly available CHARMM Drude polarizable force field, which allows for MD simulations of up to 1 microsecond on proteins, DNA, lipids and carbohydrates. General Significance Addressing the limitations ensures the reliability of the new CHARMM36 additive force field for the types of calculations that are presently coming into routine computational reach, while the availability of the Drude polarizable force field offers an inherently more accurate model of the underlying physical forces driving macromolecular structures and dynamics. PMID:25149274

  20. Utilizing a Collaborative Cross Number Puzzle Game to Develop the Computing Ability of Addition and Subtraction

    ERIC Educational Resources Information Center

    Chen, Yen-Hua; Looi, Chee-Kit; Lin, Chiu-Pin; Shao, Yin-Juan; Chan, Tak-Wai

    2012-01-01

    While addition and subtraction is a key mathematical skill for young children, a typical activity for them in classrooms involves doing repetitive arithmetic calculation exercises. In this study, we explore a collaborative way for students to learn these skills in a technology-enabled way with wireless computers. Two classes, comprising a total of…

  1. Graphene as conductive additives in binderless activated carbon electrodes for power enhancement of supercapacitor

    NASA Astrophysics Data System (ADS)

    Nor, N. S. M.; Deraman, M.; Suleman, M.; Norizam, M. D. M.; Basri, N. H.; Sazali, N. E. S.; Hamdan, E.; Hanappi, M. F. Y. M.; Tajuddin, N. S. M.; Othman, M. A. R.; Shamsudin, S. A.; Omar, R.

    2016-11-01

    Carbon-based supercapacitor electrodes made from a composite of binderless activated carbon and graphene as a conductive additive were fabricated with various amounts of graphene (0, 2, 4, 6, 8 and 10 wt%). Graphene was mixed into self-adhesive carbon grains produced from pre-carbonized powder derived from fibers of oil palm empty fruit bunches and converted into green monoliths (GMs). The GMs were carbonized (N2) and activated (CO2) to produce activated carbon monolith (ACM) electrodes. Porosity characterization by the nitrogen adsorption-desorption isotherm method shows that the pore characteristics of the ACMs are influenced by the graphene additive. The results of galvanostatic charge-discharge tests carried out on the supercapacitor cells fabricated using these electrodes show that the addition of graphene (even in small amounts) decreases the equivalent series resistance and enhances the specific power of the cells but significantly lowers the specific capacitance. The supercapacitor cell constructed with the electrode containing 4 wt% of graphene offers the maximum power (175 W kg⁻¹), which corresponds to an improvement of 55%. These results demonstrate that the addition of graphene as a conductive additive in activated carbon electrodes can enhance the specific power of the supercapacitor.

  2. PSD computations using Welch's method. [Power Spectral Density (PSD)

    SciTech Connect

    Solomon, Jr, O M

    1991-12-01

    This report describes Welch's method for computing Power Spectral Densities (PSDs). We first describe the bandpass filter method, which uses filtering, squaring, and averaging operations to estimate a PSD. Second, we delineate the relationship of Welch's method to the bandpass filter method. Third, the frequency domain signal-to-noise ratio for a sine wave in white noise is derived. This derivation includes the computation of the noise floor due to quantization noise. The signal-to-noise ratio and noise floor depend on the FFT length and window. Fourth, the variance of the Welch PSD estimate is discussed via chi-square random variables and degrees of freedom. This report contains many examples, figures and tables to illustrate the concepts. 26 refs.
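
    The segmenting, windowing, and averaging steps that define Welch's method can be written out directly. This is a minimal from-scratch sketch for illustration; production code would normally call a vetted library routine such as SciPy's `signal.welch`:

```python
import numpy as np

def welch_psd(x, fs, nperseg=256, overlap=0.5):
    """Welch PSD estimate: split x into overlapping segments, window
    each one, and average the scaled squared FFT magnitudes (one-sided)."""
    step = int(nperseg * (1 - overlap))
    win = np.hanning(nperseg)
    scale = 1.0 / (fs * (win ** 2).sum())   # window power normalization
    periodograms = []
    for start in range(0, len(x) - nperseg + 1, step):
        seg = x[start:start + nperseg] * win
        spec = np.abs(np.fft.rfft(seg)) ** 2 * scale
        spec[1:-1] *= 2                     # fold in negative frequencies
        periodograms.append(spec)
    f = np.fft.rfftfreq(nperseg, 1.0 / fs)
    return f, np.mean(periodograms, axis=0)  # averaging reduces variance

# Sine wave in white noise, the report's running example
fs = 1024.0
t = np.arange(8192) / fs
rng = np.random.default_rng(0)
x = np.sin(2 * np.pi * 100.0 * t) + 0.1 * rng.standard_normal(t.size)
f, pxx = welch_psd(x, fs)
print(f[np.argmax(pxx)])  # peak at the 100 Hz bin
```

    Averaging 50%-overlapped windowed periodograms trades frequency resolution (bin width fs/nperseg) for a lower-variance estimate, which is exactly the chi-square degrees-of-freedom argument the report develops.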

  3. High-power laser arrays for optical computing

    NASA Astrophysics Data System (ADS)

    Zucker, Erik P.; Craig, Richard R.; Mehuys, David G.; Nam, Derek W.; Welch, David F.; Scifres, Donald R.

    1991-12-01

    We demonstrate both common electrode and addressable arrays of single mode semiconductor lasers suitable for optical computing and optical data storage. In the common electrode geometry, eight lasers have been fabricated on a single chip which show excellent spectral and power uniformity. Total optical power obtained from this array has been in excess of 1.2 Watts CW. We have also fabricated two and nine element monolithic, individually addressable arrays with emitter spacings between 10 μm and 150 μm. Separately addressed, each element emits in a single spatial mode to greater than 0.1 Watts. For the nine element array, uniformity of better than 1.0 nanometer in wavelength and 1 milliamp in operating current across the array has been obtained. Results on crosstalk and reliability of the arrays are presented.

  4. A COMPUTATIONAL WORKBENCH ENVIRONMENT FOR VIRTUAL POWER PLANT SIMULATION

    SciTech Connect

    Mike Bockelie; Dave Swensen; Martin Denison; Connie Senior; Adel Sarofim; Bene Risio

    2002-07-28

    This is the seventh Quarterly Technical Report for DOE Cooperative Agreement No.: DE-FC26-00NT41047. The goal of the project is to develop and demonstrate a computational workbench for simulating the performance of Vision 21 Power Plant Systems. Within the last quarter, good progress has been made on the development of the IGCC workbench. A series of parametric CFD simulations for single stage and two stage generic gasifier configurations have been performed. An advanced flowing slag model has been implemented into the CFD based gasifier model. A literature review has been performed on published gasification kinetics. Reactor models have been developed and implemented into the workbench for the majority of the heat exchangers, gas clean up system and power generation system for the Vision 21 reference configuration. Modifications to the software infrastructure of the workbench have been commenced to allow interfacing to the workbench reactor models that utilize the CAPE-Open software interface protocol.

  5. Additivity property and emergence of power laws in nonequilibrium steady states.

    PubMed

    Das, Arghya; Chatterjee, Sayani; Pradhan, Punyabrata; Mohanty, P K

    2015-11-01

    We show that an equilibriumlike additivity property can remarkably lead to power-law distributions observed frequently in a wide class of out-of-equilibrium systems. The additivity property can determine the full scaling form of the distribution functions and the associated exponents. The asymptotic behavior of these distributions is solely governed by branch-cut singularity in the variance of subsystem mass. To substantiate these claims, we explicitly calculate, using the additivity property, subsystem mass distributions in a wide class of previously studied mass aggregation models as well as in their variants. These results could help in the thermodynamic characterization of nonequilibrium critical phenomena.

  6. Improving the predictive accuracy of hurricane power outage forecasts using generalized additive models.

    PubMed

    Han, Seung-Ryong; Guikema, Seth D; Quiring, Steven M

    2009-10-01

    Electric power is a critical infrastructure service after hurricanes, and rapid restoration of electric power is important in order to minimize losses in the impacted areas. However, rapid restoration of electric power after a hurricane depends on obtaining the necessary resources, primarily repair crews and materials, before the hurricane makes landfall and then appropriately deploying these resources as soon as possible after the hurricane. This, in turn, depends on having sound estimates of both the overall severity of the storm and the relative risk of power outages in different areas. Past studies have developed statistical, regression-based approaches for estimating the number of power outages in advance of an approaching hurricane. However, these approaches have either not been applicable for future events or have had lower predictive accuracy than desired. This article shows that a different type of regression model, a generalized additive model (GAM), can outperform the types of models used previously. This is done by developing and validating a GAM based on power outage data during past hurricanes in the Gulf Coast region and comparing the results from this model to the previously used generalized linear models.
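
    The additive-model idea behind a GAM can be illustrated with the classical backfitting algorithm on simulated data. The smoother and predictors below are toy stand-ins for illustration, not the hurricane covariates or spline smoothers a real outage GAM would use:

```python
import numpy as np

def smooth(x, r, deg=3):
    """Toy smoother: low-degree polynomial fit of residual r against x.
    A real GAM would use penalized splines or local regression here."""
    return np.polyval(np.polyfit(x, r, deg), x)

def backfit(X, y, iters=20):
    """Fit y ~ alpha + sum_j f_j(X[:, j]) by cycling over predictors,
    smoothing each partial residual in turn (backfitting)."""
    n, p = X.shape
    alpha = y.mean()
    f = np.zeros((n, p))
    for _ in range(iters):
        for j in range(p):
            partial = y - alpha - f.sum(axis=1) + f[:, j]
            f[:, j] = smooth(X[:, j], partial)
            f[:, j] -= f[:, j].mean()   # identifiability constraint
    return alpha, f

rng = np.random.default_rng(1)
X = rng.uniform(-1, 1, size=(500, 2))   # two hypothetical covariates
y = np.sin(3 * X[:, 0]) + X[:, 1] ** 2 + 0.1 * rng.standard_normal(500)
alpha, f = backfit(X, y)
resid = y - alpha - f.sum(axis=1)
print(resid.var() / y.var())  # small: the additive fit captures both effects
```

    The advantage the article reports follows from this structure: each covariate's effect is an arbitrary smooth function rather than the fixed linear (or log-linear) term a GLM would impose.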

  7. Budget-based power consumption for application execution on a plurality of compute nodes

    DOEpatents

    Archer, Charles J; Inglett, Todd A; Ratterman, Joseph D

    2012-10-23

    Methods, apparatus, and products are disclosed for budget-based power consumption for application execution on a plurality of compute nodes that include: assigning an execution priority to each of one or more applications; executing, on the plurality of compute nodes, the applications according to the execution priorities assigned to the applications at an initial power level provided to the compute nodes until a predetermined power consumption threshold is reached; and applying, upon reaching the predetermined power consumption threshold, one or more power conservation actions to reduce power consumption of the plurality of compute nodes during execution of the applications.
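
    The claimed sequence of steps, priority-ordered execution at an initial power level until a consumption threshold triggers a conservation action, can be mocked up as a toy simulation. All names and numbers below are illustrative, not taken from the patent:

```python
def run_with_budget(apps, initial_power, budget, reduced_power):
    """apps: list of (name, priority, runtime_steps). Applications run in
    descending priority order; once cumulative consumption reaches the
    budget, a conservation action lowers the node power level.
    Returns the (app, power level) used at each time step."""
    trace = []
    consumed = 0.0
    level = initial_power
    for name, _priority, steps in sorted(apps, key=lambda a: -a[1]):
        for _ in range(steps):
            if consumed >= budget and level == initial_power:
                level = reduced_power   # the power conservation action
            trace.append((name, level))
            consumed += level
    return trace

apps = [("solver", 2, 4), ("logger", 1, 4)]
trace = run_with_budget(apps, initial_power=100.0,
                        budget=500.0, reduced_power=60.0)
print(trace)
```

    In this run the high-priority solver finishes at full power; the budget is exhausted partway through the logger, which then completes at the reduced level.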

  8. Budget-based power consumption for application execution on a plurality of compute nodes

    DOEpatents

    Archer, Charles J; Blocksome, Michael A; Peters, Amanda E; Ratterman, Joseph D; Smith, Brian E

    2013-02-05

    Methods, apparatus, and products are disclosed for budget-based power consumption for application execution on a plurality of compute nodes that include: assigning an execution priority to each of one or more applications; executing, on the plurality of compute nodes, the applications according to the execution priorities assigned to the applications at an initial power level provided to the compute nodes until a predetermined power consumption threshold is reached; and applying, upon reaching the predetermined power consumption threshold, one or more power conservation actions to reduce power consumption of the plurality of compute nodes during execution of the applications.

  9. Quantum ring-polymer contraction method: Including nuclear quantum effects at no additional computational cost in comparison to ab initio molecular dynamics

    NASA Astrophysics Data System (ADS)

    John, Christopher; Spura, Thomas; Habershon, Scott; Kühne, Thomas D.

    2016-04-01

    We present a simple and accurate computational method which facilitates ab initio path-integral molecular dynamics simulations, where the quantum-mechanical nature of the nuclei is explicitly taken into account, at essentially no additional computational cost in comparison to the corresponding calculation using classical nuclei. The predictive power of the proposed quantum ring-polymer contraction method is demonstrated by computing various static and dynamic properties of liquid water at ambient conditions using density functional theory. This development will enable routine inclusion of nuclear quantum effects in ab initio molecular dynamics simulations of condensed-phase systems.

  10. A COMPUTATIONAL WORKBENCH ENVIRONMENT FOR VIRTUAL POWER PLANT SIMULATION

    SciTech Connect

    Mike Bockelie; Dave Swensen; Martin Denison; Connie Senior; Zumao Chen; Temi Linjewile; Adel Sarofim; Bene Risio

    2003-04-25

    This is the tenth Quarterly Technical Report for DOE Cooperative Agreement No: DE-FC26-00NT41047. The goal of the project is to develop and demonstrate a computational workbench for simulating the performance of Vision 21 Power Plant Systems. Within the last quarter, good progress has been made on all aspects of the project. Calculations for a full Vision 21 plant configuration have been performed for two gasifier types. An improved process model for simulating entrained flow gasifiers has been implemented into the workbench. Model development has focused on: a pre-processor module to compute global gasification parameters from standard fuel properties and intrinsic rate information; a membrane based water gas shift; and reactors to oxidize fuel cell exhaust gas. The data visualization capabilities of the workbench have been extended by implementing the VTK visualization software that supports advanced visualization methods, including inexpensive Virtual Reality techniques. The ease-of-use, functionality and plug-and-play features of the workbench were highlighted through demonstrations of the workbench at a DOE sponsored coal utilization conference. A white paper has been completed that contains recommendations on the use of component architectures, model interface protocols and software frameworks for developing a Vision 21 plant simulator.

  11. Addition of flexible body option to the TOLA computer program, part 1

    NASA Technical Reports Server (NTRS)

    Dick, J. W.; Benda, B. J.

    1975-01-01

    This report describes a flexible body option that was developed and added to the Takeoff and Landing Analysis (TOLA) computer program. The addition of the flexible body option to TOLA allows it to be used to study essentially any conventional type of airplane in the ground operating environment. It provides the capability to predict the total motion of selected points on the airplane. The analytical methods incorporated in the program and operating instructions for the option are described. A program listing is included along with several example problems to aid in interpretation of the operating instructions and to illustrate program usage.

  12. 47 CFR 15.32 - Test procedures for CPU boards and computer power supplies.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... 47 Telecommunication 1 2010-10-01 2010-10-01 false Test procedures for CPU boards and computer... FREQUENCY DEVICES General § 15.32 Test procedures for CPU boards and computer power supplies. Power supplies and CPU boards used with personal computers and for which separate authorizations are required to...

  13. 47 CFR 15.32 - Test procedures for CPU boards and computer power supplies.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... 47 Telecommunication 1 2011-10-01 2011-10-01 false Test procedures for CPU boards and computer... FREQUENCY DEVICES General § 15.32 Test procedures for CPU boards and computer power supplies. Power supplies and CPU boards used with personal computers and for which separate authorizations are required to...

  14. 47 CFR 15.102 - CPU boards and power supplies used in personal computers.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... computers. 15.102 Section 15.102 Telecommunication FEDERAL COMMUNICATIONS COMMISSION GENERAL RADIO FREQUENCY DEVICES Unintentional Radiators § 15.102 CPU boards and power supplies used in personal computers. (a... modifications that must be made to a personal computer, peripheral device, CPU board or power supply...

  15. 47 CFR 15.102 - CPU boards and power supplies used in personal computers.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... computers. 15.102 Section 15.102 Telecommunication FEDERAL COMMUNICATIONS COMMISSION GENERAL RADIO FREQUENCY DEVICES Unintentional Radiators § 15.102 CPU boards and power supplies used in personal computers. (a... modifications that must be made to a personal computer, peripheral device, CPU board or power supply...

  16. 47 CFR 15.32 - Test procedures for CPU boards and computer power supplies.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... 47 Telecommunication 1 2014-10-01 2014-10-01 false Test procedures for CPU boards and computer... FREQUENCY DEVICES General § 15.32 Test procedures for CPU boards and computer power supplies. Power supplies and CPU boards used with personal computers and for which separate authorizations are required to...

  17. 47 CFR 15.32 - Test procedures for CPU boards and computer power supplies.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... 47 Telecommunication 1 2013-10-01 2013-10-01 false Test procedures for CPU boards and computer... FREQUENCY DEVICES General § 15.32 Test procedures for CPU boards and computer power supplies. Power supplies and CPU boards used with personal computers and for which separate authorizations are required to...

  18. 47 CFR 15.32 - Test procedures for CPU boards and computer power supplies.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... 47 Telecommunication 1 2012-10-01 2012-10-01 false Test procedures for CPU boards and computer... FREQUENCY DEVICES General § 15.32 Test procedures for CPU boards and computer power supplies. Power supplies and CPU boards used with personal computers and for which separate authorizations are required to...

  19. Systems analysis of the space shuttle. [communication systems, computer systems, and power distribution

    NASA Technical Reports Server (NTRS)

    Schilling, D. L.; Oh, S. J.; Thau, F.

    1975-01-01

    Developments in communications systems, computer systems, and power distribution systems for the space shuttle are described. The use of high speed delta modulation for bit rate compression in the transmission of television signals is discussed. Simultaneous Multiprocessor Organization, an approach to computer organization, is presented. Methods of computer simulation and automatic malfunction detection for the shuttle power distribution system are also described.

  20. Reducing power consumption while synchronizing a plurality of compute nodes during execution of a parallel application

    DOEpatents

    Archer, Charles J [Rochester, MN; Blocksome, Michael A [Rochester, MN; Peters, Amanda E [Cambridge, MA; Ratterman, Joseph D [Rochester, MN; Smith, Brian E [Rochester, MN

    2012-04-17

    Methods, apparatus, and products are disclosed for reducing power consumption while synchronizing a plurality of compute nodes during execution of a parallel application that include: beginning, by each compute node, performance of a blocking operation specified by the parallel application, each compute node beginning the blocking operation asynchronously with respect to the other compute nodes; reducing, for each compute node, power to one or more hardware components of that compute node in response to that compute node beginning the performance of the blocking operation; and restoring, for each compute node, the power to the hardware components having power reduced in response to all of the compute nodes beginning the performance of the blocking operation.

  1. Reducing power consumption while synchronizing a plurality of compute nodes during execution of a parallel application

    DOEpatents

    Archer, Charles J [Rochester, MN; Blocksome, Michael A [Rochester, MN; Peters, Amanda A [Rochester, MN; Ratterman, Joseph D [Rochester, MN; Smith, Brian E [Rochester, MN

    2012-01-10

    Methods, apparatus, and products are disclosed for reducing power consumption while synchronizing a plurality of compute nodes during execution of a parallel application that include: beginning, by each compute node, performance of a blocking operation specified by the parallel application, each compute node beginning the blocking operation asynchronously with respect to the other compute nodes; reducing, for each compute node, power to one or more hardware components of that compute node in response to that compute node beginning the performance of the blocking operation; and restoring, for each compute node, the power to the hardware components having power reduced in response to all of the compute nodes beginning the performance of the blocking operation.

  2. A COMPUTATIONAL WORKBENCH ENVIRONMENT FOR VIRTUAL POWER PLANT SIMULATION

    SciTech Connect

    Mike Bockelie; Dave Swensen; Martin Denison; Adel Sarofim; Connie Senior

    2004-12-22

    , immersive environment. The Virtual Engineering Framework (VEF), in effect a prototype framework, was developed through close collaboration with NETL supported research teams from Iowa State University Virtual Reality Applications Center (ISU-VRAC) and Carnegie Mellon University (CMU). The VEF is open source, compatible across systems ranging from inexpensive desktop PCs to large-scale, immersive facilities and provides support for heterogeneous distributed computing of plant simulations. The ability to compute plant economics through an interface that coupled the CMU IECM tool to the VEF was demonstrated, and the ability to couple the VEF to Aspen Plus, a commercial flowsheet modeling tool, was demonstrated. Models were interfaced to the framework using VES-Open. Tests were performed for interfacing CAPE-Open-compliant models to the framework. Where available, the developed models and plant simulations have been benchmarked against data from the open literature. The VEF has been installed at NETL. The VEF provides simulation capabilities not available in commercial simulation tools. It provides DOE engineers, scientists, and decision makers with a flexible and extensible simulation system that can be used to reduce the time, technical risk, and cost to develop the next generation of advanced, coal-fired power systems that will have low emissions and high efficiency. Furthermore, the VEF provides a common simulation system that NETL can use to help manage Advanced Power Systems Research projects, including both combustion- and gasification-based technologies.

  3. Computational design of an experimental laser-powered thruster

    NASA Technical Reports Server (NTRS)

    Jeng, San-Mou; Litchford, Ronald; Keefer, Dennis

    1988-01-01

    An extensive numerical experiment, using the developed computer code, was conducted to design an optimized laser-sustained hydrogen plasma thruster. The plasma was sustained using a 30 kW CO2 laser beam operated at 10.6 micrometers focused inside the thruster. The adopted physical model considers two-dimensional compressible Navier-Stokes equations coupled with the laser power absorption process, geometric ray tracing for the laser beam, and the local thermodynamic equilibrium (LTE) assumption for the plasma thermophysical and optical properties. A pressure-based Navier-Stokes solver using body-fitted coordinates was used to calculate the laser-supported rocket flow, which consists of both recirculating and transonic flow regions. The computer code was used to study the behavior of laser-sustained plasmas within a pipe over a wide range of forced convection and optical arrangements before it was applied to the thruster design, and these theoretical calculations agree well with existing experimental results. Several thrusters with different throat sizes operated at 150 and 300 kPa chamber pressure were evaluated in the numerical experiment. It is found that the thruster performance (vacuum specific impulse) is highly dependent on the operating conditions, and that an adequately designed laser-supported thruster can have a specific impulse of around 1500 sec. The heat loading on the walls of the calculated thrusters was also estimated; it is comparable to the heat loading on a conventional chemical rocket. It was also found that the specific impulse of the calculated thrusters can be reduced by 200 sec due to the finite chemical reaction rate.

  4. Computational study of the rate constants and free energies of intramolecular radical addition to substituted anilines

    PubMed Central

    Seddiqzai, Meriam; Dahmen, Tobias; Sure, Rebecca

    2013-01-01

    The intramolecular radical addition to aniline derivatives was investigated by DFT calculations. The computational methods were benchmarked by comparing the calculated values of the rate constant for the 5-exo cyclization of the hexenyl radical with the experimental values. The dispersion-corrected PW6B95-D3 functional provided very good results, with deviations in the free activation barrier from the experimental values of only about 0.5 kcal mol−1, and was therefore employed in further calculations. Corrections for intramolecular London dispersion and solvation effects in the quantum chemical treatment are essential to obtain consistent and accurate theoretical data. For the investigated radical addition reaction it turned out that the polarity of the molecules is important and that a combination of electrophilic radicals with preferably nucleophilic arenes results in the highest rate constants. This is opposite to the Minisci reaction, where the radical acts as nucleophile and the arene as electrophile. The substitution at the N-atom of the aniline is crucial. Methyl substitution leads to slower addition than phenyl substitution. Carbamates as substituents are suitable only when the radical center is not too electrophilic. No correlations between free reaction barriers and energies (ΔG‡ and ΔGR) are found. Addition reactions leading to indanes or dihydrobenzofurans are too slow to be useful synthetically. PMID:24062821
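The benchmarking claim above (a ~0.5 kcal mol−1 barrier error) can be put in perspective with the Eyring equation of transition state theory, which converts a free activation barrier into a rate constant. A sketch with standard constants; the 10 kcal mol−1 barrier in the test is illustrative, not a value from the study:

```python
import math

KB = 1.380649e-23      # Boltzmann constant, J/K
H = 6.62607015e-34     # Planck constant, J*s
R = 1.987204e-3        # gas constant, kcal/(mol*K)

def eyring_rate(dg_kcal_mol: float, T: float = 298.15) -> float:
    """Unimolecular TST rate constant k = (kB*T/h) * exp(-dG‡ / RT), in s^-1."""
    return (KB * T / H) * math.exp(-dg_kcal_mol / (R * T))
```

A 0.5 kcal mol−1 shift in the barrier changes k by a factor of exp(0.5/RT) ≈ 2.3 at room temperature, which is why sub-kcal accuracy matters for rate predictions.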

  5. Bragg's rule of stopping power additivity: a compilation and summary of results

    SciTech Connect

    Thwaites, D.I.

    1983-09-01

    Stopping power additivity, as expressed by Bragg's rule, is an important concept in many practical situations involving charged particles. Its validity has been investigated in a large number of studies, and the wide range of data is confusing and at times conflicting. No previous comprehensive survey of the data has been undertaken. Thus a compilation is attempted here of a hundred or so papers which have included tests of Bragg's rule. Their main results are indicated, and a summary is given of the effects of chemical binding and phase on the stopping power of heavy charged particles. Such effects are confirmed on the evidence available. Chemical binding effects become more significant for materials containing low-Z constituents and as energy falls into and through the transition region. Deviations of up to 50% have been observed in atomic stopping cross sections extracted from measurements on hydrocarbons. There is still some conflicting evidence appearing on phase effects. However, in general a broad consensus is emerging, indicating significant phase differences in H2O and in organic and similar materials. Stopping cross sections in the vapor phase are greater by up to approximately 5-10% at energies around those of the stopping power maximum for protons and He ions. The effects decrease as energy increases.
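Bragg's rule as tested in these papers is simple additivity of atomic stopping cross sections over a molecule's composition. A sketch; the atomic values in the test are placeholders, not measured data:

```python
def bragg_stopping_cross_section(composition, atomic_s):
    """Molecular stopping cross section from Bragg additivity:
    S_mol = sum_i n_i * S_i, where n_i counts atoms of element i
    and S_i is the atomic stopping cross section of that element."""
    return sum(n * atomic_s[element] for element, n in composition.items())
```

The deviations surveyed in the paper are precisely the departures of measured molecular values from this sum, driven by chemical binding and phase.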

  6. Thermoelectric Power Generation from Lanthanum Strontium Titanium Oxide at Room Temperature through the Addition of Graphene.

    PubMed

    Lin, Yue; Norman, Colin; Srivastava, Deepanshu; Azough, Feridoon; Wang, Li; Robbins, Mark; Simpson, Kevin; Freer, Robert; Kinloch, Ian A

    2015-07-29

    The applications of strontium titanium oxide based thermoelectric materials are currently limited by their high operating temperatures of >700 °C. Herein, we show that the thermal operating window of lanthanum strontium titanium oxide (LSTO) can be reduced to room temperature by the addition of a small amount of graphene. This increase in operating performance will enable future applications such as generators in vehicles and other sectors. The LSTO composites incorporated one percent or less of graphene and were sintered under an argon/hydrogen atmosphere. The resultant materials were reduced and possessed a multiphase structure with nanosized grains. The thermal conductivity of the nanocomposites decreased upon the addition of graphene, whereas the electrical conductivity and power factor both increased significantly. These factors, together with a moderate Seebeck coefficient, meant that a high power factor of ∼2500 μW m⁻¹ K⁻² was reached at room temperature at a loading of 0.6 wt % graphene. The highest thermoelectric figure of merit (ZT) was achieved when 0.6 wt % graphene was added (ZT = 0.42 at room temperature and 0.36 at 750 °C), with >280% enhancement compared to that of pure LSTO. A preliminary 7-couple device was produced using bismuth strontium cobalt oxide/graphene-LSTO pucks. This device had a Seebeck coefficient of ∼1500 μV/K and an open-circuit voltage of 600 mV at a mean temperature of 219 °C.
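The power factor and ZT reported above follow from the standard thermoelectric definitions. A sketch; the property values in the test are illustrative (same order of magnitude as the quoted figures), not the paper's data:

```python
def power_factor(seebeck_V_per_K: float, sigma_S_per_m: float) -> float:
    """Thermoelectric power factor PF = S^2 * sigma, in W m^-1 K^-2."""
    return seebeck_V_per_K ** 2 * sigma_S_per_m

def figure_of_merit(seebeck_V_per_K, sigma_S_per_m, kappa_W_per_mK, T_kelvin):
    """Dimensionless figure of merit ZT = S^2 * sigma * T / kappa."""
    return power_factor(seebeck_V_per_K, sigma_S_per_m) * T_kelvin / kappa_W_per_mK
```

This makes the trade-off in the abstract explicit: graphene raises sigma and the power factor while lowering kappa, and both changes push ZT up.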

  7. Computing an operating parameter of a unified power flow controller

    DOEpatents

    Wilson, David G; Robinett, III, Rush D

    2015-01-06

    A Unified Power Flow Controller described herein comprises a sensor that outputs at least one sensed condition, a processor that receives the at least one sensed condition, a memory that comprises control logic that is executable by the processor; and power electronics that comprise power storage, wherein the processor causes the power electronics to selectively cause the power storage to act as one of a power generator or a load based at least in part upon the at least one sensed condition output by the sensor and the control logic, and wherein at least one operating parameter of the power electronics is designed to facilitate maximal transmittal of electrical power generated at a variable power generation system to a grid system while meeting power constraints set forth by the electrical power grid.

  8. A COMPUTATIONAL WORKBENCH ENVIRONMENT FOR VIRTUAL POWER PLANT SIMULATION

    SciTech Connect

    Mike Bockelie; Dave Swensen; Martin Denison

    2002-04-30

    This is the sixth Quarterly Technical Report for DOE Cooperative Agreement No: DE-FC26-00NT41047. The goal of the project is to develop and demonstrate a computational workbench for simulating the performance of Vision 21 Power Plant Systems. Within the last quarter, good progress has been made on the development of our IGCC workbench. Preliminary CFD simulations for single stage and two stage "generic" gasifiers using firing conditions based on the Vision 21 reference configuration have been performed. Work is continuing on implementing an advanced slagging model into the CFD based gasifier model. An investigation into published gasification kinetics has highlighted a wide variance in predicted performance due to the choice of kinetic parameters. A plan has been outlined for developing the reactor models required to simulate the heat transfer and gas clean up equipment downstream of the gasifier. Three models that utilize the CCA software protocol have been integrated into a version of the IGCC workbench. Tests of a CCA implementation of our CFD code into the workbench demonstrated that the CCA CFD module can execute on a geographically remote PC (linked via the Internet) in a manner that is transparent to the user. Software tools to create "walk-through" visualizations of the flow field within a gasifier have been demonstrated.

  9. A COMPUTATIONAL WORKBENCH ENVIRONMENT FOR VIRTUAL POWER PLANT SIMULATION

    SciTech Connect

    Mike Bockelie; Dave Swensen; Martin Denison

    2002-01-31

    This is the fifth Quarterly Technical Report for DOE Cooperative Agreement No: DE-FC26-00NT41047. The goal of the project is to develop and demonstrate a computational workbench for simulating the performance of Vision 21 Power Plant Systems. Within the last quarter, our efforts have become focused on developing an improved workbench for simulating a gasifier based Vision 21 energyplex. To provide for interoperability of models developed under Vision 21 and other DOE programs, discussions have been held with DOE and other organizations developing plant simulator tools to review the possibility of establishing a common software interface or protocol to use when developing component models. A component model that employs the CCA protocol has successfully been interfaced to our CCA enabled workbench. To investigate the software protocol issue, DOE has selected a gasifier based Vision 21 energyplex configuration for use in testing and evaluating the impacts of different software interface methods. A Memo of Understanding with the Cooperative Research Centre for Coal in Sustainable Development (CCSD) in Australia has been completed that will enable collaborative research efforts on gasification issues. Preliminary results have been obtained for a CFD model of a pilot scale, entrained flow gasifier. A paper was presented at the Vision 21 Program Review Meeting at NETL (Morgantown) that summarized our accomplishments for Year One and plans for Year Two and Year Three.

  10. PAH growth initiated by propargyl addition: mechanism development and computational kinetics.

    PubMed

    Raj, Abhijeet; Al Rashidi, Mariam J; Chung, Suk Ho; Sarathy, S Mani

    2014-04-24

    Polycyclic aromatic hydrocarbon (PAH) growth is known to be the principal pathway to soot formation during fuel combustion; as such, a physical understanding of the PAH growth mechanism is needed to effectively assess, predict, and control soot formation in flames. Although the hydrogen abstraction C2H2 addition (HACA) mechanism is believed to be the main contributor to PAH growth, it has been shown to under-predict some of the experimental data on PAHs and soot concentrations in flames. This article presents a submechanism of PAH growth that is initiated by propargyl (C3H3) addition onto naphthalene (A2) and the naphthyl radical. C3H3 has been chosen since it is known to be a precursor of benzene in combustion and has appreciable concentrations in flames. This mechanism has been developed up to the formation of pyrene (A4), and the temperature-dependent kinetics of each elementary reaction has been determined using density functional theory (DFT) computations at the B3LYP/6-311++G(d,p) level of theory and transition state theory (TST). H-abstraction, H-addition, H-migration, β-scission, and intramolecular addition reactions have been taken into account. The energy barriers of the two main pathways (H-abstraction and H-addition) were found to be relatively small if not negative, whereas the energy barriers of the other pathways were in the range of 6-89 kcal·mol⁻¹. The rates reported in this study may be extrapolated to larger PAH molecules that have a zigzag site similar to that in naphthalene, and the mechanism presented herein may be used as a complement to the HACA mechanism to improve prediction of PAH and soot formation.
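Temperature-dependent rate constants of the kind computed above are conventionally reported as modified-Arrhenius fits, k(T) = A·T^n·exp(−Ea/RT). A sketch of evaluating such a fit; the parameters in the test are placeholders, not values from the study:

```python
import math

R_KCAL = 1.987204e-3  # gas constant, kcal/(mol*K)

def modified_arrhenius(A: float, n: float, Ea_kcal: float, T: float) -> float:
    """Modified Arrhenius form k(T) = A * T^n * exp(-Ea / RT), the usual
    parameterization for TST-derived temperature-dependent rate constants."""
    return A * T ** n * math.exp(-Ea_kcal / (R_KCAL * T))
```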

  11. Reducing power consumption while performing collective operations on a plurality of compute nodes

    DOEpatents

    Archer, Charles J.; Blocksome, Michael A.; Peters, Amanda E.; Ratterman, Joseph D.; Smith, Brian E.

    2011-10-18

    Methods, apparatus, and products are disclosed for reducing power consumption while performing collective operations on a plurality of compute nodes that include: receiving, by each compute node, instructions to perform a type of collective operation; selecting, by each compute node from a plurality of collective operations for the collective operation type, a particular collective operation in dependence upon power consumption characteristics for each of the plurality of collective operations; and executing, by each compute node, the selected collective operation.
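The selection step claimed above (each node picks, for the requested collective type, the candidate operation with the best power consumption characteristics) can be sketched as follows; the implementation names and power numbers are hypothetical, not from the patent:

```python
def select_collective(op_type, implementations, power_profile):
    """Select, for the requested collective operation type, the candidate
    implementation with the lowest modeled power consumption."""
    candidates = implementations[op_type]
    return min(candidates, key=lambda impl: power_profile[impl])

# Hypothetical catalog: two allreduce variants with modeled power draw (watts).
implementations = {"allreduce": ["ring", "tree"]}
power_profile = {"ring": 3.2, "tree": 2.1}
```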

  12. Complex additive systems for Mn-Zn ferrites with low power loss

    SciTech Connect

    Töpfer, J. Angermann, A.

    2015-05-07

    Mn-Zn ferrites were prepared via an oxalate-based wet-chemical synthesis process. Nanocrystalline ferrite powders with particle size of 50 nm were sintered at 1150 °C with 500 ppm CaO and 100 ppm SiO2 as standard additives. A fine-grained, dense microstructure with grain size of 4–5 μm was obtained. Simultaneous addition of Nb2O5, ZrO2, V2O5, and SnO2 results in low power losses, e.g., 65 mW/cm³ (500 kHz, 50 mT, 80 °C) and 55 mW/cm³ (1 MHz, 25 mT, 80 °C). Loss analysis shows that eddy current and residual losses were minimized through formation of insulating grain boundary phases, which is confirmed by transmission electron microscopy. Addition of SnO2 increases the ferrous ion concentration and affects anisotropy, as reflected in permeability measurements μ(T).
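Core losses like the quoted 65 mW/cm³ at 500 kHz / 50 mT are commonly compared through a Steinmetz-type fit. This is a generic sketch of that fit, not the eddy-current/residual loss separation performed in the paper, and the coefficients in the test are placeholders:

```python
def steinmetz_loss(k: float, alpha: float, beta: float,
                   f_hz: float, B_tesla: float) -> float:
    """Steinmetz core-loss fit P_v = k * f^alpha * B^beta (loss per unit
    volume), a standard way to compare losses across frequency/flux points."""
    return k * f_hz ** alpha * B_tesla ** beta
```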

  13. Laser powder bed fusion additive manufacturing of metals; physics, computational, and materials challenges

    NASA Astrophysics Data System (ADS)

    King, W. E.; Anderson, A. T.; Ferencz, R. M.; Hodge, N. E.; Kamath, C.; Khairallah, S. A.; Rubenchik, A. M.

    2015-12-01

    The production of metal parts via laser powder bed fusion additive manufacturing is growing exponentially. However, the transition of this technology from production of prototypes to production of critical parts is hindered by a lack of confidence in the quality of the part. Confidence can be established via a fundamental understanding of the physics of the process. It is generally accepted that this understanding will be increasingly achieved through modeling and simulation. However, there are significant physics, computational, and materials challenges stemming from the broad range of length and time scales and temperature ranges associated with the process. In this paper, we review the current state of the art and describe the challenges that need to be met to achieve the desired fundamental understanding of the physics of the process.

  14. Laser powder bed fusion additive manufacturing of metals; physics, computational, and materials challenges

    SciTech Connect

    King, W. E.; Anderson, A. T.; Ferencz, R. M.; Hodge, N. E.; Kamath, C.; Khairallah, S. A.; Rubenchik, A. M.

    2015-12-29

    The production of metal parts via laser powder bed fusion additive manufacturing is growing exponentially. However, the transition of this technology from production of prototypes to production of critical parts is hindered by a lack of confidence in the quality of the part. Confidence can be established via a fundamental understanding of the physics of the process. It is generally accepted that this understanding will be increasingly achieved through modeling and simulation. However, there are significant physics, computational, and materials challenges stemming from the broad range of length and time scales and temperature ranges associated with the process. In this study, we review the current state of the art and describe the challenges that need to be met to achieve the desired fundamental understanding of the physics of the process.

  15. Additive Manufacturing and High-Performance Computing: a Disruptive Latent Technology

    NASA Astrophysics Data System (ADS)

    Goodwin, Bruce

    2015-03-01

    This presentation will discuss the relationship between recent advances in Additive Manufacturing (AM) technology, High-Performance Computing (HPC) simulation and design capabilities, and related advances in Uncertainty Quantification (UQ), and then examines their impacts upon national and international security. The presentation surveys how AM accelerates the fabrication process, while HPC combined with UQ provides a fast track for the engineering design cycle. The combination of AM and HPC/UQ almost eliminates the iterative engineering design and prototype cycle, thereby dramatically reducing cost of production and time-to-market. These methods thereby present significant benefits for US national interests, both civilian and military, in an age of austerity. Finally, considering cyber security issues and the advent of the "cloud," these disruptive, currently latent technologies may well enable proliferation and so challenge both nuclear and non-nuclear aspects of international security.

  16. Laser powder bed fusion additive manufacturing of metals; physics, computational, and materials challenges

    SciTech Connect

    King, W. E.; Anderson, A. T.; Ferencz, R. M.; Hodge, N. E.; Khairallah, S. A.; Kamath, C.; Rubenchik, A. M.

    2015-12-15

    The production of metal parts via laser powder bed fusion additive manufacturing is growing exponentially. However, the transition of this technology from production of prototypes to production of critical parts is hindered by a lack of confidence in the quality of the part. Confidence can be established via a fundamental understanding of the physics of the process. It is generally accepted that this understanding will be increasingly achieved through modeling and simulation. However, there are significant physics, computational, and materials challenges stemming from the broad range of length and time scales and temperature ranges associated with the process. In this paper, we review the current state of the art and describe the challenges that need to be met to achieve the desired fundamental understanding of the physics of the process.

  17. Laser powder bed fusion additive manufacturing of metals; physics, computational, and materials challenges

    DOE PAGES

    King, W. E.; Anderson, A. T.; Ferencz, R. M.; ...

    2015-12-29

    The production of metal parts via laser powder bed fusion additive manufacturing is growing exponentially. However, the transition of this technology from production of prototypes to production of critical parts is hindered by a lack of confidence in the quality of the part. Confidence can be established via a fundamental understanding of the physics of the process. It is generally accepted that this understanding will be increasingly achieved through modeling and simulation. However, there are significant physics, computational, and materials challenges stemming from the broad range of length and time scales and temperature ranges associated with the process. In this study, we review the current state of the art and describe the challenges that need to be met to achieve the desired fundamental understanding of the physics of the process.

  18. Power conversion efficiency enhancement in OPV devices using spin 1/2 molecular additives

    NASA Astrophysics Data System (ADS)

    Basel, Tek; Vardeny, Valy; Yu, Luping

    2014-03-01

    We investigated the power conversion efficiency of bulk heterojunction OPV cells based on the low-bandgap polymer PTB7 blended with C61-PCBM. We also employed photo-induced absorption (PA), electrical, and magneto-PA (MPA) techniques to understand the details of the photocurrent generation process in this blend. We found that spin 1/2 molecular additives, such as Galvinoxyl (Gxl) radicals, dramatically enhance the cell efficiency; we obtained a 20% increase in photocurrent upon Gxl doping at 2% by weight. We explain our finding by the ability of the spin 1/2 radicals to interfere with the known major loss mechanism in the cell, namely recombination of charge transfer excitons at the D-A interface via triplet excitons in the polymer donors. Supported by National Science Foundation-Material Science & Engineering Center (NSF-MRSEC), University of Utah.

  19. Computation of octanol-water partition coefficients by guiding an additive model with knowledge.

    PubMed

    Cheng, Tiejun; Zhao, Yuan; Li, Xun; Lin, Fu; Xu, Yong; Zhang, Xinglong; Li, Yan; Wang, Renxiao; Lai, Luhua

    2007-01-01

    We have developed a new method, i.e., XLOGP3, for logP computation. XLOGP3 predicts the logP value of a query compound by using the known logP value of a reference compound as a starting point. The difference in the logP values of the query compound and the reference compound is then estimated by an additive model. The additive model implemented in XLOGP3 uses a total of 87 atom/group types and two correction factors as descriptors. It is calibrated on a training set of 8199 organic compounds with reliable logP data through a multivariate linear regression analysis. For a given query compound, the compound showing the highest structural similarity in the training set will be selected as the reference compound. Structural similarity is quantified based on topological torsion descriptors. XLOGP3 has been tested along with its predecessor, i.e., XLOGP2, as well as several popular logP methods on two independent test sets: one contains 406 small-molecule drugs approved by the FDA and the other contains 219 oligopeptides. On both test sets, XLOGP3 produces more accurate predictions than most of the other methods with average unsigned errors of 0.24-0.51 units. Compared to conventional additive methods, XLOGP3 does not rely on an extensive classification of fragments and correction factors in order to improve accuracy. It is also able to utilize the ever-increasing experimentally measured logP data more effectively.
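The core idea of XLOGP3 as described above (start from a structurally similar reference compound's known logP and correct only for the atom-type differences via an additive model) can be sketched as follows; the atom types and contribution values are hypothetical stand-ins, not the published 87-type parameterization:

```python
def additive_logp_estimate(query_counts, ref_counts, ref_logp, contributions):
    """Estimate logP of a query compound from a reference compound's measured
    logP plus additive contributions for the atom-type count differences."""
    delta = 0.0
    for atom_type in set(query_counts) | set(ref_counts):
        diff = query_counts.get(atom_type, 0) - ref_counts.get(atom_type, 0)
        delta += diff * contributions[atom_type]
    return ref_logp + delta
```

Because the reference is chosen for high structural similarity, most terms cancel, which is why the knowledge-guided scheme is less sensitive to fragment-classification errors than a purely additive model.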

  20. 47 CFR 15.102 - CPU boards and power supplies used in personal computers.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... 47 Telecommunication 1 2014-10-01 2014-10-01 false CPU boards and power supplies used in personal computers. 15.102 Section 15.102 Telecommunication FEDERAL COMMUNICATIONS COMMISSION GENERAL RADIO FREQUENCY DEVICES Unintentional Radiators § 15.102 CPU boards and power supplies used in personal computers....

  1. 47 CFR 15.102 - CPU boards and power supplies used in personal computers.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... 47 Telecommunication 1 2012-10-01 2012-10-01 false CPU boards and power supplies used in personal computers. 15.102 Section 15.102 Telecommunication FEDERAL COMMUNICATIONS COMMISSION GENERAL RADIO FREQUENCY DEVICES Unintentional Radiators § 15.102 CPU boards and power supplies used in personal computers....

  2. 47 CFR 15.102 - CPU boards and power supplies used in personal computers.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... 47 Telecommunication 1 2013-10-01 2013-10-01 false CPU boards and power supplies used in personal computers. 15.102 Section 15.102 Telecommunication FEDERAL COMMUNICATIONS COMMISSION GENERAL RADIO FREQUENCY DEVICES Unintentional Radiators § 15.102 CPU boards and power supplies used in personal computers....

  3. Accuracy Maximization Analysis for Sensory-Perceptual Tasks: Computational Improvements, Filter Robustness, and Coding Advantages for Scaled Additive Noise

    PubMed Central

    Burge, Johannes

    2017-01-01

    Accuracy Maximization Analysis (AMA) is a recently developed Bayesian ideal observer method for task-specific dimensionality reduction. Given a training set of proximal stimuli (e.g. retinal images), a response noise model, and a cost function, AMA returns the filters (i.e. receptive fields) that extract the most useful stimulus features for estimating a user-specified latent variable from those stimuli. Here, we first contribute two technical advances that significantly reduce AMA’s compute time: we derive gradients of cost functions for which two popular estimators are appropriate, and we implement a stochastic gradient descent (AMA-SGD) routine for filter learning. Next, we show how the method can be used to simultaneously probe the impact on neural encoding of natural stimulus variability, the prior over the latent variable, noise power, and the choice of cost function. Then, we examine the geometry of AMA’s unique combination of properties that distinguish it from better-known statistical methods. Using binocular disparity estimation as a concrete test case, we develop insights that have general implications for understanding neural encoding and decoding in a broad class of fundamental sensory-perceptual tasks connected to the energy model. Specifically, we find that non-orthogonal (partially redundant) filters with scaled additive noise tend to outperform orthogonal filters with constant additive noise; non-orthogonal filters and scaled additive noise can interact to sculpt noise-induced stimulus encoding uncertainty to match task-irrelevant stimulus variability. Thus, we show that some properties of neural response thought to be biophysical nuisances can confer coding advantages to neural systems. Finally, we speculate that, if repurposed for the problem of neural systems identification, AMA may be able to overcome a fundamental limitation of standard subunit model estimation. As natural stimuli become more widely used in the study of psychophysical and

  4. Development and Evaluation of the Diagnostic Power for a Computer-Based Two-Tier Assessment

    ERIC Educational Resources Information Center

    Lin, Jing-Wen

    2016-01-01

    This study adopted a quasi-experimental design with follow-up interview to develop a computer-based two-tier assessment (CBA) regarding the science topic of electric circuits and to evaluate the diagnostic power of the assessment. Three assessment formats (i.e., paper-and-pencil, static computer-based, and dynamic computer-based tests) using…

  5. Using additive manufacturing in accuracy evaluation of reconstructions from computed tomography.

    PubMed

    Smith, Erin J; Anstey, Joseph A; Venne, Gabriel; Ellis, Randy E

    2013-05-01

    Bone models derived from patient imaging and fabricated using additive manufacturing technology have many potential uses including surgical planning, training, and research. This study evaluated the accuracy of bone surface reconstruction of two diarthrodial joints, the hip and shoulder, from computed tomography. Image segmentation of the tomographic series was used to develop a three-dimensional virtual model, which was fabricated using fused deposition modelling. Laser scanning was used to compare cadaver bones, printed models, and intermediate segmentations. The overall bone reconstruction process had a reproducibility of 0.3 ± 0.4 mm. Production of the model had an accuracy of 0.1 ± 0.1 mm, while the segmentation had an accuracy of 0.3 ± 0.4 mm, indicating that segmentation accuracy was the key factor in reconstruction. Generally, the shape of the articular surfaces was reproduced accurately, with poorer accuracy near the periphery of the articular surfaces, particularly in regions with periosteum covering and where osteophytes were apparent.

  6. Task scheduling for high performance low power embedded computing

    NASA Astrophysics Data System (ADS)

    Deniziak, Stanislaw; Dzitkowski, Albert

    2016-12-01

    In this paper we present a method of task scheduling for low-power real-time embedded systems. We assume that the system is specified as a task graph and then implemented using a multi-core embedded processor with low-power processing capabilities. We propose a new scheduling method that creates the optimal schedule. The goal of optimization is to minimize power consumption while satisfying all time constraints. We present experimental results, obtained for some standard benchmarks, showing the advantages of our method.
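A greedy frequency-selection heuristic in the spirit described above (minimize energy, with dynamic power modeled as proportional to f³, while meeting a deadline) can be sketched as follows; this is an illustrative stand-in, not the authors' algorithm:

```python
def schedule_low_power(tasks, freqs, deadline):
    """Greedy sketch: run tasks in sequence, picking for each the lowest
    frequency whose finish time (with the rest run at full speed) still
    meets the deadline.  tasks: cycle counts; freqs: ascending frequencies."""
    t, energy, plan = 0.0, 0.0, []
    remaining = sum(tasks)
    for cycles in tasks:
        rest = (remaining - cycles) / freqs[-1]  # remaining work at max speed
        for f in freqs:                          # try slowest (cheapest) first
            if t + cycles / f + rest <= deadline:
                break
        else:
            f = freqs[-1]                        # deadline unreachable: go flat out
        dt = cycles / f
        t += dt
        energy += (f ** 3) * dt                  # dynamic power model P ∝ f^3
        plan.append(f)
        remaining -= cycles
    return plan, t, energy
```

With a loose deadline every task runs slow; tightening the deadline forces later tasks to higher frequencies, raising energy, which is the trade-off such schedulers optimize.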

  7. Solid-state Isotopic Power Source for Computer Memory Chips

    NASA Technical Reports Server (NTRS)

    Brown, Paul M.

    1993-01-01

    Recent developments in materials technology now make it possible to fabricate nonthermal thin-film radioisotopic energy converters (REC) with a specific power of 24 W/kg and a 10 year working life at 5 to 10 watts. This creates applications never before possible, such as placing the power supply directly on integrated circuit chips. The efficiency of the REC is about 25 percent, which is two to three times greater than the 6 to 8 percent capabilities of current thermoelectric systems. Radioisotopic energy converters have the potential to meet many future space power requirements for a wide variety of applications with less mass, better efficiency, and less total area than other power conversion options. These benefits result in significant dollar savings over the projected mission lifetime.

  8. The computational power of time dilation in special relativity

    NASA Astrophysics Data System (ADS)

    Biamonte, Jacob

    2014-03-01

    The Lorentzian length of a timelike curve connecting both endpoints of a classical computation is a function of the path taken through Minkowski spacetime. The associated runtime difference is due to time-dilation: the phenomenon whereby an observer finds that another's physically identical ideal clock has ticked at a different rate than their own clock. Using ideas appearing in the framework of computational complexity theory, time-dilation is quantified as an algorithmic resource by relating relativistic energy to an nth order polynomial time reduction at the completion of an observer's journey. These results enable a comparison between the optimal quadratic Grover speedup from quantum computing and an n=2 speedup using classical computers and relativistic effects. The goal is not to propose a practical model of computation, but to probe the ultimate limits physics places on computation. Parts of this talk are based on [J.Phys.Conf.Ser. 229:012020 (2010), arXiv:0907.1579]. Support is acknowledged from the Foundational Questions Institute (FQXi) and the Compagnia di San Paolo Foundation.
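The time-dilation resource discussed above is governed by the Lorentz factor; a minimal sketch of the kinematics:

```python
import math

C = 299_792_458.0  # speed of light, m/s

def lorentz_gamma(v: float) -> float:
    """Time-dilation factor gamma = 1 / sqrt(1 - v^2 / c^2)."""
    return 1.0 / math.sqrt(1.0 - (v / C) ** 2)

def proper_time(coordinate_time: float, v: float) -> float:
    """Proper time elapsed on an ideal clock moving at constant speed v,
    given the coordinate time measured by a stationary observer."""
    return coordinate_time / lorentz_gamma(v)
```

A computer left to run for coordinate time t completes more work than a traveling observer experiences in proper time t/γ, which is the gap the talk quantifies as an algorithmic resource.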

  9. Computation of electric power production cost with transmission constraints

    NASA Astrophysics Data System (ADS)

    Earle, Robert Leonard

    The production cost in operating an electric power system is the cost of generation to meet the customer load or demand. Production costing models are used in analysis of electric power systems to estimate this cost for various purposes such as evaluating long term investments in generating capacity, contracts for sales, purchases, or trades of power. A multi-area production costing model includes the effects of transmission constraints in calculating costs. Including transmission constraints in production costing models is important because the electric power industry is interconnected and trades or sales of power amongst systems can lower costs. This thesis develops an analytical model for multi-area production costing. The advantage of this approach is that it explicitly examines the underlying structure of the problem. The major contributions of our research are as follows. First, we develop the multivariate model not just for transportation type models of electric power network flows, but also for the direct current power flow model. Second, this thesis derives the multi-area production cost curve in the general case. This new result gives a simple formula for determination of system cost and the gradient of cost with respect to transmission capacities. Third, we give an algorithm for generating the non-redundant constraints from a Gale-Hoffman type region. The Gale-Hoffman conditions characterize feasibility of flow in a network. We also gather together some existing and new results on Gale-Hoffman regions and put them in a unified framework. Fourth, in order to derive the multi-area production cost curves and also to perform the integration of the multivariate Edgeworth series, we need wedge shaped regions (a wedge is the affine image of an orthant). We give an algorithm for decomposing any polyhedral set into wedges. Fifth, this thesis gives a new method for one dimensional numerical integration of the trivariate normal. The best methods previously known
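The effect of a transmission limit on production cost can be illustrated with a toy two-area dispatch; this is a sketch, not the thesis's multivariate model, and all numbers in the test are hypothetical:

```python
def two_area_cost(load_a, load_b, cap_a, cost_a, cap_b, cost_b, tie_limit):
    """Least-cost dispatch for two areas when area A's generation is cheaper:
    A serves its own load, then exports to B up to the tie-line limit; B's
    own generation covers the residual.  Costs are per unit of energy."""
    assert cost_a <= cost_b
    gen_a_local = min(load_a, cap_a)
    export = min(tie_limit, cap_a - gen_a_local, load_b)
    gen_b = load_b - export
    assert gen_b <= cap_b, "infeasible: area B cannot cover residual load"
    total_cost = cost_a * (gen_a_local + export) + cost_b * gen_b
    return total_cost, export
```

Raising tie_limit can only lower (never raise) the total cost here, the same qualitative behavior captured by the thesis's gradient of cost with respect to transmission capacities.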

  10. Computational tool for simulation of power and refrigeration cycles

    NASA Astrophysics Data System (ADS)

    Córdoba Tuta, E.; Reyes Orozco, M.

    2016-07-01

    Small improvements in the thermal efficiency of power cycles bring huge cost savings in the production of electricity; for that reason, a tool for the simulation of power cycles makes it possible to model the optimal changes for best performance. There is also growing research interest in the Organic Rankine Cycle (ORC), which aims to produce electricity at low power through cogeneration and in which the working fluid is usually a refrigerant. A tool for designing the elements of an ORC cycle and selecting the working fluid would be helpful, because heat sources from cogeneration differ widely and each case requires a custom design. This work presents the development of multiplatform software for the simulation of power and refrigeration cycles, implemented in C++ with a graphical interface built on the multiplatform Qt environment; it runs on the Windows and Linux operating systems. The tool allows the design of custom power cycles and the selection of the working fluid (thermodynamic properties are calculated through the CoolProp library); it calculates the plant efficiency, identifies the flow fractions in each branch, and finally generates an instructive report in PDF format via LaTeX.
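For a simple Rankine cycle, the efficiency calculation such a tool performs reduces to a first-law balance over the four state points. A sketch with illustrative steam-like enthalpies in the test, not CoolProp values:

```python
def rankine_efficiency(h_turb_in, h_turb_out, h_pump_in, h_pump_out):
    """First-law thermal efficiency of a simple Rankine cycle from the four
    state-point specific enthalpies (kJ/kg):
    eta = (w_turbine - w_pump) / q_in."""
    w_turbine = h_turb_in - h_turb_out    # specific turbine work
    w_pump = h_pump_out - h_pump_in       # specific pump work
    q_in = h_turb_in - h_pump_out         # boiler heat input
    return (w_turbine - w_pump) / q_in
```

In the full tool the four enthalpies would come from a property library such as CoolProp for the selected working fluid; the balance itself is fluid-independent.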

  11. Computer Security for Commercial Nuclear Power Plants - Literature Review for Korea Hydro Nuclear Power Central Research Institute

    SciTech Connect

    Duran, Felicia Angelica; Waymire, Russell L.

    2013-10-01

    Sandia National Laboratories (SNL) is providing training and consultation activities on security planning and design for the Korea Hydro and Nuclear Power Central Research Institute (KHNPCRI). As part of this effort, SNL performed a literature review on computer security requirements, guidance and best practices that are applicable to an advanced nuclear power plant. This report documents the review of reports generated by SNL and other organizations [U.S. Nuclear Regulatory Commission, Nuclear Energy Institute, and International Atomic Energy Agency] related to protection of information technology resources, primarily digital controls and computer resources and their data networks. Copies of the key documents have also been provided to KHNP-CRI.

  12. Powering Down from the Bottom up: Greener Client Computing

    ERIC Educational Resources Information Center

    O'Donnell, Tom

    2009-01-01

    A decade ago, people wanting to practice "green computing" recycled their printer paper, turned their personal desktop systems off from time to time, and tried their best to donate old equipment to a nonprofit instead of throwing it away. A campus IT department can shave a few watts off just about any IT process--the real trick is planning and…

  13. Computer simulation of the scaled power bipolar SHF transistor structures

    NASA Astrophysics Data System (ADS)

    Nelayev, V. V.; Efremov, V. A.; Snitovsky, Yu. P.

    2007-04-01

    New advanced technology for creation of the npn power silicon bipolar SHF transistor structure is proposed. Preferences of the advanced technology in comparison with standard technology are demonstrated. Simulation of both technology flows was performed with emphasis on scaling of the discussed device structure.

  14. Addition of flexible body option to the TOLA computer program. Part 2: User and programmer documentation

    NASA Technical Reports Server (NTRS)

    Dick, J. W.; Benda, B. J.

    1975-01-01

    User and programmer oriented documentation for the flexible body option of the Takeoff and Landing Analysis (TOLA) computer program are provided. The user information provides sufficient knowledge of the development and use of the option to enable the engineering user to successfully operate the modified program and understand the results. The programmer's information describes the option structure and logic enabling a programmer to make major revisions to this part of the TOLA computer program.

  15. Computer program for afterheat temperature distribution for mobile nuclear power plant

    NASA Technical Reports Server (NTRS)

    Parker, W. G.; Vanbibber, L. E.

    1972-01-01

    ESATA computer program was developed to analyze thermal safety aspects of post-impacted mobile nuclear power plants. Program is written in FORTRAN 4 and designed for IBM 7094/7044 direct coupled system.

  16. Harmonic Resonance in Power Transmission Systems due to the Addition of Shunt Capacitors

    NASA Astrophysics Data System (ADS)

    Patil, Hardik U.

    Shunt capacitors are often added in transmission networks at suitable locations to improve the voltage profile. In this thesis, the transmission system in Arizona is considered as a test bed. Many shunt capacitors already exist in the Arizona transmission system and more are planned to be added. Addition of these shunt capacitors may create resonance conditions in response to harmonic voltages and currents. Such resonance, if it occurs, may create problematic issues in the system. The main objective of this thesis is to identify potential problematic effects that could occur after placing new shunt capacitors at selected buses in the Arizona network. Part of the objective is to create a systematic plan for avoidance of resonance issues. For this study, a method of capacitance scan is proposed. The bus admittance matrix is used as a model of the networked transmission system. The calculations on the admittance matrix were done using Matlab. The test bed is the actual transmission system in Arizona; however, for proprietary reasons, bus names are masked in the thesis copy intended for the public domain. The admittance matrix was obtained from data using the PowerWorld Simulator after equivalencing the 2016 summer peak load (planning case). The full Western Electricity Coordinating Council (WECC) system data were used. The equivalencing procedure retains only the Arizona portion of the WECC. The capacitor scan results for single capacitor placement and multiple capacitor placement cases are presented. Problematic cases are identified in the form of 'forbidden responses'. The harmonic voltage impact of known sources of harmonics, mainly large scale HVDC sources, is also presented. Specific key results for the study include: (1) The forbidden zones obtained as per the IEEE 519 standard indicate bus 10 to be the most problematic bus. (2) The forbidden zones also indicate that switching values for the switched shunt capacitor (if used) at bus 3 should be
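The scan idea can be shown at its simplest on a single bus (a toy reduction, not the thesis's Matlab admittance-matrix code): the system is collapsed to its Thevenin inductance, a shunt capacitor is added, and the driving-point impedance is scanned over harmonic order h. A peak in |Z(h)| marks a parallel resonance near h_r = sqrt(MVA_sc / Mvar_cap).

```python
# Single-bus harmonic impedance scan (illustrative simplification).
import math

def impedance_scan(mva_sc, mvar_cap, kv=1.0, h_max=25):
    """Return {h: |Z(h)|} for integer harmonic orders 1..h_max."""
    w = 2 * math.pi * 60.0                      # fundamental (rad/s)
    L = kv**2 / mva_sc / w                      # Thevenin inductance
    C = mvar_cap / kv**2 / w                    # shunt capacitor
    out = {}
    for h in range(1, h_max + 1):
        zl = 1j * h * w * L                     # system branch at harmonic h
        zc = 1 / (1j * h * w * C)               # capacitor branch
        out[h] = abs(zl * zc / (zl + zc))       # parallel combination
    return out

scan = impedance_scan(mva_sc=625.0, mvar_cap=26.0)
worst = max(scan, key=scan.get)   # resonance near h = sqrt(625/26) ~ 4.9
```

With a 625 MVA short-circuit level and a 26 Mvar capacitor, the impedance peak lands at the 5th harmonic, exactly the kind of "problematic bus" condition the scan is meant to flag.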

  17. Banquet Talk: Area-Time-Power Tradeoffs in Computer Design: The Road Ahead

    DTIC Science & Technology

    2007-11-02

    Banquet talk by Michael J. Flynn at HPEC '04: "Area-Time-Power tradeoffs in computer design: the road ahead." Only the standard DTIC report documentation page survives in this record; no abstract is available.

  18. Integrated Management of Power Aware Computation and Communication (IMPACCT)

    DTIC Science & Technology

    2003-05-01

    Only fragments survive in this record: contents entries for Chapter 6 ("Power modes of the Ethernet interface"; "Analytical results with a multi-speed Ethernet") and the Chapter 6 introduction, "Towards High-Speed Serial Busses on SoC": a key trend in systems-on-chip is toward the use of high-speed serial busses in loosely coupled systems. High-speed serial controllers such as Ethernet are now an integral part of many embedded processors. Serial busses also have

  19. Dynamic Computer Model of a Stirling Space Nuclear Power System

    DTIC Science & Technology

    2006-05-04

    Only fragments survive in this record: acknowledgments to the Naval Academy Trident Scholar program and its committee readers (Professors Cerza, Nakos, and others); obstacles in structural integrity, stowing for launch, deployment in orbit, and sun pointing that are far from being solved with current technology; and the Systems for Nuclear Auxiliary Power (SNAP) program, which resulted in the only reactor flown in space by the United States. Also, Russia

  20. Energy Use and Power Levels in New Monitors and Personal Computers

    SciTech Connect

    Roberson, Judy A.; Homan, Gregory K.; Mahajan, Akshay; Nordman, Bruce; Webber, Carrie A.; Brown, Richard E.; McWhinney, Marla; Koomey, Jonathan G.

    2002-07-23

    Our research was conducted in support of the EPA ENERGY STAR Office Equipment program, whose goal is to reduce the amount of electricity consumed by office equipment in the U.S. The most energy-efficient models in each office equipment category are eligible for the ENERGY STAR label, which consumers can use to identify and select efficient products. As the efficiency of each category improves over time, the ENERGY STAR criteria need to be revised accordingly. The purpose of this study was to provide reliable data on the energy consumption of the newest personal computers and monitors that the EPA can use to evaluate revisions to current ENERGY STAR criteria as well as to improve the accuracy of ENERGY STAR program savings estimates. We report the results of measuring the power consumption and power management capabilities of a sample of new monitors and computers. These results will be used to improve estimates of program energy savings and carbon emission reductions, and to inform revisions of the ENERGY STAR criteria for these products. Our sample consists of 35 monitors and 26 computers manufactured between July 2000 and October 2001; it includes cathode ray tube (CRT) and liquid crystal display (LCD) monitors, Macintosh and Intel-architecture computers, desktop and laptop computers, and integrated computer systems, in which power consumption of the computer and monitor cannot be measured separately. For each machine we measured power consumption when off, on, and in each low-power level. We identify trends in and opportunities to reduce power consumption in new personal computers and monitors. Our results include a trend among monitor manufacturers to provide a single very low low-power level, well below the current ENERGY STAR criteria for sleep power consumption. These very low sleep power results mean that energy consumed when monitors are off or in active use has become more important in terms of contribution to the overall unit energy consumption (UEC
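The unit energy consumption figure the abstract ends on is just measured power per mode weighted by time in that mode. A minimal sketch, with illustrative power levels and an assumed usage profile (not the study's measurements):

```python
# Annual unit energy consumption (UEC) from per-mode power draw and a
# daily usage profile. All numbers here are illustrative placeholders.

def annual_uec_kwh(power_w, hours_per_day):
    """power_w / hours_per_day: dicts keyed by mode ('on', 'sleep', 'off')."""
    assert abs(sum(hours_per_day.values()) - 24.0) < 1e-9
    wh_per_day = sum(power_w[m] * hours_per_day[m] for m in power_w)
    return wh_per_day * 365 / 1000.0

# A monitor with a very low sleep level: most annual energy is then
# consumed in the 'on' and 'off' states, as the abstract notes.
uec = annual_uec_kwh({'on': 35.0, 'sleep': 1.0, 'off': 0.8},
                     {'on': 4.0, 'sleep': 6.0, 'off': 14.0})
```

With sleep power this low, the 'on' hours dominate the annual total, which is why off and active-use consumption grow in relative importance.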

  1. A dc model for power switching transistors suitable for computer-aided design and analysis

    NASA Technical Reports Server (NTRS)

    Wilson, P. M.; George, R. T., Jr.; Owen, H. A.; Wilson, T. G.

    1979-01-01

    A model for bipolar junction power switching transistors whose parameters can be readily obtained by the circuit design engineer, and which can be conveniently incorporated into standard computer-based circuit analysis programs is presented. This formulation results from measurements which may be made with standard laboratory equipment. Measurement procedures, as well as a comparison between actual and computed results, are presented.

  2. Computer simulation of effect of conditions on discharge-excited high power gas flow CO laser

    NASA Astrophysics Data System (ADS)

    Ochiai, Ryo; Iyoda, Mitsuhiro; Taniwaki, Manabu; Sato, Shunichi

    2017-01-01

    The authors have developed computer simulation codes to analyze the effect of operating conditions on the performance of a discharge-excited high-power gas-flow CO laser. Six conditions can be analyzed. The simulation code, described and executed on Macintosh computers, consists of modules that calculate the kinetic processes. The detailed conditions, kinetic processes, results, and discussion are described in this paper below.

  3. Computer simulation of magnetization-controlled shunt reactors for calculating electromagnetic transients in power systems

    SciTech Connect

    Karpov, A. S.

    2013-01-15

    A computer procedure for simulating magnetization-controlled dc shunt reactors is described, which enables the electromagnetic transients in electric power systems to be calculated. It is shown that, by taking technically simple measures in the control system, one can obtain high-speed reactors sufficient for many purposes, and dispense with the use of high-power devices for compensating higher harmonic components.

  4. Optimization of fluid line sizes with pumping power penalty IBM-360 computer program

    NASA Technical Reports Server (NTRS)

    Jelinek, D.

    1972-01-01

    A computer program has been developed to calculate and total the weights of the tubing, the fluid in the tubing, and the fuel cell power source necessary to power the pump, based on flow rate and pressure drop. The program can be used for fluid systems in any type of aircraft, spacecraft, trucks, ships, refineries, and chemical processing plants.

  5. Phosphoric acid fuel cell power plant system performance model and computer program

    NASA Technical Reports Server (NTRS)

    Alkasab, K. A.; Lu, C. Y.

    1984-01-01

    A FORTRAN computer program was developed for analyzing the performance of phosphoric acid fuel cell power plant systems. Energy mass and electrochemical analysis in the reformer, the shaft converters, the heat exchangers, and the fuel cell stack were combined to develop a mathematical model for the power plant for both atmospheric and pressurized conditions, and for several commercial fuels.

  6. CIDER: Enabling Robustness-Power Tradeoffs on a Computational Eyeglass

    PubMed Central

    Mayberry, Addison; Tun, Yamin; Hu, Pan; Smith-Freedman, Duncan; Ganesan, Deepak; Marlin, Benjamin; Salthouse, Christopher

    2016-01-01

    The human eye offers a fascinating window into an individual’s health, cognitive attention, and decision making, but we lack the ability to continually measure these parameters in the natural environment. The challenges lie in: a) handling the complexity of continuous high-rate sensing from a camera and processing the image stream to estimate eye parameters, and b) dealing with the wide variability in illumination conditions in the natural environment. This paper explores the power–robustness tradeoffs inherent in the design of a wearable eye tracker, and proposes a novel staged architecture that enables graceful adaptation across the spectrum of real-world illumination. We propose CIDER, a system that operates in a highly optimized low-power mode under indoor settings by using a fast Search-Refine controller to track the eye, but detects when the environment switches to more challenging outdoor sunlight and switches models to operate robustly under this condition. Our design is holistic and tackles a) power consumption in digitizing pixels, estimating pupillary parameters, and illuminating the eye via near-infrared, b) error in estimating pupil center and pupil dilation, and c) model training procedures that involve zero effort from a user. We demonstrate that CIDER can estimate pupil center with error less than two pixels (0.6°), and pupil diameter with error of one pixel (0.22mm). Our end-to-end results show that we can operate at power levels of roughly 7mW at a 4Hz eye tracking rate, or roughly 32mW at rates upwards of 250Hz. PMID:27042165

  7. Stack and dump: Peak-power scaling by coherent pulse addition in passive cavities

    NASA Astrophysics Data System (ADS)

    Breitkopf, S.; Eidam, T.; Klenke, A.; Carstens, H.; Holzberger, S.; Fill, E.; Schreiber, T.; Krausz, F.; Tünnermann, A.; Pupeza, I.; Limpert, J.

    2015-10-01

    During the last decades femtosecond lasers have proven their vast benefit in both scientific and technological tasks. Nevertheless, one laser feature bearing the tremendous potential for high-field applications, delivering extremely high peak and average powers simultaneously, is still not accessible. This is the performance regime several upcoming applications such as laser particle acceleration require, and therefore, challenge laser technology to the fullest. On the one hand, some state-of-the-art canonical bulk amplifier systems provide pulse peak powers in the range of multi-terawatt to petawatt. On the other hand, concepts for advanced solid-state-lasers, specifically thin disk, slab or fiber systems have shown their capability of emitting high average powers in the kilowatt range with a high wall-plug-efficiency while maintaining an excellent spatial and temporal quality of the output beam. In this article, a brief introduction to a concept for a compact laser system capable of simultaneously providing high peak and average powers all along with a high wall-plug efficiency will be given. The concept relies on the stacking of a pulse train emitted from a high-repetitive femtosecond laser system in a passive enhancement cavity, also referred to as temporal coherent combining. In this manner, the repetition rate is decreased in favor of a pulse energy enhancement by the same factor while the average power is almost preserved. The key challenge of this concept is a fast, purely reflective switching element that allows for the dumping of the enhanced pulse out of the cavity. Addressing this challenge could, for the first time, allow for the highly efficient extraction of joule-class pulses at megawatt average power levels and thus lead to a whole new area of applications for ultra-fast laser systems.
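The trade the abstract describes is simple arithmetic: stacking N successive pulses in the cavity multiplies pulse energy by N, divides the dump repetition rate by N, and (for a lossless cavity) preserves average power. A back-of-envelope sketch with illustrative numbers:

```python
# "Stack and dump" bookkeeping: repetition rate is traded for pulse
# energy at (ideally) constant average power. Numbers are illustrative.

def stack_and_dump(e_pulse_j, rep_rate_hz, n_stacked, enhancement_eff=1.0):
    e_out = e_pulse_j * n_stacked * enhancement_eff   # dumped pulse energy
    dump_rate = rep_rate_hz / n_stacked               # dump repetition rate
    p_avg = e_out * dump_rate                         # average power out
    return e_out, dump_rate, p_avg

# 10 uJ pulses at 100 MHz, 1000 pulses stacked, lossless cavity:
e, r, p = stack_and_dump(10e-6, 100e6, 1000)
```

The 10 µJ, 100 MHz train becomes 10 mJ pulses dumped at 100 kHz, with the kilowatt-scale average power preserved; scaling n_stacked toward 10^5 is how joule-class pulses at megawatt average power would be reached.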

  8. Negative capacitance for ultra-low power computing

    NASA Astrophysics Data System (ADS)

    Khan, Asif Islam

    Owing to the fundamental physics of the Boltzmann distribution, the ever-increasing power dissipation in nanoscale transistors threatens an end to the almost-four-decade-old cadence of continued performance improvement in complementary metal-oxide-semiconductor (CMOS) technology. It is now agreed that the introduction of new physics into the operation of field-effect transistors---in other words, "reinventing the transistor'"--- is required to avert such a bottleneck. In this dissertation, we present the experimental demonstration of a novel physical phenomenon, called the negative capacitance effect in ferroelectric oxides, which could dramatically reduce power dissipation in nanoscale transistors. It was theoretically proposed in 2008 that by introducing a ferroelectric negative capacitance material into the gate oxide of a metal-oxide-semiconductor field-effect transistor (MOSFET), the subthreshold slope could be reduced below the fundamental Boltzmann limit of 60 mV/dec, which, in turn, could arbitrarily lower the power supply voltage and the power dissipation. The research presented in this dissertation establishes the theoretical concept of ferroelectric negative capacitance as an experimentally verified fact. The main results presented in this dissertation are threefold. To start, we present the first direct measurement of negative capacitance in isolated, single crystalline, epitaxially grown thin film capacitors of ferroelectric Pb(Zr0.2Ti0.8)O3. By constructing a simple resistor-ferroelectric capacitor series circuit, we show that, during ferroelectric switching, the ferroelectric voltage decreases, while the stored charge in it increases, which directly shows a negative slope in the charge-voltage characteristics of a ferroelectric capacitor. Such a situation is completely opposite to what would be observed in a regular resistor-positive capacitor series circuit. This measurement could serve as a canonical test for negative capacitance in any novel
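The 60 mV/dec limit the abstract mentions comes from the standard subthreshold-swing expression SS = ln(10)·(kT/q)·(1 + C_s/C_ins). With an ordinary positive gate-insulator capacitance the body factor exceeds 1; a ferroelectric negative capacitance (C_ins < 0 with |C_ins| > C_s) pushes it below 1. A sketch with illustrative capacitance values:

```python
# Subthreshold swing vs. insulator capacitance (textbook formula;
# the capacitance values are illustrative, not measured).
import math

def subthreshold_swing_mv_per_dec(c_s, c_ins, temp_k=300.0):
    k_b, q = 1.380649e-23, 1.602176634e-19
    boltzmann_limit = math.log(10) * k_b * temp_k / q * 1000.0  # ~59.5 mV/dec
    return boltzmann_limit * (1.0 + c_s / c_ins)

ss_regular = subthreshold_swing_mv_per_dec(c_s=1.0, c_ins=2.0)    # above 60
ss_negcap = subthreshold_swing_mv_per_dec(c_s=1.0, c_ins=-2.0)    # below 60
```

The same semiconductor capacitance that gives ~89 mV/dec with a regular oxide gives ~30 mV/dec once the insulator capacitance goes negative, which is the mechanism for lowering supply voltage and power dissipation.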

  9. Can Computer-Assisted Discovery Learning Foster First Graders' Fluency with the Most Basic Addition Combinations?

    ERIC Educational Resources Information Center

    Baroody, Arthur J.; Eiland, Michael D.; Purpura, David J.; Reid, Erin E.

    2013-01-01

    In a 9-month training experiment, 64 first graders with a risk factor were randomly assigned to computer-assisted structured discovery of the add-1 rule (e.g., the sum of 7 + 1 is the number after "seven" when we count), unstructured discovery learning of this regularity, or an active-control group. Planned contrasts revealed that the…

  10. PowerPoint Presentations: A Creative Addition to the Research Process.

    ERIC Educational Resources Information Center

    Perry, Alan E.

    2003-01-01

    Contends that the requirement of a PowerPoint presentation as part of the research process would benefit students in the following ways: learning how to conduct research; starting their research project sooner; honing presentation and public speaking skills; improving cooperative and social skills; and enhancing technology skills. Outlines the…

  11. Power Computations in Time Series Analyses for Traffic Safety Interventions

    PubMed Central

    McLeod, A. Ian; Vingilis, E. R.

    2008-01-01

    The evaluation of traffic safety interventions or other policies that can affect road safety often requires the collection of administrative time series data, such as monthly motor vehicle collision data that may be difficult and/or expensive to collect. Furthermore, since policy decisions may be based on the results found from the intervention analysis of the policy, it is important to ensure that the statistical tests have enough power, that is, that we have collected enough time series data both before and after the intervention so that a meaningful change in the series will likely be detected. In this short paper we present a simple methodology for doing this. It is expected that the methodology presented will be useful for sample size determination in a wide variety of traffic safety intervention analysis applications. Our method is illustrated with a proposed traffic safety study that was funded by NIH. PMID:18460394
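The power question can be sketched by Monte Carlo (an illustrative simplification, not the paper's methodology, which would account for the time-series structure): simulate monthly counts before and after an intervention that lowers the mean, apply a one-sided two-sample test, and count how often the drop is detected at the 5% level.

```python
# Monte Carlo power estimate for detecting a level drop in a series of
# independent monthly observations (autocorrelation ignored here; a real
# intervention analysis would model it, e.g. with ARIMA errors).
import random
import statistics

def estimated_power(n_before, n_after, mean, effect, sd, trials=2000, seed=1):
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        pre = [rng.gauss(mean, sd) for _ in range(n_before)]
        post = [rng.gauss(mean - effect, sd) for _ in range(n_after)]
        se = (statistics.pvariance(pre) / n_before
              + statistics.pvariance(post) / n_after) ** 0.5
        z = (statistics.mean(pre) - statistics.mean(post)) / se
        hits += z > 1.645          # one-sided 5% test: did collisions drop?
    return hits / trials

# 36 months before/after, mean 100 collisions/month, a drop of 10, sd 15:
power = estimated_power(n_before=36, n_after=36, mean=100, effect=10, sd=15)
```

Runs like this answer the design question directly: if the estimated power is too low, more months of pre- or post-intervention data must be collected before the study can hope to detect a meaningful change.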

  12. The Effects of Computer-Assisted Instruction on Student Achievement in Addition and Subtraction at First Grade Level.

    ERIC Educational Resources Information Center

    Spivey, Patsy M.

    This study was conducted to determine whether the traditional classroom approach to instruction involving the addition and subtraction of number facts (digits 0-6) is more or less effective than the traditional classroom approach plus a commercially-prepared computer game. A pretest-posttest control group design was used with two groups of first…

  13. 17 CFR Appendix B to Part 4 - Adjustments for Additions and Withdrawals in the Computation of Rate of Return

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... 17 Commodity and Securities Exchanges 1 2010-04-01 2010-04-01 false Adjustments for Additions and Withdrawals in the Computation of Rate of Return B Appendix B to Part 4 Commodity and Securities Exchanges COMMODITY FUTURES TRADING COMMISSION COMMODITY POOL OPERATORS AND COMMODITY TRADING ADVISORS Pt. 4, App....

  14. 17 CFR Appendix B to Part 4 - Adjustments for Additions and Withdrawals in the Computation of Rate of Return

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... 17 Commodity and Securities Exchanges 1 2011-04-01 2011-04-01 false Adjustments for Additions and Withdrawals in the Computation of Rate of Return B Appendix B to Part 4 Commodity and Securities Exchanges COMMODITY FUTURES TRADING COMMISSION COMMODITY POOL OPERATORS AND COMMODITY TRADING ADVISORS Pt. 4, App....

  15. 17 CFR Appendix B to Part 4 - Adjustments for Additions and Withdrawals in the Computation of Rate of Return

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    ... 17 Commodity and Securities Exchanges 1 2013-04-01 2013-04-01 false Adjustments for Additions and Withdrawals in the Computation of Rate of Return B Appendix B to Part 4 Commodity and Securities Exchanges COMMODITY FUTURES TRADING COMMISSION COMMODITY POOL OPERATORS AND COMMODITY TRADING ADVISORS Pt. 4, App....

  16. 17 CFR Appendix B to Part 4 - Adjustments for Additions and Withdrawals in the Computation of Rate of Return

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ... 17 Commodity and Securities Exchanges 1 2012-04-01 2012-04-01 false Adjustments for Additions and Withdrawals in the Computation of Rate of Return B Appendix B to Part 4 Commodity and Securities Exchanges COMMODITY FUTURES TRADING COMMISSION COMMODITY POOL OPERATORS AND COMMODITY TRADING ADVISORS Pt. 4, App....

  17. 17 CFR Appendix B to Part 4 - Adjustments for Additions and Withdrawals in the Computation of Rate of Return

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    ... 17 Commodity and Securities Exchanges 1 2014-04-01 2014-04-01 false Adjustments for Additions and Withdrawals in the Computation of Rate of Return B Appendix B to Part 4 Commodity and Securities Exchanges COMMODITY FUTURES TRADING COMMISSION COMMODITY POOL OPERATORS AND COMMODITY TRADING ADVISORS Pt. 4, App....
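The adjustment problem these appendix entries address, computing a rate of return when money flows in or out mid-period, is commonly handled by time-weighting the flows. As one generic illustration (the Modified Dietz method; the CFR appendix prescribes its own computation, which is not reproduced here):

```python
# Modified Dietz rate of return: gains are divided by capital weighted
# by how long it was at work during the period. Shown for illustration
# only; not the CFR-specified formula.

def modified_dietz(begin_value, end_value, flows):
    """flows: list of (fraction_of_period_remaining, amount); deposits > 0."""
    net_flow = sum(amt for _, amt in flows)
    weighted = sum(w * amt for w, amt in flows)
    gain = end_value - begin_value - net_flow
    return gain / (begin_value + weighted)

# A $1,000 pool, a $200 addition halfway through, ending at $1,260:
r = modified_dietz(1000.0, 1260.0, [(0.5, 200.0)])
```

The $60 gain is measured against $1,100 of time-weighted capital rather than $1,200, so the addition neither inflates nor deflates the reported return.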

  18. The effectiveness of power-generating complexes constructed on the basis of nuclear power plants combined with additional sources of energy determined taking risk factors into account

    NASA Astrophysics Data System (ADS)

    Aminov, R. Z.; Khrustalev, V. A.; Portyankin, A. V.

    2015-02-01

    The effectiveness of combining nuclear power plants equipped with water-cooled water-moderated power-generating reactors (VVER) with other sources of energy within unified power-generating complexes is analyzed. The use of such power-generating complexes makes it possible to achieve the necessary load pickup capability and flexibility in performing the mandatory selective primary and emergency control of load, as well as participation in passing the night minimums of electric load curves, while retaining high values of the capacity utilization factor of the entire power-generating complex at higher levels of steam-turbine efficiency. Versions involving combined use of nuclear power plants with hydrogen toppings and gas turbine units for generating electricity are considered. Because hydrogen is an unsafe energy carrier whose use introduces additional elements of risk, a procedure for evaluating these risks under different conditions of implementing the fuel-and-hydrogen cycle at nuclear power plants is proposed. A risk accounting technique based on statistical data is considered, including the characteristics of hydrogen and gas pipelines and the occurrence rate of tightness loss in process pipeline equipment. The expected frequencies of fires and explosions at nuclear power plants fitted with hydrogen toppings and gas turbine units are calculated. In estimating the damage inflicted by fires and explosions occurring in nuclear power plant turbine buildings, US statistical data were used. Conservative scenarios of fires and explosions of hydrogen-air mixtures in nuclear power plant turbine buildings are presented. Results are given from calculations of the ratio of the introduced annual risk to the attained net annual profit in commensurable versions. This ratio can be used in selecting projects characterized by the most technically attainable and socially acceptable safety.

  19. Improving Performance of Power Systems with Large-scale Variable Generation Additions

    SciTech Connect

    Makarov, Yuri V.; Etingov, Pavel V.; Samaan, Nader A.; Lu, Ning; Ma, Jian; Subbarao, Krishnappa; Du, Pengwei; Kannberg, Landis D.

    2012-07-22

    A power system with large-scale renewable resources, like wind and solar generation, creates significant challenges to system control performance and reliability characteristics because of intermittency and uncertainties associated with variable generation. It is important to quantify these uncertainties, and then incorporate this information into decision-making processes and power system operations. This paper presents three approaches to evaluate the flexibility needed from conventional generators and other resources in the presence of variable generation as well as provide this flexibility from a non-traditional resource – wide area energy storage system. These approaches provide operators with much-needed information on the likelihood and magnitude of ramping and capacity problems, and the ability to dispatch available resources in response to such problems.
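A minimal view of the "ramping requirement" idea in the abstract (a toy illustration, not the paper's methodology): the flexibility conventional generators and storage must supply is bounded by the largest hour-to-hour swings of net load, i.e. load minus variable generation.

```python
# Ramping requirement from a net-load series (illustrative hourly data).

def ramp_requirements(load, variable_gen):
    """Return (max up-ramp, max down-ramp) of net load, in MW/h."""
    net = [l - v for l, v in zip(load, variable_gen)]
    ramps = [b - a for a, b in zip(net, net[1:])]
    return max(ramps), min(ramps)

load = [900, 950, 1000, 1100, 1050, 980]    # MW, hourly
wind = [300, 150, 100, 250, 400, 200]       # MW, hourly
up, down = ramp_requirements(load, wind)
```

Note how the wind swings, not the load alone, set the requirement: the load never moves more than 100 MW/h, but the net load demands 200 MW/h of flexibility in both directions.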

  20. Coherent addition of high power laser diode array with a V-shape external Talbot cavity.

    PubMed

    Liu, B; Liu, Y; Braiman, Y

    2008-12-08

    We designed a V-shape external Talbot cavity for a broad-area laser diode array and demonstrated coherent laser beam combining at high power with narrow spectral linewidth. The V-shape external Talbot cavity provides good mode-discrimination and does not require a spatial filter. A multi-lobe far-field profile generated by a low filling-factor phase-locked array is confirmed by our numerical simulation.

  1. Biologically relevant molecular transducer with increased computing power and iterative abilities.

    PubMed

    Ratner, Tamar; Piran, Ron; Jonoska, Natasha; Keinan, Ehud

    2013-05-23

    As computing devices, which process data and interconvert information, transducers can encode new information and use their output for subsequent computing, offering high computational power that may be equivalent to a universal Turing machine. We report on an experimental DNA-based molecular transducer that computes iteratively and produces biologically relevant outputs. As a proof of concept, the transducer accomplished division of numbers by 3. The iterative power was demonstrated by a recursive application on an obtained output. This device reads plasmids as input and processes the information according to a predetermined algorithm, which is represented by molecular software. The device writes new information on the plasmid using hardware that comprises DNA-manipulating enzymes. The computation produces dual output: a quotient, represented by newly encoded DNA, and a remainder, represented by E. coli phenotypes. This device algorithmically manipulates genetic codes.
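The division-by-3 computation is the textbook example of a finite-state transducer, which can be modeled abstractly in a few lines (this electronic sketch is ours, not the paper's DNA implementation): reading the dividend most-significant bit first, the state tracks the running remainder mod 3 and each step emits one quotient bit.

```python
# Finite-state transducer dividing a binary number by 3: state = current
# remainder (0, 1, or 2); on each input bit the machine outputs one
# quotient bit and moves to the next remainder state.

def divide_by_3(bits):
    """bits: binary string of the dividend. Returns (quotient, remainder)."""
    remainder, quotient = 0, []
    for b in bits:
        acc = remainder * 2 + int(b)
        quotient.append(str(acc // 3))   # output symbol of the transducer
        remainder = acc % 3              # next state
    return ''.join(quotient).lstrip('0') or '0', remainder

q, r = divide_by_3('1100')   # 12 / 3
```

The dual output mirrors the paper's design: the quotient is the written string (the newly encoded DNA in the molecular device) and the remainder is the final state (the E. coli phenotype).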

  2. On the Computational Power of Spiking Neural P Systems with Self-Organization

    PubMed Central

    Wang, Xun; Song, Tao; Gong, Faming; Zheng, Pan

    2016-01-01

    Neural-like computing models are versatile computing mechanisms in the field of artificial intelligence. Spiking neural P systems (SN P systems for short) are one of the recently developed spiking neural network models inspired by the way neurons communicate. The communications among neurons are essentially achieved by spikes, i.e., short electrical pulses. In terms of motivation, SN P systems fall into the third generation of neural network models. In this study, a novel variant of SN P systems, namely SN P systems with self-organization, is introduced, and the computational power of the system is investigated and evaluated. It is proved that SN P systems with self-organization are capable of computing and accepting the family of sets of Turing-computable natural numbers. Moreover, with 87 neurons the system can compute any Turing-computable recursive function, thus achieving Turing universality. These results demonstrate promising initiatives to solve an open problem raised by Gh. Păun. PMID:27283843

  3. On the Computational Power of Spiking Neural P Systems with Self-Organization

    NASA Astrophysics Data System (ADS)

    Wang, Xun; Song, Tao; Gong, Faming; Zheng, Pan

    2016-06-01

    Neural-like computing models are versatile computing mechanisms in the field of artificial intelligence. Spiking neural P systems (SN P systems for short) are one of the recently developed spiking neural network models inspired by the way neurons communicate. The communications among neurons are essentially achieved by spikes, i.e., short electrical pulses. In terms of motivation, SN P systems fall into the third generation of neural network models. In this study, a novel variant of SN P systems, namely SN P systems with self-organization, is introduced, and the computational power of the system is investigated and evaluated. It is proved that SN P systems with self-organization are capable of computing and accepting the family of sets of Turing-computable natural numbers. Moreover, with 87 neurons the system can compute any Turing-computable recursive function, thus achieving Turing universality. These results demonstrate promising initiatives to solve an open problem raised by Gh. Păun.

  4. Subsonic flutter analysis addition to NASTRAN. [for use with CDC 6000 series digital computers

    NASA Technical Reports Server (NTRS)

    Doggett, R. V., Jr.; Harder, R. L.

    1973-01-01

    A subsonic flutter analysis capability has been developed for NASTRAN, and a developmental version of the program has been installed on the CDC 6000 series digital computers at the Langley Research Center. The flutter analysis is of the modal type, uses doublet lattice unsteady aerodynamic forces, and solves the flutter equations by using the k-method. Surface and one-dimensional spline functions are used to transform from the aerodynamic degrees of freedom to the structural degrees of freedom. Some preliminary applications of the method to a beamlike wing, a platelike wing, and a platelike wing with a folded tip are compared with existing experimental and analytical results.

  5. Postprocessing of Voxel-Based Topologies for Additive Manufacturing Using the Computational Geometry Algorithms Library (CGAL)

    DTIC Science & Technology

    2015-06-01

    that a structure is built up by layers. Typically, additive manufacturing devices (3-dimensional [3-D] printers, e.g.) use the stereolithography (STL...begin with a standard, voxel-based topology optimization scheme and end with an STL file, ready for use in a 3-D printer or other additive manufacturing...S, Yvinec M. Cgal 4.6 - 3d alpha shapes. 2015 [accessed 2015 May 18]. http://doc.cgal.org/latest/Alpha_shapes_3/index.html#Chapter_3D_Alpha_Shapes

  6. Lake Erie water level study. Appendix E. Power. Annex D. Computer programs. Final report

    SciTech Connect

    Not Available

    1981-07-01

    This Annex is part of Appendix E - Power. Appendix E contains the economic evaluation of Lake Erie regulation plans in terms of their effects on the generation of hydroelectric power on the connecting channels of the Great Lakes and on the St. Lawrence River. It also contains a description of the methodology that was developed for the purpose of carrying out this evaluation. The purpose of Annex D is to document the computer programs that were used for the determination of power output at each of the power plants. The documentation also provides sufficient user instructions to permit the economic evaluation results to be readily reproducible.

  7. Addition of higher order plate and shell elements into NASTRAN computer program

    NASA Technical Reports Server (NTRS)

    Narayanaswami, R.; Goglia, G. L.

    1976-01-01

    Two higher order plate elements, the linear strain triangular membrane element and the quintic bending element, along with a shallow shell element, suitable for inclusion into the NASTRAN (NASA Structural Analysis) program are described. Additions to the NASTRAN Theoretical Manual, Users' Manual, Programmers' Manual and the NASTRAN Demonstration Problem Manual, for inclusion of these elements into the NASTRAN program are also presented.

  8. Evaluation of Different Power of Near Addition in Two Different Multifocal Intraocular Lenses

    PubMed Central

    Unsal, Ugur; Baser, Gonen

    2016-01-01

    Purpose. To compare near, intermediate, and distance vision and quality of vision when refractive rotational multifocal intraocular lenses with 3.0 diopters or diffractive multifocal intraocular lenses with 2.5 diopters of near addition are implanted. Methods. 41 eyes of 41 patients in whom rotational +3.0 diopter near addition IOLs were implanted and 30 eyes of 30 patients in whom diffractive +2.5 diopter near addition IOLs were implanted after cataract surgery were reviewed. Uncorrected and corrected distance visual acuity, intermediate visual acuity, near visual acuity, and patient satisfaction were evaluated 6 months later. Results. Corrected and uncorrected distance visual acuities were the same in both groups (p = 0.50 and p = 0.509, resp.). Uncorrected intermediate and corrected intermediate and near visual acuities were better in the +2.5 near addition intraocular lens group (p = 0.049, p = 0.005, and p = 0.001, resp.), and uncorrected near visual acuity was better in the +3.0 near addition intraocular lens group (p = 0.001). Patient satisfaction was similar in both groups. Conclusion. The +2.5 diopter near addition could be a better choice for younger patients with more distance and intermediate visual requirements (driving, outdoor activities), whereas the +3.0 diopter addition should be considered for patients with greater near vision requirements (reading). PMID:27340560

  9. Addition of visual noise boosts evoked potential-based brain-computer interface

    PubMed Central

    Xie, Jun; Xu, Guanghua; Wang, Jing; Zhang, Sicong; Zhang, Feng; Li, Yeping; Han, Chengcheng; Li, Lili

    2014-01-01

    Although noise has a proven beneficial role in brain function, there have been no attempts to exploit the stochastic resonance effect in neural engineering applications, especially in research on brain-computer interfaces (BCIs). In our study, a steady-state motion visual evoked potential (SSMVEP)-based BCI with periodic visual stimulation plus moderate spatiotemporal noise achieved better offline and online performance due to enhancement of periodic components in brain responses, accompanied by suppression of high harmonics. Offline results followed a bell-shaped, resonance-like curve, and 7–36% online performance improvements were achieved when identical visual noise was adopted for different stimulation frequencies. Using neural encoding modeling, these phenomena can be explained as noise-induced input-output synchronization in human sensory systems, which commonly possess a low-pass property. Our work demonstrates that noise can boost BCIs in addressing human needs. PMID:24828128
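
    The stochastic-resonance effect behind this result can be illustrated with a toy threshold detector: a subthreshold periodic input crosses a hard threshold only with the help of noise, so the detector tracks the signal best at moderate noise levels. All parameters below are illustrative assumptions, not the study's stimuli.

```python
import numpy as np

rng = np.random.default_rng(0)
t = np.arange(0.0, 20.0, 0.01)
signal = 0.8 * np.sin(2.0 * np.pi * 1.0 * t)     # peak 0.8 < threshold 1.0

def detector_signal_corr(noise_sd):
    """Correlation between a hard-threshold detector's output and the
    underlying subthreshold signal, for a given noise level."""
    noisy = signal + rng.normal(0.0, noise_sd, t.size)
    fired = (noisy > 1.0).astype(float)           # threshold detector
    return np.corrcoef(fired, signal)[0, 1]

corr_moderate = detector_signal_corr(0.3)         # helpful noise
corr_excessive = detector_signal_corr(3.0)        # noise swamps the signal
```

    With no noise at all the detector never fires; with excessive noise its output is nearly independent of the signal, giving the bell-shaped dependence described above.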

  10. Efficient method for computing the maximum-likelihood quantum state from measurements with additive Gaussian noise.

    PubMed

    Smolin, John A; Gambetta, Jay M; Smith, Graeme

    2012-02-17

    We provide an efficient method for computing the maximum-likelihood mixed quantum state (with density matrix ρ) given a set of measurement outcomes in a complete orthonormal operator basis subject to Gaussian noise. Our method works by first changing basis, yielding a candidate density matrix μ which may have nonphysical (negative) eigenvalues, and then finding the nearest physical state under the 2-norm. Our algorithm takes at worst O(d^4) for the basis change plus O(d^3) for finding ρ, where d is the dimension of the quantum state. In the special case where the measurement basis is strings of Pauli operators, the basis change takes only O(d^3) as well. The workhorse of the algorithm is a new linear-time method for finding the closest probability distribution (in Euclidean distance) to a set of real numbers summing to one.
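
    The "closest probability distribution" step can be sketched as a Euclidean projection of the candidate eigenvalues onto the probability simplex. The sort-based version below runs in O(n log n) rather than the paper's linear time, and the sample spectrum is made up.

```python
import numpy as np

def project_to_simplex(v):
    """Project a real vector (candidate eigenvalues, summing to ~1) onto
    the probability simplex in Euclidean distance."""
    v = np.asarray(v, dtype=float)
    n = v.size
    u = np.sort(v)[::-1]                       # descending order
    css = np.cumsum(u)
    j = np.arange(1, n + 1)
    # largest index whose entry survives a uniform shift
    rho = np.nonzero(u + (1.0 - css) / j > 0)[0][-1]
    theta = (1.0 - css[rho]) / (rho + 1)       # uniform shift
    return np.maximum(v + theta, 0.0)          # clip negatives to zero

mu_eigs = np.array([0.6, 0.5, -0.1])           # one nonphysical eigenvalue
rho_eigs = project_to_simplex(mu_eigs)         # physical spectrum, sums to 1
```

    In the full algorithm this projection is applied to the eigenvalues of μ, and ρ is rebuilt from the projected spectrum and the original eigenvectors.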

  11. Large Advanced Space Systems (LASS) computer-aided design program additions

    NASA Technical Reports Server (NTRS)

    Farrell, C. E.

    1982-01-01

    The LSS preliminary and conceptual design requires extensive iterative analysis because of the effects of structural, thermal, and control intercoupling. A computer-aided design program that will permit integrating and interfacing of required large space system (LSS) analyses is discussed. The primary objective of this program is the implementation of modeling techniques and analysis algorithms that permit interactive design and tradeoff studies of LSS concepts. Eight software modules were added to the program. The existing rigid body controls module was modified to include solar pressure effects. The new model generator modules and appendage synthesizer module are integrated (interfaced) to permit interactive definition and generation of LSS concepts. The mass properties module permits interactive specification of discrete masses and their locations. The other modules permit interactive analysis of orbital transfer requirements, antenna primary beam, and attitude control requirements.

  12. Nonlinear dynamics of high-power ultrashort laser pulses: exaflop computations on a laboratory computer station and subcycle light bullets

    NASA Astrophysics Data System (ADS)

    Voronin, A. A.; Zheltikov, A. M.

    2016-09-01

    The propagation of high-power ultrashort light pulses involves intricate nonlinear spatio-temporal dynamics where various spectral-temporal field transformation effects are strongly coupled to the beam dynamics, which, in turn, varies from the leading to the trailing edge of the pulse. Analysis of this nonlinear dynamics, accompanied by spatial instabilities, beam breakup into multiple filaments, and unique phenomena leading to the generation of extremely short optical field waveforms, is equivalent in its computational complexity to a simulation of the time evolution of a few billion-dimensional physical system. Such an analysis requires exaflops of computational operations and is usually performed on high-performance supercomputers. Here, we present methods of physical modeling and numerical analysis that allow problems of this class to be solved on a laboratory computer boosted by a cluster of graphics accelerators. Exaflop computations performed with the application of these methods reveal new unique phenomena in the spatio-temporal dynamics of high-power ultrashort laser pulses. We demonstrate that unprecedentedly short light bullets can be generated as a part of that dynamics, providing optical field localization in both space and time through a delicate balance between dispersion and nonlinearity with simultaneous suppression of diffraction-induced beam divergence due to the joint effect of Kerr and ionization nonlinearities.

  13. Power levels in office equipment: Measurements of new monitors and personal computers

    SciTech Connect

    Roberson, Judy A.; Brown, Richard E.; Nordman, Bruce; Webber, Carrie A.; Homan, Gregory H.; Mahajan, Akshay; McWhinney, Marla; Koomey, Jonathan G.

    2002-05-14

    Electronic office equipment has proliferated rapidly over the last twenty years and is projected to continue growing in the future. Efforts to reduce the growth in office equipment energy use have focused on power management to reduce the power consumption of electronic devices when they are not being used for their primary purpose. The EPA ENERGY STAR® program has been instrumental in gaining widespread support for power management in office equipment, and accurate information about the energy used by office equipment at all power levels is important to improving program design and evaluation. This paper presents the results of a field study conducted during 2001 to measure the power levels of new monitors and personal computers. We measured off, on, and low-power levels in about 60 units manufactured since July 2000. The paper summarizes the power data collected, explores differences within the sample (e.g., between CRT and LCD monitors), and discusses some issues that arise in metering office equipment. We also present conclusions to help improve the success of future power management programs. Our findings include a trend among monitor manufacturers toward a single, very low low-power level, and a need to standardize methods for measuring monitor on power so that the annual energy consumption of office equipment, as well as actual and potential energy savings from power management, can be estimated more accurately.
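
    The annual-energy arithmetic such measurements support is simple: average the per-mode power levels weighted by time spent in each mode. The wattages and duty fractions below are hypothetical, not figures from the study.

```python
# Back-of-envelope annual-energy estimate from per-mode power levels.
HOURS_PER_YEAR = 8760

def annual_energy_kwh(mode_watts, mode_fraction):
    """mode_watts: {mode: W}; mode_fraction: {mode: fraction of the year}."""
    assert abs(sum(mode_fraction.values()) - 1.0) < 1e-9
    avg_watts = sum(mode_watts[m] * mode_fraction[m] for m in mode_watts)
    return avg_watts * HOURS_PER_YEAR / 1000.0

monitor = {"on": 70.0, "low": 10.0, "off": 2.0}   # hypothetical CRT monitor
duty = {"on": 0.25, "low": 0.15, "off": 0.60}     # hypothetical usage pattern
energy = annual_energy_kwh(monitor, duty)          # kWh per year
```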

  14. 78 FR 47805 - Test Documentation for Digital Computer Software Used in Safety Systems of Nuclear Power Plants

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-08-06

    ... COMMISSION Test Documentation for Digital Computer Software Used in Safety Systems of Nuclear Power Plants..., ``Test Documentation for Digital Computer Software Used in Safety Systems of Nuclear Power Plants.'' This..., ``Maintenance and Inspection of Records.'' This RG is one of six RG revisions addressing computer...

  15. A Latency-Tolerant Partitioner for Distributed Computing on the Information Power Grid

    NASA Technical Reports Server (NTRS)

    Das, Sajal K.; Harvey, Daniel J.; Biwas, Rupak; Kwak, Dochan (Technical Monitor)

    2001-01-01

    NASA's Information Power Grid (IPG) is an infrastructure designed to harness the power of geographically distributed computers, databases, and human expertise, in order to solve large-scale realistic computational problems. This type of meta-computing environment is necessary to present a unified virtual machine to application developers that hides the intricacies of a highly heterogeneous environment and yet maintains adequate security. In this paper, we present a novel partitioning scheme, called MinEX, that dynamically balances processor workloads while minimizing data movement and runtime communication, for applications that are executed in a parallel distributed fashion on the IPG. We also analyze the conditions required for the IPG to be an effective tool for such distributed computations. Our results show that MinEX is a viable load balancer provided the nodes of the IPG are connected by a high-speed asynchronous interconnection network.
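
    For contrast, the balancing objective can be sketched with a classic greedy heuristic (longest processing time first). This illustrates what a load balancer optimizes, not the MinEX algorithm itself, and the task weights are invented.

```python
import heapq

# Greedy load balancing: assign each task, heaviest first, to the
# currently least-loaded node.
def balance(tasks, n_nodes):
    """tasks: {name: weight}. Returns {name: node_index}."""
    heap = [(0.0, i) for i in range(n_nodes)]   # (load, node)
    heapq.heapify(heap)
    assignment = {}
    for name, weight in sorted(tasks.items(), key=lambda kv: -kv[1]):
        load, node = heapq.heappop(heap)
        assignment[name] = node
        heapq.heappush(heap, (load + weight, node))
    return assignment

plan = balance({"a": 5.0, "b": 4.0, "c": 3.0, "d": 2.0}, n_nodes=2)
```

    MinEX additionally accounts for data movement and communication costs, which this sketch ignores.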

  16. A Power Efficient Exaflop Computer Design for Global Cloud System Resolving Climate Models.

    NASA Astrophysics Data System (ADS)

    Wehner, M. F.; Oliker, L.; Shalf, J.

    2008-12-01

    Exascale computers would allow routine ensemble modeling of the global climate system at the cloud system resolving scale. Power and cost requirements of traditional architecture systems are likely to delay such capability for many years. We present an alternative route to the exascale using embedded processor technology to design a system optimized for ultra high resolution climate modeling. These power efficient processors, used in consumer electronic devices such as mobile phones, portable music players, cameras, etc., can be tailored to the specific needs of scientific computing. We project that a system capable of integrating a kilometer scale climate model a thousand times faster than real time could be designed and built in a five year time scale for US$75M with a power consumption of 3MW. This is cheaper, more power efficient and sooner than any other existing technology.

  17. Enantioselective conjugate addition of nitro compounds to α,β-unsaturated ketones: an experimental and computational study.

    PubMed

    Manzano, Rubén; Andrés, José M; Álvarez, Rosana; Muruzábal, María D; de Lera, Ángel R; Pedrosa, Rafael

    2011-05-16

    A series of chiral thioureas derived from easily available diamines, prepared from α-amino acids, have been tested as catalysts in the enantioselective Michael additions of nitroalkanes to α,β-unsaturated ketones. The best results are obtained with the bifunctional catalyst prepared from L-valine. This thiourea promotes the reaction with high enantioselectivities and chemical yields for aryl/vinyl ketones, but the enantiomeric ratio for alkyl/vinyl derivatives is very modest. The addition of substituted nitromethanes led to the corresponding adducts with excellent enantioselectivity but very poor diastereoselectivity. Evidence for the isomerization of the addition products has been obtained from the reaction of chalcone with [D(3)]nitromethane, which shows that the final addition products epimerize under the reaction conditions. The epimerization explains the low diastereoselectivity observed in the formation of adducts with two adjacent tertiary stereocenters. Density functional studies of the transition structures corresponding to two alternative activation modes of the nitroalkanes and α,β-unsaturated ketones by the bifunctional organocatalyst have been carried out at the B3LYP/3-21G* level. The computations are consistent with a reaction model involving the Michael addition of the thiourea-activated nitronate to the ketone activated by the protonated amine of the organocatalyst. The enantioselectivities predicted by the computations are consistent with the experimental values obtained for aryl- and alkyl-substituted α,β-unsaturated ketones.

  18. Computational Research Challenges and Opportunities for the Optimization of Fossil Energy Power Generation System

    SciTech Connect

    Zitney, S.E.

    2007-06-01

    Emerging fossil energy power generation systems must operate with unprecedented efficiency and near-zero emissions, while optimizing profitably amid cost fluctuations for raw materials, finished products, and energy. To help address these challenges, the fossil energy industry will have to rely increasingly on the use of advanced computational tools for modeling and simulating complex process systems. In this paper, we present the computational research challenges and opportunities for the optimization of fossil energy power generation systems across the plant lifecycle, from process synthesis and design to plant operations. We also look beyond the plant gates to discuss research challenges and opportunities for enterprise-wide optimization, including planning, scheduling, and supply chain technologies.

  19. A digital computer simulation and study of a direct-energy-transfer power-conditioning system

    NASA Technical Reports Server (NTRS)

    Burns, W. W., III; Owen, H. A., Jr.; Wilson, T. G.; Rodriguez, G. E.; Paulkovich, J.

    1974-01-01

    A digital computer simulation technique, which can be used to study such composite power-conditioning systems, was applied to a spacecraft direct-energy-transfer power-processing system. The results obtained duplicate actual system performance with considerable accuracy. The validity of the approach and its usefulness in studying various aspects of system performance such as steady-state characteristics and transient responses to severely varying operating conditions are demonstrated experimentally.

  20. Definition and computation of intermolecular contact in liquids using additively weighted Voronoi tessellation.

    PubMed

    Isele-Holder, Rolf E; Rabideau, Brooks D; Ismail, Ahmed E

    2012-05-10

    We present a definition of intermolecular surface contact by applying weighted Voronoi tessellations to configurations of various organic liquids and water obtained from molecular dynamics simulations. This definition of surface contact is used to link the COSMO-RS model and molecular dynamics simulations. We demonstrate that additively weighted tessellation is the superior tessellation type to define intermolecular surface contact. Furthermore, we fit a set of weights for the elements C, H, O, N, F, and S for this tessellation type to obtain optimal agreement between the models. We use these radii to successfully predict contact statistics for compounds that were excluded from the fit and mixtures. The observed agreement between contact statistics from COSMO-RS and molecular dynamics simulations confirms the capability of the presented method to describe intermolecular contact. Furthermore, we observe that increasing polarity of the surfaces of the examined molecules leads to weaker agreement in the contact statistics. This is especially pronounced for pure water.
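
    The additively weighted (Apollonius) distance underlying this tessellation can be sketched directly: a point belongs to the site minimizing the Euclidean distance minus the site's weight, so more heavily weighted atoms claim more surrounding space. The element weights below are illustrative, not the paper's fitted values.

```python
import math

def aw_owner(point, sites):
    """Owner of a point under additively weighted Voronoi tessellation.
    sites: list of (label, (x, y, z), weight)."""
    label, _, _ = min(sites, key=lambda s: math.dist(point, s[1]) - s[2])
    return label

sites = [("O", (0.0, 0.0, 0.0), 1.5), ("H", (2.0, 0.0, 0.0), 1.0)]
# The point is nearer "H" in plain distance, but the larger "O" weight
# wins under the additively weighted metric.
owner = aw_owner((1.2, 0.0, 0.0), sites)
```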

  1. Effect of ferrite addition above the base ferrite on the coupling factor of wireless power transfer for vehicle applications

    NASA Astrophysics Data System (ADS)

    Batra, T.; Schaltz, E.; Ahn, S.

    2015-05-01

    Power transfer capability of wireless power transfer systems depends strongly on the magnetic design of the primary and secondary inductors and is measured quantitatively by the coupling factor. The inductors are designed by placing the coil over a ferrite base to increase the coupling factor and reduce magnetic emissions to the surroundings. This paper investigates the effect of adding extra ferrite above the base ferrite, at different physical locations, on the self-inductance, mutual inductance, and coupling factor. The addition can increase or decrease the mutual inductance depending on the placement of the ferrite. The added ferrite also increases the self-inductance of the coils, which can lead to an overall decrease in the coupling factor. Correct placement of the ferrite, on the other hand, can increase the coupling factor relative to the base ferrite alone, because the added ferrite lies closer to the other inductor. Ferrite, being a heavy compound of iron, increases the inductor weight significantly and must be added judiciously. Four zones are identified in the paper, which show different sensitivities to the addition of ferrite in terms of the two inductances and the coupling factor. Simulation and measurement results are presented for different air gaps between the coils and for different gap distances between the ferrite base and the added ferrite. This paper is useful for improving the coupling factor while adding minimum weight to a wireless power transfer system.
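
    The trade-off described above follows from the definition k = M / sqrt(L1 · L2): added ferrite that raises self-inductance without raising mutual inductance lowers k. The inductance values (in µH) below are hypothetical, not the paper's measurements.

```python
import math

def coupling_factor(m, l1, l2):
    """k = M / sqrt(L1 * L2) for two magnetically coupled coils."""
    return m / math.sqrt(l1 * l2)

k_base = coupling_factor(m=30.0, l1=200.0, l2=200.0)
# Ferrite placement that raises self-inductance 10% with M unchanged:
k_added = coupling_factor(m=30.0, l1=220.0, l2=220.0)   # lower than k_base
```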

  2. Thread selection according to power characteristics during context switching on compute nodes

    DOEpatents

    Archer, Charles J.; Blocksome, Michael A.; Randles, Amanda E.; Ratterman, Joseph D.; Smith, Brian E.

    2016-10-04

    Methods, apparatus, and products are disclosed for thread selection during context switching on a plurality of compute nodes that includes: executing, by a compute node, an application using a plurality of threads of execution, including executing one or more of the threads of execution; selecting, by the compute node from a plurality of available threads of execution for the application, a next thread of execution in dependence upon power characteristics for each of the available threads; determining, by the compute node, whether criteria for a thread context switch are satisfied; and performing, by the compute node, the thread context switch if the criteria for a thread context switch are satisfied, including executing the next thread of execution.

  3. Thread selection according to predefined power characteristics during context switching on compute nodes

    DOEpatents

    None, None

    2013-06-04

    Methods, apparatus, and products are disclosed for thread selection during context switching on a plurality of compute nodes that includes: executing, by a compute node, an application using a plurality of threads of execution, including executing one or more of the threads of execution; selecting, by the compute node from a plurality of available threads of execution for the application, a next thread of execution in dependence upon power characteristics for each of the available threads; determining, by the compute node, whether criteria for a thread context switch are satisfied; and performing, by the compute node, the thread context switch if the criteria for a thread context switch are satisfied, including executing the next thread of execution.
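
    The selection step described in these patent abstracts can be sketched minimally as follows. The policy (pick the lowest-power ready thread) and the wattages are illustrative assumptions, not the patented method's actual criteria.

```python
# Power-aware next-thread selection at a context switch: among ready
# threads, choose the one with the lowest power characteristic.
def select_next_thread(ready_threads):
    """ready_threads: iterable of (thread_id, watts)."""
    thread_id, _ = min(ready_threads, key=lambda t: t[1])
    return thread_id

ready = [("t1", 4.2), ("t2", 3.1), ("t3", 5.0)]
chosen = select_next_thread(ready)
```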

  4. Enhancing Specific Energy and Power in Asymmetric Supercapacitors - A Synergetic Strategy based on the Use of Redox Additive Electrolytes

    PubMed Central

    Singh, Arvinder; Chandra, Amreesh

    2016-01-01

    The strategy of using a redox additive electrolyte in combination with multiwall carbon nanotube/metal oxide composites leads to substantial improvements in the specific energy and power of asymmetric supercapacitors (ASCs). When the pure electrolyte is optimally modified with a redox additive, viz. KI, a ~105% increase in specific energy is obtained, with good cyclic stability over 3,000 charge-discharge cycles and ~14.7% capacitance fade. This increase is a direct consequence of the iodine/iodide redox pairs, which strongly modify the faradaic and non-faradaic reactions occurring on the surface of the electrodes. Contrary to what is shown in a few earlier reports, it is established that an indiscriminate increase in the concentration of redox additives leads to performance loss. Suitable explanations are given based on theoretical laws. The specific energy and power values reported for the fabricated ASCs are comparable to or higher than those reported for ASCs based on toxic acetonitrile or expensive ionic liquids. The paper shows that the use of a redox additive is an economically favorable strategy for obtaining cost-effective and environmentally friendly ASCs. PMID:27184260

  5. Enhancing Specific Energy and Power in Asymmetric Supercapacitors - A Synergetic Strategy based on the Use of Redox Additive Electrolytes.

    PubMed

    Singh, Arvinder; Chandra, Amreesh

    2016-05-17

    The strategy of using a redox additive electrolyte in combination with multiwall carbon nanotube/metal oxide composites leads to substantial improvements in the specific energy and power of asymmetric supercapacitors (ASCs). When the pure electrolyte is optimally modified with a redox additive, viz. KI, a ~105% increase in specific energy is obtained, with good cyclic stability over 3,000 charge-discharge cycles and ~14.7% capacitance fade. This increase is a direct consequence of the iodine/iodide redox pairs, which strongly modify the faradaic and non-faradaic reactions occurring on the surface of the electrodes. Contrary to what is shown in a few earlier reports, it is established that an indiscriminate increase in the concentration of redox additives leads to performance loss. Suitable explanations are given based on theoretical laws. The specific energy and power values reported for the fabricated ASCs are comparable to or higher than those reported for ASCs based on toxic acetonitrile or expensive ionic liquids. The paper shows that the use of a redox additive is an economically favorable strategy for obtaining cost-effective and environmentally friendly ASCs.

  6. Enhancing Specific Energy and Power in Asymmetric Supercapacitors - A Synergetic Strategy based on the Use of Redox Additive Electrolytes

    NASA Astrophysics Data System (ADS)

    Singh, Arvinder; Chandra, Amreesh

    2016-05-01

    The strategy of using a redox additive electrolyte in combination with multiwall carbon nanotube/metal oxide composites leads to substantial improvements in the specific energy and power of asymmetric supercapacitors (ASCs). When the pure electrolyte is optimally modified with a redox additive, viz. KI, a ~105% increase in specific energy is obtained, with good cyclic stability over 3,000 charge-discharge cycles and ~14.7% capacitance fade. This increase is a direct consequence of the iodine/iodide redox pairs, which strongly modify the faradaic and non-faradaic reactions occurring on the surface of the electrodes. Contrary to what is shown in a few earlier reports, it is established that an indiscriminate increase in the concentration of redox additives leads to performance loss. Suitable explanations are given based on theoretical laws. The specific energy and power values reported for the fabricated ASCs are comparable to or higher than those reported for ASCs based on toxic acetonitrile or expensive ionic liquids. The paper shows that the use of a redox additive is an economically favorable strategy for obtaining cost-effective and environmentally friendly ASCs.

  7. Power grid simulation applications developed using the GridPACK™ high performance computing framework

    SciTech Connect

    Jin, Shuangshuang; Chen, Yousu; Diao, Ruisheng; Huang, Zhenyu; Perkins, William; Palmer, Bruce

    2016-12-01

    This paper describes the GridPACK™ software framework for developing power grid simulations that can run on high performance computing platforms, with several example applications (dynamic simulation, static contingency analysis, and dynamic contingency analysis) that have been developed using GridPACK.

  8. The Power of Computer-aided Tomography to Investigate Marine Benthic Communities

    EPA Science Inventory

    Utilization of Computer-aided-Tomography (CT) technology is a powerful tool to investigate benthic communities in aquatic systems. In this presentation, we will attempt to summarize our 15 years of experience in developing specific CT methods and applications to marine benthic co...

  9. Manual of phosphoric acid fuel cell power plant cost model and computer program

    NASA Technical Reports Server (NTRS)

    Lu, C. Y.; Alkasab, K. A.

    1984-01-01

    Cost analysis of the phosphoric acid fuel cell power plant includes two parts: a method for estimating system capital costs, and an economic analysis that determines the levelized annual cost of operating the system used in the capital cost estimation. A FORTRAN computer program has been developed for this cost analysis.
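
    The levelized-annual-cost arithmetic such a model performs can be sketched with standard engineering-economics formulas: annualize the capital cost via a capital recovery factor and add annual operation and maintenance. The formula is generic and the numbers are made up; neither comes from the NASA model.

```python
def capital_recovery_factor(i, n):
    """CRF = i(1+i)^n / ((1+i)^n - 1); i: annual rate, n: life in years."""
    return i * (1 + i) ** n / ((1 + i) ** n - 1)

def levelized_annual_cost(capital, i, n, annual_om):
    """Annualized capital plus annual O&M cost."""
    return capital * capital_recovery_factor(i, n) + annual_om

cost = levelized_annual_cost(capital=1_000_000, i=0.08, n=20, annual_om=25_000)
```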

  10. Characterization of Steel-Ta Dissimilar Metal Builds Made Using Very High Power Ultrasonic Additive Manufacturing (VHP-UAM)

    NASA Astrophysics Data System (ADS)

    Sridharan, Niyanth; Norfolk, Mark; Babu, Sudarsanam Suresh

    2016-05-01

    Ultrasonic additive manufacturing is a solid-state additive manufacturing technique that uses ultrasonic vibrations to bond metal tapes into near-net-shaped components. The major advantage of this process is the ability to manufacture layered structures from dissimilar materials without any intermetallic formation. The majority of the published literature has focused only on the bond formation mechanism in aluminum alloys. The current work explains the microstructure evolution during dissimilar joining of iron and tantalum using very high power ultrasonic additive manufacturing, with characterization of the interfaces by electron back-scattered diffraction and nano-indentation measurements. The results showed extensive grain refinement at the bonded interfaces of these metals. This phenomenon was attributed to a continuous dynamic recrystallization process driven by high-strain-rate plastic deformation and the associated adiabatic heating, which remains well below 50 pct of the melting point of both iron and Ta.

  11. Methods for computing weighting tables based on local power expansion for tristimulus values computations.

    PubMed

    Li, Changjun; Oleari, Claudio; Melgosa, Manuel; Xu, Yang

    2011-11-01

    In this paper, two types of weighting tables are derived by applying the local power expansion method proposed by Oleari [Color Res. Appl. 25, 176 (2000)]. Both tables at two different levels consider the deconvolution of the spectrophotometric data for monochromator triangular transmittance. The first one, named zero-order weighting table, is similar to weighting table 5 of American Society for Testing and Materials (ASTM) used with the measured spectral reflectance factors (SRFs) corrected by the Stearns and Stearns formula. The second one, named second-order weighting table, is similar to weighting table 6 of ASTM and must be used with the undeconvoluted SRFs. It is hoped that the results of this paper will aid the International Commission on Illumination TC 1-71 on tristimulus integration in focusing on ongoing methods, testing, and recommendations.
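
    Once derived, a weighting table is applied as a weighted sum: each tristimulus value is the dot product of a table column with the measured spectral reflectance factors at the table's wavelengths. The three-wavelength weights and SRFs below are made-up numbers for illustration, not values from either table.

```python
def tristimulus_value(weights, srfs):
    """Weighted sum of spectral reflectance factors over the table's
    wavelengths, yielding one tristimulus value."""
    assert len(weights) == len(srfs)
    return sum(w * r for w, r in zip(weights, srfs))

w_x = [0.2, 1.1, 0.7]       # hypothetical weighting-table column for X
srf = [0.5, 0.6, 0.4]       # hypothetical spectral reflectance factors
x_tristimulus = tristimulus_value(w_x, srf)
```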

  12. Computer model of the MFTF-B neutral beam Accel dc power supply

    SciTech Connect

    Wilson, J.H.

    1983-11-30

    Using the SCEPTRE circuit modeling code, a computer model was developed for the MFTF Neutral Beam Power Supply System (NBPSS) Accel dc Power Supply (ADCPS). The ADCPS provides 90 kV, 88 A, to the Accel Modulator. Because of the complex behavior of the power supply, use of the computer model is necessary to adequately understand the power supply's behavior over a wide range of load conditions and faults. The model developed includes all the circuit components and parameters, and some of the stray values. The model has been well validated for transients with times on the order of milliseconds, and with one exception, for steady-state operation. When using a circuit modeling code for a system with a wide range of time constants, it can become impossible to obtain good solutions for all time ranges at once. The present model concentrates on the millisecond-range transients because the compensating capacitor bank tends to isolate the power supply from the load for faster transients. Attempts to include stray circuit elements with time constants in the microsecond and shorter range have had little success because of huge increases in computing time that result. The model has been successfully extended to include the accel modulator.

  13. Computed lateral power spectral density response of conventional and STOL airplanes to random atmospheric turbulence

    NASA Technical Reports Server (NTRS)

    Lichtenstein, J. H.

    1974-01-01

    A method of computing the power spectral densities of the lateral response of airplanes to random atmospheric turbulence was adapted to an electronic digital computer. Using this program, the power spectral densities of the lateral roll, yaw, and sideslip angular displacements of several conventional and STOL airplanes were computed. The results show that for the conventional airplanes the roll response is more prominent than the yaw or sideslip response. For the STOL airplanes, on the other hand, the yaw and sideslip responses were larger than the roll response. The response frequency of the STOL airplanes is generally higher than that of the conventional airplanes. This combination of greater STOL sensitivity in yaw and sideslip and the higher frequencies at which these responses occur could be a factor in the poor riding qualities of this class of airplane.
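
    A modern one-sided power spectral density estimate of a sampled response signal can be sketched with a windowed periodogram; the test signal below is an invented 5 Hz sinusoid, not flight data.

```python
import numpy as np

def psd_one_sided(x, fs):
    """One-sided periodogram PSD estimate with a Hann window."""
    n = len(x)
    win = np.hanning(n)
    spectrum = np.fft.rfft(x * win)
    scale = 1.0 / (fs * np.sum(win ** 2))    # units: signal^2 per Hz
    p = scale * np.abs(spectrum) ** 2
    p[1:-1] *= 2.0                           # fold in negative frequencies
    return np.fft.rfftfreq(n, 1.0 / fs), p

fs = 100.0                                   # sample rate, Hz
t = np.arange(0.0, 10.0, 1.0 / fs)
x = np.sin(2.0 * np.pi * 5.0 * t)            # 5 Hz test "response" signal
freqs, p = psd_one_sided(x, fs)
peak_hz = freqs[np.argmax(p)]
```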

  14. Computer models and simulations of IGCC power plants with Canadian coals

    SciTech Connect

    Zheng, L.; Furimsky, E.

    1999-07-01

    In this paper, three steady-state computer models for simulation of IGCC power plants with Shell, Texaco, and BGL (British Gas Lurgi) gasifiers will be presented. All models were based on a study by Bechtel for Nova Scotia Power Corporation. They were built using the Advanced System for Process Engineering (ASPEN) steady-state simulation software together with Fortran programs developed in house. Each model was integrated from several sections which can be simulated independently, such as coal preparation, gasification, gas cooling, acid gas removal, sulfur recovery, gas turbine, heat recovery steam generation, and steam cycle. A general description of each process, the model's overall structure, capabilities, testing results, and background references will be given. The performance of some Canadian coals on these models will be discussed as well. The authors also built a computer model of an IGCC power plant with a Kellogg-Rust-Westinghouse gasifier; however, due to paper length limitations, it is not presented here.

  15. Computational models of an inductive power transfer system for electric vehicle battery charge

    NASA Astrophysics Data System (ADS)

    Anele, A. O.; Hamam, Y.; Chassagne, L.; Linares, J.; Alayli, Y.; Djouani, K.

    2015-09-01

    One of the issues to be solved for electric vehicles (EVs) to become a success is the technical solution of their charging system. In this paper, computational models of an inductive power transfer (IPT) system for EV battery charging are presented. Based on the fundamental principles behind IPT systems, 3 kW single-phase and 22 kW three-phase IPT systems for the Renault ZOE are designed in MATLAB/Simulink. The results obtained, based on the technical specifications of the lithium-ion battery and charger type of the Renault ZOE, show that the models are able to provide the total voltage required by the battery. Also, considering the charging time for each IPT model, they are capable of delivering the electricity needed to power the ZOE. In conclusion, this study shows that the designed computational IPT models may be employed as a support structure needed to effectively power any viable EV.
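    The phasor analysis behind such IPT models can be sketched for a series-series compensated two-coil link. All component values and the 85 kHz operating frequency below are invented illustrations, not Renault ZOE or paper parameters:

```python
import math

def load_power(v_s, f, l1, l2, k, r1, r2, r_load, compensated):
    """Phasor analysis of a two-coil inductive link (series-series topology)."""
    w = 2 * math.pi * f
    m = k * math.sqrt(l1 * l2)           # mutual inductance from coupling factor
    z1 = r1 + 1j * w * l1                # primary loop impedance
    z2 = r2 + r_load + 1j * w * l2       # secondary loop impedance
    if compensated:
        # series capacitors chosen so w^2 * L * C = 1 cancel the coil reactances
        z1 += 1 / (1j * w * (1 / (w * w * l1)))
        z2 += 1 / (1j * w * (1 / (w * w * l2)))
    i1 = v_s / (z1 + (w * m) ** 2 / z2)  # reflected-impedance form
    i2 = -1j * w * m * i1 / z2
    return abs(i2) ** 2 * r_load         # real power delivered to the load

# Illustrative values only (85 kHz is a common WPT band).
p_tuned = load_power(100.0, 85e3, 100e-6, 100e-6, 0.2, 0.1, 0.1, 10.0, True)
p_detuned = load_power(100.0, 85e3, 100e-6, 100e-6, 0.2, 0.1, 0.1, 10.0, False)
```

    Tuning the series capacitors to the coil reactances is what lets a loosely coupled link deliver useful power, mirroring the role of compensation in IPT charger designs.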

  16. Computer-based procedure for field activities: Results from three evaluations at nuclear power plants

    SciTech Connect

    Oxstrand, Johanna; bly, Aaron; LeBlanc, Katya

    2014-09-01

    Nearly all activities that involve human interaction with the systems of a nuclear power plant are guided by procedures. The paper-based procedures (PBPs) currently used by industry have a demonstrated history of ensuring safety; however, improving procedure use could yield tremendous savings in increased efficiency and safety. One potential way to improve procedure-based activities is through the use of computer-based procedures (CBPs). Computer-based procedures provide the opportunity to incorporate context-driven job aids, such as drawings, photos, and just-in-time training, into the CBP system. One obvious advantage of this capability is reducing the time spent tracking down the applicable documentation. Additionally, human performance tools can be integrated in the CBP system in such a way that they help the worker focus on the task rather than the tools. Some tools can be completely incorporated into the CBP system, such as pre-job briefs, placekeeping, correct component verification, and peer checks. Other tools can be partly integrated in a fashion that reduces the time and labor required, such as concurrent and independent verification. Another benefit of CBPs compared to PBPs is dynamic procedure presentation. PBPs are static documents, which limits the degree to which the information presented can be tailored to the task and conditions when the procedure is executed. The CBP system could be configured to display only the relevant steps based on operating mode, plant status, and the task at hand. A dynamic presentation of the procedure (also known as context-sensitive procedures) will guide the user down the path of relevant steps based on the current conditions. This feature will reduce the user's workload and inherently reduce the risk of incorrectly marking a step as not applicable and the risk of incorrectly performing a step that should be marked as not applicable. As part of the Department of Energy's (DOE) Light Water Reactors Sustainability Program

  17. Power and Performance Management in Nonlinear Virtualized Computing Systems via Predictive Control.

    PubMed

    Wen, Chengjian; Mu, Yifen

    2015-01-01

    The problem of power and performance management captures growing research interest in both academic and industrial fields. Virtualization, as an advanced technology to conserve energy, has become the basic architecture for most data centers. Accordingly, more sophisticated and finer control is desired in virtualized computing systems, where multiple types of control actions exist along with time-delay effects, which makes the problem complicated to formulate and solve. Furthermore, because of improvements in chips and the reduction of idle power, power consumption in modern machines shows significant nonlinearity, making linear power models (commonly adopted in previous work) no longer suitable. To deal with this, we build a discrete system state model, in which all control actions and time-delay effects are included via state transitions, and performance and power can be defined on each state. Then, we design a predictive controller, via which the quadratic cost function integrating performance and power can be dynamically optimized. Experiment results show the effectiveness of the controller. By choosing a moderate weight, a good balance can be achieved between performance and power: 99.76% of requirements can be dealt with and power consumption can be reduced by 33% compared to the case with an open-loop controller.
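    A minimal sketch of the idea, with an invented frequency-level table and a one-period actuation delay standing in for the time-delay effect; this is an illustration, not the authors' controller:

```python
# Invented (service rate req/s, power W) per machine state; not the paper's data.
LEVELS = [(40.0, 60.0), (70.0, 90.0), (95.0, 140.0), (110.0, 230.0)]

def cost(level, demand, weight):
    """Quadratic cost mixing a performance shortfall penalty with power draw."""
    rate, power = LEVELS[level]
    return weight * max(0.0, demand - rate) ** 2 + (1 - weight) * power

def mpc_step(level, demand_forecast, weight):
    """One receding-horizon step: pick the neighbouring state minimising the
    summed cost, with the first period still at the old state (a crude
    stand-in for the actuation time delay)."""
    best, best_cost = level, float("inf")
    for nxt in {max(level - 1, 0), level, min(level + 1, len(LEVELS) - 1)}:
        c = cost(level, demand_forecast[0], weight)
        c += sum(cost(nxt, d, weight) for d in demand_forecast[1:])
        if c < best_cost:
            best, best_cost = nxt, c
    return best
```

    With a high performance weight the controller steps up under a rising demand forecast and steps down when forecast demand is low, trading off the two cost terms exactly as the quadratic objective dictates.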

  18. Power and Performance Management in Nonlinear Virtualized Computing Systems via Predictive Control

    PubMed Central

    Wen, Chengjian; Mu, Yifen

    2015-01-01

    The problem of power and performance management captures growing research interest in both academic and industrial fields. Virtualization, as an advanced technology to conserve energy, has become the basic architecture for most data centers. Accordingly, more sophisticated and finer control is desired in virtualized computing systems, where multiple types of control actions exist along with time-delay effects, which makes the problem complicated to formulate and solve. Furthermore, because of improvements in chips and the reduction of idle power, power consumption in modern machines shows significant nonlinearity, making linear power models (commonly adopted in previous work) no longer suitable. To deal with this, we build a discrete system state model, in which all control actions and time-delay effects are included via state transitions, and performance and power can be defined on each state. Then, we design a predictive controller, via which the quadratic cost function integrating performance and power can be dynamically optimized. Experiment results show the effectiveness of the controller. By choosing a moderate weight, a good balance can be achieved between performance and power: 99.76% of requirements can be dealt with and power consumption can be reduced by 33% compared to the case with an open-loop controller. PMID:26225769

  19. Analysis of Application Power and Schedule Composition in a High Performance Computing Environment

    SciTech Connect

    Elmore, Ryan; Gruchalla, Kenny; Phillips, Caleb; Purkayastha, Avi; Wunder, Nick

    2016-01-05

    As the capacity of high performance computing (HPC) systems continues to grow, small changes in energy management have the potential to produce significant energy savings. In this paper, we employ an extensive informatics system for aggregating and analyzing real-time performance and power use data to evaluate energy footprints of jobs running in an HPC data center. We look at the effects of algorithmic choices for a given job on the resulting energy footprints, analyze application-specific power consumption, and summarize average power use in the aggregate. All of these views reveal meaningful power variance between classes of applications as well as between chosen methods for a given job. Using these data, we discuss energy-aware cost-saving strategies based on reordering the HPC job schedule. Using historical job and power data, we present a hypothetical job schedule reordering that: (1) reduces the facility's peak power draw and (2) manages power in conjunction with a large-scale photovoltaic array. Lastly, we leverage this data to understand the practical limits on predicting key power use metrics at the time of submission.
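    The schedule-reordering idea can be illustrated with a toy model (equal-length jobs, two running concurrently per slot; all wattages invented, not the facility's data):

```python
def slot_peaks(jobs, width):
    """Total draw of each schedule slot when jobs run `width` at a time."""
    return [sum(jobs[i:i + width]) for i in range(0, len(jobs), width)]

def reorder_for_peak(jobs, width):
    """Greedy balancing: place each job (largest draw first) into the slot
    with the lowest accumulated power, flattening the facility profile."""
    n_slots = -(-len(jobs) // width)  # ceiling division
    slots = [[] for _ in range(n_slots)]
    for p in sorted(jobs, reverse=True):
        target = min((s for s in slots if len(s) < width), key=sum)
        target.append(p)
    return [p for s in slots for p in s]

submitted = [950, 900, 870, 820, 300, 250, 180, 120]  # per-job draw in watts
before = max(slot_peaks(submitted, 2))
after = max(slot_peaks(reorder_for_peak(submitted, 2), 2))
```

    Pairing power-hungry jobs with light ones lowers the peak slot draw, which is the effect the hypothetical reordering in the paper exploits; matching draw to photovoltaic output would add a time-varying cap to the same greedy placement.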

  20. System and method for controlling power consumption in a computer system based on user satisfaction

    DOEpatents

    Yang, Lei; Dick, Robert P; Chen, Xi; Memik, Gokhan; Dinda, Peter A; Shy, Alex; Ozisikyilmaz, Berkin; Mallik, Arindam; Choudhary, Alok

    2014-04-22

    Systems and methods for controlling power consumption in a computer system. For each of a plurality of interactive applications, the method changes a frequency at which a processor of the computer system runs, receives an indication of user satisfaction, determines a relationship between the changed frequency and the user satisfaction of the interactive application, and stores the determined relationship information. The determined relationship can distinguish between different users and different interactive applications. A frequency may be selected from the discrete frequencies at which the processor of the computer system runs based on the determined relationship information for a particular user and a particular interactive application running on the processor of the computer system. The processor may be adapted to run at the selected frequency.
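    A minimal sketch of the selection step described above, with invented satisfaction data standing in for the stored relationships; the learning of the frequency-satisfaction relationship itself is not shown:

```python
# Hypothetical stored relationships: satisfaction score (0-1) per discrete CPU
# frequency (GHz), keyed by (user, application). All values invented.
SATISFACTION = {
    ("alice", "browser"): {1.2: 0.95, 1.8: 0.97, 2.4: 0.98},
    ("alice", "video"):   {1.2: 0.60, 1.8: 0.92, 2.4: 0.97},
}

def pick_frequency(user, app, threshold=0.9):
    """Lowest discrete frequency whose recorded satisfaction meets the
    threshold; fall back to the highest frequency if none qualifies."""
    curve = SATISFACTION[(user, app)]
    ok = [f for f, s in sorted(curve.items()) if s >= threshold]
    return ok[0] if ok else max(curve)
```

    Because the relationship distinguishes users and applications, the same user gets a low frequency for an undemanding application and a higher one where satisfaction would otherwise suffer.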

  1. Computation of the Mutual Inductance between Air-Cored Coils of Wireless Power Transformer

    NASA Astrophysics Data System (ADS)

    Anele, A. O.; Hamam, Y.; Chassagne, L.; Linares, J.; Alayli, Y.; Djouani, K.

    2015-09-01

    Wireless power transfer is a modern technology which allows the transfer of electric power between the air-cored coils of its transformer via high-frequency magnetic fields. However, due to coil separation distance and misalignment, maximum power transfer is not guaranteed. Based on a more efficient and general model available in the literature, rederived mathematical models for evaluating the mutual inductance between circular coils with and without lateral and angular misalignment are presented. Rather than being presented numerically, the computed results are plotted using MATLAB code. The results are compared with the published ones, and clarification regarding the errors made is presented. In conclusion, this study shows that the power transfer efficiency of the system can be improved if a higher-frequency alternating current is supplied to the primary coil, the reactive parts of the coils are compensated with capacitors, and ferrite cores are added to the coils.
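    For the coaxial (no-misalignment) case, any such model can be checked against a direct numerical integration of the Neumann formula; the generic sketch below is not the paper's rederived model:

```python
import math

MU0 = 4e-7 * math.pi  # vacuum permeability, H/m

def mutual_inductance(a, b, d, n=2000):
    """Mutual inductance (H) of two coaxial circular loops of radii a and b,
    axially separated by d, via midpoint-rule integration of the Neumann
    formula reduced to a single integral over the angle between elements."""
    dphi = 2 * math.pi / n
    total = 0.0
    for i in range(n):
        phi = (i + 0.5) * dphi
        total += math.cos(phi) / math.sqrt(
            a * a + b * b + d * d - 2 * a * b * math.cos(phi))
    return 0.5 * MU0 * a * b * total * dphi

m = mutual_inductance(0.10, 0.10, 0.05)  # two 10 cm loops, 5 cm apart
```

    In the far-field limit d much greater than a and b, the result approaches the magnetic-dipole value mu0*pi*a^2*b^2/(2*d^3), a handy sanity check on any rederived expression.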

  2. A fission matrix based validation protocol for computed power distributions in the advanced test reactor

    SciTech Connect

    Nielsen, J. W.; Nigg, D. W.; LaPorta, A. W.

    2013-07-01

    The Idaho National Laboratory (INL) has been engaged in a significant multi-year effort to modernize the computational reactor physics tools and validation procedures used to support operations of the Advanced Test Reactor (ATR) and its companion critical facility (ATRC). Several new protocols for validation of computed neutron flux distributions and spectra as well as for validation of computed fission power distributions, based on new experiments and well-recognized least-squares statistical analysis techniques, have been under development. In the case of power distributions, estimates of the a priori ATR-specific fuel element-to-element fission power correlation and covariance matrices are required for validation analysis. A practical method for generating these matrices using the element-to-element fission matrix is presented, along with a high-order scheme for estimating the underlying fission matrix itself. The proposed methodology is illustrated using the MCNP5 neutron transport code for the required neutronics calculations. The general approach is readily adaptable for implementation using any multidimensional stochastic or deterministic transport code that offers the required level of spatial, angular, and energy resolution in the computed solution for the neutron flux and fission source. (authors)
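    The role of the fission matrix can be sketched with power iteration: the dominant eigenvector of the element-to-element fission matrix is the fundamental-mode fission source (element powers), and its eigenvalue estimates k-effective. The 3x3 matrix below is invented for illustration; in practice the matrix comes from transport calculations such as MCNP5:

```python
def dominant_source(fission_matrix, iters=200):
    """Power iteration on a nonnegative fission matrix: returns the dominant
    eigenvalue (k-effective estimate) and the normalized fission source."""
    n = len(fission_matrix)
    s = [1.0 / n] * n  # flat initial source guess
    k = 1.0
    for _ in range(iters):
        new = [sum(fission_matrix[i][j] * s[j] for j in range(n)) for i in range(n)]
        k = sum(new)              # eigenvalue estimate (source normalized to 1)
        s = [x / k for x in new]  # renormalize the source
    return k, s

# Invented symmetric 3-element fission matrix for a toy core.
F = [[0.9, 0.3, 0.1],
     [0.3, 0.8, 0.3],
     [0.1, 0.3, 0.9]]
k_eff, source = dominant_source(F)
```

    The resulting source vector is what the element-wise power measurements are validated against, once scaled to total reactor power.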

  3. Power- and space-efficient image computation with compressive processing: I. Background and theory

    NASA Astrophysics Data System (ADS)

    Schmalz, Mark S.

    2000-11-01

    Surveillance imaging applications on small autonomous imaging platforms present challenges of highly constrained power supply and form factor, with potentially demanding specifications for target detection and recognition. Absent significant advances in image processing hardware, such power and space restrictions can imply severely limited computational capabilities. This holds especially for compute-intensive algorithms with high-precision fixed- or floating-point operations in deep pipelines that process large data streams. Such algorithms tend not to be amenable to small or simplified architectures involving (for example) reduced precision, reconfigurable logic, low-power gates, or energy recycling schemes. In this series of two papers, a technique of reduced-power computing called compressive processing (CXP) is presented and applied to several low- and mid-level computer vision operations. CXP computes over compressed data without resorting to intermediate decompression steps. As a result of fewer data due to compression, fewer operations are required by CXP than are required by computing over the corresponding uncompressed image. In several cases, CXP techniques yield speedups on the order of the compression ratio. Where lossy high-compression transforms are employed, it is often possible to use approximations to derive CXP operations to yield increased computational efficiency via a simplified mix of operations. The reduced work requirement, which follows directly from the presence of fewer data, also implies a reduced power requirement, especially if simpler operations are involved in compressive versus noncompressive operations. Several image processing algorithms (edge detection, morphological operations, and component labeling) are analyzed in the context of three compression transforms: vector quantization (VQ), visual pattern image coding (VPIC), and EBLAST. The latter is a lossy high-compression transformation developed for underwater

  4. Accelerating the Gauss-Seidel Power Flow Solver on a High Performance Reconfigurable Computer

    SciTech Connect

    Byun, Jong-Ho; Ravindran, Arun; Mukherjee, Arindam; Joshi, Bharat; Chassin, David P.

    2009-09-01

    The computationally intensive power flow problem determines the voltage magnitude and phase angle at each bus in a power system for hundreds of thousands of buses under balanced three-phase steady-state conditions. We report an FPGA acceleration of the Gauss-Seidel based power flow solver employed in the transmission module of the GridLAB-D power distribution simulator and analysis tool. The prototype hardware is implemented on an SGI Altix-RASC system equipped with a Xilinx Virtex II 6000 FPGA. Due to capacity limitations of the FPGA, only the bus voltage calculations of the power network are implemented in hardware, while the branch current calculations are implemented in software. For a 200,000-bus system, the bus voltage calculation on the FPGA achieves a 48x speed-up for PQ buses and a 62x speed-up for PV buses over an equivalent sequential software implementation. The average overall speed-up of the FPGA-CPU implementation with 100 iterations of the Gauss-Seidel power solver is 2.6x over a software implementation, with the branch calculations on the CPU accounting for 85% of the total execution time. The FPGA-CPU implementation also shows linear scaling with increase in the size of the input power network.
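    The bus-voltage update accelerated on the FPGA is the standard Gauss-Seidel sweep; a minimal software sketch for PQ buses on an invented 3-bus per-unit example (not a GridLAB-D case) is:

```python
def gauss_seidel_pf(ybus, s_inj, v, slack=0, sweeps=500):
    """Gauss-Seidel sweeps for PQ buses:
    V_i <- (S_i* / V_i* - sum_{j != i} Y_ij V_j) / Y_ii, slack bus held fixed."""
    n = len(ybus)
    v = list(v)
    for _ in range(sweeps):
        for i in range(n):
            if i == slack:
                continue
            coupling = sum(ybus[i][j] * v[j] for j in range(n) if j != i)
            v[i] = (s_inj[i].conjugate() / v[i].conjugate() - coupling) / ybus[i][i]
    return v

# Invented 3-bus network: bus 0 slack, buses 1-2 PQ loads (negative injections).
y01 = y02 = 1 / (0.02 + 0.08j)   # line admittances, per unit
y12 = 1 / (0.04 + 0.16j)
ybus = [[y01 + y02, -y01, -y02],
        [-y01, y01 + y12, -y12],
        [-y02, -y12, y02 + y12]]
s_inj = [0j, -(0.30 + 0.10j), -(0.20 + 0.10j)]
v_sol = gauss_seidel_pf(ybus, s_inj, [1.02 + 0j, 1.0 + 0j, 1.0 + 0j])
```

    Each bus update depends only on the latest neighbouring voltages, which is the data-parallel structure the FPGA implementation exploits; PV buses add a reactive-power recomputation step not shown here.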

  5. 78 FR 47011 - Software Unit Testing for Digital Computer Software Used in Safety Systems of Nuclear Power Plants

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-08-02

    ... COMMISSION Software Unit Testing for Digital Computer Software Used in Safety Systems of Nuclear Power Plants..., ``Software Unit Testing for Digital Computer Software Used in Safety Systems of Nuclear Power Plants.'' This... software elements if those systems include software. This RG is one of six RG revisions addressing...

  6. Manual of phosphoric acid fuel cell power plant optimization model and computer program

    NASA Technical Reports Server (NTRS)

    Lu, C. Y.; Alkasab, K. A.

    1984-01-01

    An optimized cost and performance model for a phosphoric acid fuel cell power plant system was derived and developed into a modular FORTRAN computer code. Cost, energy, mass, and electrochemical analyses were combined to develop a mathematical model for optimizing the steam-to-methane ratio in the reformer, the hydrogen utilization in the PAFC, and the number of plates per stack. The nonlinear programming code COMPUTE was used to solve this model; a mixed penalty function method combined with Hooke and Jeeves pattern search was chosen for this specific optimization problem.
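    The Hooke and Jeeves pattern search used by such solvers can be sketched in a few lines; this is a generic minimizer for illustration, not the COMPUTE code or its penalty-function wrapper:

```python
def hooke_jeeves(f, x0, step=0.5, shrink=0.5, tol=1e-6):
    """Minimal Hooke & Jeeves pattern search: exploratory moves along each
    coordinate, then a pattern (extrapolation) move through improved points;
    the step size shrinks whenever no improvement is found."""
    def explore(base, s):
        x = list(base)
        for i in range(len(x)):
            for d in (s, -s):
                trial = list(x)
                trial[i] += d
                if f(trial) < f(x):
                    x = trial
                    break
        return x

    best = list(x0)
    while step > tol:
        x = explore(best, step)
        if f(x) < f(best):
            # pattern move: extrapolate along the successful direction
            pattern = [2 * a - b for a, b in zip(x, best)]
            candidate = explore(pattern, step)
            best = candidate if f(candidate) < f(x) else x
        else:
            step *= shrink
    return best

# Minimize an invented smooth objective with minimum at (3, -1).
best = hooke_jeeves(lambda v: (v[0] - 3) ** 2 + (v[1] + 1) ** 2, [0.0, 0.0])
```

    Because it needs only function values, pattern search suits plant models where gradients of the cost surface are unavailable or unreliable.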

  7. High Performance Computing - Power Application Programming Interface Specification Version 1.4

    SciTech Connect

    Laros III, James H.; DeBonis, David; Grant, Ryan; Kelly, Suzanne M.; Levenhagen, Michael J.; Olivier, Stephen Lecler; Pedretti, Kevin

    2016-10-01

    Measuring and controlling the power and energy consumption of high performance computing systems by various components in the software stack is an active research area [13, 3, 5, 10, 4, 21, 19, 16, 7, 17, 20, 18, 11, 1, 6, 14, 12]. Implementations in lower level software layers are beginning to emerge in some production systems, which is very welcome. To be most effective, a portable interface to measurement and control features would significantly facilitate participation by all levels of the software stack. We present a proposal for a standard power Application Programming Interface (API) that endeavors to cover the entire software space, from generic hardware interfaces to the input from the computer facility manager.

  8. Soft drink effects on sensorimotor rhythm brain computer interface performance and resting-state spectral power.

    PubMed

    Mundahl, John; Jianjun Meng; He, Jeffrey; Bin He

    2016-08-01

    Brain-computer interface (BCI) systems allow users to directly control computers and other machines by modulating their brain waves. In the present study, we investigated the effect of soft drinks on resting-state (RS) EEG signals and BCI control. Eight healthy human volunteers each participated in three sessions of BCI cursor tasks and resting-state EEG. During each session, the subjects drank an unlabeled soft drink containing either sugar, caffeine, or neither ingredient. A comparison of resting-state spectral power shows a substantial decrease in alpha and beta power after caffeine consumption relative to control. Despite attenuation of the frequency range used for the control signal, average BCI performance after caffeine was the same as control. Our work provides a useful characterization of the effect of caffeine, the world's most popular stimulant, on brain signal frequencies and on BCI performance.

  9. Extragalactic gamma-ray signal from dark matter annihilation: a power spectrum based computation

    NASA Astrophysics Data System (ADS)

    Serpico, P. D.; Sefusatti, E.; Gustafsson, M.; Zaharijas, G.

    2012-03-01

    We revisit the computation of the extragalactic gamma-ray signal from cosmological dark matter annihilations. The prediction of this signal is notoriously model-dependent, due to different descriptions of the clumpiness of the dark matter distribution at small scales, responsible for an enhancement with respect to the smoothly distributed case. We show how a direct computation of this 'flux multiplier' in terms of the non-linear power spectrum offers a conceptually simpler approach and may ease some problems, such as the extrapolation issue. In fact, very simple analytical recipes to construct the power spectrum yield results similar to the popular Halo Model expectations, with a straightforward alternative estimate of errors. For this specific application, one also obviates the need of identifying (often literature-dependent) concepts entering the Halo Model, to compare different simulations.

  10. Analysis of contingency tables based on generalised median polish with power transformations and non-additive models.

    PubMed

    Klawonn, Frank; Jayaram, Balasubramaniam; Crull, Katja; Kukita, Akiko; Pessler, Frank

    2013-01-01

    Contingency tables are a very common basis for the investigation of effects of different treatments or influences on a disease or the health state of patients. Many journals put a strong emphasis on p-values to support the validity of results. Therefore, even small contingency tables are analysed by techniques like the t-test or ANOVA. Both these concepts are based on normality assumptions for the underlying data. For larger data sets, this assumption is not so critical, since the underlying statistics are based on sums of (independent) random variables which can be assumed to follow approximately a normal distribution, at least for a larger number of summands. But for smaller data sets, the normality assumption can often not be justified. Robust methods like the Wilcoxon-Mann-Whitney U test or the Kruskal-Wallis test do not lead to statistically significant p-values for small samples. Median polish is a robust alternative for analysing contingency tables that provides much more insight than just a p-value: it explains the contingency table in terms of an overall effect, row and column effects, and residuals. The underlying model for median polish is an additive model, which is sometimes too restrictive. In this paper, we propose two related approaches to generalise median polish. A power transformation can be applied to the values in the table, so that better results for median polish can be achieved, and we propose a graphical method for finding a suitable power transformation. If the original data should be preserved, one can apply other transformations, based on so-called additive generators, that have an inverse transformation. In this way, median polish can be applied to the original data, but based on a non-additive model. The non-linearity of such a model can also be visualised to better understand the joint effects of rows and columns in a contingency table.
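    Tukey's median polish, which underlies the proposed generalisations, can be sketched as follows; the perfectly additive example table is invented for illustration:

```python
import statistics

def median_polish(table, iters=10):
    """Tukey's median polish: alternately sweep row and column medians out of
    the residuals, accumulating them into an overall effect, row effects, and
    column effects (the additive model the generalisations relax)."""
    rows, cols = len(table), len(table[0])
    res = [row[:] for row in table]
    overall, row_eff, col_eff = 0.0, [0.0] * rows, [0.0] * cols
    for _ in range(iters):
        for i in range(rows):
            m = statistics.median(res[i])
            row_eff[i] += m
            res[i] = [x - m for x in res[i]]
        shift = statistics.median(row_eff)
        overall += shift
        row_eff = [x - shift for x in row_eff]
        for j in range(cols):
            m = statistics.median(res[i][j] for i in range(rows))
            col_eff[j] += m
            for i in range(rows):
                res[i][j] -= m
        shift = statistics.median(col_eff)
        overall += shift
        col_eff = [x - shift for x in col_eff]
    return overall, row_eff, col_eff, res

# A perfectly additive table: entry = 10 + row effect + column effect.
table = [[10 + r + c for c in (0, 1, 2, 3)] for r in (0, 2, 4)]
overall, row_eff, col_eff, res = median_polish(table)
```

    The power-transformation variant simply applies the transform to the table entries before polishing; large residuals after polishing flag cells that depart from the (possibly transformed) additive model.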

  11. Computer study of emergency shutdowns of a 60-kilowatt reactor Brayton space power system

    NASA Technical Reports Server (NTRS)

    Tew, R. C.; Jefferies, K. S.

    1974-01-01

    A digital computer study of emergency shutdowns of a 60-kWe reactor Brayton power system was conducted. Malfunctions considered were (1) loss of reactor coolant flow, (2) loss of Brayton system gas flow, (3) turbine overspeed, and (4) a reactivity insertion error. Loss of reactor coolant flow was the most serious malfunction for the reactor. Methods for moderating the reactor transients due to this malfunction are considered.

  12. POPCYCLE: a computer code for calculating nuclear and fossil plant levelized life-cycle power costs

    SciTech Connect

    Hardie, R.W.

    1982-02-01

    POPCYCLE, a computer code designed to calculate levelized life-cycle power costs for nuclear and fossil electrical generating plants, is described. Included are (1) derivations of the equations and a discussion of the methodology used by POPCYCLE, (2) a description of the input required by the code, (3) a listing of the input for a sample case, and (4) the output for a sample case.
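    The levelized life-cycle cost calculation at the heart of such codes reduces to discounted total cost over discounted total generation; a hedged sketch with invented cash flows (generic form, not POPCYCLE's exact equations):

```python
def levelized_cost(costs, energies, rate):
    """Levelized life-cycle power cost: present-valued costs divided by
    present-valued generation over the plant life."""
    pv_cost = sum(c / (1 + rate) ** t for t, c in enumerate(costs, start=1))
    pv_energy = sum(e / (1 + rate) ** t for t, e in enumerate(energies, start=1))
    return pv_cost / pv_energy

# Invented plant: yearly costs in M$, generation in GWh, 5% discount rate.
lc = levelized_cost([100, 50, 50], [1000, 1000, 1000], 0.05)  # M$/GWh
```

    Discounting the generation as well as the costs is what makes the result a constant price per unit of energy whose present value equals the present value of all expenditures.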

  13. Integrated Computing, Communication, and Distributed Control of Deregulated Electric Power Systems

    SciTech Connect

    Bajura, Richard; Feliachi, Ali

    2008-09-24

    Restructuring of the electricity market has affected all aspects of the power industry from generation to transmission, distribution, and consumption. Transmission circuits, in particular, are stressed often exceeding their stability limits because of the difficulty in building new transmission lines due to environmental concerns and financial risk. Deregulation has resulted in the need for tighter control strategies to maintain reliability even in the event of considerable structural changes, such as loss of a large generating unit or a transmission line, and changes in loading conditions due to the continuously varying power consumption. Our research efforts under the DOE EPSCoR Grant focused on Integrated Computing, Communication and Distributed Control of Deregulated Electric Power Systems. This research is applicable to operating and controlling modern electric energy systems. The controls developed by APERC provide for a more efficient, economical, reliable, and secure operation of these systems. Under this program, we developed distributed control algorithms suitable for large-scale geographically dispersed power systems and also economic tools to evaluate their effectiveness and impact on power markets. Progress was made in the development of distributed intelligent control agents for reliable and automated operation of integrated electric power systems. The methodologies employed combine information technology, control and communication, agent technology, and power systems engineering in the development of intelligent control agents for reliable and automated operation of integrated electric power systems. In the event of scheduled load changes or unforeseen disturbances, the power system is expected to minimize the effects and costs of disturbances and to maintain critical infrastructure operational.

  14. Stellar wind-magnetosphere interaction at exoplanets: computations of auroral radio powers

    NASA Astrophysics Data System (ADS)

    Nichols, J. D.; Milan, S. E.

    2016-09-01

    We present calculations of the auroral radio powers expected from exoplanets with magnetospheres driven by an Earth-like magnetospheric interaction with the solar wind. Specifically, we compute the twin-cell vortical ionospheric flows, currents, and resulting radio powers arising from a Dungey cycle process driven by dayside and nightside magnetic reconnection, as a function of planetary orbital distance and magnetic field strength. We include saturation of the magnetospheric convection, as observed at the terrestrial magnetosphere, and we present power-law approximations for the convection potentials, radio powers, and spectral flux densities. We specifically consider a solar-age system and a young (1 Gyr) system. We show that the radio power increases with magnetic field strength for magnetospheres with saturated convection potential, and broadly decreases with increasing orbital distance. We show that the magnetospheric convection at hot Jupiters will be saturated, and thus unable to dissipate the full available incident Poynting flux, such that the magnetic Radiometric Bode's Law (RBL) presents a substantial overestimation of the radio powers for hot Jupiters. Our radio powers for hot Jupiters are ~5-1300 TW for field strengths of 0.1-10 BJ around a Sun-like star, while we find that competing effects yield essentially identical powers for hot Jupiters orbiting a young Sun-like star. However, particularly for planets with weaker magnetic fields, our powers are higher at larger orbital distances than given by the RBL, and many planetary configurations are expected to be detectable using SKA.

  15. Reliable ISR algorithms for a very-low-power approximate computer

    NASA Astrophysics Data System (ADS)

    Eaton, Ross S.; McBride, Jonah C.; Bates, Joseph

    2013-05-01

    The Office of Naval Research (ONR) is looking for methods to perform higher levels of sensor processing onboard UAVs to alleviate the need to transmit full motion video to ground stations over constrained data links. Charles River Analytics is particularly interested in performing intelligence, surveillance, and reconnaissance (ISR) tasks using UAV sensor feeds. Computing with approximate arithmetic can provide 10,000x improvement in size, weight, and power (SWAP) over desktop CPUs, thereby enabling ISR processing onboard small UAVs. Charles River and Singular Computing are teaming on an ONR program to develop these low-SWAP ISR capabilities using a small, low power, single chip machine, developed by Singular Computing, with many thousands of cores. Producing reliable results efficiently on massively parallel approximate machines requires adapting the core kernels of algorithms. We describe a feature-aided tracking algorithm adapted for the novel hardware architecture, which will be suitable for use onboard a UAV. Tests have shown the algorithm produces results equivalent to state-of-the-art traditional approaches while achieving a 6400x improvement in speed/power ratio.

  16. The super-Turing computational power of plastic recurrent neural networks.

    PubMed

    Cabessa, Jérémie; Siegelmann, Hava T

    2014-12-01

    We study the computational capabilities of a biologically inspired neural model where the synaptic weights, the connectivity pattern, and the number of neurons can evolve over time rather than stay static. Our study focuses on the mere concept of plasticity of the model, so that the nature of the updates is assumed to be unconstrained. In this context, we show that the so-called plastic recurrent neural networks (RNNs) are capable of precisely the same super-Turing computational power as static analog neural networks, irrespective of whether their synaptic weights are modeled by rational or real numbers, and moreover, irrespective of whether their patterns of plasticity are restricted to bi-valued updates or expressed by any other more general form of updating. Consequently, the incorporation of only bi-valued plastic capabilities in a basic model of RNNs suffices to break the Turing barrier and achieve the super-Turing level of computation. The consideration of more general mechanisms of architectural plasticity or of real synaptic weights does not further increase the capabilities of the networks. These results support the claim that the general mechanism of plasticity is crucially involved in the computational and dynamical capabilities of biological neural networks. They further show that the super-Turing level of computation reflects in a suitable way the capabilities of brain-like models of computation.

  17. 77 FR 50722 - Software Unit Testing for Digital Computer Software Used in Safety Systems of Nuclear Power Plants

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-08-22

    ... COMMISSION Software Unit Testing for Digital Computer Software Used in Safety Systems of Nuclear Power Plants... regulatory guide (DG), DG-1208, ``Software Unit Testing for Digital Computer Software used in Safety Systems... revision endorses, with clarifications, the enhanced consensus practices for testing of computer...

  18. 77 FR 50720 - Test Documentation for Digital Computer Software Used in Safety Systems of Nuclear Power Plants

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-08-22

    ... COMMISSION Test Documentation for Digital Computer Software Used in Safety Systems of Nuclear Power Plants... regulatory guide (DG), DG-1207, ``Test Documentation for Digital Computer Software used in Safety Systems of... software and computer systems as described in the Institute of Electrical and Electronics Engineers...

  19. Computer-Aided Modeling and Analysis of Power Processing Systems (CAMAPPS), phase 1

    NASA Technical Reports Server (NTRS)

    Kim, S.; Lee, J.; Cho, B. H.; Lee, F. C.

    1986-01-01

    The large-signal behaviors of a regulator depend largely on the type of power circuit topology and control. Thus, for maximum flexibility, it is best to develop models for each functional block as independent modules. A regulator can then be configured by collecting appropriate pre-defined modules for each functional block. In order to complete the component model generation for a comprehensive spacecraft power system, the following modules were developed: solar array switching unit and control; shunt regulators; and battery discharger. The capability of each module is demonstrated using a simplified Direct Energy Transfer (DET) system. Large-signal behaviors of solar array power systems were analyzed. The stability of the solar array system operating points with a nonlinear load is analyzed. The state-plane analysis illustrates trajectories of the system operating point under various conditions. Stability and transient responses of the system operating near the solar array's maximum power point are also analyzed. The solar array system mode of operation is described using the DET spacecraft power system. The DET system is simulated for various operating conditions. Transfer of the software program CAMAPPS (Computer Aided Modeling and Analysis of Power Processing Systems) to NASA/GSFC (Goddard Space Flight Center) was accomplished.

  20. Measured energy savings and performance of power-managed personal computers and monitors

    SciTech Connect

    Nordman, B.; Piette, M.A.; Kinney, K.

    1996-08-01

    Personal computers and monitors are estimated to use 14 billion kWh/year of electricity, with power management potentially saving $600 million/year by the year 2000. The effort to capture these savings is led by the US Environmental Protection Agency's Energy Star program, which specifies a 30 W maximum demand for the computer and for the monitor when in a "sleep" or idle mode. In this paper the authors discuss measured energy use and estimated savings for power-managed (Energy Star compliant) PCs and monitors. They collected electricity use measurements of six power-managed PCs and monitors in their office and five from two other research projects. The devices are diverse in machine type, use patterns, and context. The analysis method estimates the time spent in each system operating mode (off, low-, and full-power) and combines these with real power measurements to derive hours of use per mode, energy use, and energy savings. Three schedules are explored in the "As-operated," "Standardized," and "Maximum" savings estimates. Energy savings are established by comparing the measurements to a baseline with power management disabled. As-operated energy savings for the eleven PCs and monitors ranged from zero to 75 kWh/year. Under the standard operating schedule (on 20% of nights and weekends), the savings are about 200 kWh/year. An audit of power management features and configurations for several dozen Energy Star machines found only 11% of CPUs fully enabled, while about two thirds of monitors were successfully power managed. The highest priority for greater power management savings is to enable monitors, as opposed to CPUs, since they are generally easier to configure, less likely to interfere with system operation, and offer greater savings. The difficulty of properly configuring PCs and monitors is the largest current barrier to achieving the savings potential from power management.
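
The mode-based accounting the abstract describes (hours per mode combined with real power measurements, compared against a power-management-disabled baseline) can be sketched as below. All wattages and hour splits are illustrative placeholders, not measurements from the study.

```python
# Sketch of mode-based energy accounting: annual energy is the sum over
# operating modes of (hours in mode x measured power draw); savings are
# the baseline (power management disabled) minus the managed total.
# Power values and hour splits are hypothetical, not the study's data.

def annual_kwh(hours_by_mode, watts_by_mode):
    """Combine time-in-mode with real power draws to get kWh/year."""
    return sum(hours_by_mode[m] * watts_by_mode[m] for m in hours_by_mode) / 1000.0

# Hypothetical monitor: 60 W at full power, 25 W in sleep, 0 W off.
watts = {"full": 60.0, "sleep": 25.0, "off": 0.0}

managed  = {"full": 2500.0, "sleep": 3500.0, "off": 2760.0}  # hours/year, managed
baseline = {"full": 6000.0, "sleep": 0.0,    "off": 2760.0}  # sleep disabled

savings = annual_kwh(baseline, watts) - annual_kwh(managed, watts)
print(savings)  # → 122.5 (kWh/year saved by power management)
```

The same bookkeeping extends to the CPU by adding its per-mode draws to the dictionaries.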

  1. PARALIGN: rapid and sensitive sequence similarity searches powered by parallel computing technology

    PubMed Central

    Sæbø, Per Eystein; Andersen, Sten Morten; Myrseth, Jon; Laerdahl, Jon K.; Rognes, Torbjørn

    2005-01-01

    PARALIGN is a rapid and sensitive similarity search tool for the identification of distantly related sequences in both nucleotide and amino acid sequence databases. Two algorithms are implemented, accelerated Smith–Waterman and ParAlign. The ParAlign algorithm is similar to Smith–Waterman in sensitivity, while as quick as BLAST for protein searches. A form of parallel computing technology known as multimedia technology that is available in modern processors, but rarely used by other bioinformatics software, has been exploited to achieve the high speed. The software is also designed to run efficiently on computer clusters using the message-passing interface standard. A public search service powered by a large computer cluster has been set up and is freely available at , where the major public databases can be searched. The software can also be downloaded free of charge for academic use. PMID:15980529

  2. PARALIGN: rapid and sensitive sequence similarity searches powered by parallel computing technology.

    PubMed

    Saebø, Per Eystein; Andersen, Sten Morten; Myrseth, Jon; Laerdahl, Jon K; Rognes, Torbjørn

    2005-07-01

    PARALIGN is a rapid and sensitive similarity search tool for the identification of distantly related sequences in both nucleotide and amino acid sequence databases. Two algorithms are implemented, accelerated Smith-Waterman and ParAlign. The ParAlign algorithm is similar to Smith-Waterman in sensitivity, while as quick as BLAST for protein searches. A form of parallel computing technology known as multimedia technology that is available in modern processors, but rarely used by other bioinformatics software, has been exploited to achieve the high speed. The software is also designed to run efficiently on computer clusters using the message-passing interface standard. A public search service powered by a large computer cluster has been set up and is freely available at www.paralign.org, where the major public databases can be searched. The software can also be downloaded free of charge for academic use.
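
The first algorithm the abstract names, Smith-Waterman, has a compact dynamic-programming core; a minimal unaccelerated version is sketched below. PARALIGN's SIMD ("multimedia technology") and MPI acceleration layers are not reproduced, and the scoring parameters (match +2, mismatch -1, linear gap -2) are illustrative.

```python
# Minimal Smith-Waterman local alignment scorer: each cell holds the best
# score of any local alignment ending at that position, floored at zero
# so poor prefixes are discarded (this is what makes the alignment local).

def smith_waterman(a, b, match=2, mismatch=-1, gap=-2):
    rows, cols = len(a) + 1, len(b) + 1
    h = [[0] * cols for _ in range(rows)]
    best = 0
    for i in range(1, rows):
        for j in range(1, cols):
            diag = h[i - 1][j - 1] + (match if a[i - 1] == b[j - 1] else mismatch)
            h[i][j] = max(0, diag, h[i - 1][j] + gap, h[i][j - 1] + gap)
            best = max(best, h[i][j])
    return best  # score of the best local alignment anywhere in the matrix

print(smith_waterman("ACACACTA", "AGCACACA"))
```

Real implementations add affine gap penalties and substitution matrices (e.g. BLOSUM) for protein searches; the recurrence structure stays the same.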

  3. Efficient Solvability of Hamiltonians and Limits on the Power of Some Quantum Computational Models

    NASA Astrophysics Data System (ADS)

    Somma, Rolando; Barnum, Howard; Ortiz, Gerardo; Knill, Emanuel

    2006-11-01

    One way to specify a model of quantum computing is to give a set of control Hamiltonians acting on a quantum state space whose initial state and final measurement are specified in terms of the Hamiltonians. We formalize such models and show that they can be simulated classically in a time polynomial in the dimension of the Lie algebra generated by the Hamiltonians and logarithmic in the dimension of the state space. This leads to a definition of Lie-algebraic “generalized mean-field Hamiltonians.” We show that they are efficiently (exactly) solvable. Our results generalize the known weakness of fermionic linear optics computation and give conditions on control needed to exploit the full power of quantum computing.

  4. Turbulence computations with 3-D small-scale additive turbulent decomposition and data-fitting using chaotic map combinations

    SciTech Connect

    Mukerji, Sudip

    1997-01-01

    Although the equations governing turbulent fluid flow, the Navier-Stokes (N.-S.) equations, have been known for well over a century, and there is a clear technological necessity in obtaining solutions to them, turbulence remains one of the principal unsolved problems in physics today. It is still not possible to make accurate quantitative predictions about turbulent flows without relying heavily on empirical data. In principle, it is possible to obtain turbulent solutions from a direct numerical simulation (DNS) of the N.-S. equations. The author first provides a brief introduction to the dynamics of turbulent flows and then describes the N.-S. equations, which govern fluid flow. A brief overview follows of DNS calculations and where they stand at present. The author next introduces the two most popular approaches for turbulent computations currently in use, namely Reynolds averaging of the N.-S. equations (RANS) and large-eddy simulation (LES). Approximations, often ad hoc ones, are present in these methods because use is made of heuristic models for turbulence quantities (the Reynolds stresses) which are otherwise unknown. A new computational method called additive turbulent decomposition (ATD) is then introduced, the small-scale version of which is the topic of this research. The rest of the thesis is organized as follows. Chapter 2 describes the ATD procedure in greater detail: how the dependent variables are split and how the decomposition into large- and small-scale sets of equations is carried out. In Chapter 3 the spectral projection of the small-scale momentum equations is derived in detail. In Chapter 4 results of the computations with the small-scale ATD equations are presented. Chapter 5 describes the data-fitting procedure that can be used to directly specify the parameters of a chaotic-map turbulence model.

  5. Nuclear power plant status diagnostics using simulated condensation: An auto-adaptive computer learning technique

    SciTech Connect

    Bartlett, E.B.

    1990-01-01

    The application of artificial neural network concepts to engineering analysis involves training networks, and therefore computers, to perform pattern classification or function mapping tasks. This training process requires the near optimization of network inter-neural connections. A new method for the stochastic optimization of these interconnections is presented in this dissertation. The new approach, called simulated condensation, is applied to networks of generalized, fully interconnected, continuous perceptrons. Simulated condensation optimizes the nodal bias, gain, and output activation constants as well as the usual interconnection weights. In this work, the simulated condensation network paradigm is applied to nuclear power plant operating status recognition. A set of standard problems, such as the exclusive-or problem, is also analyzed as benchmarks for the new methodology. The objective of the nuclear power plant accident condition diagnosis effort is to train a network to identify both safe and potentially unsafe power plant conditions based on real-time plant data. The data are obtained from computer-generated accident scenarios. A simulated condensation network is trained to recognize seven nuclear power plant accident conditions as well as the normal full power operating condition. These accidents include hot and cold leg loss of coolant, control rod ejection, and steam generator tube leak accidents. Twenty-seven plant process variables are used as input to the neural network. Results show the feasibility of using simulated condensation as a method for diagnosing nuclear power plant conditions. The method is general and can easily be applied to other types of plants and plant processes.
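
The dissertation's simulated condensation algorithm is not reproduced in the abstract, but the general shape of such stochastic parameter optimization can be illustrated with a generic annealing-style sketch: perturb a parameter at random and accept the change under a temperature-dependent rule that "condenses" as the temperature falls. The toy task and all constants below are assumptions for illustration only.

```python
# Generic annealing-style stochastic optimization sketch (NOT the
# dissertation's exact simulated condensation update rule): randomly
# perturb a parameter, always keep improvements, and keep worsening
# moves with probability exp(-delta/temperature), cooling over time.
import math
import random

random.seed(0)

def loss(w):
    # Toy task standing in for network training: fit y = 2x with one gain w.
    data = [(x, 2.0 * x) for x in (-1.0, 0.5, 1.0, 2.0)]
    return sum((w * x - y) ** 2 for x, y in data)

w, temp = 0.0, 1.0
for step in range(2000):
    cand = w + random.gauss(0.0, 0.2)      # random perturbation
    delta = loss(cand) - loss(w)
    if delta < 0 or random.random() < math.exp(-delta / temp):
        w = cand                            # accept the move
    temp *= 0.995                           # cool ("condense") the search

print(round(w, 2))  # should settle near the optimum w = 2
```

In the dissertation's setting the perturbed parameters would be the interconnection weights plus the nodal bias, gain, and activation constants, with the training-set error as the loss.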

  6. Building ceramics with an addition of pulverized combustion fly ash from the thermal power plant Nováky

    NASA Astrophysics Data System (ADS)

    Húlan, Tomáš; Trník, Anton; Medved, Igor; Štubňa, Igor; Kaljuvee, Tiit

    2016-07-01

    Pulverized combustion fly ash (PFA) from the Power plant Nováky (Slovakia) is analyzed for its potential use in the production of building ceramics. Three materials are used to prepare the mixtures: illite-rich clay (IRC), PFA, and IRC fired at 1000 °C (called grog). The mixtures contain 60% IRC and 40% of a non-plastic compound (grog or PFA). Various amounts of the grog are replaced by PFA, and the effect of this substitution is studied. Thermal analyses (TGA, DTA, thermodilatometry, and dynamic thermomechanical analysis) are used to analyze the processes occurring during firing. The flexural strength and thermal conductivity are determined at room temperature after firing in the temperature interval from 800 to 1100 °C. The results show that an addition of PFA slightly decreases the flexural strength. The thermal conductivity and porosity are practically unaffected by the presence of PFA. Thus, PFA from the Power plant Nováky is a convenient non-plastic component for manufacturing building ceramics.

  7. Computer Assisted Fluid Power Instruction: A Comparison of Hands-On and Computer-Simulated Laboratory Experiences for Post-Secondary Students

    ERIC Educational Resources Information Center

    Wilson, Scott B.

    2005-01-01

    The primary purpose of this study was to examine the effectiveness of utilizing a combination of lecture and computer resources to train personnel to assume roles as hydraulic system technicians and specialists in the fluid power industry. This study compared computer simulated laboratory instruction to traditional hands-on laboratory instruction,…

  8. Linking process, structure, property, and performance for metal-based additive manufacturing: computational approaches with experimental support

    NASA Astrophysics Data System (ADS)

    Smith, Jacob; Xiong, Wei; Yan, Wentao; Lin, Stephen; Cheng, Puikei; Kafka, Orion L.; Wagner, Gregory J.; Cao, Jian; Liu, Wing Kam

    2016-04-01

    Additive manufacturing (AM) methods for rapid prototyping of 3D materials (3D printing) have become increasingly popular, with a particular recent emphasis on those methods used for metallic materials. These processes typically involve an accumulation of cyclic phase changes. The widespread interest in these methods is largely stimulated by their unique ability to create components of considerable complexity. However, modeling such processes is exceedingly difficult due to the highly localized and drastic material evolution that often occurs over the course of the manufacture of each component. Final product characterization and validation are currently driven primarily by experimental means as a result of the lack of robust modeling procedures. In the present work, the authors discuss the primary hurdles that have hindered effective modeling of AM methods for metallic materials, and offer informed speculation on promising research directions for overcoming them. The primary focus of this work encompasses the specific areas of high-performance computing, multiscale modeling, materials characterization, process modeling, experimentation, and validation for final product performance of additively manufactured metallic components.

  9. Towards Real-Time High Performance Computing For Power Grid Analysis

    SciTech Connect

    Hui, Peter SY; Lee, Barry; Chikkagoudar, Satish

    2012-11-16

    Real-time computing has traditionally been considered largely in the context of single-processor and embedded systems, and indeed, the terms real-time computing, embedded systems, and control systems are often mentioned in closely related contexts. However, real-time computing in the context of multinode systems, specifically high-performance, cluster-computing systems, remains relatively unexplored. Imposing real-time constraints on a parallel (cluster) computing environment introduces a variety of challenges with respect to the formal verification of the system's timing properties. In this paper, we give a motivating example that demonstrates the need for such a system, an application to estimate the electromechanical states of the power grid, and we introduce a formal method for verifying certain temporal properties within a system of parallel processes. We describe our work towards a full real-time implementation of the target application: our progress towards extracting a key mathematical kernel from the application, the formal process by which we analyze the intricate timing behavior of the processes on the cluster, and timing measurements taken on our test cluster to demonstrate the use of these concepts.

  10. High accuracy digital image correlation powered by GPU-based parallel computing

    NASA Astrophysics Data System (ADS)

    Zhang, Lingqi; Wang, Tianyi; Jiang, Zhenyu; Kemao, Qian; Liu, Yiping; Liu, Zejia; Tang, Liqun; Dong, Shoubin

    2015-06-01

    A sub-pixel digital image correlation (DIC) method with a path-independent displacement tracking strategy has been implemented on NVIDIA compute unified device architecture (CUDA) for graphics processing unit (GPU) devices. Powered by parallel computing technology, this parallel DIC (paDIC) method, combining an inverse compositional Gauss-Newton (IC-GN) algorithm for sub-pixel registration with a fast Fourier transform-based cross correlation (FFT-CC) algorithm for integer-pixel initial guess estimation, achieves a computation efficiency superior to a DIC method running purely on a CPU. In experiments using simulated and real speckle images, the paDIC reached computation speeds of 1.66×10^5 POI/s (points of interest per second) and 1.13×10^5 POI/s respectively, 57-76 times faster than its sequential counterpart, without sacrificing accuracy or precision. To the best of our knowledge, this is the fastest computation speed reported heretofore for a sub-pixel DIC method.
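
The FFT-CC step the abstract names, which supplies the integer-pixel initial guess for the IC-GN refinement, can be sketched as below. NumPy on the CPU stands in for the paper's CUDA kernels; the subset size and test shift are arbitrary.

```python
# FFT-based cross correlation (FFT-CC) sketch: the peak of the circular
# cross-correlation between a reference subset and a deformed subset
# gives the integer-pixel displacement used as the initial guess for
# sub-pixel registration.
import numpy as np

def integer_shift(ref, deformed):
    """Return the (dy, dx) integer displacement maximizing correlation."""
    corr = np.fft.ifft2(np.fft.fft2(ref).conj() * np.fft.fft2(deformed)).real
    dy, dx = np.unravel_index(int(np.argmax(corr)), corr.shape)
    dy, dx = int(dy), int(dx)
    h, w = ref.shape
    # Map wrap-around indices back to signed shifts.
    return (dy - h if dy > h // 2 else dy, dx - w if dx > w // 2 else dx)

rng = np.random.default_rng(1)
ref = rng.random((64, 64))                              # synthetic speckle
deformed = np.roll(ref, shift=(3, -5), axis=(0, 1))     # known displacement
print(integer_shift(ref, deformed))  # → (3, -5)
```

In a full DIC pipeline this estimate seeds the IC-GN iteration, which then recovers the sub-pixel part of the displacement.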

  11. A 10-kW SiC Inverter with A Novel Printed Metal Power Module With Integrated Cooling Using Additive Manufacturing

    SciTech Connect

    Chinthavali, Madhu Sudhan; Ayers, Curtis William; Campbell, Steven L; Wiles, Randy H; Ozpineci, Burak

    2014-01-01

    With efforts underway to reduce the cost, size, and thermal management systems of the power electronics drivetrain in hybrid electric vehicles (HEVs) and plug-in hybrid electric vehicles (PHEVs), wide-bandgap semiconductors, including silicon carbide (SiC), have been identified as a possible partial solution. This paper focuses on the development of a 10-kW all-SiC inverter using a high-power-density, integrated printed metal power module with integrated cooling realized through additive manufacturing techniques. This is the first heat sink ever printed for a power electronics application. About 50% of the inverter was built using additive manufacturing techniques.

  12. Development and Evaluation of the Diagnostic Power for a Computer-Based Two-Tier Assessment

    NASA Astrophysics Data System (ADS)

    Lin, Jing-Wen

    2016-06-01

    This study adopted a quasi-experimental design with follow-up interviews to develop a computer-based two-tier assessment (CBA) on the science topic of electric circuits and to evaluate the diagnostic power of the assessment. Three assessment formats (i.e., paper-and-pencil, static computer-based, and dynamic computer-based tests) using two-tier items were administered to Grade 4 (n = 90) and Grade 5 (n = 86) students, respectively. One-way ANCOVA was conducted to investigate whether the different assessment formats affected these students' posttest scores on both the phenomenon and reason tiers, and confidence ratings for answers were assessed to diagnose the nature of students' responses (i.e., scientific answer, guessing, alternative conception, or knowledge deficiency). Follow-up interviews were used to explore whether and how the various CBA representations influenced the responses of students in both grades. Results showed that the CBA, in particular the dynamic representation format, allowed students who lacked prior knowledge (Grade 4) to easily understand the question stems. The various CBA representations also potentially encouraged students who already had learning experience (Grade 5) to enhance the metacognitive judgment of their responses. Therefore, CBA can reduce students' use of test-taking strategies and provides better diagnostic power for a two-tier instrument than the traditional paper-based version.
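
The diagnostic logic a two-tier item with a confidence rating supports can be sketched as a simple classification: correctness on the two tiers plus the learner's confidence maps each response to one of the four categories the abstract lists. The exact rubric below is an assumption, not the study's published scoring scheme.

```python
# Hypothetical response-diagnosis rubric for a two-tier item with a
# confidence rating. Category names follow the abstract; the mapping
# itself is an illustrative assumption.

def diagnose(phenomenon_ok, reason_ok, confident):
    if phenomenon_ok and reason_ok:
        # Right on both tiers: genuine understanding or a lucky guess.
        return "scientific answer" if confident else "guessing"
    if not confident:
        # Wrong (on either tier) and unsure: missing knowledge.
        return "knowledge deficiency"
    # Wrong but held with confidence: a stable misconception.
    return "alternative conception"

print(diagnose(True, True, True))     # → scientific answer
print(diagnose(True, False, True))    # → alternative conception
print(diagnose(False, False, False))  # → knowledge deficiency
```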

  13. Use of Transition Modeling to Enable the Computation of Losses for Variable-Speed Power Turbine

    NASA Technical Reports Server (NTRS)

    Ameri, Ali A.

    2012-01-01

    To investigate the penalties associated with using a variable speed power turbine (VSPT) in a rotorcraft capable of vertical takeoff and landing, various analysis tools are required. Such analysis tools must be able to model the flow accurately within the operating envelope of VSPT. For power turbines low Reynolds numbers and a wide range of the incidence angles, positive and negative, due to the variation in the shaft speed at relatively fixed corrected flows, characterize this envelope. The flow in the turbine passage is expected to be transitional and separated at high incidence. The turbulence model of Walters and Leylek was implemented in the NASA Glenn-HT code to enable a more accurate analysis of such flows. Two-dimensional heat transfer predictions of flat plate flow and two-dimensional and three-dimensional heat transfer predictions on a turbine blade were performed and reported herein. Heat transfer computations were performed because it is a good marker for transition. The final goal is to be able to compute the aerodynamic losses. Armed with the new transition model, total pressure losses for three-dimensional flow of an Energy Efficient Engine (E3) tip section cascade for a range of incidence angles were computed in anticipation of the experimental data. The results obtained form a loss bucket for the chosen blade.

  14. Computational modeling of pulsed-power-driven magnetized target fusion experiments

    SciTech Connect

    Sheehey, P.; Kirkpatrick, R.; Lindemuth, I.

    1995-08-01

    Direct magnetic drive using electrical pulsed power has been considered impractically slow for traditional inertial confinement implosion of fusion targets. However, if the target contains a preheated, magnetized plasma, magnetothermal insulation may allow the near-adiabatic compression of such a target to fusion conditions on a much slower time scale. 100-MJ-class explosive flux compression generators, with implosion kinetic energies far beyond those available with conventional fusion drivers, are an inexpensive means to investigate such magnetized target fusion (MTF) systems. One means of obtaining the preheated and magnetized plasma required for an MTF system is the recently reported "MAGO" concept. MAGO is a unique, explosive-pulsed-power driven discharge in two cylindrical chambers joined by an annular nozzle. Joint Russian-American MAGO experiments have reported D-T neutron yields in excess of 10^13 from this plasma preparation stage alone, without going on to the proposed separately driven implosion of the main plasma chamber. Two-dimensional MHD computational modeling of MAGO discharges shows good agreement with experiment. The calculations suggest that after the observed neutron pulse, a diffuse Z-pinch plasma with a temperature in excess of 100 eV is created, which may be suitable for subsequent MTF implosion in a heavy liner magnetically driven by explosive pulsed power. Other MTF concepts, such as fiber-initiated Z-pinch target plasmas, are also being computationally and theoretically evaluated. The status of our modeling efforts will be reported.

  15. A power comparison of generalized additive models and the spatial scan statistic in a case-control setting

    PubMed Central

    2010-01-01

    Background A common, important problem in spatial epidemiology is measuring and identifying variation in disease risk across a study region. In application of statistical methods, the problem has two parts. First, spatial variation in risk must be detected across the study region and, second, areas of increased or decreased risk must be correctly identified. The location of such areas may give clues to environmental sources of exposure and disease etiology. One statistical method applicable in spatial epidemiologic settings is a generalized additive model (GAM), which can be applied with a bivariate LOESS smoother to account for geographic location as a possible predictor of disease status. A natural hypothesis when applying this method is whether the residential location of subjects is associated with the outcome, i.e., whether the smoothing term is necessary. Permutation tests are a reasonable hypothesis testing method and provide adequate power under a simple alternative hypothesis. These tests have yet to be compared to other spatial statistics. Results This research uses simulated point data generated under three alternative hypotheses to evaluate the properties of the permutation methods and compare them to the popular spatial scan statistic in a case-control setting. Case 1 was a single circular cluster centered in a circular study region. The spatial scan statistic had the highest power, though the GAM method estimates did not fall far behind. Case 2 was a single point source located at the center of a circular cluster, and Case 3 was a line source at the center of the horizontal axis of a square study region. Each had linearly decreasing log-odds with distance from the source. The GAM methods outperformed the scan statistic in Cases 2 and 3. Comparing sensitivity, measured as the proportion of the exposure source correctly identified as high or low risk, the GAM methods outperformed the scan statistic in all three cases. Conclusions The GAM permutation testing methods
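
The permutation-testing idea the abstract applies to GAM smoothing terms can be sketched generically: permute the case/control labels over the observed locations and compare the observed statistic to its permutation distribution. A toy difference-of-means statistic on one spatial coordinate stands in for the GAM deviance; all data below are illustrative.

```python
# Generic label-permutation test: under the null hypothesis that location
# carries no information about case status, relabeling the points at
# random should produce statistics at least as extreme as the observed
# one reasonably often. A small p-value indicates spatial association.
import random

random.seed(42)

def statistic(labels, xs):
    cases = [x for l, x in zip(labels, xs) if l == 1]
    controls = [x for l, x in zip(labels, xs) if l == 0]
    return abs(sum(cases) / len(cases) - sum(controls) / len(controls))

xs = [0.1, 0.2, 0.3, 0.4, 2.0, 2.1, 2.2, 2.3]   # cases cluster to the right
labels = [0, 0, 0, 0, 1, 1, 1, 1]

obs = statistic(labels, xs)
more_extreme = 0
n_perm = 999
for _ in range(n_perm):
    perm = labels[:]
    random.shuffle(perm)                 # break any label-location link
    if statistic(perm, xs) >= obs:
        more_extreme += 1

p_value = (more_extreme + 1) / (n_perm + 1)   # add-one avoids p = 0
print(p_value <= 0.05)  # → True: the clustering is unlikely under the null
```

In the paper's setting the statistic would be a measure of how much the bivariate LOESS smoothing term improves the GAM fit, but the permutation scheme is the same.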

  16. Large-Scale Distributed Computational Fluid Dynamics on the Information Power Grid Using Globus

    NASA Technical Reports Server (NTRS)

    Barnard, Stephen; Biswas, Rupak; Saini, Subhash; VanderWijngaart, Robertus; Yarrow, Maurice; Zechtzer, Lou; Foster, Ian; Larsson, Olle

    1999-01-01

    This paper describes an experiment in which a large-scale scientific application developed for tightly-coupled parallel machines is adapted to the distributed execution environment of the Information Power Grid (IPG). A brief overview of the IPG and a description of the computational fluid dynamics (CFD) algorithm are given. The Globus metacomputing toolkit is used as the enabling device for the geographically-distributed computation. Modifications related to latency hiding and load balancing were required for an efficient implementation of the CFD application in the IPG environment. Performance results on a pair of SGI Origin 2000 machines indicate that real scientific applications can be effectively implemented on the IPG; however, a significant amount of continued effort is required to make such an environment useful and accessible to scientists and engineers.

  17. Computer modeling of a regenerative solar-assisted Rankine power cycle

    NASA Technical Reports Server (NTRS)

    Lansing, F. L.

    1977-01-01

    A detailed interpretation is presented of the computer program that describes the performance of a regenerative solar-assisted Rankine power cycle. Water is used as the working medium throughout the cycle. The solar energy, collected at a relatively low temperature level, represents 75 to 80% of the total heat demand and provides mainly the latent heat of vaporization. Another energy source at a high temperature level superheats the steam and supplements the solar energy share. A program summary and a numerical example showing the sequence of computations are included. The output of the model comprises line temperatures, component heat rates, specific steam consumption, the percentage of solar energy contribution, and the overall thermal efficiency.
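
The energy bookkeeping such a model reports can be illustrated with a worked example: split the heat demand into a low-temperature solar share (latent heat of vaporization) and a high-temperature superheat share, then form the solar contribution and the overall thermal efficiency. The numbers below are illustrative, not outputs of the NASA program.

```python
# Worked example of the cycle's energy accounting. All heat and work
# rates are hypothetical round numbers chosen so that the solar share
# falls in the 75-80% range the abstract cites.

solar_heat_kw = 780.0   # low-temperature solar input (vaporization)
superheat_kw = 220.0    # auxiliary high-temperature input (superheat)
net_work_kw = 240.0     # turbine work minus pump work

total_heat_kw = solar_heat_kw + superheat_kw
solar_share = solar_heat_kw / total_heat_kw          # fraction from solar
thermal_efficiency = net_work_kw / total_heat_kw     # net work / heat in

print(round(solar_share * 100, 1))         # → 78.0 (% of heat from solar)
print(round(thermal_efficiency * 100, 1))  # → 24.0 (% overall efficiency)
```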

  18. Hypertext and Three-Dimensional Computer Graphics: A Powerful Teaching Team

    PubMed Central

    Schwarz, Daniel L.; Wind, Gary G.

    1990-01-01

    In an effort to combat the frustration experienced by educators and students alike with the volume of factual and practical information the student is expected to learn and apply, new medical teaching methods are being developed. Research in the field of adult education techniques has determined which teaching methods best maintain learner interest and result in the greatest knowledge retention. Electronic data processing technology allows for the rapid organization and specific acquisition of vast amounts of information. Personal computers have become powerful enough to provide simple high-level interactivity and brilliantly detailed graphic displays. The Center for Graphic Medical Communication seeks to combine multiple electronic modalities, including three-dimensional computer graphics, animation, hypertext, video disk, CD-ROM, and CD-I, to maximize the efficiency of teaching and learning anatomical concepts.

  19. Computer program for calculating flow parameters and power requirements for cryogenic wind tunnels

    NASA Technical Reports Server (NTRS)

    Dress, D. A.

    1985-01-01

    A computer program has been written that performs the flow parameter calculations for cryogenic wind tunnels which use nitrogen as a test gas. The flow parameters calculated include static pressure, static temperature, compressibility factor, ratio of specific heats, dynamic viscosity, total and static density, velocity, dynamic pressure, mass-flow rate, and Reynolds number. Simplifying assumptions have been made so that the calculations of Reynolds number, as well as the other flow parameters, can be made on relatively small desktop digital computers. The program, which also includes various power calculations, has been developed to the point where it has become a very useful tool for the users and possible future designers of fan-driven continuous-flow cryogenic wind tunnels.
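
A simplified version of the flow-parameter evaluation such a program performs can be sketched with the ideal-gas law and Sutherland's viscosity formula. The Sutherland constants below are commonly quoted values for nitrogen, assumed here rather than taken from the paper (which uses real-gas corrections such as the compressibility factor); the sketch also shows why cryogenic operation raises the Reynolds number.

```python
# Simplified flow-parameter sketch for a nitrogen tunnel: density from
# the ideal-gas law, viscosity from Sutherland's formula, then dynamic
# pressure and Reynolds number. Constants are assumed textbook values.

R_N2 = 296.8  # J/(kg K), specific gas constant for nitrogen

def sutherland_mu(T):
    """Sutherland's law with commonly quoted N2 constants (assumed)."""
    mu0, T0, S = 1.663e-5, 273.15, 107.0
    return mu0 * (T / T0) ** 1.5 * (T0 + S) / (T + S)

def flow_params(p, T, velocity, chord):
    rho = p / (R_N2 * T)                     # ideal-gas density, kg/m^3
    mu = sutherland_mu(T)                    # dynamic viscosity, Pa s
    q = 0.5 * rho * velocity ** 2            # dynamic pressure, Pa
    reynolds = rho * velocity * chord / mu
    return rho, q, reynolds

# Same pressure, speed, and model chord; only the temperature changes.
rho_c, q_c, re_c = flow_params(101325.0, 110.0, 100.0, 0.25)  # cryogenic
rho_a, q_a, re_a = flow_params(101325.0, 300.0, 100.0, 0.25)  # ambient

# Cooling raises density and lowers viscosity, so Re climbs sharply.
print(re_c > 4 * re_a)  # → True
```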

  20. Assessment of computer codes for VVER-440/213-type nuclear power plants

    SciTech Connect

    Szabados, L.; Ezsol, Gy.; Perneczky

    1995-09-01

    Nuclear power plants of the VVER-440/213 type, designed in the former USSR, have a number of special features. As a consequence of these features, the transient behaviour of such a reactor system differs from that of a PWR system. To study the transient behaviour of the Hungarian Paks Nuclear Power Plant of the VVER-440/213 type, both analytical and experimental activities have been performed. The experimental basis of the research is the PMK-2 integral-type test facility, which is a scaled-down model of the plant. Experiments performed on this facility have been used to assess thermal-hydraulic system codes. Four tests were selected for the "Standard Problem Exercises" of the International Atomic Energy Agency. Results of the 4th Exercise, of high international interest, are presented in the paper, focusing on the essential findings of the assessment of computer codes.

  1. Computational Fluid Dynamics Simulation Study of Active Power Control in Wind Plants

    SciTech Connect

    Fleming, Paul; Aho, Jake; Gebraad, Pieter; Pao, Lucy; Zhang, Yingchen

    2016-08-01

    This paper presents an analysis of a wind plant's ability to provide active power control services, performed using a high-fidelity computational fluid dynamics-based wind plant simulator. This approach allows examination of the impact of wind turbine wake interactions within a wind plant on the performance of the wind plant controller. The paper investigates several control methods for improving performance in waked conditions. One method uses wind plant wake controls, an active field of research in which wind turbine control systems are coordinated to account for their wakes, to improve overall performance. Results demonstrate the challenge of providing active power control in waked conditions but also point to methods for improving this performance.

  2. GridPACK™ : A Framework for Developing Power Grid Simulations on High-Performance Computing Platforms

    SciTech Connect

    Palmer, Bruce J.; Perkins, William A.; Chen, Yousu; Jin, Shuangshuang; Callahan, David; Glass, Kevin A.; Diao, Ruisheng; Rice, Mark J.; Elbert, Stephen T.; Vallem, Mallikarjuna R.; Huang, Zhenyu

    2016-05-01

    This paper describes the GridPACK™ framework, which is designed to help power grid engineers develop modeling software capable of running on high performance computers. The framework makes extensive use of software templates to provide high level functionality while at the same time allowing developers the freedom to express whatever models and algorithms they are using. GridPACK™ contains modules for setting up distributed power grid networks, assigning buses and branches with arbitrary behaviors to the network, creating distributed matrices and vectors and using parallel linear and non-linear solvers to solve algebraic equations. It also provides mappers to create matrices and vectors based on properties of the network and functionality to support IO and to mana

  3. Computer-controlled, variable-frequency power supply for driving multipole ion guides

    NASA Astrophysics Data System (ADS)

    Robbins, Matthew D.; Yoon, Oh Kyu; Zuleta, Ignacio; Barbula, Griffin K.; Zare, Richard N.

    2008-03-01

A high-voltage, variable-frequency driver circuit for powering resonant multipole ion guides is presented. Two key features of this design are (1) the use of integrated circuits in the driver stage and (2) the use of a stepper motor for tuning a large variable capacitor in the resonant stage. In the present configuration the available frequency range spans a factor of 2; the actual values of the minimum and maximum frequencies depend on the chosen inductor and the capacitance of the ion guide. Feedback allows for stabilized, computer-adjustable rf amplitudes over the range of 5-500 V. The rf power supply was characterized over the range of 350-750 kHz and evaluated by driving a quadrupole ion guide in an electrospray time-of-flight mass spectrometer.
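The factor-of-2 tuning range follows from the LC resonance formula f = 1/(2π√(LC)): a 4:1 capacitance sweep gives a 2:1 frequency sweep. A minimal sketch with hypothetical component values (the paper does not state its L and C):

```python
import math

def resonant_frequency_hz(inductance_h: float, capacitance_f: float) -> float:
    """Resonant frequency of an LC tank: f = 1 / (2*pi*sqrt(L*C))."""
    return 1.0 / (2.0 * math.pi * math.sqrt(inductance_h * capacitance_f))

# Hypothetical values: a 1 mH inductor with the motor-driven variable
# capacitor swept from 25 pF to 100 pF.
f_max = resonant_frequency_hz(1e-3, 25e-12)   # smallest C -> highest frequency
f_min = resonant_frequency_hz(1e-3, 100e-12)  # largest C  -> lowest frequency
print(round(f_min), round(f_max), round(f_max / f_min, 2))
```

Because f scales as C^(-1/2), covering a wider frequency span requires a correspondingly larger capacitance ratio or a switched inductor.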

  4. Simple and effective calculations about spectral power distributions of outdoor light sources for computer vision.

    PubMed

    Tian, Jiandong; Duan, Zhigang; Ren, Weihong; Han, Zhi; Tang, Yandong

    2016-04-04

The spectral power distributions (SPDs) of outdoor light sources are not constant over time and atmospheric conditions, which causes the appearance of a scene to vary and produces common natural illumination phenomena such as twilight, shadow, and haze/fog. Calculating the SPD of outdoor light sources at different times (or zenith angles) and under different atmospheric conditions is of interest to physically based vision. In this paper, for computer vision and its applications, we propose a feasible, simple, and effective SPD calculation method based on analyzing the transmittance functions of absorption and scattering along the path of solar radiation through the atmosphere in the visible spectrum. Compared with previous SPD calculation methods, our model has fewer parameters and is accurate enough to be applied directly in computer vision. It can be applied in computer vision tasks including spectral inverse calculation, lighting conversion, and shadowed image processing. The experimental results of the applications demonstrate that our calculation methods have practical value in computer vision, establishing a bridge between images and physical environmental information, e.g., time, location, and weather conditions.
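The core mechanism, attenuating extraterrestrial irradiance by wavelength-dependent transmittance along the slant path, can be sketched with the Beer-Lambert law and the plane-parallel airmass approximation. This is a generic illustration with made-up optical depths, not the paper's parameterization:

```python
import math

def direct_spd(e0, tau, zenith_deg):
    """Attenuate top-of-atmosphere spectral irradiance e0 by total optical
    depth tau along the slant path (Beer-Lambert), with the plane-parallel
    airmass approximation m = 1/cos(zenith)."""
    m = 1.0 / math.cos(math.radians(zenith_deg))
    return [e * math.exp(-t * m) for e, t in zip(e0, tau)]

# Hypothetical three-band example (blue, green, red): Rayleigh scattering
# gives larger optical depth at short wavelengths, so a low sun reddens light.
e0  = [1.9, 1.8, 1.6]       # W m^-2 nm^-1, illustrative only
tau = [0.35, 0.15, 0.07]    # combined scattering + absorption optical depths
noon   = direct_spd(e0, tau, 0.0)
sunset = direct_spd(e0, tau, 75.0)
print(noon[0] / noon[2] > sunset[0] / sunset[2])  # blue/red ratio drops: True
```

The shrinking blue/red ratio at large zenith angles reproduces, qualitatively, the twilight reddening the abstract refers to.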

  5. Effects of protonation and C5 methylation on the electrophilic addition reaction of cytosine: a computational study.

    PubMed

    Jin, Lingxia; Wang, Wenliang; Hu, Daodao; Min, Suotian

    2013-01-10

The mechanism for the effects of protonation and C5 methylation on the electrophilic addition reaction of Cyt has been explored by means of CBS-QB3 and CBS-QB3/PCM methods. In the gas phase, three paths were mainly discussed: two protonated paths (the N3- and O2-protonated paths B and C) and one neutral path (path A). The calculated results indicate that the reaction of the HSO(3)(-) group with neutral Cyt is unlikely because of its high activation free energy, whereas the O2-protonated path (path C) is the most likely to occur. In the aqueous phase, path B is the most feasible mechanism: its activation free energy decreases compared with the corresponding path in the gas phase, whereas those of paths A and C increase. The most striking results are that the HSO(3)(-) group interacts directly with the C5═C6 bond rather than the N3═C4 bond, and that C5 methylation weakens the addition reaction relative to Cyt: the decreased values of the global electrophilicity index show that the C5-methylated forms are less electrophilic, and the decreased NPA charges on the C5 site of the intermediates weaken the tendency toward addition, in agreement with the experimental observation that the rate of the 5-MeCyt reaction is approximately 2 orders of magnitude slower than that of Cyt in the presence of bisulfite. Apart from the cis and trans isomers, a rare third isomer, in which both the CH(3) and SO(3) groups occupy axial positions, has been found for the first time in the reactions of neutral and protonated 5-MeCyt with the HSO(3)(-) group. Furthermore, the third isomer can form easily from the cis isomer.

  6. Computer program for thermodynamic analysis of open cycle multishaft power system with multiple reheat and intercool

    NASA Technical Reports Server (NTRS)

    Glassman, A. J.

    1974-01-01

A computer program to analyze power systems having any number of shafts up to a maximum of five is presented. Each shaft can carry as many as five compressors and five turbines, along with any specified number of intervening intercoolers and reheaters. A recuperator can be included, and turbine coolant flow can be accounted for. Any fuel consisting entirely of hydrogen and/or carbon can be used. The program is valid for maximum temperatures up to about 2000 K (3600 R). The report includes the system description, the analysis method, a detailed explanation of program input and output with an illustrative example, a dictionary of program variables, and the program listing.
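The benefit of intercooling that motivates such multi-stage cycle analysis can be shown with the standard ideal-gas compression-work formula. This is a textbook sketch, not the NASA code, and the numbers (air properties, pressure ratio 16) are illustrative:

```python
# Ideal (isentropic) compressor work per unit mass: w = cp*T_in*(pr**k - 1),
# with k = (gamma - 1)/gamma. Compare single-stage compression against two
# stages with intercooling back to the inlet temperature.
cp, gamma = 1005.0, 1.4        # J/(kg K) and specific-heat ratio for air
T_in, pr = 300.0, 16.0         # inlet temperature (K), overall pressure ratio
k = (gamma - 1.0) / gamma

def compressor_work(t_in, pressure_ratio):
    return cp * t_in * (pressure_ratio ** k - 1.0)

w_single = compressor_work(T_in, pr)
# Two stages of pressure ratio sqrt(pr) each, intercooled back to T_in:
w_two = 2.0 * compressor_work(T_in, pr ** 0.5)
print(w_two < w_single)  # intercooling reduces total compression work: True
```

Reheating between turbine stages raises turbine work by the mirror-image argument, which is why the program supports arbitrary numbers of intercoolers and reheaters per shaft.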

  7. Computer Simulation Of A CO2 High Power Laser With Folded Resonator

    NASA Astrophysics Data System (ADS)

    Meisterhofer, E.; Lippitsch, M. E.

    1984-03-01

Based on the iterative solution of a generalized Kirchhoff-Fresnel integral equation, we have developed a computer model for realistic simulation of arbitrary linear or folded resonators. With known parameters of the active medium (small-signal gain, saturation intensity, volume) we can determine the optimal resonator parameters (e.g., output mirror transmission, radius of curvature of the mirrors, diameter and placement of diaphragms, resonator length) to obtain the highest output power with a given mode pattern. The model is tested for linear as well as folded resonators.

  8. Numerical ray-tracing approach with laser intensity distribution for LIDAR signal power function computation

    NASA Astrophysics Data System (ADS)

    Shi, Guangyuan; Li, Song; Huang, Ke; Li, Zile; Zheng, Guoxing

    2016-10-01

We have developed a new numerical ray-tracing approach for computing the LIDAR signal power function, in which the light's round-trip propagation is analyzed by geometrical optics and a simple experiment is employed to acquire the laser intensity distribution. It is more accurate and flexible than previous methods. We discuss in particular the relationship between the inclination angle and the dynamic range of the detector output signal in a biaxial LIDAR system. Results indicate that an appropriate negative angle can compress the signal dynamic range. The technique has been validated by comparison with real measurements.

  9. Computation of the power spectrum in chaotic ¼λφ{sup 4} inflation

    SciTech Connect

    Rojas, Clara; Villalba, Víctor M. E-mail: Victor.Villalba@monash.edu

    2012-01-01

The phase-integral approximation devised by Fröman and Fröman is used to compute cosmological perturbations in the quartic chaotic inflationary model. The phase-integral formulas for the scalar power spectrum are obtained explicitly up to the fifth order of the phase-integral approximation. As in previous reports (Rojas 2007b, 2007c, and 2009), the accuracy of the phase-integral approximation compares favorably with the numerical results and with those obtained using the slow-roll and uniform approximation methods.

  10. Analysis and Design of Bridgeless Switched Mode Power Supply for Computers

    NASA Astrophysics Data System (ADS)

    Singh, S.; Bhuvaneswari, G.; Singh, B.

    2014-09-01

Switched-mode power supplies (SMPSs) used in computers need multiple isolated and stiffly regulated output dc voltages with different current ratings. These isolated multiple output dc voltages are obtained by using a multi-winding high-frequency transformer (HFT). A half-bridge dc-dc converter is used here for obtaining the different isolated and well-regulated dc voltages. At the front end, non-isolated Single-Ended Primary Inductance Converters (SEPICs) are added to improve the power quality in terms of low input current harmonics and high power factor (PF). Two non-isolated SEPICs are connected in such a way as to completely eliminate the need for a single-phase diode-bridge rectifier at the front end. Output dc voltages at both the non-isolated and isolated stages are controlled and regulated separately for power quality improvement. A voltage-mode control approach is used in the non-isolated SEPIC stage for simple and effective control, whereas average current control is used in the second, isolated stage.
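The SEPIC stage's ability to regulate a fixed bus from a varying rectified input rests on its ideal continuous-conduction conversion ratio, Vout = Vin·D/(1−D). A minimal sketch with hypothetical voltages (the paper's actual setpoints are not reproduced here):

```python
def sepic_vout(v_in: float, duty: float) -> float:
    """Ideal (lossless, CCM) SEPIC conversion ratio: Vout = Vin * D / (1 - D)."""
    return v_in * duty / (1.0 - duty)

def sepic_duty_for(v_in: float, v_out: float) -> float:
    """Duty cycle an ideal SEPIC needs for a target output: D = Vout / (Vin + Vout)."""
    return v_out / (v_in + v_out)

# Hypothetical example: holding a 300 V intermediate bus while the rectified
# input average swings between 250 V and 350 V. D crosses 0.5 because the
# SEPIC can both step up and step down.
for v_in in (250.0, 300.0, 350.0):
    d = sepic_duty_for(v_in, 300.0)
    assert abs(sepic_vout(v_in, d) - 300.0) < 1e-9
    print(round(d, 3))
```

This buck-boost capability is what lets the SEPIC front end shape the input current for power-factor correction across the whole mains cycle.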

  11. Unraveling the fundamental mechanisms of solvent-additive-induced optimization of power conversion efficiencies in organic photovoltaic devices

    SciTech Connect

    Herath, Nuradhika; Das, Sanjib; Zhu, Jiahua; Kumar, Rajeev; Chen, Jihua; Xiao, Kai; Gu, Gong; Browning, James F.; Sumpter, Bobby G.; Ivanov, Ilia N.; Lauter, Valeria

    2016-07-12

The realization of controllable morphologies of the bulk heterojunction (BHJ) in organic photovoltaics (OPVs) is one of the key factors in obtaining high-efficiency devices. Here, via simultaneous monitoring of the three-dimensional nanostructural modifications in the BHJ, correlated with optical analysis and theoretical modeling of charge transport, we provide new insights into the fundamental mechanisms essential for the optimization of power conversion efficiencies (PCEs) with additive processing. Our results demonstrate how a trace amount of diiodooctane (DIO) remarkably changes the vertical phase morphology of the active layers, resulting in the formation of a well-mixed donor-acceptor compact film, and augments charge transfer and PCEs. In contrast, an excess amount of DIO promotes massive reordering and results in a loosely packed, mixed-phase vertical morphology with large clusters, leading to deterioration in PCEs. Theoretical modeling of charge transport reveals that DIO increases the mobility of electrons and holes (the charge carriers) by affecting the energetic disorder and the electric-field dependence of the mobility. Our results show the significance of phase separation and carrier transport pathways in achieving optimal device performance.

  12. Unraveling the fundamental mechanisms of solvent-additive-induced optimization of power conversion efficiencies in organic photovoltaic devices

    DOE PAGES

    Herath, Nuradhika; Das, Sanjib; Zhu, Jiahua; ...

    2016-07-12

The realization of controllable morphologies of the bulk heterojunction (BHJ) in organic photovoltaics (OPVs) is one of the key factors in obtaining high-efficiency devices. Here, via simultaneous monitoring of the three-dimensional nanostructural modifications in the BHJ, correlated with optical analysis and theoretical modeling of charge transport, we provide new insights into the fundamental mechanisms essential for the optimization of power conversion efficiencies (PCEs) with additive processing. Our results demonstrate how a trace amount of diiodooctane (DIO) remarkably changes the vertical phase morphology of the active layers, resulting in the formation of a well-mixed donor-acceptor compact film, and augments charge transfer and PCEs. In contrast, an excess amount of DIO promotes massive reordering and results in a loosely packed, mixed-phase vertical morphology with large clusters, leading to deterioration in PCEs. Theoretical modeling of charge transport reveals that DIO increases the mobility of electrons and holes (the charge carriers) by affecting the energetic disorder and the electric-field dependence of the mobility. Our results show the significance of phase separation and carrier transport pathways in achieving optimal device performance.

  13. Design analysis and computer-aided performance evaluation of shuttle orbiter electrical power system. Volume 2: SYSTID user's guide

    NASA Technical Reports Server (NTRS)

    1974-01-01

    The manual for the use of the computer program SYSTID under the Univac operating system is presented. The computer program is used in the simulation and evaluation of the space shuttle orbiter electric power supply. The models described in the handbook are those which were available in the original versions of SYSTID. The subjects discussed are: (1) program description, (2) input language, (3) node typing, (4) problem submission, and (5) basic and power system SYSTID libraries.

  14. Controlling the phase locking of stochastic magnetic bits for ultra-low power computation

    NASA Astrophysics Data System (ADS)

    Mizrahi, Alice; Locatelli, Nicolas; Lebrun, Romain; Cros, Vincent; Fukushima, Akio; Kubota, Hitoshi; Yuasa, Shinji; Querlioz, Damien; Grollier, Julie

    2016-07-01

When fabricating magnetic memories, one of the main challenges is to maintain the bit stability while downscaling. Indeed, for magnetic volumes of a few thousand nm³, the energy barrier between magnetic configurations becomes comparable to the thermal energy at room temperature. Then, switches of the magnetization spontaneously occur. These volatile, superparamagnetic nanomagnets are generally considered useless. But what if we could use them as low power computational building blocks? Remarkably, they can oscillate without the need for any external dc drive, and despite their stochastic nature, they can beat in unison with an external periodic signal. Here we show that the phase locking of superparamagnetic tunnel junctions can be induced and suppressed by electrical noise injection. We develop a comprehensive model giving the conditions for synchronization, and predict that it can be achieved with a total energy cost lower than 10⁻¹³ J. Our results open the path to ultra-low power computation based on the controlled synchronization of oscillators.
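The claim that the barrier becomes "comparable to the thermal energy" can be quantified with the standard Néel-Arrhenius dwell time, τ = τ₀·exp(ΔE/kT). This is a generic sketch, not the authors' model; the attempt time τ₀ = 1 ns is a commonly assumed order of magnitude:

```python
import math

K_B = 1.380649e-23   # Boltzmann constant, J/K

def mean_dwell_time(barrier_j: float, temp_k: float, tau0_s: float = 1e-9) -> float:
    """Neel-Arrhenius mean time between spontaneous magnetization switches."""
    return tau0_s * math.exp(barrier_j / (K_B * temp_k))

kT = K_B * 300.0
# A ~40 kT barrier gives years of retention (a stable memory bit); a ~3 kT
# barrier switches in well under a microsecond (a superparamagnetic "useless"
# bit, which is exactly what this work exploits as a stochastic oscillator).
print(mean_dwell_time(40.0 * kT, 300.0))
print(mean_dwell_time(3.0 * kT, 300.0))
```

The exponential sensitivity to ΔE/kT is why modest downscaling flips a device from nonvolatile storage to the fast stochastic regime used here.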

  15. Controlling the phase locking of stochastic magnetic bits for ultra-low power computation

    PubMed Central

    Mizrahi, Alice; Locatelli, Nicolas; Lebrun, Romain; Cros, Vincent; Fukushima, Akio; Kubota, Hitoshi; Yuasa, Shinji; Querlioz, Damien; Grollier, Julie

    2016-01-01

When fabricating magnetic memories, one of the main challenges is to maintain the bit stability while downscaling. Indeed, for magnetic volumes of a few thousand nm³, the energy barrier between magnetic configurations becomes comparable to the thermal energy at room temperature. Then, switches of the magnetization spontaneously occur. These volatile, superparamagnetic nanomagnets are generally considered useless. But what if we could use them as low power computational building blocks? Remarkably, they can oscillate without the need for any external dc drive, and despite their stochastic nature, they can beat in unison with an external periodic signal. Here we show that the phase locking of superparamagnetic tunnel junctions can be induced and suppressed by electrical noise injection. We develop a comprehensive model giving the conditions for synchronization, and predict that it can be achieved with a total energy cost lower than 10⁻¹³ J. Our results open the path to ultra-low power computation based on the controlled synchronization of oscillators. PMID:27457034

  16. Brain Computation Is Organized via Power-of-Two-Based Permutation Logic

    PubMed Central

    Xie, Kun; Fox, Grace E.; Liu, Jun; Lyu, Cheng; Lee, Jason C.; Kuang, Hui; Jacobs, Stephanie; Li, Meng; Liu, Tianming; Song, Sen; Tsien, Joe Z.

    2016-01-01

There is considerable scientific interest in understanding how cell assemblies—the long-presumed computational motif—are organized so that the brain can generate intelligent cognition and flexible behavior. The Theory of Connectivity proposes that the origin of intelligence is rooted in a power-of-two-based permutation logic (N = 2^i - 1), producing specific-to-general cell-assembly architecture capable of generating specific perceptions and memories, as well as generalized knowledge and flexible actions. We show that this power-of-two-based permutation logic is widely used in cortical and subcortical circuits across animal species and is conserved for the processing of a variety of cognitive modalities including appetitive, emotional and social information. However, modulatory neurons, such as dopaminergic (DA) neurons, use a simpler logic despite their distinct subtypes. Interestingly, this specific-to-general permutation logic remained largely intact although NMDA receptors—the synaptic switch for learning and memory—were deleted throughout adulthood, suggesting that the logic is developmentally pre-configured. Moreover, this computational logic is implemented in the cortex via combining a random-connectivity strategy in superficial layers 2/3 with nonrandom organizations in deep layers 5/6. This randomness of layers 2/3 cliques—which preferentially encode specific and low-combinatorial features and project inter-cortically—is ideal for maximizing cross-modality novel pattern-extraction, pattern-discrimination and pattern-categorization using sparse code, consequently explaining why it requires hippocampal offline-consolidation. In contrast, the nonrandomness in layers 5/6—which consists of few specific cliques but a higher portion of more general cliques projecting mostly to subcortical systems—is ideal for feedback-control of motivation, emotion, consciousness and behaviors. These observations suggest that the brain's basic computational
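The N = 2^i - 1 count in the Theory of Connectivity is simply the number of non-empty subsets of i distinct inputs, one clique per input combination, from the most specific (single inputs) to the most general (all inputs). A small enumeration makes this concrete:

```python
from itertools import combinations

def assembly_count(i: int) -> int:
    """Number of cliques predicted by the power-of-two logic: N = 2**i - 1."""
    return 2 ** i - 1

def enumerate_assemblies(inputs):
    """All non-empty input combinations, ordered from specific to general."""
    return [c for r in range(1, len(inputs) + 1)
            for c in combinations(inputs, r)]

cliques = enumerate_assemblies(["A", "B", "C"])
print(len(cliques), assembly_count(3))  # 7 7
```

For i = 3 inputs the seven cliques are {A}, {B}, {C}, {A,B}, {A,C}, {B,C}, {A,B,C}; the count grows exponentially with the number of inputs, which is the theory's combinatorial point.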

  17. Measurement of rotary pump flow and pressure by computation of driving motor power and speed.

    PubMed

    Qian, K X; Zeng, P; Ru, W M; Yuan, H Y; Feng, Z G; Li, L

    2000-01-01

Measurement of pump flow and pressure during ventricular assist is important but difficult to achieve. On one hand, pump flow and pressure are indicators of pump performance and of the physiologic status of the recipient, and they provide a control basis for the blood pump itself. On the other hand, direct measurement forces the recipient to be connected to a flow meter and a manometer, and the sensors of these meters may cause haematological problems and increase the danger of infection. A novel method for measuring the flow rate and pressure of a rotary pump has been developed recently. First, the pump is operated at several rotating speeds, and at each speed the flow rate, pump head, and motor power (voltage x current) are recorded and plotted, yielding P (motor power)-Q (pump flow) curves as well as P-H (pump head) curves. Secondly, the P and n (rotating speed) values are loaded into the input layer of a 3-layer BP (back-propagation) neural network and the Q and H values into the output layer, to convert the P-Q and P-H relations into the functions Q = f(P, n) and H = g(P, n). Thirdly, these functions are stored by computer to establish a database as an archive of this pump. Finally, the pump flow and pressure can be computed from motor power and speed during animal experiments or clinical trials. This new method was used in the authors' impeller pump. The results demonstrated that the error for pump head was less than 2% and that for pump flow was under 5%, so its accuracy is better than that of non-invasive measuring methods.
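The calibrate-then-look-up idea can be sketched without the neural network: store per-speed P-Q calibration tables and interpolate flow from the measured motor power and speed. All calibration numbers below are hypothetical, and piecewise-linear interpolation stands in for the paper's back-propagation network:

```python
from bisect import bisect_left

# Hypothetical calibration archive: speed (rpm) -> [(motor power W, flow L/min)]
CALIBRATION = {
    2000: [(5.0, 1.0), (7.0, 2.0), (9.0, 3.0)],
    3000: [(8.0, 2.0), (11.0, 4.0), (14.0, 6.0)],
}

def interp(table, p):
    """Piecewise-linear interpolation of flow at motor power p."""
    xs = [x for x, _ in table]
    i = min(max(bisect_left(xs, p), 1), len(xs) - 1)
    (x0, y0), (x1, y1) = table[i - 1], table[i]
    return y0 + (y1 - y0) * (p - x0) / (x1 - x0)

def estimate_flow(power_w, speed_rpm):
    """Sensorless flow estimate Q = f(P, n) from the calibration database."""
    return interp(CALIBRATION[speed_rpm], power_w)

print(estimate_flow(8.0, 2000))   # 2.5
print(estimate_flow(11.0, 3000))  # 4.0
```

A neural network, as used in the paper, additionally interpolates smoothly between speeds and captures the nonlinearity of the P-Q and P-H relations; the payoff in both cases is the same, i.e. no blood-contacting flow or pressure sensors.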

  18. The Effect of Emphasizing Mathematical Structure in the Acquisition of Whole Number Computation Skills (Addition and Subtraction) By Seven- and Eight-Year Olds: A Clinical Investigation.

    ERIC Educational Resources Information Center

    Uprichard, A. Edward; Collura, Carolyn

    This investigation sought to determine the effect of emphasizing mathematical structure in the acquisition of computational skills by seven- and eight-year-olds. The meaningful development-of-structure approach emphasized closure, commutativity, associativity, and the identity element of addition; the inverse relationship between addition and…

  19. Efficient Adjoint Computation of Hybrid Systems of Differential Algebraic Equations with Applications in Power Systems

    SciTech Connect

    Abhyankar, Shrirang; Anitescu, Mihai; Constantinescu, Emil; Zhang, Hong

    2016-03-31

    Sensitivity analysis is an important tool to describe power system dynamic behavior in response to parameter variations. It is a central component in preventive and corrective control applications. The existing approaches for sensitivity calculations, namely, finite-difference and forward sensitivity analysis, require a computational effort that increases linearly with the number of sensitivity parameters. In this work, we investigate, implement, and test a discrete adjoint sensitivity approach whose computational effort is effectively independent of the number of sensitivity parameters. The proposed approach is highly efficient for calculating trajectory sensitivities of larger systems and is consistent, within machine precision, with the function whose sensitivity we are seeking. This is an essential feature for use in optimization applications. Moreover, our approach includes a consistent treatment of systems with switching, such as DC exciters, by deriving and implementing the adjoint jump conditions that arise from state and time-dependent discontinuities. The accuracy and the computational efficiency of the proposed approach are demonstrated in comparison with the forward sensitivity analysis approach.
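The cost argument, i.e. one backward sweep versus one extra forward simulation per parameter, can be shown on a scalar discrete system. This is a toy illustration of the discrete-adjoint idea, not the paper's power-system implementation: for x_{k+1} = a·x_k + p_k with objective J = x_N, the adjoint recursion λ_k = a·λ_{k+1}, λ_N = 1 gives every dJ/dp_k at once.

```python
# Toy discrete adjoint: J = x_N for x_{k+1} = a*x_k + p_k, x_0 = 0.
def simulate(a, params, x0=0.0):
    x = x0
    for p in params:
        x = a * x + p
    return x

def adjoint_gradient(a, n_steps):
    """dJ/dp_k = lambda_{k+1} = a**(N-1-k), from one backward sweep."""
    return [a ** (n_steps - 1 - k) for k in range(n_steps)]

a, params = 0.9, [1.0, 2.0, 3.0]
grad = adjoint_gradient(a, len(params))

# Cross-check against forward finite differences (one extra run per parameter):
eps = 1e-6
for k in range(len(params)):
    bumped = list(params); bumped[k] += eps
    fd = (simulate(a, bumped) - simulate(a, params)) / eps
    assert abs(fd - grad[k]) < 1e-6
print([round(g, 6) for g in grad])  # [0.81, 0.9, 1.0]
```

The finite-difference loop costs one simulation per parameter, while the adjoint column is a single backward pass, which is exactly the scaling advantage the abstract describes (switching systems additionally need the jump conditions derived in the paper).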

  20. Stochastic optimal control methods for investigating the power of morphological computation.

    PubMed

    Rückert, Elmar A; Neumann, Gerhard

    2013-01-01

    One key idea behind morphological computation is that many difficulties of a control problem can be absorbed by the morphology of a robot. The performance of the controlled system naturally depends on the control architecture and on the morphology of the robot. Because of this strong coupling, most of the impressive applications in morphological computation typically apply minimalistic control architectures. Ideally, adapting the morphology of the plant and optimizing the control law interact so that finally, optimal physical properties of the system and optimal control laws emerge. As a first step toward this vision, we apply optimal control methods for investigating the power of morphological computation. We use a probabilistic optimal control method to acquire control laws, given the current morphology. We show that by changing the morphology of our robot, control problems can be simplified, resulting in optimal controllers with reduced complexity and higher performance. This concept is evaluated on a compliant four-link model of a humanoid robot, which has to keep balance in the presence of external pushes.

  1. PowerGrid - A Computation Engine for Large-Scale Electric Networks

    SciTech Connect

    Chika Nwankpa

    2011-01-31

This Final Report discusses work on an approach for analog emulation of large-scale power systems using Analog Behavioral Models (ABMs) and analog devices in the PSpice design environment. ABMs are models based on sets of mathematical equations or transfer functions describing the behavior of a circuit element or an analog building block. The ABM concept provides an efficient strategy for feasibility analysis, quick insight when developing a top-down design methodology for large systems, and model verification prior to full structural design and implementation. Analog emulation in this report uses an electric-circuit equivalent of the mathematical equations and scaled relationships that describe the states and behavior of a real power system to create its solution trajectory. The speed of an analog solution is as quick as the response of the circuit itself. Emulation, therefore, is the representation of the desired physical characteristics of a real-life object using an electric-circuit equivalent. The circuit equivalent has within it the model of the real system as well as the method of solution. This report presents a methodology for the core computation through the development of ABMs for generators, transmission lines, and loads. Results of ABMs for 3-, 6-, and 14-bus power systems are presented and compared with industrial-grade numerical simulators for validation.

  2. Power of screening tests for colorectal cancer enhanced by high levels of M2-PK in addition to FOBT.

    PubMed

    Zaccaro, Cristina; Saracino, Ilaria Maria; Fiorini, Giulia; Figura, Natale; Holton, John; Castelli, Valentina; Pesci, Valeria; Gatta, Luigi; Vaira, Dino

    2017-02-02

Colorectal cancer (CRC) develops through a multistep process that involves the adenoma-carcinoma sequence. CRC can be prevented by routine screening, which can detect precancerous lesions. The aim of this study is to clarify whether the faecal occult blood test (i-FOBT), tumour M2 pyruvate kinase (t-M2-PK), and endocannabinoid system molecules (cannabinoid receptors type 1 (CB1) and type 2 (CB2), and fatty acid amide hydrolase (FAAH)) might represent better diagnostic tools, alone or in combination, for an early diagnosis of CRC. An immunochemical FOB test (i-FOBT) and a quantitative ELISA stool test for t-M2-PK were performed in 127 consecutive patients over a 12-month period. Endocannabinoid system molecules and t-M2-PK expression were detected by immunostaining in healthy tissues and in normal mucosa surrounding adenomatous and cancerous colon lesions. The i-FOBT and t-M2-PK combination leads to better diagnostic accuracy for pre-neoplastic and neoplastic colon lesions. t-M2-PK quantification in stool samples and in biopsy samples (immunostaining) correlates with tumourigenesis stages. CB1 and CB2 are well expressed in healthy tissues; their expression decreases in advanced stages of carcinogenesis and disappears in CRC. The FAAH signal is well expressed in normal mucosa and low-risk adenomas, and increases in high-risk adenomas and carcinoma-adjacent tissues. This study shows that high levels of t-M2-PK in addition to FOBT enhance the power of a CRC screening test. Endocannabinoid system molecule expression correlates with colon carcinogenesis stages. Future faecal tests for their quantification should be developed to obtain a more accurate, early, non-invasive diagnosis of CRC.

  3. Direct Methanol Fuel Cell Power Supply For All-Day True Wireless Mobile Computing

    SciTech Connect

    Brian Wells

    2008-11-30

PolyFuel has developed state-of-the-art portable fuel cell technology for the portable computing market. A novel approach to passive water recycling within the MEA has led to significant system simplification and size reduction. Miniature stack technology with very high area utilization and minimalist seals has been developed, and a highly integrated balance of plant with very low parasitic losses has been constructed around the new stack design. Demonstration prototype systems integrated with laptop computers have been shown in recent months to leading OEM computer manufacturers. PolyFuel intends to provide this technology to its customers as a reference design as a means of accelerating the commercialization of portable fuel cell technology. The primary goal of the project was to match the energy density of a commercial lithium-ion battery for laptop computers. PolyFuel made large strides toward this goal and has now demonstrated 270 Wh/liter, compared with lithium-ion energy densities of 300 Wh/liter. Further incremental improvements in energy density are envisioned, with additional 20-30% gains possible in each of the next two years given further research and development.

  4. Invited Paper How The Personal Computer Has Expanded The Power Of Commercial Infrared Thermal Imaging Systems

    NASA Astrophysics Data System (ADS)

    Kaplan, Herbert

    1987-11-01

    Ten years ago infrared imaging systems available on the commercial market had reached a point in their development where accuracy, speed, thermal sensitivity and spatial resolution were sufficient to meet the vast majority of measurement requirements. They were severely limited in application potential, however, because the images produced by even the highest performing systems appeared on oscilloscope displays or Polaroid prints with no further image or data analysis offered. The development of the personal desk-top computer and its marriage to the commercial infrared imager was the key to an applications explosion for these systems. The addition of compatible videocassette recorders added even more to their versatility. This paper will trace the development of commercial infrared thermal imaging systems since the advent of the personal computer, provide an overview of some of the more outstanding features available today and make some projections into future capabilities.

  5. HTSFinder: Powerful Pipeline of DNA Signature Discovery by Parallel and Distributed Computing.

    PubMed

    Karimi, Ramin; Hajdu, Andras

    2016-01-01

Comprehensive efforts toward low-cost sequencing in the past few years have led to the growth of complete genome databases. In parallel with this effort, and in response to a strong need, fast and cost-effective methods and applications have been developed to accelerate sequence analysis. Identification is the very first step of this task. Because of the difficulties, high costs, and computational challenges of alignment-based approaches, an alternative universal identification method is highly desirable. As an alignment-free approach, DNA signatures have provided new opportunities for the rapid identification of species. In this paper, we present an effective pipeline, HTSFinder (high-throughput signature finder), with a corresponding k-mer generator, GkmerG (genome k-mers generator). Using this pipeline, we determine the frequency of k-mers from the available complete genome databases for the detection of extensive DNA signatures in a reasonably short time. Our application can detect both unique and common signatures in arbitrarily selected target and nontarget databases. Hadoop and MapReduce are used in this pipeline as parallel and distributed computing tools on commodity hardware. This approach brings the power of high-performance computing to ordinary desktop personal computers for discovering DNA signatures in large databases such as bacterial genomes. The considerable number of unique and common DNA signatures detected in the target database brings opportunities to improve the identification process, not only for polymerase chain reaction and microarray assays but also for more complex scenarios such as metagenomics and next-generation sequencing analysis.

  6. HTSFinder: Powerful Pipeline of DNA Signature Discovery by Parallel and Distributed Computing

    PubMed Central

    Karimi, Ramin; Hajdu, Andras

    2016-01-01

Comprehensive efforts toward low-cost sequencing in the past few years have led to the growth of complete genome databases. In parallel with this effort, and in response to a strong need, fast and cost-effective methods and applications have been developed to accelerate sequence analysis. Identification is the very first step of this task. Because of the difficulties, high costs, and computational challenges of alignment-based approaches, an alternative universal identification method is highly desirable. As an alignment-free approach, DNA signatures have provided new opportunities for the rapid identification of species. In this paper, we present an effective pipeline, HTSFinder (high-throughput signature finder), with a corresponding k-mer generator, GkmerG (genome k-mers generator). Using this pipeline, we determine the frequency of k-mers from the available complete genome databases for the detection of extensive DNA signatures in a reasonably short time. Our application can detect both unique and common signatures in arbitrarily selected target and nontarget databases. Hadoop and MapReduce are used in this pipeline as parallel and distributed computing tools on commodity hardware. This approach brings the power of high-performance computing to ordinary desktop personal computers for discovering DNA signatures in large databases such as bacterial genomes. The considerable number of unique and common DNA signatures detected in the target database brings opportunities to improve the identification process, not only for polymerase chain reaction and microarray assays but also for more complex scenarios such as metagenomics and next-generation sequencing analysis. PMID:26884678

  7. Computed tomography-based diagnosis of diffuse compensatory enlargement of coronary arteries using scaling power laws

    PubMed Central

    Huo, Yunlong; Choy, Jenny Susana; Wischgoll, Thomas; Luo, Tong; Teague, Shawn D.; Bhatt, Deepak L.; Kassab, Ghassan S.

    2013-01-01

    Glagov's positive remodelling in the early stages of coronary atherosclerosis often results in plaque rupture and acute events. Because positive remodelling is generally diffuse along the epicardial coronary arterial tree, it is difficult to diagnose non-invasively. Hence, the objective of the study is to assess the use of scaling power laws for the diagnosis of positive remodelling of coronary arteries based on computed tomography (CT) images. Epicardial coronary arterial trees were reconstructed from CT scans of six Ossabaw pigs fed a high-fat, high-cholesterol, atherogenic diet for eight months, as well as the same number of body-weight-matched farm pigs fed a lean chow (101.9±16.1 versus 91.5±13.1 kg). The high-fat diet Ossabaw pig model showed diffuse positive remodelling of epicardial coronary arteries. A good fit of the measured coronary data to the length–volume scaling power law (relating crown length Lc to crown volume Vc) was found for both the high-fat and control groups (R2 = 0.95±0.04 and 0.99±0.01, respectively). The coefficient, KLV, decreased significantly in the high-fat diet group when compared with the control (14.6±2.6 versus 40.9±5.6). The flow–length scaling power law, however, was nearly unaffected by the positive remodelling. The length–volume and flow–length scaling power laws were thus preserved in epicardial coronary arterial trees after positive remodelling, and KLV < 18 in the length–volume scaling relation is a good index of positive remodelling of coronary arteries. These findings provide a clinical rationale for simple, accurate, and non-invasive diagnosis of positive remodelling of coronary arteries using conventional CT scans. PMID:23365197
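    A scaling coefficient and exponent of this kind are commonly estimated by linear regression in log–log space; a minimal sketch with synthetic data (illustrative values, not the study's measurements):

```python
import math

def fit_power_law(x, y):
    """Fit y = K * x**b by least squares on log-transformed data."""
    lx = [math.log(v) for v in x]
    ly = [math.log(v) for v in y]
    n = len(x)
    mx, my = sum(lx) / n, sum(ly) / n
    b = sum((a - mx) * (c - my) for a, c in zip(lx, ly)) / \
        sum((a - mx) ** 2 for a in lx)
    K = math.exp(my - b * mx)
    return K, b

# Synthetic crown volume/length data obeying y = 2 * x**0.75 exactly
vols = [1.0, 2.0, 4.0, 8.0]
lens = [2.0 * v ** 0.75 for v in vols]
K, b = fit_power_law(vols, lens)
print(round(K, 3), round(b, 3))  # recovers K = 2.0, b = 0.75
```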

  8. A supervisor for the successive 3D computations of magnetic, mechanical and acoustic quantities in power oil inductors and transformers

    SciTech Connect

    Reyne, G.; Magnin, H.; Berliat, G.; Clerc, C.

    1994-09-01

    A supervisor has been developed to allow successive 3D computations of different quantities by different software packages on the same physical problem. The noise of a given power oil transformer can be deduced from the surface vibrations of the tank. These vibrations are obtained through a mechanical computation whose inputs are the electromagnetic forces provided by an electromagnetic computation. Magnetic, mechanical, and acoustic experimental data are compared with the results of the 3D computations. Stress is put on the main characteristics of the supervisor, such as the transfer of a given quantity from one mesh to the other.

  9. Computer vision-guided robotic system for electrical power lines maintenance

    NASA Astrophysics Data System (ADS)

    Tremblay, Jack; Laliberte, T.; Houde, Regis; Pelletier, Michel; Gosselin, Clement M.; Laurendeau, Denis

    1995-12-01

    The paper presents several modules of a computer-vision-assisted robotic system for the maintenance of live electrical power lines. The basic scene of interest is composed of generic components such as a crossarm, a power line, and a porcelain insulator. The system is under the supervision of an operator who validates each subtask. The system uses a 3D range finder mounted at the end effector of a 6-dof manipulator for the acquisition of range data on the scene. Since more than one view is required to obtain enough information on the scene, a view integration procedure is applied to the data in order to merge the information in a single reference frame. A volumetric description of the scene, in this case an octree, is built using the range data. The octree is transformed into an occupancy grid which is used for avoiding collisions between the manipulator and the components of the scene during the line manipulation step. The collision avoidance module uses the occupancy grid to create a discrete electrostatic potential field representing the various goals (e.g. objects of interest) and obstacles in the scene. The algorithm takes into account the articular limits of the robot and uses a redundant manipulator to ensure that the collision avoidance constraints do not compete with the task, which is to reach a given goal with the end-effector. A pose determination algorithm called Iterative Closest Point is presented. The algorithm computes the pose of the various components of the scene, allowing the robot to manipulate these components safely. The system has been tested on an actual scene. The manipulation was successfully implemented using a synchronized geometry range finder mounted on a PUMA 760 robot manipulator under the control of Cartool.
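    The rigid-alignment step inside an Iterative Closest Point loop can be sketched in 2D with pure Python (a toy closed-form alignment of already-matched point pairs; the system described above works on full 3D range data):

```python
import math

def align_2d(src, dst):
    """Closed-form rigid alignment (rotation + translation) of matched 2D point pairs."""
    n = len(src)
    csx = sum(p[0] for p in src) / n; csy = sum(p[1] for p in src) / n
    cdx = sum(p[0] for p in dst) / n; cdy = sum(p[1] for p in dst) / n
    # Accumulate cross- and dot-products of the centred point pairs
    s_cross = s_dot = 0.0
    for (sx, sy), (dx, dy) in zip(src, dst):
        ax, ay = sx - csx, sy - csy
        bx, by = dx - cdx, dy - cdy
        s_cross += ax * by - ay * bx
        s_dot += ax * bx + ay * by
    theta = math.atan2(s_cross, s_dot)
    c, s = math.cos(theta), math.sin(theta)
    tx = cdx - (c * csx - s * csy)
    ty = cdy - (s * csx + c * csy)
    return theta, (tx, ty)

# A square rotated by 90 degrees about the origin
src = [(1, 0), (0, 1), (-1, 0), (0, -1)]
dst = [(0, 1), (-1, 0), (0, -1), (1, 0)]
theta, t = align_2d(src, dst)
print(round(math.degrees(theta), 1))  # 90.0
```

A full ICP implementation wraps this alignment in a loop that re-matches each source point to its nearest destination point until the pose converges.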

  10. Computation of inflationary cosmological perturbations in the power-law inflationary model using the phase-integral method

    SciTech Connect

    Rojas, Clara; Villalba, Victor M.

    2007-03-15

    The phase-integral approximation devised by Froeman and Froeman is used for computing cosmological perturbations in the power-law inflationary model. The phase-integral formulas for the scalar and tensor power spectra are explicitly obtained up to ninth order of the phase-integral approximation. We show that the phase-integral approximation exactly reproduces the shape of the power spectra for scalar and tensor perturbations as well as the spectral indices. We compare the accuracy of the phase-integral approximation with the results for the power spectrum obtained with the slow-roll and uniform-approximation methods.

  11. Computationally Efficient Power Allocation Algorithm in Multicarrier-Based Cognitive Radio Networks: OFDM and FBMC Systems

    NASA Astrophysics Data System (ADS)

    Shaat, Musbah; Bader, Faouzi

    2010-12-01

    Cognitive Radio (CR) systems have been proposed to increase spectrum utilization by opportunistically accessing unused spectrum. Multicarrier communication systems are promising candidates for CR systems. Due to its high spectral efficiency, filter bank multicarrier (FBMC) can be considered as an alternative to conventional orthogonal frequency division multiplexing (OFDM) for transmission over CR networks. This paper addresses the problem of resource allocation in multicarrier-based CR networks. The objective is to maximize the downlink capacity of the network under constraints on both the total power and the interference introduced to the primary users (PUs). The optimal solution has high computational complexity, which makes it unsuitable for practical applications, and hence a low-complexity suboptimal solution is proposed. The proposed algorithm utilizes the spectrum holes in PU bands as well as active PU bands. The performance of the proposed algorithm is investigated for OFDM- and FBMC-based CR systems. Simulation results illustrate that the proposed resource allocation algorithm with low computational complexity achieves near-optimal performance and proves the efficiency of using FBMC in the CR context.
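    The classical baseline for capacity-maximizing power allocation under a total power budget is water-filling; a minimal sketch of that baseline (without the per-PU interference constraints the paper adds on top):

```python
def water_filling(gains, total_power, tol=1e-9):
    """Allocate p_i = max(0, mu - 1/g_i) so that sum(p_i) equals total_power."""
    lo, hi = 0.0, total_power + max(1.0 / g for g in gains)
    while hi - lo > tol:  # bisect on the water level mu
        mu = (lo + hi) / 2
        used = sum(max(0.0, mu - 1.0 / g) for g in gains)
        if used > total_power:
            hi = mu
        else:
            lo = mu
    return [max(0.0, lo - 1.0 / g) for g in gains]

# Three subcarriers with decreasing channel gains share a 10 W budget
powers = water_filling([4.0, 1.0, 0.25], 10.0)
print([round(p, 2) for p in powers])  # stronger channels get more power
print(round(sum(powers), 6))          # ≈ 10.0
```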

  12. SAMPSON Parallel Computation for Sensitivity Analysis of TEPCO's Fukushima Daiichi Nuclear Power Plant Accident

    NASA Astrophysics Data System (ADS)

    Pellegrini, M.; Bautista Gomez, L.; Maruyama, N.; Naitoh, M.; Matsuoka, S.; Cappello, F.

    2014-06-01

    On March 11th, 2011, a high-magnitude earthquake and the consequent tsunami struck the east coast of Japan, resulting in a nuclear accident unprecedented in duration and extent. After scram was initiated at all power stations affected by the earthquake, diesel generators began operation as designed until the tsunami waves reached the power plants located on the east coast. This had a catastrophic impact on the availability of plant safety systems at TEPCO's Fukushima Daiichi, leading to a station black-out condition at units 1 to 3. In this article the accident scenario is studied with the SAMPSON code. SAMPSON is a severe accident computer code composed of hierarchical modules to account for the diverse physics involved in the various phases of the accident evolution. A preliminary parallelization analysis of the code was performed using state-of-the-art tools, and we demonstrate how this work can be beneficial to nuclear safety analysis. This paper shows that inter-module parallelization can reduce the time to solution by more than 20%. Furthermore, the parallel code was applied to a sensitivity study for the alternative water injection into TEPCO's Fukushima Daiichi unit 3. Results show that the core melting progression is extremely sensitive to the amount and timing of water injection, resulting in a high probability of partial core melting for unit 3.

  13. Design of a compact low-power human-computer interaction equipment for hand motion

    NASA Astrophysics Data System (ADS)

    Wu, Xianwei; Jin, Wenguang

    2017-01-01

    Human-Computer Interaction (HCI) raises demands of convenience, endurance, responsiveness, and naturalness. This paper describes the design of a compact wearable low-power HCI device applied to gesture recognition. The system combines multi-modal sensing signals, a vision signal and a motion signal, and the equipment is accordingly fitted with a depth camera and a motion sensor. After tight integration, the dimensions (40 mm × 30 mm) and structure are compact and portable. The system is built on a layered module framework, which enables real-time collection (60 fps), processing, and transmission by fusing synchronous and asynchronous concurrent collection with wireless Bluetooth 4.0 transmission. To minimize the equipment's energy consumption, the system uses low-power components, manages peripheral states dynamically, switches into idle mode intelligently, applies pulse-width modulation (PWM) to the NIR LEDs of the depth camera, and performs algorithm optimization using the motion sensor. To test the equipment's function and performance, a gesture recognition algorithm was run on the system. As the results show, overall energy consumption can be as low as 0.5 W.

  14. Computational Fluid Dynamics Ventilation Study for the Human Powered Centrifuge at the International Space Station

    NASA Technical Reports Server (NTRS)

    Son, Chang H.

    2012-01-01

    The Human Powered Centrifuge (HPC) is a facility that is planned to be installed on board the International Space Station (ISS) to enable crew exercises under artificial gravity conditions. The HPC equipment includes a "bicycle" for long-term exercises of a crewmember that provides power for rotation of the HPC at a speed of 30 rpm. A crewmember exercising vigorously on the centrifuge generates about twice as much carbon dioxide as a crewmember under ordinary conditions. The goal of the study is to analyze the airflow and carbon dioxide distribution within the Pressurized Multipurpose Module (PMM) cabin when the HPC is operating. A fully unsteady formulation is used for airflow and CO2 transport CFD-based modeling with the so-called sliding mesh concept: the HPC equipment with the adjacent Bay 4 cabin volume is considered in the rotating reference frame while the rest of the cabin volume is considered in the stationary reference frame. The rotating part of the computational domain also includes a human body model. Localized effects of carbon dioxide dispersion are examined, and the strong influence of the rotating HPC equipment on the detected CO2 distribution is discussed.

  15. Computer program for design and performance analysis of navigation-aid power systems. Program documentation. Volume 1: Software requirements document

    NASA Technical Reports Server (NTRS)

    Goltz, G.; Kaiser, L. M.; Weiner, H.

    1977-01-01

    A computer program has been developed for designing and analyzing the performance of solar array/battery power systems for the U.S. Coast Guard Navigational Aids. This program is called the Design Synthesis/Performance Analysis (DSPA) Computer Program. The basic function of the Design Synthesis portion of the DSPA program is to evaluate functional and economic criteria to provide specifications for viable solar array/battery power systems. The basic function of the Performance Analysis portion of the DSPA program is to simulate the operation of solar array/battery power systems under specific loads and environmental conditions. This document establishes the software requirements for the DSPA computer program, discusses the processing that occurs within the program, and defines the necessary interfaces for operation.
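    The performance-analysis half of such a program amounts to stepping an energy balance between array generation, load, and battery state of charge; a highly simplified hourly sketch (hypothetical parameters, not the DSPA model):

```python
def simulate_battery(insolation_kw, array_kw_per_kw_sun, load_kw,
                     capacity_kwh, soc_kwh):
    """Step an hourly solar array/battery energy balance; return hourly state of charge."""
    history = []
    for sun in insolation_kw:
        generated = sun * array_kw_per_kw_sun           # array output this hour (kWh)
        soc_kwh += generated - load_kw                  # charge minus constant load
        soc_kwh = min(max(soc_kwh, 0.0), capacity_kwh)  # clamp to battery limits
        history.append(soc_kwh)
    return history

# One day: no sun for 6 h, full sun for 12 h, no sun for 6 h
day = [0.0] * 6 + [1.0] * 12 + [0.0] * 6
soc = simulate_battery(day, 0.2, 0.05, 2.0, 1.0)
print([round(s, 2) for s in soc])
```

A design-synthesis loop would then resize the array and battery until the state of charge never reaches zero over the worst-case insolation profile.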

  16. A High Performance Computing Platform for Performing High-Volume Studies With Windows-based Power Grid Tools

    SciTech Connect

    Chen, Yousu; Huang, Zhenyu

    2014-08-31

    Serial Windows-based programs are widely used in power utilities. For applications that require high-volume simulations, the single-CPU runtime can be on the order of days or weeks. The lengthy runtime, along with the availability of low-cost hardware, is leading utilities to seriously consider High Performance Computing (HPC) techniques. However, the vast majority of HPC computers are still Linux-based, and many HPC applications have been custom developed external to the core simulation engine without consideration for ease of use. This has created a technical gap for applying HPC-based tools to today's power grid studies. To fill this gap and accelerate the acceptance and adoption of HPC for power grid applications, this paper presents a prototype of a generic HPC platform for running Windows-based power grid programs in a Linux-based HPC environment. The preliminary results show that the runtime can be reduced from weeks to hours, improving work efficiency.

  17. Additive Manufacturing/Diagnostics via the High Frequency Induction Heating of Metal Powders: The Determination of the Power Transfer Factor for Fine Metallic Spheres

    SciTech Connect

    Rios, Orlando; Radhakrishnan, Balasubramaniam; Caravias, George; Holcomb, Matthew

    2015-03-11

    Grid Logic Inc. is developing a method for sintering and melting fine metallic powders for additive manufacturing using spatially-compact, high-frequency magnetic fields called Micro-Induction Sintering (MIS). One of the challenges in advancing MIS technology for additive manufacturing is in understanding the power transfer to the particles in a powder bed. This knowledge is important to achieving the efficient power transfer, control, and selective particle heating during the MIS process needed for commercialization of the technology. The project's work provided a rigorous physics-based model for induction heating of fine spherical particles as a function of frequency and particle size. This simulation improved upon Grid Logic's earlier models and provides guidance that will make the MIS technology more effective. The project model will be incorporated into Grid Logic's power control circuit of the MIS 3D printer product and its diagnostics technology to optimize the sintering process for part quality and energy efficiency.
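    The frequency dependence at the core of such a model is governed by the electromagnetic skin depth relative to the particle radius; a back-of-envelope sketch using the textbook formula (the material values are illustrative, not from the project):

```python
import math

MU0 = 4e-7 * math.pi  # vacuum permeability (H/m)

def skin_depth(resistivity_ohm_m, freq_hz, mu_r=1.0):
    """delta = sqrt(2 * rho / (omega * mu)) for a conductor in an AC magnetic field."""
    omega = 2 * math.pi * freq_hz
    return math.sqrt(2 * resistivity_ohm_m / (omega * mu_r * MU0))

# Illustrative: titanium-like resistivity at a 1 MHz induction frequency
delta = skin_depth(4.2e-7, 1e6)
print(f"skin depth ≈ {delta * 1e6:.0f} µm")
```

When the particle radius is much smaller than this skin depth, the induced eddy currents (and hence the power transfer) drop off sharply, which is why fine powders need much higher frequencies than bulk induction heating.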

  18. Self-consistent computer model for the solar power satellite-plasma interaction

    SciTech Connect

    Cooke, D.L.

    1981-01-01

    A computer program (PANEL) has been developed to model the solar power satellite (SPS)-plasma interaction by an iterative solution of the coupled Poisson and Vlasov equations. PANEL uses the inside-out method and a finite difference scheme to calculate densities and potentials at selected points on either a two- or three-dimensional grid. The history of the spacecraft charging problem is reviewed, the theory of the plasma screening process is discussed and extended, program theory is developed, and a series of models is presented. These models are primarily two-dimensional (2-D) for two reasons: large 3-D models require too much computing time, and most analytic models suitable for testing PANEL are 1-D, so the 3-D capabilities were not required. These models include PANEL's predictions for two variations on the Child-Langmuir diode problem and two models of the interaction of an infinitely long one-meter-wide solar array with a dense 10 eV plasma. These models are part of an ongoing effort to adapt PANEL to augment the laboratory studies of a 1 x 10 meter solar array in a simulated low Earth orbit plasma. Also included are two 3-D test models. One is a point potential in a hot plasma and is compared to the Debye theory of plasma screening. The other is a flat disc in charge-free space. For the Child-Langmuir diode problem, good agreement is obtained between PANEL results and the classical theory. This is viewed as a confirming test of PANEL. Conversely, in the solar array models, the agreement between the PANEL and Child-Langmuir predictions for the plasma sheath thickness is presented as a numerical confirmation of the use of the Child-Langmuir diode theory to estimate plasma sheath thickness in the spacecraft charging problem.
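    The Child-Langmuir benchmark used to validate PANEL has a closed-form current density for a planar diode, J = (4/9) ε0 sqrt(2e/m) V^(3/2)/d²; a quick numerical check of that textbook law and its scaling:

```python
import math

EPS0 = 8.8541878128e-12     # vacuum permittivity (F/m)
E_CHARGE = 1.602176634e-19  # elementary charge (C)
M_E = 9.1093837015e-31      # electron mass (kg)

def child_langmuir_j(voltage_v, gap_m):
    """Space-charge-limited current density (A/m^2) for a planar vacuum diode."""
    return (4.0 / 9.0) * EPS0 * math.sqrt(2 * E_CHARGE / M_E) \
        * voltage_v ** 1.5 / gap_m ** 2

# Doubling the voltage raises J by 2**1.5; doubling the gap lowers it 4x
j1 = child_langmuir_j(100.0, 0.01)
assert abs(child_langmuir_j(200.0, 0.01) / j1 - 2 ** 1.5) < 1e-9
assert abs(child_langmuir_j(100.0, 0.02) / j1 - 0.25) < 1e-12
print(f"J(100 V, 1 cm gap) ≈ {j1:.2f} A/m^2")
```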

  19. Computational investigations of low-emission burner facilities for char gas burning in a power boiler

    NASA Astrophysics Data System (ADS)

    Roslyakov, P. V.; Morozov, I. V.; Zaychenko, M. N.; Sidorkin, V. T.

    2016-04-01

    Various structural variants of low-emission burner facilities intended for char gas burning in an operating TP-101 boiler of the Estonia power plant are considered. The planned increase in the volume of shale reprocessing and, correspondingly, in char gas volumes makes their co-combustion necessary. Hence, a burner facility of a given capacity had to be developed that provides effective char gas burning while meeting reliability and environmental requirements. For this purpose, the burner design was based on staged fuel combustion with gas recirculation. As a result of a preliminary analysis of possible design variants, three types of previously well-proven burner facilities were chosen: a vortex burner with the supply of recirculation gases into the secondary air, a vortex burner with a baffle supply of recirculation gases between the primary and secondary air flows, and a burner facility with a vortex pilot burner. Optimum structural characteristics and operating parameters were determined using numerical experiments. These experiments, carried out with the ANSYS CFX computational fluid dynamics software, simulated the mixing, ignition, and burning of char gas. The numerical experiments determined, for each type of burner facility, the structural and operating parameters that give effective char gas burning and meet the required environmental standard on nitrogen oxide emissions. A burner facility for char gas burning with a pilot diffusion burner in the central part was developed and built based on the computation results. Preliminary verification field tests on the TP-101 boiler showed that the actual content of nitrogen oxides in the burner flames of char gas did not exceed the declared concentration of 150 ppm (200 mg/m3).

  20. Optimization of Acetylene Black Conductive Additive and Polyvinylidene Difluoride Composition for High Power Rechargeable Lithium-Ion Cells

    SciTech Connect

    Liu, G.; Zheng, H.; Battaglia, V.S.; Simens, A.S.; Minor, A.M.; Song, X.

    2007-07-01

    Fundamental electrochemical methods were applied to study the effect of the acetylene black (AB) and the polyvinylidene difluoride (PVDF) polymer binder on the performance of high-power designed rechargeable lithium-ion cells. A systematic study of the AB/PVDF long-range electronic conductivity at different weight ratios is performed using four-probe direct current tests and the results reported. There is a wide range of AB/PVDF ratios that satisfy the long-range electronic conductivity requirement of the lithium-ion cathode electrode; however, a significant cell power performance improvement is observed at small AB/PVDF composition ratios that are far from the long-range conductivity optimum of 1 to 1.25. Electrochemical impedance spectroscopy (EIS) tests indicate that the interfacial impedance decreases significantly with increase in binder content. The hybrid power pulse characterization results agree with the EIS tests and also show improvement for cells with a high PVDF content. The AB to PVDF composition plays a significant role in the interfacial resistance. We believe the higher binder contents lead to a more cohesive conductive carbon particle network that results in better overall local electronic conductivity on the active material surface and hence reduced charge-transfer impedance.
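    The long-range conductivity from a four-probe DC measurement reduces to a simple geometry correction for a bar-shaped sample; a minimal sketch with illustrative numbers (not the study's data):

```python
def conductivity_from_four_probe(voltage_v, current_a, length_m, area_m2):
    """sigma = L / (R * A), with R = V / I taken from the inner voltage probes."""
    resistance = voltage_v / current_a
    return length_m / (resistance * area_m2)

# 10 mV across inner probes 5 mm apart at 1 mA, on a 1 cm x 50 µm film cross-section
sigma = conductivity_from_four_probe(0.010, 0.001, 0.005, 0.01 * 50e-6)
print(f"{sigma:.1f} S/m")  # 1000.0 S/m
```

Separating the current and voltage contacts this way removes the contact resistance that would corrupt a two-probe measurement of such composite films.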

  1. Digital computer study of nuclear reactor thermal transients during startup of 60-kWe Brayton power conversion system

    NASA Technical Reports Server (NTRS)

    Jefferies, K. S.; Tew, R. C.

    1974-01-01

    A digital computer study was made of reactor thermal transients during startup of the Brayton power conversion loop of a 60-kWe reactor Brayton power system. A startup procedure requiring the least Brayton system complication was tried first; this procedure caused violations of design limits on key reactor variables. Several modifications of this procedure were then found which caused no design limit violations. These modifications involved: (1) using a slower rate of increase in gas flow; (2) increasing the initial reactor power level to make the reactor respond faster; and (3) appropriate reactor control drum manipulation during the startup transient.

  2. User's manual for the Shuttle Electric Power System analysis computer program (SEPS), volume 2 of program documentation

    NASA Technical Reports Server (NTRS)

    Bains, R. W.; Herwig, H. A.; Luedeman, J. K.; Torina, E. M.

    1974-01-01

    The Shuttle Electric Power System Analysis (SEPS) computer program is described. SEPS performs detailed load analysis, including prediction of the energy demands and consumables requirements of the shuttle electric power system, along with parametric and special-case studies on the shuttle electric power system. The functional flow diagram of the SEPS program is presented along with data base requirements and formats, procedure and activity definitions, and mission timeline input formats. Distribution circuit input and fixed data requirements are included. Run procedures and deck setups are described.

  3. Investigation of mass transfer intensification under power ultrasound irradiation using 3D computational simulation: A comparative analysis.

    PubMed

    Sajjadi, Baharak; Asgharzadehahmadi, Seyedali; Asaithambi, Perumal; Raman, Abdul Aziz Abdul; Parthasarathy, Rajarathinam

    2017-01-01

    This paper aims at investigating the influence of acoustic streaming induced by low-frequency (24 kHz) ultrasound irradiation on mass transfer in a two-phase system. The main objective is to discuss the possible mass transfer improvements under ultrasound irradiation. Three analyses were conducted: i) experimental analysis of mass transfer under ultrasound irradiation; ii) comparative analysis of the ultrasound-assisted mass transfer results with those obtained by mechanical stirring; and iii) computational analysis of the systems using 3D CFD simulation. In the experimental part, the interactive effects of liquid rheological properties, ultrasound power, and superficial gas velocity on mass transfer were investigated in two different sonicators. The results were then compared with those of mechanical stirring. In the computational part, the results were illustrated as a function of acoustic streaming behaviour, fluid flow pattern, gas/liquid volume fraction, and turbulence in the two-phase system, and finally the mass transfer coefficient was specified. It was found that the additional turbulence created by ultrasound played the most important role in intensifying the mass transfer phenomena compared to that in the stirred vessel. Furthermore, long residence time, which depends on geometrical parameters, is another key to mass transfer. The results obtained in the present study would help researchers understand the role of ultrasound as an energy source and of acoustic streaming as one of the most important effects of ultrasound waves in intensifying gas-liquid mass transfer in a two-phase system, and can be a breakthrough in the design procedure as no similar studies were found in the existing literature.
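    A volumetric mass transfer coefficient kLa of this kind is typically extracted from a transient gassing-in curve, C(t) = C*(1 − exp(−kLa·t)), by log-linearization; a minimal sketch with synthetic data (illustrative values, not the study's measurements):

```python
import math

def fit_kla(times, conc, c_star):
    """Least-squares slope of -ln(1 - C/C*) versus t gives kLa."""
    y = [-math.log(1.0 - c / c_star) for c in conc]
    n = len(times)
    mx, my = sum(times) / n, sum(y) / n
    return sum((a - mx) * (b - my) for a, b in zip(times, y)) / \
        sum((a - mx) ** 2 for a in times)

# Synthetic oxygen uptake curve with kLa = 0.02 1/s and saturation C* = 8 mg/L
t = [10, 20, 30, 60, 120]
c = [8.0 * (1 - math.exp(-0.02 * ti)) for ti in t]
print(round(fit_kla(t, c, 8.0), 4))  # 0.02
```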

  4. Effects of TiO2 and Co2O3 combination additions on the elemental distribution and electromagnetic properties of Mn-Zn power ferrites

    NASA Astrophysics Data System (ADS)

    Yang, W. D.; Wang, Y. G.

    2015-06-01

    The effects of TiO2 and Co2O3 combination additions on the elemental distribution and electromagnetic properties of Mn-Zn power ferrites are investigated. TiO2 addition can promote Co2O3 transfer from the grain boundaries to the bulk of the grains. The temperature at which the highest initial permeability μi and the lowest power losses PL appear shifts toward the low-temperature range as the Co2O3 content increases. Compared with the reference sample without TiO2 and Co2O3 additions, the microstructure and electromagnetic properties of Mn-Zn power ferrites can be considerably improved with suitable amounts of TiO2 and Co2O3 combination additions. At the peak temperature, the sample with 0.1 wt% TiO2 and 0.08 wt% Co2O3 additions shows an increase of 15.8% in μi, to 3951, and a decrease of 22.9% in PL, to 286 kW/m3. The saturation magnetic induction Bs and electrical resistivity ρ at 25 °C reach the highest values of 532 mT and 8.12 Ω m, respectively.

  5. Technical basis for environmental qualification of computer-based safety systems in nuclear power plants

    SciTech Connect

    Korsah, K.; Wood, R.T.; Tanaka, T.J.; Antonescu, C.E.

    1997-10-01

    This paper summarizes the results of research sponsored by the US Nuclear Regulatory Commission (NRC) to provide the technical basis for environmental qualification of computer-based safety equipment in nuclear power plants. This research was conducted by the Oak Ridge National Laboratory (ORNL) and Sandia National Laboratories (SNL). ORNL investigated potential failure modes and vulnerabilities of microprocessor-based technologies to environmental stressors, including electromagnetic/radio-frequency interference, temperature, humidity, and smoke exposure. An experimental digital safety channel (EDSC) was constructed for the tests. SNL performed smoke exposure tests on digital components and circuit boards to determine failure mechanisms and the effect of different packaging techniques on smoke susceptibility. These studies are expected to provide recommendations for environmental qualification of digital safety systems by addressing the following: (1) adequacy of the present preferred test methods for qualification of digital I and C systems; (2) preferred standards; (3) recommended stressors to be included in the qualification process during type testing; (4) resolution of need for accelerated aging in qualification testing for equipment that is to be located in mild environments; and (5) determination of an appropriate approach to address smoke in a qualification program.

  6. Requirements for Large Eddy Simulation Computations of Variable-Speed Power Turbine Flows

    NASA Technical Reports Server (NTRS)

    Ameri, Ali A.

    2016-01-01

    Variable-speed power turbines (VSPTs) operate at low Reynolds numbers and with a wide range of incidence angles. Transition, separation, and the relevant physics leading to them are important to VSPT flow. Higher fidelity tools such as large eddy simulation (LES) may be needed to resolve the flow features necessary for accurate predictive capability and design of such turbines. A survey conducted for this report explores the requirements for such computations. The survey is limited to the simulation of two-dimensional flow cases, and endwalls are not included. It suggests that the grid resolution necessary for this type of simulation to accurately represent the physics may be of the order of Delta(x)+ = 45, Delta(y)+ = 2, and Delta(z)+ = 17. Various subgrid-scale (SGS) models have been used, and except for the Smagorinsky model, all seem to perform well; in some instances the simulations worked well without SGS modeling. A method of specifying the inlet conditions, such as synthetic eddy modeling (SEM), is necessary to correctly represent the inlet conditions.
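    Wall-unit targets like these translate into physical grid spacings through the friction velocity, Δ = Δ+·ν/u_τ; a quick conversion sketch (the flow values are illustrative assumptions, not from the survey):

```python
def spacing_from_plus_units(delta_plus, nu, u_tau):
    """Physical grid spacing (m) from a target spacing in wall units."""
    return delta_plus * nu / u_tau

# Air-like kinematic viscosity and an assumed friction velocity of 1 m/s
nu, u_tau = 1.5e-5, 1.0
for name, dp in (("dx+", 45.0), ("dy+", 2.0), ("dz+", 17.0)):
    d = spacing_from_plus_units(dp, nu, u_tau)
    print(f"{name} = {dp}: {d * 1e6:.0f} µm")
```

The tight wall-normal target (Δy+ of order 1-2) is what drives LES cell counts far above those of RANS on the same geometry.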

  7. An application of computational fluid dynamics to the design of optimum ramjet powered missile components

    NASA Astrophysics Data System (ADS)

    Vanwie, David Michael

    In the design of a ramjet powered missile, the engineer is challenged with the development of engine components with optimum performance. The present research was aimed at the development of design tools which automatically generate aerodynamic geometries which are, in some sense, optimum. To accomplish the goal, computational fluid dynamic (CFD) tools were used to evaluate designs within a numerical optimization procedure. Starting from an initial design, the component geometry was modified continuously until a converged optimum was obtained. Numerous examples of the design procedure were completed, and, where possible, the results were validated against analysis results. The optimization procedure was shown to be practical in the design of minimum wave drag forebodies, maximum total pressure recovery inlets, and maximum thrust nozzles. In all example cases, the component performance was determined using MacCormack's explicit, finite-difference, marching technique to calculate the supersonic, inviscid flow-field around the component geometry. In the course of this research project, various nonlinear optimization procedures were investigated including the simplex method, the method of steepest descent, and the quasi-Newton method with BFGS updates. Questions concerning the figure of merit and off-design performance were also addressed.
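    The nested loop described, evaluate a design, update the geometry parameters, repeat until convergence, is the generic structure of such optimizers; a toy steepest-descent sketch with finite-difference gradients (the quadratic objective is a stand-in for a CFD evaluation, not the paper's method):

```python
def steepest_descent(objective, x0, step=0.1, h=1e-6, iters=200):
    """Minimize objective(x) over a parameter list via finite-difference gradients."""
    x = list(x0)
    for _ in range(iters):
        f0 = objective(x)
        grad = []
        for i in range(len(x)):  # one extra evaluation per design variable
            xp = list(x)
            xp[i] += h
            grad.append((objective(xp) - f0) / h)
        x = [xi - step * gi for xi, gi in zip(x, grad)]
    return x

# Stand-in "wave drag" as a function of two shape parameters; minimum at (1, 2)
drag = lambda p: (p[0] - 1.0) ** 2 + (p[1] - 2.0) ** 2
best = steepest_descent(drag, [0.0, 0.0])
print([round(v, 3) for v in best])  # converges to ≈ [1.0, 2.0]
```

In the actual design problem each `objective` call is a full marching flow-field solution, which is why the choice of optimizer (simplex, steepest descent, quasi-Newton) matters so much for cost.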

  8. Computer Aided Design of Ka-Band Waveguide Power Combining Architectures for Interplanetary Spacecraft

    NASA Technical Reports Server (NTRS)

    Vaden, Karl R.

    2006-01-01

    Communication systems for future NASA interplanetary spacecraft require transmitter power ranging from several hundred watts to kilowatts. Several hybrid junctions are considered as elements within a corporate combining architecture for high-power Ka-band space traveling-wave tube amplifiers (TWTAs). This report presents the simulated transmission characteristics of several hybrid junctions designed for a low-loss, high-power waveguide-based power combiner.
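    The combining efficiency of an N-way combiner degrades with amplitude and phase imbalance among the amplifier inputs; a sketch of the standard coherent-combining formula (a textbook relation, not the report's simulation model):

```python
import cmath

def combining_efficiency(amplitudes, phases_rad):
    """|sum a_i e^{j phi_i}|^2 / (N * sum a_i^2): equals 1.0 for matched inputs."""
    total = sum(a * cmath.exp(1j * p) for a, p in zip(amplitudes, phases_rad))
    n = len(amplitudes)
    return abs(total) ** 2 / (n * sum(a * a for a in amplitudes))

print(combining_efficiency([1, 1, 1, 1], [0, 0, 0, 0]))  # 1.0 (perfect match)
print(round(combining_efficiency([1, 1], [0, 0.5]), 4))  # loss from 0.5 rad error
```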

  9. IMES-Ural: the system of the computer programs for operational analysis of power flow distribution using telemetric data

    SciTech Connect

    Bogdanov, V.A.; Bol'shchikov, A.A.; Zifferman, E.O.

    1981-02-01

    A system of computer programs was described which enabled the user to perform real-time calculation and analysis of the current flow in the 500 kV network of the Ural Regional Electric Power Plant for all possible variations of the network, based on teleinformation and correctable equivalent parameters of the 220 to 110 kV network.

  10. Program manual for the Shuttle Electric Power System analysis computer program (SEPS), volume 1 of program documentation

    NASA Technical Reports Server (NTRS)

    Bains, R. W.; Herwig, H. A.; Luedeman, J. K.; Torina, E. M.

    1974-01-01

    The Shuttle Electric Power System (SEPS) computer program is considered in terms of the program manual, programmer guide, and program utilization. The main objective is to provide the information necessary to interpret and use the routines comprising the SEPS program. Subroutine descriptions including the name, purpose, method, variable definitions, and logic flow are presented.

  11. Agile Development of Various Computational Power Adaptive Web-Based Mobile-Learning Software Using Mobile Cloud Computing

    ERIC Educational Resources Information Center

    Zadahmad, Manouchehr; Yousefzadehfard, Parisa

    2016-01-01

    Mobile Cloud Computing (MCC) aims to improve all mobile applications such as m-learning systems. This study presents an innovative method to use web technology and software engineering's best practices to provide m-learning functionalities hosted in a MCC-learning system as service. Components hosted by MCC are used to empower developers to create…

  12. CONC/11: a computer program for calculating the performance of dish-type solar thermal collectors and power systems

    SciTech Connect

    Jaffe, L. D.

    1984-02-15

    CONC/11 is a computer program designed for calculating the performance of dish-type solar thermal collectors and power systems. It is intended to aid the system or collector designer in evaluating the performance to be expected with possible design alternatives. From design or test data on the characteristics of the various subsystems, CONC/11 calculates the efficiencies of the collector and the overall power system as functions of the receiver temperature for a specified insolation. If desired, CONC/11 will also determine the receiver aperture and the receiver temperature that will provide the highest efficiencies at a given insolation. The program handles both simple and compound concentrators. CONC/11 is written in Athena Extended Fortran (similar to Fortran 77) to operate primarily in an interactive mode on a Sperry 1100/81 computer. It could also be used on many small computers.

  13. CONC/11: A computer program for calculating the performance of dish-type solar thermal collectors and power systems

    NASA Technical Reports Server (NTRS)

    Jaffe, L. D.

    1984-01-01

    The CONC/11 computer program, designed for calculating the performance of dish-type solar thermal collectors and power systems, is discussed. This program is intended to aid the system or collector designer in evaluating the performance to be expected with possible design alternatives. From design or test data on the characteristics of the various subsystems, CONC/11 calculates the efficiencies of the collector and the overall power system as functions of the receiver temperature for a specified insolation. If desired, CONC/11 will also determine the receiver aperture and the receiver temperature that will provide the highest efficiencies at a given insolation. The program handles both simple and compound concentrators. CONC/11 is written in Athena Extended FORTRAN (similar to FORTRAN 77) to operate primarily in an interactive mode on a Sperry 1100/81 computer. It could also be used on many small computers. A user's manual is also provided for this program.

  14. An Experimental and Computational Approach to Defining Structure/Reactivity Relationships for Intramolecular Addition Reactions to Bicyclic Epoxonium Ions

    PubMed Central

    Wan, Shuangyi; Gunaydin, Hakan; Houk, K. N.; Floreancig, Paul E.

    2008-01-01

    In this manuscript we report that oxidative cleavage reactions can be used to form oxocarbenium ions that react with pendent epoxides to form bicyclic epoxonium ions as an entry to the formation of cyclic oligoether compounds. Bicyclic epoxonium ion structure was shown to have a dramatic impact on the ratio of exo- to endo-cyclization reactions, with bicyclo[4.1.0] intermediates showing a strong preference for endo-closures and bicyclo[3.1.0] intermediates showing a preference for exo-closures. Computational studies on the structures and energetics of the transition states using the B3LYP/6-31G(d) method provide substantial insight into the origins of this selectivity. PMID:17547399

  15. Accelerating an Ordered-Subset Low-Dose X-Ray Cone Beam Computed Tomography Image Reconstruction with a Power Factor and Total Variation Minimization

    PubMed Central

    Huang, Hsuan-Ming; Hsiao, Ing-Tsung

    2016-01-01

    In recent years, there has been increased interest in low-dose X-ray cone beam computed tomography (CBCT) in many fields, including dentistry, guided radiotherapy and small animal imaging. Despite reducing the radiation dose, low-dose CBCT has not gained widespread acceptance in routine clinical practice. In addition to performing more evaluation studies, developing a fast and high-quality reconstruction algorithm is required. In this work, we propose an iterative reconstruction method that accelerates ordered-subsets (OS) reconstruction using a power factor. Furthermore, we combine it with the total-variation (TV) minimization method. Both simulation and phantom studies were conducted to evaluate the performance of the proposed method. Results show that the proposed method can accelerate conventional OS methods, greatly increasing the convergence speed in early iterations. Moreover, applying the TV minimization to the power acceleration scheme can further improve the image quality while preserving the fast convergence rate. PMID:27073853
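The TV-minimization ingredient can be sketched independently of the OS and power-factor machinery. Below is a minimal gradient-descent TV denoising loop on a synthetic 2D phantom; the phantom, step size, and regularization weight are illustrative assumptions, not the authors' reconstruction code.

```python
import numpy as np

# Minimal total-variation (TV) minimization sketch, reduced to gradient-
# descent denoising of a 2D phantom (illustrative parameters; not the
# authors' OS + power-factor reconstruction code).
rng = np.random.default_rng(1)
truth = np.zeros((32, 32))
truth[8:24, 8:24] = 1.0                        # piecewise-constant phantom
noisy = truth + 0.2 * rng.standard_normal(truth.shape)

def tv_grad(x, eps=1e-6):
    """Gradient of a smoothed isotropic TV norm."""
    dxp = np.roll(x, -1, 0) - x                # forward differences
    dyp = np.roll(x, -1, 1) - x
    mag = np.sqrt(dxp ** 2 + dyp ** 2 + eps)
    px, py = dxp / mag, dyp / mag
    # minus the divergence of the normalized gradient field
    return (np.roll(px, 1, 0) - px) + (np.roll(py, 1, 1) - py)

x = noisy.copy()
for _ in range(100):
    # data-fidelity pull toward the measurement plus a TV smoothing step
    x -= 0.1 * ((x - noisy) + 0.3 * tv_grad(x))

rmse_noisy = float(np.sqrt(np.mean((noisy - truth) ** 2)))
rmse_tv = float(np.sqrt(np.mean((x - truth) ** 2)))
```

TV regularization suppresses noise in the flat regions while largely preserving the phantom's edges, which is why it pairs well with accelerated iterative schemes.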

  16. [Novel method of noise power spectrum measurement for computed tomography images with adaptive iterative reconstruction method].

    PubMed

    Nishimaru, Eiji; Ichikawa, Katsuhiro; Hara, Takanori; Terakawa, Shoichi; Yokomachi, Kazushi; Fujioka, Chikako; Kiguchi, Masao; Ishifuro, Minoru

    2012-01-01

    Adaptive iterative reconstruction techniques (IRs) can decrease image noise in computed tomography (CT) and are expected to contribute to reduction of the radiation dose. To evaluate the performance of IRs, the conventional two-dimensional (2D) noise power spectrum (NPS) is widely used. However, when an IR produces an NPS value drop at all spatial frequencies (similar to the NPS change produced by a dose increase), the conventional method cannot evaluate the noise property correctly, because it does not account for the volumetric nature of CT image data. The purpose of our study was to develop a new method for NPS measurement that can be adapted to IRs. Our method utilizes thick multi-planar reconstruction (MPR) images. Thick images are generally made by averaging CT volume data in the direction perpendicular to the MPR plane (e.g., the z-direction for an axial MPR plane). By using this averaging technique as a cutter for the 3D NPS, we can obtain an adequate 2D-extracted NPS (eNPS) from the 3D NPS. We applied this method to IR images generated with adaptive iterative dose reduction 3D (AIDR-3D, Toshiba) to investigate its validity. A water phantom 24 cm in diameter was scanned at 120 kV and 200 mAs with a 320-row CT (Aquilion One, Toshiba). From the results of the study, the adequate thickness of MPR images for eNPS was more than 25.0 mm. Our new NPS measurement method utilizing thick MPR images was accurate and effective for evaluating the noise reduction effects of IRs.
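The conventional 2D NPS that the authors start from is commonly estimated by averaging the squared FFTs of mean-subtracted noise-only ROIs. A generic sketch with synthetic white noise and an assumed pixel pitch (not the authors' thick-MPR implementation):

```python
import numpy as np

# Generic 2D noise-power-spectrum estimate from noise-only ROIs (synthetic
# white noise and an assumed 0.5 mm pixel pitch; not the authors' thick-MPR
# method): NPS(u,v) = (dx*dy/(Nx*Ny)) * mean |DFT(ROI - mean)|^2.
rng = np.random.default_rng(0)
dx = dy = 0.5                                   # pixel pitch (mm), assumed
N = 64                                          # ROI size (pixels)
rois = rng.normal(0.0, 10.0, size=(100, N, N))  # 100 synthetic noise ROIs

def nps_2d(rois, dx, dy):
    n_roi, ny, nx = rois.shape
    acc = np.zeros((ny, nx))
    for roi in rois:
        detrended = roi - roi.mean()            # remove the DC/mean term
        acc += np.abs(np.fft.fft2(detrended)) ** 2
    return (dx * dy) / (nx * ny) * acc / n_roi

nps = nps_2d(rois, dx, dy)
```

A standard sanity check: integrating the NPS over frequency (bin widths 1/(N·dx) and 1/(N·dy)) recovers the pixel variance.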

  17. Information Power Grid: Distributed High-Performance Computing and Large-Scale Data Management for Science and Engineering

    NASA Technical Reports Server (NTRS)

    Johnston, William E.; Gannon, Dennis; Nitzberg, Bill

    2000-01-01

    We use the term "Grid" to refer to distributed, high performance computing and data handling infrastructure that incorporates geographically and organizationally dispersed, heterogeneous resources that are persistent and supported. This infrastructure includes: (1) Tools for constructing collaborative, application oriented Problem Solving Environments / Frameworks (the primary user interfaces for Grids); (2) Programming environments, tools, and services providing various approaches for building applications that use aggregated computing and storage resources, and federated data sources; (3) A comprehensive and consistent set of location independent tools and services for accessing and managing dynamic collections of widely distributed resources: heterogeneous computing systems, storage systems, real-time data sources and instruments, human collaborators, and communications systems; (4) Operational infrastructure including management tools for distributed systems and distributed resources, user services, accounting and auditing, strong and location independent user authentication and authorization, and overall system security services. The vision for NASA's Information Power Grid - a computing and data Grid - is that it will provide significant new capabilities to scientists and engineers by facilitating routine construction of information based problem solving environments / frameworks. Such Grids will knit together widely distributed computing, data, instrument, and human resources into just-in-time systems that can address complex and large-scale computing and data analysis problems. Examples of these problems include: (1) Coupled, multidisciplinary simulations too large for single systems (e.g., multi-component NPSS turbomachine simulation); (2) Use of widely distributed, federated data archives (e.g., simultaneous access to meteorological, topological, aircraft performance, and flight path scheduling databases supporting a National Air Space Simulation system); (3) …

  18. XOQDOQ: computer program for the meteorological evaluation of routine effluent releases at nuclear power stations. Final report

    SciTech Connect

    Sagendorf, J.F.; Goll, J.T.; Sandusky, W.F.

    1982-09-01

    Provided is a user's guide for the US Nuclear Regulatory Commission's (NRC) computer program XOQDOQ, which implements Regulatory Guide 1.111. This NUREG supersedes NUREG-0324, which was published as a draft in September 1977. The program is used by the NRC meteorology staff in their independent meteorological evaluation of routine or anticipated intermittent releases at nuclear power stations. It operates in a batch input mode and has various options a user may select. Relative atmospheric dispersion and deposition factors are computed for 22 specific distances out to 50 miles from the site for each directional sector. From these results, values for 10 distance segments are computed. The user may also select other locations for which atmospheric dispersion and deposition factors are computed. Program features, including required input data and output results, are described. A program listing and test case data input and resulting output are provided.
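Programs of this kind are built on Gaussian-plume relative-concentration (χ/Q) estimates. As a rough illustration only, the sketch below computes a straight-line, ground-level centerline χ/Q using the Briggs rural class-D dispersion fits; this is an assumption for illustration, not XOQDOQ's sector-averaged long-term algorithm.

```python
import math

# Rough Gaussian-plume illustration of the relative dispersion factor chi/Q
# (ground-level release, plume centerline). NOT XOQDOQ's sector-averaged
# long-term algorithm; the sigma formulas are the Briggs rural
# (open-country) class-D fits, assumed here for illustration.
def sigma_y(x):
    """Horizontal dispersion coefficient (m) at downwind distance x (m)."""
    return 0.08 * x / math.sqrt(1.0 + 0.0001 * x)

def sigma_z(x):
    """Vertical dispersion coefficient (m) at downwind distance x (m)."""
    return 0.06 * x / math.sqrt(1.0 + 0.0015 * x)

def chi_over_q(x, u=5.0):
    """Centerline ground-level chi/Q (s/m^3) for wind speed u (m/s)."""
    return 1.0 / (math.pi * sigma_y(x) * sigma_z(x) * u)

# Relative concentration falls off monotonically with distance:
values = [chi_over_q(x) for x in (500.0, 5000.0, 50000.0)]
```

A long-term, sector-averaged model such as Regulatory Guide 1.111's additionally averages over wind-direction sectors and weights by the joint frequency distribution of stability class and wind speed.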

  19. Reactivity effects in VVER-1000 of the third unit of the Kalinin Nuclear Power Plant at physical start-up. Computations in ShIPR intellectual code system with library of two-group cross sections generated by UNK code

    SciTech Connect

    Zizin, M. N.; Zimin, V. G.; Zizina, S. N. Kryakvin, L. V.; Pitilimov, V. A.; Tereshonok, V. A.

    2010-12-15

    The ShIPR intellectual code system for mathematical simulation of nuclear reactors includes a set of computing modules implementing the preparation of macro cross sections on the basis of the two-group library of neutron-physics cross sections obtained for the SKETCH-N nodal code. This library is created by using the UNK code for 3D diffusion computation of first VVER-1000 fuel loadings. Computation of neutron fields in the ShIPR system is performed using the DP3 code in the two-group diffusion approximation in 3D triangular geometry. The efficiency of all groups of control rods for the first fuel loading of the third unit of the Kalinin Nuclear Power Plant is computed. The temperature, barometric, and density effects of reactivity as well as the reactivity coefficient due to the concentration of boric acid in the reactor were computed additionally. Results of computations are compared with the experiment.

  20. Computer Model of Biopolymer Crystal Growth and Aggregation by Addition of Macromolecular Units — a Comparative Study

    NASA Astrophysics Data System (ADS)

    Siódmiak, J.; Gadomski, A.

    We discuss the results of a computer simulation of biopolymer crystal growth and aggregation based on the 2D lattice Monte Carlo technique and the HP approximation of the biopolymers. As the modeled molecule (growth unit) we comparatively consider the previously studied non-mutant lysozyme protein, Protein Data Bank (PDB) ID: 193L, which forms, under a certain set of thermodynamic-kinetic conditions, tetragonal crystals, and an amyloidogenic variant of the lysozyme, PDB ID: 1LYY, which is known as a fibril-yielding and aggregation-prone agent. In our model, site-dependent attachment, detachment and migration processes are involved. The probabilities of growth-unit motion, attachment and detachment to/from the crystal surface are assumed to be proportional to an orientational factor representing the anisotropy of the molecule. Working within a two-dimensional representation of the truly three-dimensional process, we also argue that the crystal grows in a spiral way, whereby one or more screw dislocations on the crystal surface give rise to a terrace. We interpret the obtained results in terms of known models of crystal growth and aggregation such as B-C-F (Burton-Cabrera-Frank) dislocation-driven growth and the M-S (Mullins-Sekerka) instability concept, with stochastic aspects supplementing the latter. We discuss the conditions under which crystals vs. non-crystalline protein aggregates appear, and how the process depends on differences in the chemical structure of the protein molecule, seen as the main building block of the elementary crystal cell.

  1. First-order electroweak phase transition powered by additional F-term loop effects in an extended supersymmetric Higgs sector

    NASA Astrophysics Data System (ADS)

    Kanemura, Shinya; Senaha, Eibun; Shindou, Tetsuo

    2011-11-01

    We investigate the one-loop effect of new charged scalar bosons on the Higgs potential at finite temperatures in the supersymmetric standard model with four Higgs doublet chiral superfields as well as a pair of charged singlet chiral superfields. In this model, the mass of the lightest Higgs boson h is determined only by the D-term in the Higgs potential at tree level, while the triple Higgs boson coupling hhh can receive a significant radiative correction from nondecoupling one-loop contributions of the additional charged scalar bosons. We find that the same nondecoupling mechanism can also contribute to realizing a stronger first-order electroweak phase transition than in the minimal supersymmetric standard model, which is definitely required for a successful scenario of electroweak baryogenesis. This model can therefore be a new candidate for a model in which the baryon asymmetry of the Universe is explained at the electroweak scale.

  2. Computer Calculations of Eddy-Current Power Loss in Rotating Titanium Wheels and Rims in Localized Axial Magnetic Fields

    SciTech Connect

    Mayhall, D J; Stein, W; Gronberg, J B

    2006-05-15

    We have performed preliminary computer-based, transient, magnetostatic calculations of the eddy-current power loss in rotating titanium-alloy and aluminum wheels and wheel rims in the predominantly axially-directed, steady magnetic fields of two small, solenoidal coils. These calculations have been undertaken to assess the eddy-current power loss in various possible International Linear Collider (ILC) positron target wheels. They have also been done to validate the simulation code module against known results published in the literature. The commercially available software package used in these calculations is the Maxwell 3D, Version 10, Transient Module from the Ansoft Corporation.

  3. Assessment of the Annual Additional Effective Doses amongst Minamisoma Children during the Second Year after the Fukushima Daiichi Nuclear Power Plant Disaster.

    PubMed

    Tsubokura, Masaharu; Kato, Shigeaki; Morita, Tomohiro; Nomura, Shuhei; Kami, Masahiro; Sakaihara, Kikugoro; Hanai, Tatsuo; Oikawa, Tomoyoshi; Kanazawa, Yukio

    2015-01-01

    An assessment of external and internal radiation exposure levels, which includes calculation of effective doses from chronic radiation exposure and assessment of long-term radiation-related health risks, has become mandatory for residents living near the nuclear power plant in Fukushima, Japan. Data for all primary and secondary school children in Minamisoma who participated in both external and internal screening programs were used to assess the annual additional effective dose acquired due to the Fukushima Daiichi nuclear power plant disaster. In total, 881 children took part in both internal and external radiation exposure screening programs between 1 April 2012 and 31 March 2013. The additional effective doses ranged from 0.025 to 3.49 mSv/year, with a median of 0.70 mSv/year. While no internal contamination was detected in 99.7% of the children (n = 878), 90.3% of the additional effective dose was the result of external radiation exposure. This finding is relatively consistent with the doses estimated by the United Nations Scientific Committee on the Effects of Atomic Radiation (UNSCEAR). The present study showed that the level of annual additional effective doses among children in Minamisoma has been low, even after inter-individual differences were taken into account. The dose from internal radiation exposure was negligible, presumably due to the success of contaminated food controls.

  4. Assessment of the Annual Additional Effective Doses amongst Minamisoma Children during the Second Year after the Fukushima Daiichi Nuclear Power Plant Disaster

    PubMed Central

    Tsubokura, Masaharu; Kato, Shigeaki; Morita, Tomohiro; Nomura, Shuhei; Kami, Masahiro; Sakaihara, Kikugoro; Hanai, Tatsuo; Oikawa, Tomoyoshi; Kanazawa, Yukio

    2015-01-01

    An assessment of external and internal radiation exposure levels, which includes calculation of effective doses from chronic radiation exposure and assessment of long-term radiation-related health risks, has become mandatory for residents living near the nuclear power plant in Fukushima, Japan. Data for all primary and secondary school children in Minamisoma who participated in both external and internal screening programs were used to assess the annual additional effective dose acquired due to the Fukushima Daiichi nuclear power plant disaster. In total, 881 children took part in both internal and external radiation exposure screening programs between 1 April 2012 and 31 March 2013. The additional effective doses ranged from 0.025 to 3.49 mSv/year, with a median of 0.70 mSv/year. While no internal contamination was detected in 99.7% of the children (n = 878), 90.3% of the additional effective dose was the result of external radiation exposure. This finding is relatively consistent with the doses estimated by the United Nations Scientific Committee on the Effects of Atomic Radiation (UNSCEAR). The present study showed that the level of annual additional effective doses among children in Minamisoma has been low, even after inter-individual differences were taken into account. The dose from internal radiation exposure was negligible, presumably due to the success of contaminated food controls. PMID:26053271

  5. Multiple-Swarm Ensembles: Improving the Predictive Power and Robustness of Predictive Models and Its Use in Computational Biology.

    PubMed

    Alves, Pedro; Liu, Shuang; Wang, Daifeng; Gerstein, Mark

    2017-04-05

    Machine learning is an integral part of computational biology, and has already shown its use in various applications, such as prognostic tests. In the last few years in the non-biological machine learning community, ensembling techniques have shown their power in data mining competitions such as the Netflix challenge; however, such methods have not found wide use in computational biology. In this work we endeavor to show how ensembling techniques can be applied to practical problems, including problems in the field of bioinformatics, and how they often outperform other machine learning techniques in both predictive power and robustness. Furthermore, we develop an ensembling methodology, the Multi-Swarm Ensemble (MSWE), based on multiple particle swarm optimizations, and demonstrate its ability to further enhance the performance of ensembles.
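The benefit of ensembling that the abstract appeals to can be seen in a toy majority-vote experiment (purely illustrative; not the paper's MSWE method):

```python
import random

# Toy illustration of why ensembles help (not the paper's MSWE method):
# three independent classifiers, each correct with probability p = 0.7,
# are combined by majority vote; the vote is correct with probability
# 3*p^2*(1-p) + p^3, about 0.784 > 0.7.
random.seed(42)
n, p = 20000, 0.7

def weak_prediction(truth):
    """Return the true label with probability p, the wrong one otherwise."""
    return truth if random.random() < p else 1 - truth

correct_single = 0
correct_vote = 0
for _ in range(n):
    truth = random.randint(0, 1)
    preds = [weak_prediction(truth) for _ in range(3)]
    vote = 1 if sum(preds) >= 2 else 0
    correct_single += (preds[0] == truth)
    correct_vote += (vote == truth)

acc_single = correct_single / n
acc_vote = correct_vote / n
```

The gain depends on the members' errors being (nearly) independent, which is exactly what diverse training procedures such as multiple swarm optimizations aim to provide.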

  6. Impact of high microwave power on hydrogen impurity trapping in nanocrystalline diamond films grown with simultaneous nitrogen and oxygen addition into methane/hydrogen plasma

    NASA Astrophysics Data System (ADS)

    Tang, C. J.; Fernandes, A. J. S.; Jiang, X. F.; Pinto, J. L.; Ye, H.

    2016-01-01

    In this work, we study for the first time the influence of microwave power above 2.0 kW on the form and content of bonded hydrogen impurities incorporated in nanocrystalline diamond (NCD) films grown in a 5 kW MPCVD reactor. NCD samples of different thicknesses, ranging from 25 to 205 μm, were obtained by adding small amounts of nitrogen and oxygen simultaneously to a conventional reactant mixture of about 4% methane in hydrogen, keeping the other operating parameters in the range typically used for the growth of large-grained polycrystalline diamond films. Specific hydrogen point defects in the NCD films are analyzed using Fourier-transform infrared (FTIR) spectroscopy. With the other operating parameters (mainly the input gases) kept constant, increasing the microwave power from 2.0 to 3.2 kW (the pressure was increased slightly to stabilize a plasma ball of the same size), which simultaneously raised the substrate temperature by more than 100 °C, increased the growth rate of the NCD films by one order of magnitude, from 0.3 to 3.0 μm/h, while the content of hydrogen impurities trapped in the films during growth decreased with power. A new H-related infrared absorption peak was also found to appear at 2834 cm-1 in NCD films grown with small amounts of nitrogen and oxygen at powers above 2.0 kW, and to grow stronger at powers above 3.0 kW. Based on these new experimental results, the role of high microwave power in diamond growth and hydrogen impurity incorporation is discussed in terms of the standard growth mechanism of CVD diamond from CH4/H2 gas mixtures. Our findings shed light on the incorporation mechanism of hydrogen impurities in NCD films grown with a small amount of nitrogen and oxygen added to a methane/hydrogen plasma.

  7. A computer program for estimating the power-density spectrum of advanced continuous simulation language generated time histories

    NASA Technical Reports Server (NTRS)

    Dunn, H. J.

    1981-01-01

    A computer program for performing frequency analysis of time history data is presented. The program uses circular convolution and the fast Fourier transform to calculate the power density spectrum (PDS) of time history data. The program interfaces with the Advanced Continuous Simulation Language (ACSL) so that a frequency analysis may be performed on ACSL-generated simulation variables. An example of the calculation of the PDS of a van der Pol oscillator is presented.
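The core computation can be sketched for the van der Pol example as a periodogram-style PDS estimate via the FFT. This is a simplified stand-in, not the ACSL-interfaced program; forward-Euler integration, the Hann window, and all parameter values are assumptions.

```python
import numpy as np

# Simplified periodogram-style PDS sketch (a stand-in, not the
# ACSL-interfaced program): integrate a van der Pol oscillator with
# forward Euler, then estimate its power density spectrum with the FFT.
mu, dt, n = 1.0, 0.01, 8192
x, v = 2.0, 0.0
xs = np.empty(n)
for i in range(n):
    xs[i] = x
    a = mu * (1.0 - x * x) * v - x        # x'' = mu*(1 - x^2)*x' - x
    x, v = x + dt * v, v + dt * a

sig = xs - xs.mean()                      # remove the DC component
win = np.hanning(n)
spec = np.fft.rfft(sig * win)
pds = (np.abs(spec) ** 2) / (win ** 2).sum()   # one-sided power estimate
freqs = np.fft.rfftfreq(n, d=dt)
f_peak = freqs[np.argmax(pds)]            # dominant limit-cycle frequency
```

For mu = 1 the limit-cycle fundamental sits near 0.15 Hz, and the PDS peak picks it out directly.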

  8. Neuro-Fuzzy Computational Technique to Control Load Frequency in Hydro-Thermal Interconnected Power System

    NASA Astrophysics Data System (ADS)

    Prakash, S.; Sinha, S. K.

    2015-09-01

    In this research work, a two-area hydro-thermal power system connected through tie-lines is considered. Perturbations of the area frequencies and of the resulting tie-line power flows arise from unpredictable load variations that cause a mismatch between generated and demanded power. As power demand rises and falls, the real and reactive power balance is disturbed, so frequency and voltage deviate from their nominal values. This necessitates the design of an accurate and fast controller to maintain the system parameters at their nominal values. The main purpose of system generation control is to balance generation against load and losses so that the desired frequency and power interchange between neighboring systems are maintained. Intelligent controllers based on fuzzy logic, artificial neural networks (ANN) and hybrid fuzzy neural network approaches are used for automatic generation control of the two-area interconnected power system. Area 1 consists of a thermal reheat power plant, whereas area 2 consists of a hydro power plant with an electric governor. Performance evaluation is carried out using intelligent (ANFIS, ANN and fuzzy) control and conventional PI and PID control approaches. To enhance controller performance, a sliding surface (i.e., variable structure control) is included. The model of the interconnected power system has been developed with all five types of controllers and simulated using the MATLAB/SIMULINK package. The performance of the intelligent controllers has been compared with that of the conventional PI and PID controllers for the interconnected power system. A comparison of the ANFIS, ANN and fuzzy approaches with PI and PID shows the superiority of the proposed ANFIS over ANN, fuzzy, PI and PID. The hybrid fuzzy neural network controller thus has a better dynamic response, i.e., it is quick in operation, reduces error magnitude and minimizes frequency transients.
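The conventional PID baseline used for comparison can be sketched on a much-reduced single-area model. All constants below (inertia M, damping D, gains, load step) are assumed illustrative values, not the paper's two-area hydro-thermal parameters.

```python
# Minimal single-area load-frequency-control sketch with a discrete PID
# controller (illustrative only: M, D, the gains and the load step are
# assumed values, not the paper's two-area hydro-thermal model).
M, D = 10.0, 1.0            # inertia and load-damping constants (p.u.)
Kp, Ki, Kd = 20.0, 50.0, 1.0
dt, d_pl = 0.01, 0.1        # time step (s) and step load disturbance (p.u.)

df = 0.0                    # frequency deviation, to be driven back to zero
integ, prev_err = 0.0, 0.0
for _ in range(2000):       # simulate 20 s
    err = -df
    integ += err * dt
    deriv = (err - prev_err) / dt
    prev_err = err
    u = Kp * err + Ki * integ + Kd * deriv      # PID control action
    # swing-type dynamics: M * d(df)/dt = u - d_pl - D * df
    df += dt * (u - d_pl - D * df) / M
# the integral term removes the steady-state frequency deviation
```

The integral action is what restores the frequency exactly to nominal after a load step; the intelligent controllers in the paper aim to improve the transient (speed and overshoot) relative to this baseline.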

  9. Education/Technology/Power: Educational Computing as a Social Practice. SUNY Series, Frontiers in Education.

    ERIC Educational Resources Information Center

    Bromley, Hank, Ed.; Apple, Michael W., Ed.

    This book is organized in three parts that address the following broad topics related to educational computing: discursive practices, i.e., who speaks of educational computing and how (chapters 1-4); classroom practices (chapters 5-6); and democratic possibilities, i.e., the constructive potential of the technology (chapters 7-9). Following an…

  10. Modeling molecular computing systems by an artificial chemistry - its expressive power and application.

    PubMed

    Tominaga, Kazuto; Watanabe, Tooru; Kobayashi, Keiji; Nakamura, Masaki; Kishi, Koji; Kazuno, Mitsuyoshi

    2007-01-01

    Artificial chemistries are mainly used to construct virtual systems that are expected to show behavior similar to living systems. In this study, we explore the possibility of applying an artificial chemistry to modeling natural biochemical systems, specifically molecular computing systems, and show that it may be a useful modeling tool for molecular computation. We previously proposed an artificial chemistry based on string pattern matching and recombination. This article first demonstrates that this artificial chemistry is computationally universal even when restricted to rules with one or two reactants. We think this is a desirable property for an artificial chemistry that models molecular computing, because the natural elementary chemical reactions on which molecular computing is based are mostly unimolecular or bimolecular. We then give two illustrative example models for DNA computing in our artificial chemistry: one for the type of computation called the Adleman-Lipton paradigm, and the other for a DNA implementation of a finite automaton. Through the construction of these models we observe properties of the artificial chemistry that are preferred for modeling molecular computing, such as having no spatial structure and being flexible in the choice of levels of abstraction.
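The flavor of such a string-based artificial chemistry, with only one- and two-reactant rules, can be sketched as follows. The rules here are hypothetical toys chosen for illustration, not the authors' pattern-matching system.

```python
import random

# Hypothetical toy artificial chemistry (not the authors' system): a
# multiset of strings evolved by rules with one reactant (split) or two
# reactants (recombination). Both rules conserve the total symbol count.
random.seed(0)

def unimolecular(s):
    """One-reactant rule: split a string at its midpoint."""
    if len(s) >= 2:
        m = len(s) // 2
        return [s[:m], s[m:]]
    return [s]

def bimolecular(a, b):
    """Two-reactant rule: recombine two strings by concatenation."""
    return [a + b]

pool = ["GAT", "TACA", "CG"]            # initial multiset, 9 symbols total
for _ in range(10):
    if len(pool) >= 2 and random.random() < 0.5:
        a, b = random.sample(pool, 2)
        pool.remove(a)
        pool.remove(b)
        pool.extend(bimolecular(a, b))
    else:
        s = random.choice(pool)
        pool.remove(s)
        pool.extend(unimolecular(s))
```

A multiset with stochastic rule application and no spatial structure is exactly the setting the abstract describes as convenient for modeling unimolecular and bimolecular reactions.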

  11. All-optical switching with bacteriorhodopsin protein coated microcavities and its application to low power computing circuits

    NASA Astrophysics Data System (ADS)

    Roy, Sukhdev; Prasad, Mohit; Topolancik, Juraj; Vollmer, Frank

    2010-03-01

    We show all-optical switching of an input infrared laser beam at 1310 nm by controlling photoinduced retinal isomerization to tune the resonances in a silica microsphere coated with three bacteriorhodopsin (BR) protein monolayers. The all-optical tunable resonant coupler re-routes the infrared beam between two tapered fibers in 50 μs using low-power (<200 μW) green (532 nm) and blue (405 nm) pump beams. The basic switching configuration has been used to design all-optical computing circuits, namely, half and full adder/subtractor, de-multiplexer, multiplexer, and an arithmetic unit. The design requires 2n-1 switches to realize n-bit computation. The designs combine the exceptional sensitivities of BR and high-Q microcavities with the versatile tree architecture for realizing low-power circuits and networks (approximately mW power budget). The combined advantages of high Q-factor, tunability, compactness, and low-power control signals, together with the flexibility of cascading switches to form circuits and the reversibility and reconfigurability needed to realize arithmetic and logic functions, make the designs promising for practical applications. The designs are general and can be implemented (i) in both fiber-optic and integrated-optic formats, (ii) with any other coated photosensitive material, or (iii) with any externally controlled microresonator switch.

  12. High SO{sub 2} removal efficiency testing: Results of DBA and sodium formate additive tests at Southwestern Electric Power Company's Pirkey Station

    SciTech Connect

    1996-05-30

    Tests were conducted at Southwestern Electric Power Company's (SWEPCo) Henry W. Pirkey Station wet limestone flue gas desulfurization (FGD) system to evaluate options for achieving high sulfur dioxide removal efficiency. The Pirkey FGD system includes four absorber modules, each with dual slurry recirculation loops and with a perforated plate tray in the upper loop. The options tested involved the use of dibasic acid (DBA) or sodium formate as a performance additive. The effectiveness of other potential options was simulated with the Electric Power Research Institute's (EPRI) FGD PRocess Integration and Simulation Model (FGDPRISM) after it was calibrated to the system. An economic analysis was done to determine the cost effectiveness of the high-efficiency options. Results are summarized below.

  13. Auditory Power-Law Activation Avalanches Exhibit a Fundamental Computational Ground State

    NASA Astrophysics Data System (ADS)

    Stoop, Ruedi; Gomez, Florian

    2016-07-01

    The cochlea provides a biological information-processing paradigm that we are only beginning to understand in its full complexity. Our work reveals an interacting network of strongly nonlinear dynamical nodes, on which even a simple sound input triggers subnetworks of activated elements that follow power-law size statistics ("avalanches"). From dynamical systems theory, power-law size distributions relate to a fundamental ground state of biological information processing. Learning destroys these power laws. These results strongly modify the models of mammalian sound processing and provide a novel methodological perspective for understanding how the brain processes information.
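Power-law avalanche-size statistics of this kind are typically checked by fitting the exponent with a maximum-likelihood estimator. A generic sketch on synthetic sizes (not the cochlear data; the exponent tau = 1.5 and cutoff s_min = 1 are assumptions for illustration):

```python
import math
import random

# Illustrative power-law avalanche-size analysis (synthetic data, not the
# paper's recordings): draw sizes from p(s) ~ s^(-tau) for s >= s_min by
# inverse-transform sampling, then recover tau with the continuous MLE
# tau_hat = 1 + n / sum(ln(s_i / s_min))  (Clauset-Shalizi-Newman form).
random.seed(7)
tau, s_min, n = 1.5, 1.0, 50000
sizes = [s_min * (1.0 - random.random()) ** (-1.0 / (tau - 1.0))
         for _ in range(n)]

tau_hat = 1.0 + n / sum(math.log(s / s_min) for s in sizes)
```

In practice one would also estimate s_min from the data and run a goodness-of-fit test before claiming power-law avalanches.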

  14. Auditory Power-Law Activation Avalanches Exhibit a Fundamental Computational Ground State.

    PubMed

    Stoop, Ruedi; Gomez, Florian

    2016-07-15

    The cochlea provides a biological information-processing paradigm that we are only beginning to understand in its full complexity. Our work reveals an interacting network of strongly nonlinear dynamical nodes, on which even a simple sound input triggers subnetworks of activated elements that follow power-law size statistics ("avalanches"). From dynamical systems theory, power-law size distributions relate to a fundamental ground state of biological information processing. Learning destroys these power laws. These results strongly modify the models of mammalian sound processing and provide a novel methodological perspective for understanding how the brain processes information.

  15. The power of an ontology-driven developmental toxicity database for data mining and computational modeling

    EPA Science Inventory

    Modeling of developmental toxicology presents a significant challenge to computational toxicology due to endpoint complexity and lack of data coverage. These challenges largely account for the relatively few modeling successes using the structure–activity relationship (SAR) parad...

  16. The Department of Defense and the Power of Cloud Computing: Weighing Acceptable Cost Versus Acceptable Risk

    DTIC Science & Technology

    2016-04-01

    DISA is leading the way for the development of a private DOD cloud computing environment in conjunction with the Army. Operational in 2008, DISA...significant opportunities and security challenges when implementing a cloud computing environment. The transformation of DOD information technology...is this shared pool of resources, especially shared resources in a commercial environment, that also creates numerous risks not usually seen in

  17. Information Power Grid: Distributed High-Performance Computing and Large-Scale Data Management for Science and Engineering

    NASA Technical Reports Server (NTRS)

    Johnston, William E.; Gannon, Dennis; Nitzberg, Bill; Feiereisen, William (Technical Monitor)

    2000-01-01

    The term "Grid" refers to distributed, high performance computing and data handling infrastructure that incorporates geographically and organizationally dispersed, heterogeneous resources that are persistent and supported. The vision for NASA's Information Power Grid - a computing and data Grid - is that it will provide significant new capabilities to scientists and engineers by facilitating routine construction of information-based problem solving environments / frameworks that will knit together widely distributed computing, data, instrument, and human resources into just-in-time systems that can address complex and large-scale computing and data analysis problems. IPG development and deployment is addressing requirements obtained by analyzing a number of different application areas, in particular from the NASA Aero-Space Technology Enterprise. This analysis has focused primarily on two types of users: the scientist / design engineer whose primary interest is problem solving (e.g., determining wing aerodynamic characteristics in many different operating environments), and whose primary interface to IPG will be through various sorts of problem solving frameworks. The second type of user is the tool designer: the computational scientist who converts physics and mathematics into code that can simulate the physical world. These are the two primary users of IPG, and they have rather different requirements. This paper describes the current state of IPG (the operational testbed), the set of capabilities being put into place for the operational prototype IPG, as well as some of the longer term R&D tasks.

  18. Computational fluid dynamics study on mixing mode and power consumption in anaerobic mono- and co-digestion.

    PubMed

    Zhang, Yuan; Yu, Guangren; Yu, Liang; Siddhu, Muhammad Abdul Hanan; Gao, Mengjiao; Abdeltawab, Ahmed A; Al-Deyab, Salem S; Chen, Xiaochun

    2016-03-01

    Computational fluid dynamics (CFD) was applied to investigate mixing mode and power consumption in anaerobic mono- and co-digestion. Cattle manure (CM) and corn stover (CS) were used as feedstock, and a stirred tank reactor (STR) was used as the digester. Power numbers obtained by the CFD simulation were compared with those from the experimental correlation. Results showed that the standard k-ε model was more appropriate than other turbulence models. A new index, net power production instead of gas production, was proposed to optimize the feedstock ratio for anaerobic co-digestion. Results showed that flow field and power consumption were significantly changed in co-digestion of CM and CS compared with those in mono-digestion of either CM or CS. For different mixing modes, the optimum feedstock ratio for co-digestion changed with net power production. The best CM/CS ratios for continuous mixing, intermittent mixing I, and intermittent mixing II were 1:1, 1:1, and 1:3, respectively.
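
    The power number compared against experimental correlations above, and the proposed net-power index, reduce to short formulas; the numerical values below are illustrative assumptions, not the paper's data.

    ```python
    def power_number(P, rho, N, D):
        # Dimensionless power number Np = P / (rho * N^3 * D^5), with shaft
        # power P [W], fluid density rho [kg/m^3], impeller speed N [rev/s],
        # and impeller diameter D [m].
        return P / (rho * N**3 * D**5)

    def net_power_production(gas_energy, mixing_energy):
        # The proposed index: energy recovered from biogas minus the energy
        # spent on mixing, instead of raw gas production.
        return gas_energy - mixing_energy

    print(power_number(P=50.0, rho=1000.0, N=2.0, D=0.3))  # ≈ 2.57
    ```

    The net-power index makes intermittent mixing attractive whenever the mixing-energy savings outweigh any loss in gas yield.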

  19. Computational Assessment of the Aerodynamic Performance of a Variable-Speed Power Turbine for Large Civil Tilt-Rotor Application

    NASA Technical Reports Server (NTRS)

    Welch, Gerard E.

    2011-01-01

    The main rotors of the NASA Large Civil Tilt-Rotor notional vehicle operate over a wide speed-range, from 100% at take-off to 54% at cruise. The variable-speed power turbine offers one approach by which to effect this speed variation. Key aero-challenges include high work factors at cruise and wide (40 to 60 deg.) incidence variations in blade and vane rows over the speed range. The turbine design approach must optimize cruise efficiency and minimize off-design penalties at take-off. The accuracy of the off-design incidence loss model is therefore critical to the turbine design. In this effort, 3-D computational analyses are used to assess the variation of turbine efficiency with speed change. The conceptual design of a 4-stage variable-speed power turbine for the Large Civil Tilt-Rotor application is first established at the meanline level. The design of 2-D airfoil sections and resulting 3-D blade and vane rows is documented. Three-dimensional Reynolds Averaged Navier-Stokes computations are used to assess the design and off-design performance of an embedded 1.5-stage portion (Rotor 1, Stator 2, and Rotor 2) of the turbine. The 3-D computational results yield the same efficiency versus speed trends predicted by meanline analyses, supporting the design choice to execute the turbine design at the cruise operating speed.

  20. NASA's Information Power Grid: Large Scale Distributed Computing and Data Management

    NASA Technical Reports Server (NTRS)

    Johnston, William E.; Vaziri, Arsi; Hinke, Tom; Tanner, Leigh Ann; Feiereisen, William J.; Thigpen, William; Tang, Harry (Technical Monitor)

    2001-01-01

    Large-scale science and engineering are done through the interaction of people, heterogeneous computing resources, information systems, and instruments, all of which are geographically and organizationally dispersed. The overall motivation for Grids is to facilitate the routine interactions of these resources in order to support large-scale science and engineering. Multi-disciplinary simulations provide a good example of a class of applications that are very likely to require aggregation of widely distributed computing, data, and intellectual resources. Such simulations - e.g. whole system aircraft simulation and whole system living cell simulation - require integrating applications and data that are developed by different teams of researchers frequently in different locations. The research teams are the only ones that have the expertise to maintain and improve the simulation code and/or the body of experimental data that drives the simulations. This results in an inherently distributed computing and data management environment.

  1. Reliability improvements of the Guri Hydroelectric Power Plant computer control system AGC and AVC

    SciTech Connect

    Castro, F.; Pescina, M.; Llort, G.

    1992-09-01

    This paper describes the computer control system of a large hydroelectric powerplant and the reliability improvements made to the automatic generation control (AGC) and automatic voltage control (AVC) programs. Hardware and software modifications were required to improve the interface between the powerplant and the regional load dispatch office. These modifications, and their impact on the AGC and AVC reliability, are also discussed. The changes that have been implemented are recommended for inclusion in new powerplant computer control systems, and as an upgrade feature for existing control systems.

  2. Computer-Aided Modeling and Analysis of Power Processing Systems (CAMAPPS). Phase 1: Users handbook

    NASA Technical Reports Server (NTRS)

    Kim, S.; Lee, J.; Cho, B. H.; Lee, F. C.

    1986-01-01

    The EASY5 macro component models developed for the spacecraft power system simulation are described. A brief explanation about how to use the macro components with the EASY5 Standard Components to build a specific system is given through an example. The macro components are ordered according to the following functional group: converter power stage models, compensator models, current-feedback models, constant frequency control models, load models, solar array models, and shunt regulator models. Major equations, a circuit model, and a program listing are provided for each macro component.

  3. Control algorithms and computer simulation of a stand-alone photovoltaic village power system

    NASA Technical Reports Server (NTRS)

    Groumpos, P. P.; Culler, J. E.; Delombard, R.; Ratajczak, A. F.; Cull, R.

    1984-01-01

    As Stand-Alone Photovoltaic (SAPV) power systems increase in size and load diversity, the design and simulation of control subsystems take on added importance. These SAPV systems represent 'mini utilities' with commensurate controls requirements, albeit with the added complexity of the energy source (sunlight received) being an uncontrollable variable. This paper briefly describes a stand-alone photovoltaic power/load system computerized simulation model. The model was tested against operational data from the Schuchuli stand-alone village photovoltaic system and has achieved acceptable levels of simulation accuracy. The model can be used to simulate system designs, although the battery model may require modification.

  4. Comparison of circular orbit and Fourier power series ephemeris representations for backup use by the upper atmosphere research satellite onboard computer

    NASA Technical Reports Server (NTRS)

    Kast, J. R.

    1988-01-01

    The Upper Atmosphere Research Satellite (UARS) is a three-axis stabilized Earth-pointing spacecraft in a low-Earth orbit. The UARS onboard computer (OBC) uses a Fourier Power Series (FPS) ephemeris representation that includes 42 position and 42 velocity coefficients per axis, with position residuals at 10-minute intervals. New coefficients and 32 hours of residuals are uploaded daily. This study evaluated two backup methods that permit the OBC to compute an approximate spacecraft ephemeris in the event that new ephemeris data cannot be uplinked for several days: (1) extending the use of the FPS coefficients previously uplinked, and (2) switching to a simple circular orbit approximation designed and tested (but not implemented) for LANDSAT-D. The FPS method provides greater accuracy during the backup period and does not require additional ground operational procedures for generating and uplinking an additional ephemeris table. The tradeoff is that the high accuracy of the FPS will be degraded slightly by adopting the longer fit period necessary to obtain backup accuracy for an extended period of time. The results for UARS show that extended use of the FPS is superior to the circular orbit approximation for short-term ephemeris backup.
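
    The FPS ephemeris described above amounts to evaluating a truncated Fourier series per axis; the coefficient layout below is an assumption for illustration, and a one-harmonic series degenerates to the circular-orbit backup case.

    ```python
    import numpy as np

    def fps_position(t, a, b, omega):
        # One-axis Fourier-series ephemeris evaluation:
        #   x(t) = a[0] + sum_k ( a[k]*cos(k*omega*t) + b[k]*sin(k*omega*t) )
        # UARS carries 42 harmonics per axis; b[0] is unused here.
        k = np.arange(1, len(a))
        return a[0] + np.sum(a[1:] * np.cos(k * omega * t)
                             + b[1:] * np.sin(k * omega * t))

    # Single harmonic = circular-orbit approximation (illustrative radius):
    x0 = fps_position(0.0, np.array([0.0, 7000.0]), np.array([0.0, 0.0]), 1.0)
    print(x0)  # 7000.0 at t = 0
    ```

    Extending the fit period, as the study recommends, trades a slight loss of nominal accuracy in these coefficients for usable backup accuracy over several days.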

  5. The ALL-OUT Library; A Design for Computer-Powered, Multidimensional Services.

    ERIC Educational Resources Information Center

    Sleeth, Jim; LaRue, James

    1983-01-01

    Preliminary description of design of electronic library and home information delivery system highlights potentials of personal computer interface program (applying for service, assuring that users are valid, checking for measures, searching, locating titles) and incorporation of concepts used in other information systems (security checks,…

  6. Computational study of power conversion and luminous efficiency performance for semiconductor quantum dot nanophosphors on light-emitting diodes.

    PubMed

    Erdem, Talha; Nizamoglu, Sedat; Demir, Hilmi Volkan

    2012-01-30

    We present power conversion efficiency (PCE) and luminous efficiency (LE) performance levels of high photometric quality white LEDs integrated with quantum dots (QDs) achieving an averaged color rendering index of ≥90 (with R9 at least 70), a luminous efficacy of optical radiation of ≥380 lm/W(opt), a correlated color temperature of ≤4000 K, and a chromaticity difference dC < 0.0054. We computationally find that device LE levels of 100, 150, and 200 lm/W(elect) can be achieved with in-film QD quantum efficiencies of 43%, 61%, and 80%, respectively, using state-of-the-art blue LED chips (81.3% PCE). Furthermore, our computational analyses suggest that QD-LEDs can be both photometrically and electrically more efficient than phosphor based LEDs when state-of-the-art QDs are used.
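
    The link between the reported luminous efficacy of optical radiation and device luminous efficiency is, to first order, a simple product; the helper below is an illustrative simplification that ignores spectral details and the QD down-conversion chain.

    ```python
    def device_le(ler_lm_per_wopt, overall_pce):
        # Device luminous efficiency [lm/W_elect] approximated as the
        # luminous efficacy of radiation [lm/W_opt] times the overall
        # electrical-to-optical power conversion efficiency.
        return ler_lm_per_wopt * overall_pce

    # At the paper's efficacy floor of 380 lm/W_opt, reaching 100 lm/W_elect
    # requires an overall PCE of about 100/380 ≈ 26.3%:
    print(device_le(380.0, 100.0 / 380.0))  # 100.0
    ```

    This back-of-envelope bound explains why the required in-film QD quantum efficiency rises steeply with the target LE level.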

  7. The Instruments of Power: A Computer-Assisted Game for the ACSC Curriculum

    DTIC Science & Technology

    2005-04-01

    Tolbert, Brian G. "Instruments of Power Game and Rules Development." Air Command and Staff College, 2005. Wang, Wallace. Visual Basic 6 for Dummies. New...Wang, Visual Basic 6 for Dummies (New York, NY: Wiley Publishing, 1998), 56-58. 56 Hasbro, Risk Rules (Pawtucket, RI: 1999). 57 Hasbro, Risk II Game

  8. Definitions of non-stationary vibration power for time-frequency analysis and computational algorithms based upon harmonic wavelet transform

    NASA Astrophysics Data System (ADS)

    Heo, YongHwa; Kim, Kwang-joon

    2015-02-01

    While the vibration power for a set of harmonic force and velocity signals is well defined and known, it is not yet as popular for a set of stationary random force and velocity processes, although it can be found in the literature. In this paper, the definition of the vibration power for a set of non-stationary random force and velocity signals will be derived for the purpose of a time-frequency analysis based on the definitions of the vibration power for the harmonic and stationary random signals. The non-stationary vibration power, defined as the short-time average of the product of the force and velocity over a given frequency range of interest, can be calculated by three methods: the Wigner-Ville distribution, the short-time Fourier transform, and the harmonic wavelet transform. The latter method is selected in this paper because band-pass filtering can be done without phase distortions, and the frequency ranges can be chosen very flexibly for the time-frequency analysis. Three algorithms for the time-frequency analysis of the non-stationary vibration power using the harmonic wavelet transform are discussed. The first is an algorithm for computation according to the full definition, while the others are approximate. Noting that the force and velocity decomposed into frequency ranges of interest by the harmonic wavelet transform are constructed with coefficients and basis functions, for the second algorithm, it is suggested to prepare a table of time integrals of the product of the basis functions in advance, which are independent of the signals under analysis. How to prepare and utilize the integral table is presented. The third algorithm is based on an evolutionary spectrum. Applications of the algorithms to the time-frequency analysis of the vibration power transmitted from an excitation source to a receiver structure in a simple mechanical system consisting of a cantilever beam and a reaction wheel are presented for illustration.
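
    The definition above (short-time average of the product of band-passed force and velocity) can be sketched with a brick-wall FFT band-pass, which is in the same spirit as the harmonic wavelet's box-shaped spectrum; the window length and band edges below are illustrative assumptions.

    ```python
    import numpy as np

    def band_limited_power(force, velocity, fs, f_lo, f_hi, win):
        # Short-time average of the product of band-passed force and velocity.
        def bandpass(x):
            X = np.fft.rfft(x)
            f = np.fft.rfftfreq(len(x), 1.0 / fs)
            X[(f < f_lo) | (f >= f_hi)] = 0.0      # brick-wall band selection
            return np.fft.irfft(X, n=len(x))
        p = bandpass(force) * bandpass(velocity)   # instantaneous power
        kernel = np.ones(win) / win                # short-time (moving) average
        return np.convolve(p, kernel, mode="same")

    # In-band harmonic force and velocity give the expected 0.5*F*V average:
    fs = 1000
    t = np.arange(fs) / fs
    P = band_limited_power(np.cos(2 * np.pi * 50 * t),
                           np.cos(2 * np.pi * 50 * t), fs, 40, 60, 200)
    print(round(P[500], 3))  # ≈ 0.5
    ```

    The zero-phase FFT filtering mirrors the phase-distortion-free property that motivates the harmonic wavelet choice in the paper.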

  9. Low-power hardware implementation of movement decoding for brain computer interface with reduced-resolution discrete cosine transform.

    PubMed

    Minho Won; Albalawi, Hassan; Xin Li; Thomas, Donald E

    2014-01-01

    This paper describes a low-power hardware implementation for movement decoding of brain computer interface. Our proposed hardware design is facilitated by two novel ideas: (i) an efficient feature extraction method based on reduced-resolution discrete cosine transform (DCT), and (ii) a new hardware architecture of dual look-up table to perform discrete cosine transform without explicit multiplication. The proposed hardware implementation has been validated for movement decoding of electrocorticography (ECoG) signal by using a Xilinx FPGA Zynq-7000 board. It achieves more than 56× energy reduction over a reference design using band-pass filters for feature extraction.
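
    A software analogue of the reduced-resolution DCT feature extraction can clarify the idea; the number of kept coefficients and the quantization width below are illustrative, and the paper's dual look-up table hardware avoids the explicit multiplications done here.

    ```python
    import numpy as np

    def dct2_features(x, n_coeffs=4, bits=6):
        # Keep only the first few DCT-II coefficients of a signal window and
        # quantize them to a small fixed-point word, mimicking in software
        # the reduced-resolution feature extraction (parameters are
        # illustrative assumptions).
        N = len(x)
        n = np.arange(N)
        feats = np.array([np.dot(x, np.cos(np.pi * (n + 0.5) * k / N))
                          for k in range(n_coeffs)])
        scale = np.max(np.abs(feats))
        if scale == 0.0:
            scale = 1.0
        return np.round(feats / scale * (2**(bits - 1) - 1)).astype(int)

    print(dct2_features(np.ones(8)))  # DC term only: [31  0  0  0]
    ```

    Truncating to a handful of low-frequency coefficients is what shrinks both the feature vector and the hardware energy budget.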

  10. Computational assessment of the influence of the overlap ratio on the power characteristics of a Classical Savonius wind turbine

    NASA Astrophysics Data System (ADS)

    Kacprzak, Konrad; Sobczak, Krzysztof

    2015-09-01

    The influence of the overlap ratio on the performance of the Classical Savonius wind turbine was investigated. Unsteady two-dimensional numerical simulations were carried out for a wide range of overlap ratios. For selected configurations, computation quality was verified by comparison with three-dimensional simulations and the wind tunnel experimental data available in the literature. A satisfactory agreement was achieved. Power characteristics were determined for all the investigated overlap ratios at selected tip speed ratios. The obtained results indicate that the maximum device performance is achieved for a bucket overlap ratio close to 0.
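
    The power characteristics referred to above are conventionally reported as a power coefficient versus tip speed ratio; a minimal sketch with illustrative numbers (not the paper's data):

    ```python
    def tip_speed_ratio(omega, radius, wind_speed):
        # TSR = omega * R / V, with omega in rad/s, R in m, V in m/s
        return omega * radius / wind_speed

    def power_coefficient(torque, omega, rho, area, wind_speed):
        # Cp = P / (0.5 * rho * A * V^3), with shaft power P = torque * omega
        return torque * omega / (0.5 * rho * area * wind_speed**3)

    # Illustrative rotor state:
    print(tip_speed_ratio(15.0, 0.25, 7.0))               # ≈ 0.536
    print(power_coefficient(1.2, 15.0, 1.225, 0.5, 7.0))  # ≈ 0.171
    ```

    Sweeping such Cp values over TSR for each overlap ratio is what yields the power characteristics the study compares.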

  11. Selecting an Architecture for a Safety-Critical Distributed Computer System with Power, Weight and Cost Considerations

    NASA Technical Reports Server (NTRS)

    Torres-Pomales, Wilfredo

    2014-01-01

    This report presents an example of the application of multi-criteria decision analysis to the selection of an architecture for a safety-critical distributed computer system. The design problem includes constraints on minimum system availability and integrity, and the decision is based on the optimal balance of power, weight and cost. The analysis process includes the generation of alternative architectures, evaluation of individual decision criteria, and the selection of an alternative based on overall value. In the example presented here, iterative application of the quantitative evaluation process made it possible to deliberately generate an alternative architecture that is superior to all others regardless of the relative importance of cost.
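
    The "overall value" selection step reads as a weighted-sum multi-criteria score; the criteria names, scores, and weights below are hypothetical stand-ins, not the report's actual figures.

    ```python
    def overall_value(alternatives, weights):
        # Weighted-sum multi-criteria score: each alternative maps criterion
        # name -> normalized score in [0, 1]; weights sum to 1.
        return {name: sum(weights[c] * scores[c] for c in weights)
                for name, scores in alternatives.items()}

    # Hypothetical architectures scored on power, weight, and cost:
    alts = {
        "arch_A": {"power": 0.9, "weight": 0.6, "cost": 0.5},
        "arch_B": {"power": 0.7, "weight": 0.8, "cost": 0.7},
    }
    v = overall_value(alts, {"power": 0.4, "weight": 0.3, "cost": 0.3})
    print(max(v, key=v.get))  # arch_B wins under these weights
    ```

    Re-running the scoring under different weight vectors is the "iterative application" that can expose an alternative dominating the others regardless of one criterion's weight.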

  12. Design analysis and computer-aided performance evaluation of shuttle orbiter electrical power system. Volume 1: Summary

    NASA Technical Reports Server (NTRS)

    1974-01-01

    Studies were conducted to develop appropriate space shuttle electrical power distribution and control (EPDC) subsystem simulation models and to apply the computer simulations to systems analysis of the EPDC. A previously developed software program (SYSTID) was adapted for this purpose. The following objectives were attained: (1) significant enhancement of the SYSTID time domain simulation software, (2) generation of functionally useful shuttle EPDC element models, and (3) illustrative simulation results in the analysis of EPDC performance, under the conditions of fault, current pulse injection due to lightning, and circuit protection sizing and reaction times.

  13. WE-D-BRF-05: Quantitative Dual-Energy CT Imaging for Proton Stopping Power Computation

    SciTech Connect

    Han, D; Williamson, J; Siebers, J

    2014-06-15

    Purpose: To extend the two-parameter separable basis-vector model (BVM) to estimation of proton stopping power from dual-energy CT (DECT) imaging. Methods: BVM assumes that the photon cross sections of any unknown material can be represented as a linear combination of the corresponding quantities for two bracketing basis materials. We show that both the electron density (ρe) and mean excitation energy (Iex) can be modeled by BVM, enabling stopping power to be estimated from the Bethe-Bloch equation. We have implemented an idealized post-processing dual energy imaging (pDECT) simulation consisting of monoenergetic 45 keV and 80 keV scanning beams with polystyrene-water and water-CaCl2 solution basis pairs for soft tissues and bony tissues, respectively. The coefficients of 24 standard ICRU tissue compositions were estimated by pDECT. The corresponding ρe, Iex, and stopping power tables were evaluated via BVM and compared to tabulated ICRU 44 reference values. Results: BVM-based pDECT was found to estimate ρe and Iex with average and maximum errors of 0.5% and 2%, respectively, for the 24 tissues. Proton stopping power values at 175 MeV show average/maximum errors of 0.8%/1.4%. For adipose, muscle and bone, these errors result in range prediction accuracies of better than 1%. Conclusion: A new two-parameter separable DECT model (BVM) for estimating proton stopping power was developed. Compared to competing parametric fit DECT models, BVM has comparable prediction accuracy without necessitating iterative solution of nonlinear equations or a sample-dependent empirical relationship between effective atomic number and Iex. Based on the proton BVM, an efficient iterative statistical DECT reconstruction model is under development.
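
    Once ρe and Iex are estimated, the stopping power relative to water follows from the Bethe-Bloch logarithm; the sketch below omits shell and density corrections, and the soft-tissue numbers are illustrative assumptions, not the abstract's data.

    ```python
    import math

    MEC2_EV = 0.511e6    # electron rest energy [eV]
    MPC2_MEV = 938.272   # proton rest energy [MeV]

    def stopping_power_ratio(rho_e_rel, i_ex_ev, energy_mev, i_water_ev=75.0):
        # Proton stopping power relative to water from the Bethe-Bloch
        # logarithm, given the relative electron density rho_e_rel and the
        # mean excitation energy I_ex; I_water = 75 eV is a common choice.
        gamma = 1.0 + energy_mev / MPC2_MEV
        beta2 = 1.0 - 1.0 / gamma**2
        def bethe_log(i_ev):
            return math.log(2.0 * MEC2_EV * beta2 / (i_ev * (1.0 - beta2))) - beta2
        return rho_e_rel * bethe_log(i_ex_ev) / bethe_log(i_water_ev)

    # Illustrative soft tissue (rho_e_rel ≈ 1.04, I_ex ≈ 72 eV) at 175 MeV:
    print(round(stopping_power_ratio(1.04, 72.0, 175.0), 3))  # ≈ 1.045
    ```

    The ratio is dominated by the electron density term, which is why small Iex errors translate into sub-percent stopping-power errors.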

  14. Systematic Computation of Nonlinear Cellular and Molecular Dynamics with Low-Power CytoMimetic Circuits: A Simulation Study

    PubMed Central

    Papadimitriou, Konstantinos I.; Stan, Guy-Bart V.; Drakakis, Emmanuel M.

    2013-01-01

    This paper presents a novel method for the systematic implementation of low-power microelectronic circuits aimed at computing nonlinear cellular and molecular dynamics. The method proposed is based on the Nonlinear Bernoulli Cell Formalism (NBCF), an advanced mathematical framework stemming from the Bernoulli Cell Formalism (BCF) originally exploited for the modular synthesis and analysis of linear, time-invariant, high dynamic range, logarithmic filters. Our approach identifies and exploits the striking similarities existing between the NBCF and coupled nonlinear ordinary differential equations (ODEs) typically appearing in models of naturally encountered biochemical systems. The resulting continuous-time, continuous-value, low-power CytoMimetic electronic circuits succeed in simulating fast and with good accuracy cellular and molecular dynamics. The application of the method is illustrated by synthesising for the first time microelectronic CytoMimetic topologies which simulate successfully: 1) a nonlinear intracellular calcium oscillations model for several Hill coefficient values and 2) a gene-protein regulatory system model. The dynamic behaviours generated by the proposed CytoMimetic circuits are compared and found to be in very good agreement with their biological counterparts. The circuits exploit the exponential law codifying the low-power subthreshold operation regime and have been simulated with realistic parameters from a commercially available CMOS process. They occupy an area of a fraction of a square-millimetre, while consuming between 1 and 12 microwatts of power. Simulations of fabrication-related variability results are also presented. PMID:23393550
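
    The class of coupled nonlinear ODEs that these circuits emulate can be illustrated in software with a minimal gene-protein negative-feedback pair; the model and parameters below are illustrative stand-ins, not the paper's circuit equations.

    ```python
    import numpy as np

    def simulate_gene_protein(alpha=10.0, beta=1.0, h=4, t_end=50.0, dt=1e-3):
        # Forward-Euler integration of a minimal mRNA/protein negative
        # feedback pair with a Hill-type repression term:
        #   dm/dt = alpha / (1 + p^h) - m
        #   dp/dt = beta * (m - p)
        n = int(t_end / dt)
        m, p = 0.0, 0.0
        traj = np.empty((n, 2))
        for i in range(n):
            dm = alpha / (1.0 + p**h) - m
            dp = beta * (m - p)
            m += dt * dm
            p += dt * dp
            traj[i] = (m, p)
        return traj

    traj = simulate_gene_protein()
    print(traj[-1])  # settles at m = p with p*(1 + p^4) = alpha
    ```

    In the paper's CytoMimetic approach, such state equations are mapped onto subthreshold transistor dynamics rather than integrated numerically.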

  15. The analysis of diagnostics possibilities of the Dual-Drive electric power steering system using diagnostics scanner and computer method

    NASA Astrophysics Data System (ADS)

    Szczypiński-Sala, W.; Dobaj, K.

    2016-09-01

    The article presents an analysis of the diagnostic possibilities of an electric power steering system using a computer diagnostic scanner. Several test runs were performed, analyzing the changes in the torque exerted on the steering wheel by the driver and the accompanying changes in the steering wheel rotation angle. The tests were conducted under variable conditions of wheel load and tyre-road friction coefficient. The results enabled an analysis of how changing operating conditions influence the electric power steering parameters that can be acquired with diagnostic scanners. Moreover, a simulation model of the electric power steering system was created using the Matlab simulation software. The measurements obtained in road conditions served to verify this model. Subsequently, the model's response to input changes was analyzed and its reaction to various design and operating parameters was checked. The work as a whole constitutes a step toward a diagnostic monitor that could be used for self-diagnosis of electric power steering systems.

  16. Systematic computation of nonlinear cellular and molecular dynamics with low-power CytoMimetic circuits: a simulation study.

    PubMed

    Papadimitriou, Konstantinos I; Stan, Guy-Bart V; Drakakis, Emmanuel M

    2013-01-01

    This paper presents a novel method for the systematic implementation of low-power microelectronic circuits aimed at computing nonlinear cellular and molecular dynamics. The method proposed is based on the Nonlinear Bernoulli Cell Formalism (NBCF), an advanced mathematical framework stemming from the Bernoulli Cell Formalism (BCF) originally exploited for the modular synthesis and analysis of linear, time-invariant, high dynamic range, logarithmic filters. Our approach identifies and exploits the striking similarities existing between the NBCF and coupled nonlinear ordinary differential equations (ODEs) typically appearing in models of naturally encountered biochemical systems. The resulting continuous-time, continuous-value, low-power CytoMimetic electronic circuits succeed in simulating fast and with good accuracy cellular and molecular dynamics. The application of the method is illustrated by synthesising for the first time microelectronic CytoMimetic topologies which simulate successfully: 1) a nonlinear intracellular calcium oscillations model for several Hill coefficient values and 2) a gene-protein regulatory system model. The dynamic behaviours generated by the proposed CytoMimetic circuits are compared and found to be in very good agreement with their biological counterparts. The circuits exploit the exponential law codifying the low-power subthreshold operation regime and have been simulated with realistic parameters from a commercially available CMOS process. They occupy an area of a fraction of a square-millimetre, while consuming between 1 and 12 microwatts of power. Simulations of fabrication-related variability results are also presented.

  17. Dataset of calcified plaque condition in the stenotic coronary artery lesion obtained using multidetector computed tomography to indicate the addition of rotational atherectomy during percutaneous coronary intervention.

    PubMed

    Akutsu, Yasushi; Hamazaki, Yuji; Sekimoto, Teruo; Kaneko, Kyouichi; Kodama, Yusuke; Li, Hui-Ling; Suyama, Jumpei; Gokan, Takehiko; Sakai, Koshiro; Kosaki, Ryota; Yokota, Hiroyuki; Tsujita, Hiroaki; Tsukamoto, Shigeto; Sakurai, Masayuki; Sambe, Takehiko; Oguchi, Katsuji; Uchida, Naoki; Kobayashi, Shinichi; Aoki, Atsushi; Kobayashi, Youichi

    2016-06-01

    Our data show the regional coronary artery calcium scores (lesion CAC) on multidetector computed tomography (MDCT) and the cross-section imaging on MDCT angiography (CTA) in the target lesion of the patients with stable angina pectoris who were scheduled for percutaneous coronary intervention (PCI). CAC and CTA data were measured using a 128-slice scanner (Somatom Definition AS+; Siemens Medical Solutions, Forchheim, Germany) before PCI. CAC was measured in a non-contrast-enhanced scan and was quantified using the Calcium Score module of SYNAPSE VINCENT software (Fujifilm Co. Tokyo, Japan) and expressed in Agatston units. CTA was then performed with contrast-enhanced ECG gating to measure the severity of the calcified plaque condition. We present both CAC and CTA data as a benchmark for considering the addition of rotational atherectomy during PCI for severely calcified plaque lesions.
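
    The Agatston units mentioned above come from the standard scoring rule (per-lesion area times a peak-attenuation weight); the sketch uses the usual 130/200/300/400 HU thresholds and is not tied to the SYNAPSE VINCENT implementation, and the lesion values are hypothetical.

    ```python
    def agatston_weight(max_hu):
        # Standard Agatston density weight from the lesion's peak attenuation.
        if max_hu >= 400:
            return 4
        if max_hu >= 300:
            return 3
        if max_hu >= 200:
            return 2
        if max_hu >= 130:
            return 1
        return 0

    def lesion_cac(lesions):
        # Sum over lesions of (area in mm^2) * density weight.
        return sum(area * agatston_weight(hu) for area, hu in lesions)

    # Hypothetical lesions as (area mm^2, peak HU):
    print(lesion_cac([(5.0, 450), (3.0, 220)]))  # 5*4 + 3*2 = 26.0
    ```

    Restricting the sum to the target lesion is what turns the whole-heart score into the "lesion CAC" used here as a rotational-atherectomy benchmark.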

  18. Dataset of calcified plaque condition in the stenotic coronary artery lesion obtained using multidetector computed tomography to indicate the addition of rotational atherectomy during percutaneous coronary intervention

    PubMed Central

    Akutsu, Yasushi; Hamazaki, Yuji; Sekimoto, Teruo; Kaneko, Kyouichi; Kodama, Yusuke; Li, Hui-Ling; Suyama, Jumpei; Gokan, Takehiko; Sakai, Koshiro; Kosaki, Ryota; Yokota, Hiroyuki; Tsujita, Hiroaki; Tsukamoto, Shigeto; Sakurai, Masayuki; Sambe, Takehiko; Oguchi, Katsuji; Uchida, Naoki; Kobayashi, Shinichi; Aoki, Atsushi; Kobayashi, Youichi

    2016-01-01

    Our data show the regional coronary artery calcium scores (lesion CAC) on multidetector computed tomography (MDCT) and the cross-section imaging on MDCT angiography (CTA) in the target lesion of the patients with stable angina pectoris who were scheduled for percutaneous coronary intervention (PCI). CAC and CTA data were measured using a 128-slice scanner (Somatom Definition AS+; Siemens Medical Solutions, Forchheim, Germany) before PCI. CAC was measured in a non-contrast-enhanced scan and was quantified using the Calcium Score module of SYNAPSE VINCENT software (Fujifilm Co. Tokyo, Japan) and expressed in Agatston units. CTA was then performed with contrast-enhanced ECG gating to measure the severity of the calcified plaque condition. We present both CAC and CTA data as a benchmark for considering the addition of rotational atherectomy during PCI for severely calcified plaque lesions. PMID:26977441

  19. Synthesis of Bridged Heterocycles via Sequential 1,4- and 1,2-Addition Reactions to α,β-Unsaturated N-Acyliminium Ions: Mechanistic and Computational Studies.

    PubMed

    Yazici, Arife; Wille, Uta; Pyne, Stephen G

    2016-02-19

    Novel tricyclic bridged heterocyclic systems can be readily prepared from sequential 1,4- and 1,2-addition reactions of allyl and 3-substituted allylsilanes to indolizidine and quinolizidine α,β-unsaturated N-acyliminium ions. These reactions involve a novel N-assisted, transannular 1,5-hydride shift. Such a mechanism was supported by examining the reaction of a dideuterated indolizidine, α,β-unsaturated N-acyliminium ion precursor, which provided specifically dideuterated tricyclic bridged heterocyclic products, and from computational studies. In contrast, the corresponding pyrrolo[1,2-a]azepine system did not provide the corresponding tricyclic bridged heterocyclic product and gave only a bis-allyl adduct, while more substituted versions gave novel furo[3,2-d]pyrrolo[1,2-a]azepine products. Such heterocyclic systems would be expected to be useful scaffolds for the preparation of libraries of novel compounds for new drug discovery programs.

  20. A computational analysis of natural convection in a vertical channel with a modified power law non-Newtonian fluid

    SciTech Connect

    Lee, S.R.; Irvine, T.F. Jr.; Greene, G.A.

    1998-04-01

    An implicit finite difference method was applied to analyze laminar natural convection in a vertical channel with a modified power law fluid. This fluid model was chosen because it describes the viscous properties of a pseudoplastic fluid over the entire shear rate range likely to be found in natural convection flows since it covers the shear rate range from Newtonian through transition to simple power law behavior. In addition, a dimensionless similarity parameter is identified which specifies in which of the three regions a particular system is operating. The results for the average channel velocity and average Nusselt number in the asymptotic Newtonian and power law regions are compared with numerical data in the literature. Also, graphical results are presented for the velocity and temperature fields and entrance lengths. The results of average channel velocity and Nusselt number are given in the three regions including developing and fully developed flows. As an example, a pseudoplastic fluid (carboxymethyl cellulose) was chosen to compare the different results of average channel velocity and Nusselt number between a modified power law fluid and the conventional power law model. The results show, depending upon the operating conditions, that if the correct model is not used, gross errors can result.
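
    One common form of the modified power law referred to above blends a Newtonian plateau at low shear rates with power-law behavior at high shear rates; the constants below are illustrative assumptions, not the study's carboxymethyl cellulose data.

    ```python
    def modified_power_law_viscosity(gamma_dot, eta0, K, n):
        # Apparent viscosity eta = eta0 / (1 + (eta0/K) * gamma_dot^(1-n)):
        # tends to the Newtonian eta0 as gamma_dot -> 0 and to the power law
        # K * gamma_dot^(n-1) as gamma_dot -> infinity.
        return eta0 / (1.0 + (eta0 / K) * gamma_dot ** (1.0 - n))

    # Pseudoplastic (n < 1) illustration: viscosity falls with shear rate
    for g in (1e-3, 1.0, 1e3):
        print(g, modified_power_law_viscosity(g, eta0=0.5, K=0.1, n=0.6))
    ```

    The dimensionless group (eta0/K) * gamma_dot^(1-n) plays the role of the similarity parameter the abstract mentions: it tells you whether a given flow sits in the Newtonian, transition, or power-law region.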

  1. A linear, separable two-parameter model for dual energy CT imaging of proton stopping power computation

    PubMed Central

    Han, Dong; Siebers, Jeffrey V.; Williamson, Jeffrey F.

    2016-01-01

    Purpose: To evaluate the accuracy and robustness of a simple, linear, separable, two-parameter model (basis vector model, BVM) in mapping proton stopping powers via dual energy computed tomography (DECT) imaging. Methods: The BVM assumes that photon cross sections (attenuation coefficients) of unknown materials are linear combinations of the corresponding radiological quantities of dissimilar basis substances (i.e., polystyrene, CaCl2 aqueous solution, and water). The authors have extended this approach to the estimation of electron density and mean excitation energy, which are required parameters for computing proton stopping powers via the Bethe–Bloch equation. The authors compared the stopping power estimation accuracy of the BVM with that of a nonlinear, nonseparable photon cross section Torikoshi parametric fit model (VCU tPFM) as implemented by the authors and by Yang et al. [“Theoretical variance analysis of single- and dual-energy computed tomography methods for calculating proton stopping power ratios of biological tissues,” Phys. Med. Biol. 55, 1343–1362 (2010)]. Using an idealized monoenergetic DECT imaging model, proton ranges estimated by the BVM, VCU tPFM, and Yang tPFM were compared to International Commission on Radiation Units and Measurements (ICRU) published reference values. The robustness of the stopping power prediction accuracy to tissue composition variations was assessed for both the BVM and the VCU tPFM. The sensitivity of accuracy to CT image uncertainty was also evaluated. Results: Based on the authors’ idealized, error-free DECT imaging model, the root-mean-square error of BVM proton stopping power estimation for 175 MeV protons relative to ICRU reference values for 34 ICRU standard tissues is 0.20%, compared to 0.23% and 0.68% for the Yang and VCU tPFM models, respectively. The range estimation errors were less than 1 mm for both the BVM and Yang tPFM models. The BVM estimation accuracy is not dependent on tissue type

  2. A linear, separable two-parameter model for dual energy CT imaging of proton stopping power computation

    SciTech Connect

    Han, Dong Williamson, Jeffrey F.; Siebers, Jeffrey V.

    2016-01-15

    Purpose: To evaluate the accuracy and robustness of a simple, linear, separable, two-parameter model (basis vector model, BVM) in mapping proton stopping powers via dual energy computed tomography (DECT) imaging. Methods: The BVM assumes that photon cross sections (attenuation coefficients) of unknown materials are linear combinations of the corresponding radiological quantities of dissimilar basis substances (i.e., polystyrene, CaCl{sub 2} aqueous solution, and water). The authors have extended this approach to the estimation of electron density and mean excitation energy, which are required parameters for computing proton stopping powers via the Bethe–Bloch equation. The authors compared the stopping power estimation accuracy of the BVM with that of a nonlinear, nonseparable photon cross section Torikoshi parametric fit model (VCU tPFM) as implemented by the authors and by Yang et al. [“Theoretical variance analysis of single- and dual-energy computed tomography methods for calculating proton stopping power ratios of biological tissues,” Phys. Med. Biol. 55, 1343–1362 (2010)]. Using an idealized monoenergetic DECT imaging model, proton ranges estimated by the BVM, VCU tPFM, and Yang tPFM were compared to International Commission on Radiation Units and Measurements (ICRU) published reference values. The robustness of the stopping power prediction accuracy to tissue composition variations was assessed for both the BVM and VCU tPFM. The sensitivity of accuracy to CT image uncertainty was also evaluated. Results: Based on the authors’ idealized, error-free DECT imaging model, the root-mean-square error of BVM proton stopping power estimation for 175 MeV protons relative to ICRU reference values for 34 ICRU standard tissues is 0.20%, compared to 0.23% and 0.68% for the Yang and VCU tPFM models, respectively. The range estimation errors were less than 1 mm for both the BVM and Yang tPFM models. The BVM estimation accuracy is not dependent on
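    Both models ultimately feed an electron density and mean excitation energy into the Bethe–Bloch equation. That final step can be sketched numerically; the version below uses a simplified Bethe formula (no shell or density-effect corrections) with illustrative water-like defaults (Z/A ≈ 0.555, I ≈ 75 eV) that are assumptions for this sketch, not the BVM fit:

    ```python
    import math

    def proton_mass_stopping_power(T_MeV, z_over_a=0.5551, I_eV=75.0):
        """Electronic mass stopping power (MeV cm^2/g) of a proton with
        kinetic energy T_MeV, via a simplified Bethe formula without
        shell or density-effect corrections. Defaults approximate
        liquid water and are illustrative assumptions."""
        K = 0.307075            # 4*pi*N_A*r_e^2*m_e*c^2, MeV cm^2/g
        me_c2 = 0.510999        # electron rest energy, MeV
        mp_c2 = 938.272         # proton rest energy, MeV
        I = I_eV * 1e-6         # mean excitation energy, MeV
        gamma = 1.0 + T_MeV / mp_c2
        beta2 = 1.0 - 1.0 / gamma**2
        # Bethe logarithm minus the relativistic beta^2 term
        L = math.log(2.0 * me_c2 * beta2 * gamma**2 / I) - beta2
        return K * z_over_a * L / beta2
    ```

    For 175 MeV protons (the abstract's test energy) this evaluates to roughly 5 MeV cm^2/g, in the ballpark of tabulated water values; what the BVM actually contributes is the per-voxel estimate of electron density and I from DECT, which this sketch takes as given.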

  3. Exploration and Evaluation of Nanometer Low-power Multi-core VLSI Computer Architectures

    DTIC Science & Technology

    2015-03-01

    reliable system that can be utilized for producing state-of-the- art computer architectures, especially for silicon implementations. The research...stitch elements together via placing each layout and routing wire between known pins. Early layout editors, such as the Magic Layout Editor, had...within the University of Berkeley mainly for a public domain VLSI tool called Magic [15]. The Tcl language is useful in that it has an easy-to- learn

  4. The Meaning and Computation of Causal Power: Comment on Cheng (1997) and Novick and Cheng (2004)

    ERIC Educational Resources Information Center

    Luhmann, Christian C.; Ahn, Woo-kyoung

    2005-01-01

    D. Hume (1739/1987) argued that causality is not observable. P. W. Cheng claimed to present "a theoretical solution to the problem of causal induction first posed by Hume more than two and a half centuries ago" (p. 398) in the form of the power PC theory (L. R. Novick & P. W. Cheng). This theory claims that people's goal in causal induction is to…
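    The quantity under debate, Cheng's generative causal power, is the contingency ΔP = P(e|c) − P(e|¬c) rescaled by the headroom left by alternative causes. A one-function sketch (variable names are mine):

    ```python
    def causal_power(p_e_given_c, p_e_given_not_c):
        """Generative causal power from Cheng's (1997) power PC theory:
        p = [P(e|c) - P(e|not-c)] / [1 - P(e|not-c)]."""
        delta_p = p_e_given_c - p_e_given_not_c
        return delta_p / (1.0 - p_e_given_not_c)
    ```

    For example, P(e|c) = 0.8 and P(e|¬c) = 0.5 give a causal power of 0.6, larger than the raw ΔP of 0.3 because the effect already occurs half the time without the cause.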

  5. Logarithmic divergences in the k-inflationary power spectra computed through the uniform approximation

    SciTech Connect

    Alinea, Allan L.; Kubota, Takahiro; Naylor, Wade E-mail: kubota@celas.osaka-u.ac.jp

    2016-02-01

    We investigate a calculation method for solving the Mukhanov-Sasaki equation in slow-roll k-inflation based on the uniform approximation (UA) in conjunction with an expansion scheme for slow-roll parameters with respect to the number of e-folds about the so-called turning point. Earlier works on this method have so far gained some promising results derived from the approximating expressions for the power spectra among others, up to second order with respect to the Hubble and sound flow parameters, when compared to other semi-analytical approaches (e.g., Green's function and WKB methods). However, a closer inspection is suggestive that there is a problem when higher-order parts of the power spectra are considered; residual logarithmic divergences may come out that can render the prediction physically inconsistent. Looking at this possibility, we map out up to what order with respect to the mentioned parameters several physical quantities can be calculated before hitting a logarithmically divergent result. It turns out that the power spectra are limited up to second order, the tensor-to-scalar ratio up to third order, and the spectral indices and running converge to all orders. This indicates that the expansion scheme is incompatible with the working equations derived from UA for the power spectra but compatible with that of the spectral indices. For those quantities that involve logarithmically divergent terms in the higher-order parts, existing results in the literature for the convergent lower-order parts calculated in the equivalent fashion should be viewed with some caution; they do not rest on solid mathematical ground.

  6. COMMIX-PPC: A three-dimensional transient multicomponent computer program for analyzing performance of power plant condensers

    SciTech Connect

    Chien, T.H.; Domanus, H.M.; Sha, W.T.

    1993-02-01

    The COMMIX-PPC computer program is an extended and improved version of earlier COMMIX codes and is specifically designed for evaluating the thermal performance of power plant condensers. The COMMIX codes are general-purpose computer programs for the analysis of fluid flow and heat transfer in complex industrial systems. In COMMIX-PPC, two major features have been added to previously published COMMIX codes. One feature is the incorporation of one-dimensional conservation of mass, momentum, and energy equations on the tube side, and the proper accounting for the thermal interaction between shell and tube side through the porous medium approach. The other added feature is the extension of the three-dimensional conservation equations for shell-side flow to treat the flow of a multicomponent medium. COMMIX-PPC is designed to perform steady-state and transient three-dimensional analysis of fluid flow with heat transfer in a power plant condenser. However, the code is designed in a generalized fashion so that, with some modification, it can be used to analyze processes in any heat exchanger or other single-phase engineering applications.

  7. COMMIX-PPC: A three-dimensional transient multicomponent computer program for analyzing performance of power plant condensers

    SciTech Connect

    Chien, T.H.; Domanus, H.M.; Sha, W.T.

    1993-02-01

    The COMMIX-PPC computer program is an extended and improved version of earlier COMMIX codes and is specifically designed for evaluating the thermal performance of power plant condensers. The COMMIX codes are general-purpose computer programs for the analysis of fluid flow and heat transfer in complex industrial systems. In COMMIX-PPC, two major features have been added to previously published COMMIX codes. One feature is the incorporation of one-dimensional equations of conservation of mass, momentum, and energy on the tube side and the proper accounting for the thermal interaction between shell and tube side through the porous-medium approach. The other added feature is the extension of the three-dimensional conservation equations for shell-side flow to treat the flow of a multicomponent medium. COMMIX-PPC is designed to perform steady-state and transient, three-dimensional analysis of fluid flow with heat transfer in a power plant condenser. However, the code is designed in a generalized fashion so that, with some modification, it can be used to analyze processes in any heat exchanger or other single-phase engineering applications. Volume I (Equations and Numerics) of this report describes in detail the basic equations, formulation, solution procedures, and models for the phenomena. Volume II (User's Guide and Manual) contains the input instructions, flow charts, sample problems, and descriptions of available options and boundary conditions.

  8. Computational prediction of tube erosion in coal fired power utility boilers

    SciTech Connect

    Lee, B.E.; Fletcher, C.A.J.; Behnia, M.

    1999-10-01

    Erosion of boiler tubes causes serious operational problems in many pulverized coal-fired utility boilers. A new erosion model has been developed in the present study for the prediction of boiler tube erosion. The Lagrangian approach is employed to predict the behavior of the particulate phase. The results of computational prediction of boiler tube erosion and the various parameters causing erosion are discussed in this paper. Comparison of the numerical predictions for a single tube erosion with experimental data shows very good agreement.
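    The core of such a prediction is a correlation mapping each Lagrangian-tracked particle impact (speed and angle) to a wear rate. A toy sketch using Finnie's classic angle dependence for ductile metals, with placeholder coefficients rather than the paper's fitted model:

    ```python
    import math

    def finnie_angle_function(a):
        """Finnie's dimensionless angle dependence for ductile erosion;
        peaks at shallow impact angles (around 15-20 degrees)."""
        if math.tan(a) <= 1.0 / 3.0:
            return math.sin(2.0 * a) - 3.0 * math.sin(a) ** 2
        return math.cos(a) ** 2 / 3.0

    def erosion_rate(v, a, k=1e-9, n=2.4):
        """Toy erosion rate per unit mass of impacting ash: a power law
        in impact speed v (m/s) times Finnie's angle term at impact
        angle a (rad). k and n are placeholders, not fitted boiler data."""
        return k * v ** n * finnie_angle_function(a)
    ```

    The qualitative behavior matches boiler-tube experience: shallow, fast impacts erode most, which is why local flow angle around the tube matters as much as particle loading.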

  9. High-power graphic computers for visual simulation: a real-time rendering revolution

    NASA Technical Reports Server (NTRS)

    Kaiser, M. K.

    1996-01-01

    Advances in high-end graphics computers in the past decade have made it possible to render visual scenes of incredible complexity and realism in real time. These new capabilities make it possible to manipulate and investigate the interactions of observers with their visual world in ways once only dreamed of. This paper reviews how these developments have affected two preexisting domains of behavioral research (flight simulation and motion perception) and have created a new domain (virtual environment research) which provides tools and challenges for the perceptual psychologist. Finally, the current limitations of these technologies are considered, with an eye toward how perceptual psychologists might shape future developments.

  10. Power series expansion of the roots of a secular equation containing symbolic elements: Computer algebra and Moseley's law

    NASA Astrophysics Data System (ADS)

    Barnett, Michael P.; Decker, Thomas; Krandick, Werner

    2001-06-01

    We use computer algebra to expand the Pekeris secular determinant for two-electron atoms symbolically, to produce an explicit polynomial in the energy parameter ɛ, with coefficients that are polynomials in the nuclear charge Z. Repeated differentiation of the polynomial, followed by a simple transformation, gives a series for ɛ in decreasing powers of Z. The leading term is linear, consistent with well-known behavior that corresponds to the approximate quadratic dependence of ionization potential on atomic number (Moseley's law). Evaluating the 12-term series for individual Z gives the roots to a precision of 10 or more digits for Z⩾2. This suggests the use of similar tactics to construct formulas for roots vs atomic, molecular, and variational parameters in other eigenvalue problems, in accordance with the general objectives of gradient theory. Matrix elements can be represented by symbols in the secular determinants, enabling the use of analytical expressions for the molecular integrals in the differentiation of the explicit polynomials. The mathematical and computational techniques include modular arithmetic to handle matrix and polynomial operations, and unrestricted precision arithmetic to overcome severe digital erosion. These are likely to find many further applications in computational chemistry.
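    The strategy (expand the secular determinant into a polynomial in ɛ whose coefficients are polynomials in Z, then extract the root's behavior in powers of Z) can be illustrated on a toy 2×2 determinant whose quadratic is solvable by hand. The matrix elements below are invented for illustration, not the Pekeris ones:

    ```python
    import math

    def secular_roots(Z):
        """Roots of the toy 2x2 secular determinant
        det([[-Z^2/2 - e, Z/4], [Z/4, -Z^2/8 - e]]) = 0,
        which expands to e^2 + (5Z^2/8)e + (Z^4/16 - Z^2/16) = 0.
        Returned in increasing order."""
        b = 5.0 * Z**2 / 8.0
        c = Z**4 / 16.0 - Z**2 / 16.0
        disc = math.sqrt(b * b - 4.0 * c)
        return (-b - disc) / 2.0, (-b + disc) / 2.0
    ```

    For this toy determinant the two roots behave as −Z²/2 and −Z²/8 at large Z, i.e. the leading term of the root's expansion in Z is quadratic, the Moseley-like scaling the abstract connects to ionization potentials (the paper's own series for ɛ has a linear leading term in its parametrization).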

  11. Kerman Photovoltaic Power Plant R&D data collection computer system operations and maintenance

    SciTech Connect

    Rosen, P.B.

    1994-06-01

    The Supervisory Control and Data Acquisition (SCADA) system at the Kerman PV Plant monitors 52 analog, 44 status, 13 control, and 4 accumulator data points in real-time. A Remote Terminal Unit (RTU) polls 7 peripheral data acquisition units that are distributed throughout the plant once every second, and stores all analog, status, and accumulator points that have changed since the last scan. The R&D Computer, which is connected to the SCADA RTU via a RS-232 serial link, polls the RTU once every 5-7 seconds and records any values that have changed since the last scan. A SCADA software package called RealFlex runs on the R&D computer and stores all updated data values taken from the RTU, along with a time-stamp for each, in a historical real-time database. From this database, averages of all analog data points and snapshots of all status points are generated every 10 minutes and appended to a daily file. These files are downloaded via modem by PVUSA/Davis staff every day, and the data is placed into the PVUSA database.
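    The 10-minute averaging step that feeds the daily files can be sketched in a few lines of Python. Bin alignment and the handling of missed scans below are assumptions; the RealFlex package's actual behavior is not specified in the record:

    ```python
    from collections import defaultdict

    def ten_minute_averages(samples):
        """Average time-stamped analog samples into 10-minute bins.
        samples: iterable of (unix_timestamp, value) pairs arriving
        every few seconds. Returns {bin_start_timestamp: mean_value}."""
        sums = defaultdict(lambda: [0.0, 0])
        for ts, v in samples:
            b = int(ts // 600) * 600   # floor to the 10-minute boundary
            acc = sums[b]
            acc[0] += v
            acc[1] += 1
        return {b: s / n for b, (s, n) in sorted(sums.items())}
    ```

    A bin simply averages whatever scans landed inside it, so the 5-7 s polling jitter mentioned in the abstract washes out of the 10-minute record.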

  12. The power of virtual integration: an interview with Dell Computer's Michael Dell. Interview by Joan Magretta.

    PubMed

    Dell, M

    1998-01-01

    Michael Dell started his computer company in 1984 with a simple business insight. He could bypass the dealer channel through which personal computers were then being sold and sell directly to customers, building products to order. Dell's direct model eliminated the dealer's markup and the risks associated with carrying large inventories of finished goods. In this interview, Michael Dell provides a detailed description of how his company is pushing that business model one step further, toward what he calls virtual integration. Dell is using technology and information to blur the traditional boundaries in the value chain between suppliers, manufacturers, and customers. The individual pieces of Dell's strategy--customer focus, supplier partnerships, mass customization, just-in-time manufacturing--may all be familiar. But Michael Dell's business insight into how to combine them is highly innovative. Direct relationships with customers create valuable information, which in turn allows the company to coordinate its entire value chain back through manufacturing to product design. Dell describes how his company has come to achieve this tight coordination without the "drag effect" of ownership. Dell reaps the advantages of being vertically integrated without incurring the costs, all the while achieving the focus, agility, and speed of a virtual organization. As envisioned by Michael Dell, virtual integration may well become a new organizational model for the information age.

  13. Computational modeling of a 60 kW oxy-methane direct power extraction combustor

    NASA Astrophysics Data System (ADS)

    Vidana, Omar Daniel

    Efficient renewable energy harvesting and storage are needed to overcome present challenges related to a clean environment and increasing energy demand. Energy harvesting technologies such as solar, thermal and wind require excess energy to be stored in batteries, which adds complexity and cost to the overall renewable energy system. Integrating the generation and storage of solar energy in a single device would be a more cost-effective way of meeting the increasing energy demand, owing to its simplicity of design and reduced space consumption. Herein, a sustainable photovoltaic cell integrated with an energy storage device was developed that addresses short-term photovoltaic (PV) power variability using a dye-sensitized solar cell (DSSC) in a tandem structure with a thin-film lithium-ion battery. Lithium-based batteries are efficient charge storage systems due to their high energy storage density and extended lifecycle performance. The integrated structure uses a common anode (titanium foil coated with anatase TiO2 on both sides, which serves as the DSSC and lithium-ion battery anode), showed an open-circuit voltage of 3 V with a short-circuit current density of 40 mAh g-1, and had a storage efficiency of 0.80%, which can serve as a power source for mobile storage applications.

  14. Piezoelectronics: a novel, high-performance, low-power computer switching technology

    NASA Astrophysics Data System (ADS)

    Newns, D. M.; Martyna, G. J.; Elmegreen, B. G.; Liu, X.-H.; Theis, T. N.; Trolier-McKinstry, S.

    2012-06-01

    Current switching speeds in CMOS technology have saturated since 2003 due to power constraints arising from the inability of line voltage to be further lowered in CMOS below about 1 V. We are developing a novel switching technology based on piezoelectrically transducing the input or gate voltage into an acoustic wave which compresses a piezoresistive (PR) material forming the device channel. Under pressure the PR undergoes an insulator-to-metal transition which makes the channel conducting, turning on the device. A piezoelectric (PE) transducer material with a high piezoelectric coefficient, e.g. a domain-engineered relaxor piezoelectric, is needed to achieve low voltage operation. Suitable channel materials manifesting a pressure-induced metal-insulator transition can be found amongst rare earth chalcogenides, transition metal oxides, etc. Mechanical requirements include a high PE/PR area ratio to step up pressure, a rigid surround material to constrain the PE and PR external boundaries normal to the strain axis, and a void space to enable free motion of the component side walls. Using static mechanical modeling and dynamic electroacoustic simulations, we optimize device structure and materials and predict performance. The device, termed a PiezoElectronic Transistor (PET) can be used to build complete logic circuits including inverters, flip-flops, and gates. This "Piezotronic" logic is predicted to have a combination of low power and high speed operation.

  15. New approach for precise computation of Lyman-α forest power spectrum with hydrodynamical simulations

    SciTech Connect

    Borde, Arnaud; Palanque-Delabrouille, Nathalie; Rossi, Graziano; Yèche, Christophe; LeGoff, Jean-Marc; Rich, Jim; Bolton, James S. E-mail: nathalie.palanque-delabrouille@cea.fr E-mail: matteoviel@gmail.com E-mail: christophe.yeche@cea.fr E-mail: james.rich@cea.fr

    2014-07-01

    Current experiments are providing measurements of the flux power spectrum from the Lyman-α forests observed in quasar spectra with unprecedented accuracy. Their interpretation in terms of cosmological constraints requires specific simulations of at least equivalent precision. In this paper, we present a suite of cosmological N-body simulations with cold dark matter and baryons, specifically aiming at modeling the low-density regions of the inter-galactic medium as probed by the Lyman-α forests at high redshift. The simulations were run using the GADGET-3 code and were designed to match the requirements imposed by the quality of the current SDSS-III/BOSS or forthcoming SDSS-IV/eBOSS data. They are made using either 2 × 768{sup 3} ≅ 1 billion or 2 × 192{sup 3} ≅ 14 million particles, spanning volumes ranging from (25 Mpc h{sup −1}){sup 3} for high-resolution simulations to (100 Mpc h{sup −1}){sup 3} for large-volume ones. Using a splicing technique, the resolution is further enhanced to reach the equivalent of simulations with 2 × 3072{sup 3} ≅ 58 billion particles in a (100 Mpc h{sup −1}){sup 3} box size, i.e. a mean mass per gas particle of 1.2 × 10{sup 5}M{sub ⊙} h{sup −1}. We show that the resulting power spectrum is accurate at the 2% level over the full range from a few Mpc to several tens of Mpc. We explore the effect on the one-dimensional transmitted-flux power spectrum of four cosmological parameters (n{sub s}, σ{sub 8}, Ω{sub m} and H{sub 0}) and two astrophysical parameters (T{sub 0} and γ) that are related to the heating rate of the intergalactic medium. By varying the input parameters around a central model chosen to be in agreement with the latest Planck results, we built a grid of simulations that allows the study of the impact on the flux power spectrum of these six relevant parameters. We improve upon previous studies by not only measuring the effect of each parameter individually, but also probing the impact of the

  16. Electronic stopping power calculation for water under the Lindhard formalism for application in proton computed tomography

    NASA Astrophysics Data System (ADS)

    Guerrero, A. F.; Mesa, J.

    2016-07-01

    Because of the behavior that charged particles have when they interact with biological material, proton therapy is shaping the future of radiation therapy in cancer treatment. The planning of radiation therapy is made up of several stages. The first one is the diagnostic image, in which you have an idea of the density, size and type of tumor being treated; to understand this it is important to know how the particle beam interacts with the tissue. In this work, by using the Lindhard formalism and the Y.R. Waghmare model for the charge distribution of the proton, the electronic stopping power (SP) for a proton beam interacting with a liquid water target in the proton energy range 10^1 eV - 10^10 eV, taking into account all the charge states, is calculated.

  17. Feasibility of an ultra-low power digital signal processor platform as a basis for a fully implantable brain-computer interface system.

    PubMed

    Wang, Po T; Gandasetiawan, Keulanna; McCrimmon, Colin M; Karimi-Bidhendi, Alireza; Liu, Charles Y; Heydari, Payam; Nenadic, Zoran; Do, An H

    2016-08-01

    A fully implantable brain-computer interface (BCI) can be a practical tool to restore independence to those affected by spinal cord injury. We envision that such a BCI system will invasively acquire brain signals (e.g. electrocorticogram) and translate them into control commands for external prostheses. The feasibility of such a system was tested by implementing its benchtop analogue, centered around a commercial, ultra-low power (ULP) digital signal processor (DSP, TMS320C5517, Texas Instruments). A suite of signal processing and BCI algorithms, including (de)multiplexing, Fast Fourier Transform, power spectral density, principal component analysis, linear discriminant analysis, Bayes rule, and finite state machine was implemented and tested in the DSP. The system's signal acquisition fidelity was tested and characterized by acquiring harmonic signals from a function generator. In addition, the BCI decoding performance was tested, first with signals from a function generator, and subsequently using human electroencephalogram (EEG) during an eyes-opening and closing task. On average, the system spent 322 ms to process and analyze 2 s of data. Crosstalk (<-65 dB) and harmonic distortion (~1%) were minimal. Timing jitter averaged 49 μs per 1000 ms. The online BCI decoding accuracies were 100% for both function generator and EEG data. These results show that a complex BCI algorithm can be executed on an ULP DSP without compromising performance. This suggests that the proposed hardware platform may be used as a basis for future, fully implantable BCI systems.
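    The eyes-open/eyes-closed decoding stage can be illustrated with a much simpler stand-in for the DSP's FFT/PSD/PCA/LDA/Bayes chain: alpha-band (8-12 Hz) EEG power rises with eye closure, so a band-power estimate plus a threshold already separates the two states. A stdlib-only sketch, in which the direct DFT, the absence of windowing, and the threshold value are all simplifying assumptions:

    ```python
    import math

    def band_power(x, fs, f_lo, f_hi):
        """Mean periodogram power of signal x (a list of samples at
        rate fs) over the DFT bins lying in [f_lo, f_hi] Hz. A direct
        DFT restricted to those bins; no windowing or PSD scaling."""
        n = len(x)
        powers = []
        for k in range(n // 2 + 1):
            f = k * fs / n
            if f_lo <= f <= f_hi:
                re = sum(x[i] * math.cos(2 * math.pi * k * i / n) for i in range(n))
                im = sum(x[i] * math.sin(2 * math.pi * k * i / n) for i in range(n))
                powers.append(re * re + im * im)
        return sum(powers) / len(powers)

    def decode_eye_state(x, fs, threshold):
        """Toy two-state decoder: 'closed' if alpha-band (8-12 Hz)
        power exceeds threshold, else 'open'."""
        return 'closed' if band_power(x, fs, 8.0, 12.0) > threshold else 'open'
    ```

    Feeding this a 2 s window of a 10 Hz tone (mimicking alpha rhythm) yields 'closed', while a 2 Hz tone yields 'open'; the implanted system's statistical pipeline exists to make the same separation robust on real, noisy ECoG or EEG.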

  18. Experimental and Computational Fluid Dynamic Analysis of Axial-Flow Hydrodynamic Power Turbine

    DTIC Science & Technology

    2013-03-01

    reduced or increased by adding or removing resistors into the output circuit of the turbine. By placing 10 ohm resistors in either series or...of the moving carriage over the rails. These are still the predominant forces in the AFHT’s overall drag analysis, though additional electromotive ... electromotive forces acting against the rotation of the rotor. At lower electric resistances, more torque was needed to keep the generator turning at the

  19. A small, portable, battery-powered brain-computer interface system for motor rehabilitation.

    PubMed

    McCrimmon, Colin M; Ming Wang; Silva Lopes, Lucas; Wang, Po T; Karimi-Bidhendi, Alireza; Liu, Charles Y; Heydari, Payam; Nenadic, Zoran; Do, An H

    2016-08-01

    Motor rehabilitation using brain-computer interface (BCI) systems may facilitate functional recovery in individuals after stroke or spinal cord injury. Nevertheless, these systems are typically ill-suited for widespread adoption due to their size, cost, and complexity. In this paper, a small, portable, and extremely cost-efficient (<$200) BCI system has been developed using a custom electroencephalographic (EEG) amplifier array, and a commercial microcontroller and touchscreen. The system's performance was tested using a movement-related BCI task in 3 able-bodied subjects with minimal previous BCI experience. Specifically, subjects were instructed to alternate between relaxing and dorsiflexing their right foot, while their EEG was acquired and analyzed in real-time by the BCI system to decode their underlying movement state. The EEG signals acquired by the custom amplifier array were similar to those acquired by a commercial amplifier (maximum correlation coefficient ρ=0.85). During real-time BCI operation, the average correlation between instructional cues and decoded BCI states across all subjects (ρ=0.70) was comparable to that of full-size BCI systems. Small, portable, and inexpensive BCI systems such as the one reported here may promote a widespread adoption of BCI-based movement rehabilitation devices in stroke and spinal cord injury populations.

  20. Computational and experimental study of airflow around a fan powered UVGI lamp

    NASA Astrophysics Data System (ADS)

    Kaligotla, Srikar; Tavakoli, Behtash; Glauser, Mark; Ahmadi, Goodarz

    2011-11-01

    The quality of indoor air environment is very important for improving the health of occupants and reducing personal exposure to hazardous pollutants. An effective way of controlling air quality is by eliminating the airborne bacteria and viruses or by reducing their emissions. Ultraviolet Germicidal Irradiation (UVGI) lamps can effectively reduce these bio-contaminants in an indoor environment, but the efficiency of these systems depends on airflow in and around the device. UVGI lamps would not be as effective in stagnant environments as they would be when the moving air brings the bio-contaminant in their irradiation region. Introducing a fan into the UVGI system would augment the efficiency of the system's kill rate. Airflows in ventilated spaces are quite complex due to the vast range of length and velocity scales. The purpose of this research is to study these complex airflows using CFD techniques and validate the computational model against airflow measurements around the device obtained with Particle Image Velocimetry. The experimental results including mean velocities, length scales and RMS values of fluctuating velocities are used in the CFD validation. Comparisons of these data at different locations around the device with the CFD model predictions were performed, and good agreement was observed.

  1. Computed tomography: a powerful imaging technique in the fields of dimensional metrology and quality control

    NASA Astrophysics Data System (ADS)

    Probst, Gabriel; Boeckmans, Bart; Dewulf, Wim; Kruth, Jean-Pierre

    2016-05-01

    X-ray computed tomography (CT) is slowly conquering its space in the manufacturing industry for dimensional metrology and quality control purposes. The main advantage is its non-invasive and non-destructive character. Currently, CT is the only measurement technique that allows full 3D visualization of both inner and outer features of an object through a contactless probing system. Using hundreds of radiographs, acquired while rotating the object, a 3D representation is generated and dimensions can be verified. In this research, this non-contact technique was used for the inspection of assembled components: a dental cast model with 8 implants, connected by a screw-retained bar made of titanium. The retained bar includes a mating interface connection that should ensure a perfect fit without residual stresses when the connection is fixed with screws. CT was used to inspect the mating interfaces between these two components. Gaps at the connections can lead to bacterial growth and potential inconvenience for the patient, who would have to face a new surgery to replace his/her prosthesis. With the aid of CT, flaws in the design or manufacturing process that could lead to gaps at the connections could be assessed.

  2. Computational Work to Support FAP/SRW Variable-Speed Power-Turbine Development

    NASA Technical Reports Server (NTRS)

    Ameri, Ali A.

    2012-01-01

    The purpose of this report is to document the work done to enable a NASA CFD code to model the transition on a blade. The purpose of the present work is to down-select a transition model that would allow the flow simulation of a Variable-Speed Power-Turbine (VSPT) to be accurately performed. The modeling is to be ultimately performed to also account for the blade row interactions and effect on transition and therefore accurate accounting for losses. The present work is limited to steady flows. The low Reynolds number k-omega model of Wilcox and a modified version of same will be used for modeling of transition on experimentally measured blade pressure and heat transfer. It will be shown that the k-omega model and its modified variant fail to simulate the transition with any degree of accuracy. A case is therefore made for more accurate transition models. Three-equation models based on the work of Mayle on Laminar Kinetic Energy were explored and the Walters and Leylek model which was thought to be in a more mature state of development is introduced and implemented in the Glenn-HT code. Two-dimensional flat plate results and three-dimensional results for flow over turbine blades and the resulting heat transfer and its transitional behavior are reported. It is shown that the transition simulation is much improved over the baseline k-omega model.

  3. Information Power Grid: Distributed High-Performance Computing and Large-Scale Data Management for Science and Engineering

    NASA Technical Reports Server (NTRS)

    Johnston, William E.; Gannon, Dennis; Nitzberg, Bill

    2000-01-01

    We use the term "Grid" to refer to distributed, high performance computing and data handling infrastructure that incorporates geographically and organizationally dispersed, heterogeneous resources that are persistent and supported. This infrastructure includes: (1) Tools for constructing collaborative, application oriented Problem Solving Environments / Frameworks (the primary user interfaces for Grids); (2) Programming environments, tools, and services providing various approaches for building applications that use aggregated computing and storage resources, and federated data sources; (3) Comprehensive and consistent set of location independent tools and services for accessing and managing dynamic collections of widely distributed resources: heterogeneous computing systems, storage systems, real-time data sources and instruments, human collaborators, and communications systems; (4) Operational infrastructure including management tools for distributed systems and distributed resources, user services, accounting and auditing, strong and location independent user authentication and authorization, and overall system security services. The vision for NASA's Information Power Grid - a computing and data Grid - is that it will provide significant new capabilities to scientists and engineers by facilitating routine construction of information based problem solving environments / frameworks. Such Grids will knit together widely distributed computing, data, instrument, and human resources into just-in-time systems that can address complex and large-scale computing and data analysis problems.
Examples of these problems include: (1) Coupled, multidisciplinary simulations too large for single systems (e.g., multi-component NPSS turbomachine simulation); (2) Use of widely distributed, federated data archives (e.g., simultaneous access to meteorological, topological, aircraft performance, and flight path scheduling databases supporting a National Air Space Simulation system); (3

  4. A computational study of the addition of ReO3L (L = Cl(-), CH3, OCH3 and Cp) to ethenone.

    PubMed

    Aniagyei, Albert; Tia, Richard; Adei, Evans

    2016-01-01

    The periselectivity and chemoselectivity of the addition of transition metal oxides of the type ReO3L (L = Cl(-), CH3, OCH3 and Cp) to ethenone have been explored at the M06 and B3LYP/LACVP* levels of theory. The activation barriers and reaction energies for the stepwise and concerted addition pathways involving multiple spin states have been computed. In the reaction of ReO3L (L = Cl(-), OCH3, CH3 and Cp) with ethenone, the concerted [2 + 2] addition of the metal oxide across the C=C and C=O double bonds to form either metalla-2-oxetane-3-one or metalla-2,4-dioxolane is kinetically favored over the formation of metalla-2,5-dioxolane-3-one from the direct [3 + 2] addition pathway. The trends in activation barriers for the formation of metalla-2-oxetane-3-one and metalla-2,4-dioxolane are Cp < Cl(-) < OCH3 < CH3 and Cp < OCH3 < CH3 < Cl(-), respectively, while the corresponding trends in reaction energies are Cp < OCH3 < Cl(-) < CH3 and Cp < CH3 < OCH3 < Cl(-). The concerted [3 + 2] addition of the metal oxide across the C=C double bond of ethenone to form metalla-2,5-dioxolane-3-one is thermodynamically the most favored for the ligand L = Cp. The direct [2 + 2] addition pathways leading to the formation of metalla-2-oxetane-3-one and metalla-2,4-dioxolane are thermodynamically the most favored for the ligands L = OCH3 and Cl(-). The differences between the calculated [2 + 2] activation barriers for the addition of the metal oxide LReO3 across the C=C and C=O functionalities of ethenone are small, except for L = Cl(-) and OCH3. The rearrangements of metalla-2-oxetane-3-one to metalla-2,5-dioxolane-3-one, even though feasible, are unfavorable due to the high activation energies of their rate-determining steps. For the rearrangement of metalla-2-oxetane-3-one to metalla-2,5-dioxolane-3-one, the trend in activation barriers is found to follow the order OCH3 < Cl(-) < CH3 < Cp. The trends in the activation energies for

  5. Power Series Approximation for the Correlation Kernel Leading to Kohn-Sham Methods Combining Accuracy, Computational Efficiency, and General Applicability

    NASA Astrophysics Data System (ADS)

    Erhard, Jannis; Bleiziffer, Patrick; Görling, Andreas

    2016-09-01

    A power series approximation for the correlation kernel of time-dependent density-functional theory is presented. Using this approximation in the adiabatic-connection fluctuation-dissipation (ACFD) theorem leads to a new family of Kohn-Sham methods. The new methods yield reaction energies and barriers of unprecedented accuracy and enable a treatment of static (strong) correlation with an accuracy of high-level multireference configuration interaction methods but are single-reference methods allowing for a black-box-like handling of static correlation. The new methods exhibit a better scaling of the computational effort with the system size than rivaling wave-function-based electronic structure methods. Moreover, the new methods do not suffer from the problem of singularities in response functions plaguing previous ACFD methods and therefore are applicable to any type of electronic system.

  6. Determination of Zinc-Based Additives in Lubricating Oils by Flow-Injection Analysis with Flame-AAS Detection Exploiting Injection with a Computer-Controlled Syringe.

    PubMed

    Pignalosa, Gustavo; Knochen, Moisés; Cabrera, Noel

    2005-01-01

    A flow-injection system is proposed for the determination of metal-based additives in lubricating oils. The system, operating under computer control, uses a motorised syringe for measuring and injecting the oil sample (200 μL) in a kerosene stream, where it is dispersed by means of a packed mixing reactor and carried to an atomic absorption spectrometer which is used as detector. Zinc was used as model analyte. Two different systems were evaluated, one for low concentrations (range 0-10 ppm) and the second capable of providing higher dilution rates for high concentrations (range 0.02%-0.2% w/w). The sampling frequency was about 30 samples/h. Calibration curves fitted a second-degree regression model (r(2) = 0.996). Commercial samples with high and low zinc levels were analysed by the proposed method and the results were compared with those obtained with the standard ASTM method. The t test for mean values showed no significant differences at the 95% confidence level. Precision (RSD%) was better than 5% (2% typical) for the high concentrations system. The carryover between successive injections was found to be negligible.
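    The abstract notes that the calibration curves fitted a second-degree regression model. As an illustration of that step, here is a minimal pure-Python sketch of a quadratic least-squares calibration fit via the normal equations; the zinc standard concentrations and absorbance readings below are hypothetical values for illustration, not data from the paper.

```python
def quadratic_fit(x, y):
    """Least-squares fit of y = a + b*x + c*x**2 via the 3x3 normal equations."""
    S = [sum(xi ** k for xi in x) for k in range(5)]          # S[k] = sum of x^k
    T = [sum((xi ** k) * yi for xi, yi in zip(x, y)) for k in range(3)]
    A = [[S[i + j] for j in range(3)] for i in range(3)]
    # Gaussian elimination with partial pivoting on A * coeffs = T.
    for col in range(3):
        p = max(range(col, 3), key=lambda r: abs(A[r][col]))
        A[col], A[p] = A[p], A[col]
        T[col], T[p] = T[p], T[col]
        for r in range(col + 1, 3):
            f = A[r][col] / A[col][col]
            for c in range(col, 3):
                A[r][c] -= f * A[col][c]
            T[r] -= f * T[col]
    coeffs = [0.0, 0.0, 0.0]
    for r in (2, 1, 0):                                       # back substitution
        coeffs[r] = (T[r] - sum(A[r][c] * coeffs[c] for c in range(r + 1, 3))) / A[r][r]
    return coeffs                                             # [a, b, c]

# Hypothetical zinc calibration points (ppm vs. absorbance), illustration only.
std_ppm = [0.0, 2.0, 4.0, 6.0, 8.0, 10.0]
absorbance = [0.001, 0.102, 0.198, 0.288, 0.372, 0.450]
a, b, c = quadratic_fit(std_ppm, absorbance)
```

    In practice one would invert the fitted curve to read sample concentrations from measured absorbance.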

  7. Determination of Zinc-Based Additives in Lubricating Oils by Flow-Injection Analysis with Flame-AAS Detection Exploiting Injection with a Computer-Controlled Syringe

    PubMed Central

    Pignalosa, Gustavo; Cabrera, Noel

    2005-01-01

    A flow-injection system is proposed for the determination of metal-based additives in lubricating oils. The system, operating under computer control, uses a motorised syringe for measuring and injecting the oil sample (200 μL) in a kerosene stream, where it is dispersed by means of a packed mixing reactor and carried to an atomic absorption spectrometer which is used as detector. Zinc was used as model analyte. Two different systems were evaluated, one for low concentrations (range 0–10 ppm) and the second capable of providing higher dilution rates for high concentrations (range 0.02%–0.2% w/w). The sampling frequency was about 30 samples/h. Calibration curves fitted a second-degree regression model (r2 = 0.996). Commercial samples with high and low zinc levels were analysed by the proposed method and the results were compared with those obtained with the standard ASTM method. The t test for mean values showed no significant differences at the 95% confidence level. Precision (RSD%) was better than 5% (2% typical) for the high concentrations system. The carryover between successive injections was found to be negligible. PMID:18924720

  8. Application of computational chemistry methods to the prediction of chirality and helical twisting power in liquid crystal systems

    NASA Astrophysics Data System (ADS)

    Noto, Anthony G.; Marshall, Kenneth L.

    2005-08-01

    Until recently, it has not been possible to determine, with any real certainty, a complete picture of "chirality" (absolute configuration, optical rotation direction, and helical twisting power) for new chiral compounds without first synthesizing, purifying, characterizing, and testing every new material. Recent advances in computational chemistry now allow the prediction of certain key chiral molecular properties prior to synthesis, which opens the possibility of predetermining the "chiroptical" properties of new liquid crystal dopants and mixtures for advanced optical and photonics applications. A key element to this activity was the development of both the chirality index (G0) by Osipov et al., and the scaled chirality index (G0S) by Solymosi et al., that can be used as a "figure of merit" for molecular chirality. Promising correlations between G0S and both circular dichroism (CD) and the helical twisting power (HTP) of a chiral dopant in a liquid crystal host have been shown by Neal et al., Osipov, and Kuball. Our work improves the predictive capabilities of G0S by taking into account the actual mass of each atom in the molecule in the calculations; in previous studies the mass of each atom was assumed to be equal. This "weighted" scaled chirality index (G0SW) was calculated and correlated to existing experimental HTP data for each member of a series of existing, well-known chiral compounds. The computed HTP using G0SW for these model systems correlated to the experimental data with remarkable accuracy. Weighted, scaled chiral indices were also calculated for the first time for a series of novel chiral transition metal dithiolene dyes for near-IR liquid crystal device applications.

  9. GPU computing for systems biology.

    PubMed

    Dematté, Lorenzo; Prandi, Davide

    2010-05-01

    The development of detailed, coherent models of complex biological systems is recognized as a key requirement for integrating the increasing amount of experimental data. In addition, in-silico simulation of biochemical models provides an easy way to test different experimental conditions, helping in the discovery of the dynamics that regulate biological systems. However, the computational power required by these simulations often exceeds that available on common desktop computers, and thus expensive high-performance computing solutions are required. An emerging alternative is represented by general-purpose scientific computing on graphics processing units (GPGPU), which offers the power of a small computer cluster at a cost of approximately $400. Computing with a GPU requires the development of specific algorithms, since the programming paradigm substantially differs from traditional CPU-based computing. In this paper, we review some recent efforts in exploiting the processing power of GPUs for the simulation of biological systems.
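    As a point of reference for the kind of biochemical simulation the review discusses, the sketch below implements Gillespie's stochastic simulation algorithm for a single degradation reaction in plain Python. This is a minimal CPU illustration only; the review's subject is porting such kernels to GPUs, and the reaction, rate constant, and function name here are illustrative assumptions, not examples from the paper.

```python
import random

def gillespie_decay(n0, k, t_end, seed=0):
    """Gillespie SSA for a single degradation reaction X -> 0 with rate constant k.
    Returns the trajectory as a list of (time, copy_number) pairs."""
    rng = random.Random(seed)
    t, n = 0.0, n0
    traj = [(t, n)]
    while n > 0:
        a = k * n                   # total propensity of the only reaction
        t += rng.expovariate(a)     # waiting time to the next event ~ Exp(a)
        if t > t_end:
            break
        n -= 1                      # fire the reaction: one molecule degrades
        traj.append((t, n))
    return traj

traj = gillespie_decay(n0=100, k=0.5, t_end=10.0)
```

    GPU implementations typically run thousands of such independent trajectories in parallel, one per thread, which is where the cluster-like speedups cited in the review come from.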

  10. PDF computations for power-in-the-bucket measurements of an IR laser beam propagating in the maritime environment

    NASA Astrophysics Data System (ADS)

    Nelson, C.; Avramov-Zamurovic, S.; Malek-Madani, R.; Korotkova, O.; Sova, R.; Davidson, F.

    2011-06-01

    During two separate field tests (July and September 2009) the performance of a free-space optical (FSO) communications link was evaluated in the maritime environment off of the mid-Atlantic coast near Wallops Island, VA. During these two field tests, a bi-directional shore-to-ship data link was established using commercially available adaptive optics terminals. The link, which ranged from 2 - 22 km (optical horizon), was established between a lookout tower located on Cedar Island, VA and a Johns Hopkins University Applied Physics Laboratory research vessel. This paper presents statistical analysis of the power-in-the-bucket captured from two detectors placed alongside the adaptive optics terminal during the September 2009 field trial. The detectors ranged in size from 0.25" to 1.0" in diameter. We will present the histogram reconstruction and compare the data for the 0.25" and 1.0" power-in-bucket (PIB), and 1.0" power-in-fiber (PIF) Adaptive Optics (AO) detectors with analytical probability density function (PDF) models based on the Lognormal, Gamma-Laguerre, and Gamma-Gamma distributions. Additionally, dependence of the results on propagation distance, detector aperture size, and varying levels of optical turbulence are investigated.
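    One of the analytical models compared above is the Lognormal distribution. Below is a minimal sketch of a unit-mean lognormal irradiance PDF, parameterized by the scintillation index; this is the standard weak-turbulence textbook form, not necessarily the exact parameterization used in the paper.

```python
import math

def lognormal_pdf(I, si):
    """Unit-mean lognormal irradiance PDF, parameterized by the scintillation
    index si = <I**2>/<I>**2 - 1 (weak-turbulence convention)."""
    s2 = math.log(1.0 + si)   # log-intensity variance implied by si
    mu = -0.5 * s2            # choice of mu that enforces <I> = 1
    return (math.exp(-((math.log(I) - mu) ** 2) / (2.0 * s2))
            / (I * math.sqrt(2.0 * math.pi * s2)))

# Evaluate at the mean irradiance for an assumed scintillation index of 0.2.
p = lognormal_pdf(1.0, 0.2)
```

    Histogram comparisons of the kind described in the paper evaluate this density on the measured, mean-normalized irradiance samples.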

  11. QXP: powerful, rapid computer algorithms for structure-based drug design.

    PubMed

    McMartin, C; Bohacek, R S

    1997-07-01

    New methods for docking, template fitting and building pseudo-receptors are described. Full conformational searches are carried out for flexible cyclic and acyclic molecules. QXP (quick explore) search algorithms are derived from the method of Monte Carlo perturbation with energy minimization in Cartesian space. An additional fast search step is introduced between the initial perturbation and energy minimization. The fast search produces approximate low-energy structures, which are likely to minimize to a low energy. For template fitting, QXP uses a superposition force field which automatically assigns short-range attractive forces to similar atoms in different molecules. The docking algorithms were evaluated using X-ray data for 12 protein-ligand complexes. The ligands had up to 24 rotatable bonds and ranged from highly polar to mostly nonpolar. Docking searches of the randomly disordered ligands gave rms differences between the lowest energy docked structure and the energy-minimized X-ray structure of less than 0.76 Å for 10 of the ligands. For all the ligands, the rms difference between the energy-minimized X-ray structure and the closest docked structure was less than 0.4 Å, when parts of one of the molecules which are in the solvent were excluded from the rms calculation. Template fitting was tested using four ACE inhibitors. Three ACE templates have been previously published. A single run using QXP generated a series of templates which contained examples of each of the three. A pseudo-receptor, complementary to an ACE template, was built out of small molecules, such as pyrrole, cyclopentanone and propane. When individually energy minimized in the pseudo-receptor, each of the four ACE inhibitors moved with an rms of less than 0.25 Å. After random perturbation, the inhibitors were docked into the pseudo-receptor. Each lowest energy docked structure matched the energy-minimized geometry with an rms of less than 0.08 Å. 
Thus, the pseudo-receptor shows steric and
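    The core QXP idea, Monte Carlo perturbation followed by energy minimization, can be sketched on a toy one-dimensional energy surface. Everything below (the surface, the finite-difference gradient-descent minimizer, the kick size) is an illustrative stand-in for QXP's Cartesian-space force-field machinery, not the actual algorithm or its parameters.

```python
import random

def energy(x):
    """Toy 1-D energy surface with two minima (stand-in for a force field)."""
    return (x ** 2 - 4.0) ** 2 + x

def minimize(x, step=1e-3, iters=2000):
    """Crude local relaxation: gradient descent with a finite-difference gradient."""
    for _ in range(iters):
        g = (energy(x + 1e-6) - energy(x - 1e-6)) / 2e-6
        x -= step * g
    return x

def mc_perturb_minimize(x0, n_cycles=30, kick=3.0, seed=1):
    """Monte Carlo perturbation + minimization: kick the current best structure,
    relax it to the nearest local minimum, and keep it if the energy improved."""
    rng = random.Random(seed)
    best_x = minimize(x0)
    best_e = energy(best_x)
    for _ in range(n_cycles):
        x = minimize(best_x + rng.uniform(-kick, kick))
        e = energy(x)
        if e < best_e:
            best_x, best_e = x, e
    return best_x, best_e

x_min, e_min = mc_perturb_minimize(3.0)
```

    QXP's contribution, per the abstract, is inserting a fast approximate search between the perturbation and the (expensive) minimization so that only promising structures are fully relaxed.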

  12. Computational chemistry

    NASA Technical Reports Server (NTRS)

    Arnold, J. O.

    1987-01-01

    With the advent of supercomputers and modern computational chemistry algorithms and codes, a powerful tool was created to help fill NASA's continuing need for information on the properties of matter in hostile or unusual environments. Computational resources provided under the National Aerodynamics Simulator (NAS) program were a cornerstone for recent advancements in this field. Properties of gases, materials, and their interactions can be determined from solutions of the governing equations. In the case of gases, for example, radiative transition probabilities per particle, bond-dissociation energies, and rates of simple chemical reactions can be determined computationally as reliably as from experiment. The data are proving to be quite valuable in providing inputs to real-gas flow simulation codes used to compute aerothermodynamic loads on NASA's aeroassist orbital transfer vehicles and a host of problems related to the National Aerospace Plane Program. Although more approximate, similar solutions can be obtained for ensembles of atoms simulating small particles of materials, with and without the presence of gases. Computational chemistry has applications in studying catalysis and the properties of polymers, both of interest to various NASA missions, including those previously mentioned. In addition to discussing these applications of computational chemistry within NASA, the governing equations and the need for supercomputers for their solution are outlined.

  13. Light Water Reactor Sustainability Program: Computer-Based Procedures for Field Activities: Results from Three Evaluations at Nuclear Power Plants

    SciTech Connect

    Oxstrand, Johanna; Le Blanc, Katya; Bly, Aaron

    2014-09-01

    The Computer-Based Procedure (CBP) research effort is a part of the Light-Water Reactor Sustainability (LWRS) Program, a research and development (R&D) program sponsored by the Department of Energy (DOE) and performed in close collaboration with industry R&D programs; it provides the technical foundations for licensing and managing the long-term, safe, and economical operation of current nuclear power plants. One of the primary missions of the LWRS program is to help the U.S. nuclear industry adopt new technologies and engineering solutions that facilitate the continued safe operation of the plants and extension of the current operating licenses. One area that could yield tremendous savings in increased efficiency and safety is improving procedure use. Nearly all activities in the nuclear power industry are guided by procedures, which today are printed and executed on paper. This paper-based procedure process has proven to ensure safety; however, there are improvements to be gained. Due to its inherent dynamic nature, a CBP provides the opportunity to incorporate context-driven job aids, such as drawings, photos, and just-in-time training. Compared to the static state of paper-based procedures (PBPs), the presentation of information in CBPs can be much more flexible and tailored to the task, actual plant condition, and operation mode. The dynamic presentation of the procedure will guide the user down the path of relevant steps, thus minimizing the time spent by the field worker evaluating plant conditions and decisions related to the applicability of each step. This dynamic presentation also minimizes the risk of conducting steps out of order and/or incorrectly assessing the applicability of steps.

  14. A Computational Fluid Dynamic and Heat Transfer Model for Gaseous Core and Gas Cooled Space Power and Propulsion Reactors

    NASA Technical Reports Server (NTRS)

    Anghaie, S.; Chen, G.

    1996-01-01

    A computational model based on the axisymmetric, thin-layer Navier-Stokes equations is developed to predict the convective, radiative and conductive heat transfer in high temperature space nuclear reactors. An implicit-explicit, finite volume, MacCormack method in conjunction with the Gauss-Seidel line iteration procedure is utilized to solve the thermal and fluid governing equations. Simulation of coolant and propellant flows in these reactors involves the subsonic and supersonic flows of hydrogen, helium and uranium tetrafluoride under variable boundary conditions. An enthalpy-rebalancing scheme is developed and implemented to enhance and accelerate the rate of convergence when a wall heat flux boundary condition is used. The model also incorporates the Baldwin-Lomax two-layer algebraic turbulence scheme for the calculation of the turbulent kinetic energy and eddy diffusivity of energy. The Rosseland diffusion approximation is used to simulate the radiative energy transfer in the optically thick environment of gas core reactors. The computational model is benchmarked against experimental data on flow separation angle and drag force acting on a suspended sphere in a cylindrical tube. The heat transfer is validated by comparing the computed results with predictions of the standard heat transfer correlations. The model is used to simulate flow and heat transfer under a variety of design conditions. The effect of internal heat generation on the heat transfer in the gas core reactors is examined for a variety of power densities: 100 W/cc, 500 W/cc and 1000 W/cc. The maximum temperatures corresponding to these heat generation rates are 2150 K, 2750 K and 3550 K, respectively. This analysis shows that the maximum temperature is strongly dependent on the value of the heat generation rate. 
It also indicates that a heat generation rate higher than 1000 W/cc is necessary to maintain the gas temperature at about 3500 K, which is the typical design temperature required to achieve high
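    The Rosseland diffusion approximation mentioned above treats radiation in an optically thick medium as a diffusive flux. Its standard textbook form (which may differ in detail from the implementation described in this record) is

```latex
\mathbf{q}_r \;=\; -\,\frac{16\,\sigma\,T^{3}}{3\,\kappa_R}\,\nabla T ,
```

    where $\sigma$ is the Stefan-Boltzmann constant, $T$ the local temperature, and $\kappa_R$ the Rosseland mean absorption coefficient. The radiative flux thus enters the energy equation like an additional, strongly temperature-dependent conductivity $k_r = 16\sigma T^{3}/(3\kappa_R)$.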

  15. Durability patch and damage dosimeter: a portable battery-powered data acquisition computer and durability patch design process

    NASA Astrophysics Data System (ADS)

    Haugse, Eric D.; Johnson, Patrick E.; Smith, David L.; Rogers, Lynn C.

    2000-05-01

    Repairs of secondary structure can be accomplished by restoring structural integrity at the damaged area and increasing the structure's damping in the repair region. Increased damping leads to a reduction in resonant response and a repair that will survive for the life of the aircraft. In order to design a repair with effective damping properties, the in-service structural strains and temperatures must be known. A rugged, small and lightweight data acquisition unit called the Damage Dosimeter has been developed to accomplish this task with minimal impact to the aircraft system. Running autonomously off of battery power, the Damage Dosimeter measures three channels of strain at sample rates as high as 15 kilo-samples per second and a single channel of temperature. It merges the functionality of both analog signal conditioning and a digital single board computer on one 3.5 by 5 inch card. The Damage Dosimeter allows an engineer to easily instrument an in-service aircraft to assess the structural response characteristics necessary to properly select damping materials. This information in conjunction with analysis and design procedures can be used to design a repair with optimum effectiveness. This paper will present the motivation behind the development of the Damage Dosimeter along with an overview of its functional capabilities and design. In-service flight data and analysis results will be discussed for two applications. The paper will also describe how the Damage Dosimeter is used to enable the Durability Patch design process.

  16. Isothiourea-catalysed enantioselective pyrrolizine synthesis: synthetic and computational studies (ESI available: NMR spectra, HPLC analysis and computational co-ordinates; CCDC 1483759; see DOI: 10.1039/c6ob01557c)

    PubMed Central

    Stark, Daniel G.; Williamson, Patrick; Gayner, Emma R.; Musolino, Stefania F.; Kerr, Ryan W. F.; Taylor, James E.; Slawin, Alexandra M. Z.; O'Riordan, Timothy J. C.

    2016-01-01

    The catalytic enantioselective synthesis of a range of cis-pyrrolizine carboxylate derivatives with outstanding stereocontrol (14 examples, >95 : 5 dr, >98 : 2 er) through an isothiourea-catalyzed intramolecular Michael addition-lactonisation and ring-opening approach from the corresponding enone acid is reported. An optimised and straightforward three-step synthetic route to the enone acid starting materials from readily available pyrrole-2-carboxaldehydes is delineated, with benzotetramisole (5 mol%) proving the optimal catalyst for the enantioselective process. Ring-opening of the pyrrolizine dihydropyranone products with either MeOH or a range of amines leads to the desired products in excellent yield and enantioselectivity. Computation has been used to probe the factors leading to high stereocontrol, with the formation of the observed cis-stereoisomer predicted to be kinetically and thermodynamically favoured. PMID:27489030

  17. Computational mechanics

    SciTech Connect

    Goudreau, G.L.

    1993-03-01

    The Computational Mechanics thrust area sponsors research into the underlying solid, structural and fluid mechanics and heat transfer necessary for the development of state-of-the-art general purpose computational software. The scale of computational capability spans office workstations, departmental computer servers, and Cray-class supercomputers. The DYNA, NIKE, and TOPAZ codes have achieved world fame through our broad collaborators program, in addition to their strong support of on-going Lawrence Livermore National Laboratory (LLNL) programs. Several technology transfer initiatives have been based on these established codes, teaming LLNL analysts and researchers with counterparts in industry, extending code capability to specific industrial interests of casting, metalforming, and automobile crash dynamics. The next-generation solid/structural mechanics code, ParaDyn, is targeted toward massively parallel computers, which will extend performance from gigaflop to teraflop power. Our work for FY-92 is described in the following eight articles: (1) Solution Strategies: New Approaches for Strongly Nonlinear Quasistatic Problems Using DYNA3D; (2) Enhanced Enforcement of Mechanical Contact: The Method of Augmented Lagrangians; (3) ParaDyn: New Generation Solid/Structural Mechanics Codes for Massively Parallel Processors; (4) Composite Damage Modeling; (5) HYDRA: A Parallel/Vector Flow Solver for Three-Dimensional, Transient, Incompressible Viscous Flow; (6) Development and Testing of the TRIM3D Radiation Heat Transfer Code; (7) A Methodology for Calculating the Seismic Response of Critical Structures; and (8) Reinforced Concrete Damage Modeling.

  18. An Evaluation of the Additional Acoustic Power Needed to Overcome the Effects of a Test-Article's Absorption During Reverberant Chamber Acoustic Testing of Spaceflight Hardware

    NASA Technical Reports Server (NTRS)

    Hozman, Aron D.; Hughes, William O.

    2014-01-01

    It is important to realize that some test-articles may have significant sound absorption that may challenge the acoustic power capabilities of a test facility. Therefore, to mitigate the risk of not being able to meet the customer's target spectrum, it is prudent to demonstrate early on an increased acoustic power capability which compensates for this test-article absorption. This paper describes a concise method to reduce this risk when testing aerospace test-articles which have significant absorption. This method was successfully applied during the SpaceX Falcon 9 Payload Fairing acoustic test program at the NASA Glenn Research Center Plum Brook Station's RATF.

  19. Computer model for electrochemical cell performance loss over time in terms of capacity, power, and conductance (CPC)

    SciTech Connect

    Gering, Kevin L.

    2015-09-01

    Available capacity, power, and cell conductance figure centrally into performance characterization of electrochemical cells (such as Li-ion cells) over their service life. For example, capacity loss in Li-ion cells is due to a combination of mechanisms, including loss of free available lithium, loss of active host sites, shifts in the potential-capacity curve, etc. Further distinctions can be made regarding irreversible and reversible capacity loss mechanisms. There are tandem needs for accurate interpretation of capacity at characterization conditions (cycling rate, temperature, etc.) and for robust self-consistent modeling techniques that can be used for diagnostic analysis of cell data as well as forecasting of future performance. Analogous issues exist for aging effects on cell conductance and available power. To address these needs, a modeling capability was developed that provides a systematic analysis of the contributing factors to battery performance loss over aging and to act as a regression/prediction platform for cell performance. The modeling basis is a summation of self-consistent chemical kinetics rate expressions, which as individual expressions each covers a distinct mechanism (e.g., loss of active host sites, lithium loss), but collectively account for the net loss of premier metrics (e.g., capacity) over time for a particular characterization condition. Specifically, sigmoid-based rate expressions are utilized to describe each contribution to performance loss. Through additional mathematical development another tier of expressions is derived and used to perform differential analyses and segregate irreversible versus reversible contributions, as well as to determine concentration profiles over cell aging for affected Li+ ion inventory and fraction of active sites that remain at each time step. Reversible fade components are surmised by comparing fade rates at fast versus slow cycling conditions. The model is easily utilized for predictive
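    The summation of sigmoid-based rate expressions described above can be illustrated with a small sketch. The functional form below (a saturating sigmoid per mechanism, 2M[1/2 - 1/(1 + exp(a t^b))]) and all parameter values are illustrative assumptions for demonstration, not the model's actual calibrated expressions.

```python
import math

def sigmoid_fade(t, M, a, b):
    """One fade mechanism: fraction of capacity lost by time t.
    M = asymptotic extent of loss, a = rate, b = time exponent (illustrative)."""
    return 2.0 * M * (0.5 - 1.0 / (1.0 + math.exp(a * t ** b)))

def capacity(t, q0, mechanisms):
    """Remaining capacity (Ah): initial capacity less the summed sigmoid losses."""
    return q0 * (1.0 - sum(sigmoid_fade(t, *m) for m in mechanisms))

# Two hypothetical mechanisms, e.g. lithium-inventory loss and active-site loss:
# (extent limit M, rate a, exponent b) -- values chosen for illustration only.
mechanisms = [(0.10, 0.05, 1.0), (0.05, 0.01, 1.2)]
fade_curve = [capacity(week, 2.0, mechanisms) for week in range(0, 101, 10)]
```

    Each sigmoid starts at zero loss and saturates at its extent limit M, so the summed expression captures mechanisms that dominate at different stages of aging, which is the self-consistent structure the abstract describes.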

  20. Food additives

    PubMed Central

    Spencer, Michael

    1974-01-01

    Food additives are discussed from the food technology point of view. The reasons for their use are summarized: (1) to protect food from chemical and microbiological attack; (2) to even out seasonal supplies; (3) to improve their eating quality; (4) to improve their nutritional value. The various types of food additives are considered, e.g. colours, flavours, emulsifiers, bread and flour additives, preservatives, and nutritional additives. The paper concludes with consideration of those circumstances in which the use of additives is (a) justified and (b) unjustified. PMID:4467857

  1. A Hierarchical Examination of the Immigrant Achievement Gap: The Additional Explanatory Power of Nationality and Educational Selectivity over Traditional Explorations of Race and Socioeconomic Status

    ERIC Educational Resources Information Center

    Simms, Kathryn

    2012-01-01

    This study compared immigrant and nonimmigrant educational achievement (i.e., the immigrant gap) in math by reexamining the explanatory power of race and socioeconomic status (SES)--two variables, perhaps, most commonly considered in educational research. Four research questions were explored through growth curve modeling, factor analysis, and…

  2. Product development: using a 3D computer model to optimize the stability of the Rocket powered wheelchair.

    PubMed

    Pinkney, S; Fernie, G

    2001-01-01

    A three-dimensional (3D) lumped-parameter model of a powered wheelchair was created to aid the development of the Rocket prototype wheelchair and to help explore the effect of innovative design features on its stability. The model was developed using simulation software, specifically Working Model 3D. The accuracy of the model was determined by comparing both its static stability angles and dynamic behavior as it passed down a 4.8-cm (1.9") road curb at a heading of 45 degrees with the performance of the actual wheelchair. The model's predictions of the static stability angles in the forward, rearward, and lateral directions were within 9.3, 7.1, and 3.8% of the measured values, respectively. The average absolute error in the predicted position of the wheelchair as it moved down the curb was 2.2 cm/m (0.9" per 3'3") traveled. The accuracy was limited by the inability to model soft bodies, the inherent difficulties in modeling a statically indeterminate system, and the computing time. Nevertheless, it was found to be useful in investigating the effect of eight design alterations on the lateral stability of the wheelchair. Stability was quantified by determining the static lateral stability angles and the maximum height of a road curb over which the wheelchair could successfully drive on a diagonal heading. The model predicted that the stability was more dependent on the configuration of the suspension system than on the dimensions and weight distribution of the wheelchair. Furthermore, for the situations and design alterations studied, predicted improvements in static stability were not correlated with improvements in dynamic stability.
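    For context on the static stability angles reported above: for a rigid vehicle, the static tip angle follows from the geometry of the center of gravity relative to the tipping axis. The sketch below computes that textbook angle; the wheelchair dimensions are hypothetical, and this rigid-body formula ignores the suspension compliance that the paper's model found dominant.

```python
import math

def static_tip_angle_deg(half_track_m, cg_height_m):
    """Static lateral stability angle of a rigid vehicle: the platform tilt at
    which the center of gravity passes vertically over the tipping axis (the
    line through the downhill wheel contacts)."""
    return math.degrees(math.atan2(half_track_m, cg_height_m))

# Hypothetical powered-wheelchair geometry, for illustration only.
angle = static_tip_angle_deg(half_track_m=0.28, cg_height_m=0.55)
```

    Widening the track or lowering the center of gravity increases the angle, which is why the dynamic result above (suspension configuration mattering more than weight distribution) is notable.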

  3. An Evaluation of the Additional Acoustic Power Needed to Overcome the Effects of a Test-Article's Absorption during Reverberant Chamber Acoustic Testing of Spaceflight Hardware

    NASA Technical Reports Server (NTRS)

    Hozman, Aron D.; Hughes, William O.

    2014-01-01

    The exposure of a customer's aerospace test-article to a simulated acoustic launch environment is typically performed in a reverberant acoustic test chamber. The acoustic pre-test runs that will ensure that the sound pressure levels of this environment can indeed be met by a test facility are normally performed without a test-article dynamic simulator of representative acoustic absorption and size. If an acoustic test facility's available acoustic power capability becomes maximized with the test-article installed during the actual test then the customer's environment requirement may become compromised. In order to understand the risk of not achieving the customer's in-tolerance spectrum requirement with the test-article installed, an acoustic power margin evaluation as a function of frequency may be performed by the test facility. The method for this evaluation of acoustic power will be discussed in this paper. This method was recently applied at the NASA Glenn Research Center Plum Brook Station's Reverberant Acoustic Test Facility for the SpaceX Falcon 9 Payload Fairing acoustic test program.

  4. An Evaluation of the Additional Acoustic Power Needed to Overcome the Effects of a Test-Article's Absorption During Reverberant Chamber Acoustic Testing of Spaceflight Hardware

    NASA Technical Reports Server (NTRS)

    Hozman, Aron D.; Hughes, William O.

    2014-01-01

    The exposure of a customer's aerospace test-article to a simulated acoustic launch environment is typically performed in a reverberant acoustic test chamber. The acoustic pre-test runs that will ensure that the sound pressure levels of this environment can indeed be met by a test facility are normally performed without a test-article dynamic simulator of representative acoustic absorption and size. If an acoustic test facility's available acoustic power capability becomes maximized with the test-article installed during the actual test then the customer's environment requirement may become compromised. In order to understand the risk of not achieving the customer's in-tolerance spectrum requirement with the test-article installed, an acoustic power margin evaluation as a function of frequency may be performed by the test facility. The method for this evaluation of acoustic power will be discussed in this paper. This method was recently applied at the NASA Glenn Research Center Plum Brook Station's Reverberant Acoustic Test Facility for the SpaceX Falcon 9 Payload Fairing acoustic test program.

  5. Computational studies on the interactions among redox couples, additives and TiO2: implications for dye-sensitized solar cells.

    PubMed

    Asaduzzaman, Abu Md; Schreckenbach, Georg

    2010-11-21

    One of the major and unique components of dye-sensitized solar cells (DSSC) is the iodide/triiodide redox couple. Periodic density-functional calculations have been carried out to study the interactions among three different components of the DSSC, i.e. the redox shuttle, the TiO(2) semiconductor surface, and nitrogen-containing additives, with a focus on the implications for the performance of the DSSC. Iodide and bromide with alkali metal cations as counter ions are strongly adsorbed on the TiO(2) surface. Small additive molecules also interact strongly with TiO(2). Both interactions induce a negative shift of the Fermi energy of TiO(2). The negative shift of the Fermi energy is related to the performance of the cell by increasing the open-circuit voltage of the cell and retarding the injection dynamics (decreasing the short-circuit current). Additive molecules, however, have a relatively weaker interaction with iodide and triiodide.

  6. Additional considerations and recommendations for the quantification of hand-grip strength in the measurement of leg power during high-intensity cycle ergometry.

    PubMed

    Baker, Julien Steven; Davies, Bruce

    2009-01-01

    The purpose of this study was to further examine the influence of hand-grip strength on power profiles and blood lactate values during high-intensity cycle ergometry. Fifteen male subjects each completed a 20-second cycle ergometer test twice, in a random manner, using two protocols: with a hand grip (WG) and without a hand grip (WOHG). Hand-grip strength was quantified prior to exercise using a hand-grip dynamometer. Capillary (earlobe) blood was collected at rest, immediately following exercise, and 5 minutes postexercise. In the WG protocol, mean (±SD) blood lactate concentrations were 1.11 ± 0.7 mmol/l, 3.68 ± 1.2 mmol/l, and 8.14 ± 1.3 mmol/l, respectively. During the WOHG protocol, the blood lactate values recorded were 0.99 ± 0.9 mmol/l, 3.68 ± 1.1 mmol/l, and 6.62 ± 0.9 mmol/l, respectively. Differences in lactate concentrations were found (P < 0.05) from rest to 5 minutes postexercise for both groups. Differences in concentrations also were observed between groups at the 5-minutes-postexercise stage. Peak power output and fatigue index values also were greater using the WG protocol (792 ± 73 W vs. 624 ± 66 W; 38 ± 6 W vs. 24 ± 8 W, respectively; P < 0.05). No differences were recorded for mean power output (MPO) or work done (WD) between experimental conditions. These findings suggest that the performance of traditional-style leg cycle ergometry is influenced by a muscular contribution from the upper body and by upper body strength.

  7. Effect of organic additives on the mitigation of volatility of 1,3,3-trinitroazetidine (TNAZ): next generation powerful melt-castable high energy material.

    PubMed

    Talawar, M B; Singh, Alok; Naik, N H; Polke, B G; Gore, G M; Asthana, S N; Gandhe, B R

    2006-06-30

    1,3,3-Trinitroazetidine (TNAZ) was synthesized along the lines of a reported method. Thermolysis studies on the synthesized and characterized TNAZ using differential scanning calorimetry (DSC) and hyphenated TG-FT-IR techniques were undertaken to generate data on its decomposition pattern. FT-IR of the decomposition products of TNAZ revealed the evolution of oxides of nitrogen and HCN-containing species, suggesting the cleavage of the C/N-NO(2) bond accompanied by the collapse of the ring structure. The effect of incorporating 15% additives, namely 3-amino-1,2,4-triazole (AT), 3,5-diamino-1,2,4-triazole (DAT), carbohydrazide (CHZ), 5,7-dinitrobenzofuroxan (DNBF), bis(2,2-dinitropropyl) succinate (BNPS), triaminoguanidinium nitrate (TAGN), 2,4,6-trinitrobenzoic acid (TNBA) and nitroguanidine (NQ), on the volatility of TNAZ was investigated by thermogravimetric analysis. The TG pattern brings out the potential of BNPS and TAGN as additives to mitigate the volatility of TNAZ. The influence of the additives on the thermal decomposition pattern of TNAZ was also investigated by DSC. The DSC results indicated that the additives did not have an appreciable effect on the melting point of TNAZ. Scanning electron microscopic (SEM) studies were carried out to investigate the effect of the additives on the morphology of TNAZ. This paper also discusses the possible mechanism of interaction between TNAZ and TAGN/BNPS; a charge-transfer complex appears to form between TNAZ and TAGN/BNPS. The effect of the addition of high explosives such as CL-20, HMX and RDX on the thermo-physical characteristics of TNAZ is also reported.

  8. Thread Group Multithreading: Accelerating the Computation of an Agent-Based Power System Modeling and Simulation Tool -- GridLAB-D

    SciTech Connect

    Jin, Shuangshuang; Chassin, David P.

    2014-01-06

    GridLAB-D™ is an open-source, next-generation, agent-based smart-grid simulator that provides unprecedented capability to model the performance of smart-grid technologies. Over the past few years, GridLAB-D has been used to conduct important analyses of smart-grid concepts, but it is still quite limited by its computational performance. In order to break through the performance bottleneck and meet the need for large-scale power grid simulations, we developed a thread group mechanism to implement highly granular multithreaded computation in GridLAB-D. We achieve close-to-linear speedups with the multithreaded version over the single-threaded version of the same code running on general-purpose multi-core commodity hardware for a benchmark simple house model. The multithreaded code shows favorable scalability and resource utilization, and much shorter execution times for large-scale power grid simulations.
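The thread-group idea of partitioning agents across worker threads at each timestep can be sketched with Python's standard library; this is an illustrative toy, not GridLAB-D's actual implementation, and the per-house update rule is invented:

```python
from concurrent.futures import ThreadPoolExecutor

def update_house(state):
    """Toy per-agent update: relax temperature toward its setpoint."""
    temp, setpoint = state
    return (temp + 0.1 * (setpoint - temp), setpoint)

def step_threaded(houses, n_groups=4):
    """Advance one timestep, updating each thread group's agents in parallel."""
    # Partition agents into n_groups strided groups, one per worker thread.
    groups = [houses[i::n_groups] for i in range(n_groups)]
    with ThreadPoolExecutor(max_workers=n_groups) as pool:
        updated = list(pool.map(lambda g: [update_house(h) for h in g], groups))
    out = [None] * len(houses)  # reassemble results in original agent order
    for i, group in enumerate(updated):
        for j, h in enumerate(group):
            out[i + j * n_groups] = h
    return out
```

Because each agent's update depends only on its own state, the threaded step produces exactly the same result as a serial sweep, which is the property that makes this granularity of multithreading safe.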

  9. Characterization of pulmonary nodules on computer tomography (CT) scans: the effect of additive white noise on features selection and classification performance

    NASA Astrophysics Data System (ADS)

    Osicka, Teresa; Freedman, Matthew T.; Ahmed, Farid

    2007-03-01

    The goal of this project is to use computer analysis to classify small lung nodules, identified on CT, into likely benign and likely malignant categories. We compared discrete wavelet transform (DWT) based features and a modification of classical features used and reported by others. To determine the best combination of features for classification, several intensities of white noise were added to the original images to determine the effect of such noise on classification accuracy. Two different approaches were used to determine the effect of noise: in the first method, the best features for classification of nodules on the original image were retained as noise was added. In the second approach, we recalculated the results to reselect the best classification features for each particular level of added noise. The CT images are from the National Lung Screening Trial (NLST) of the National Cancer Institute (NCI). For this study, nodules were extracted in window frames of three sizes. Malignant nodules were cytologically or histologically diagnosed, while benign nodules had two-year follow-up. A linear discriminant analysis with Fisher criterion (FLDA) approach was used for feature selection and classification, and a decision matrix for matched samples was used to compare the classification accuracy. The initial-features mode revealed sensitivity to both the amount of noise and the size of the window frame. The recalculated-features mode proved more robust to noise, with no change in classification accuracy. This indicates that the best features for computer classification of lung nodules will differ with noise and, therefore, with exposure.
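The second approach, reselecting the best Fisher-criterion feature at each noise level, can be illustrated on synthetic data. A NumPy sketch; the data, noise level, and feature count are invented, not NLST data:

```python
import numpy as np

def fisher_scores(X, y):
    """Per-feature Fisher criterion: (mean1 - mean0)^2 / (var1 + var0)."""
    X0, X1 = X[y == 0], X[y == 1]
    num = (X1.mean(axis=0) - X0.mean(axis=0)) ** 2
    den = X1.var(axis=0) + X0.var(axis=0) + 1e-12
    return num / den

rng = np.random.default_rng(0)
y = rng.integers(0, 2, size=200)
X = rng.normal(size=(200, 5))
X[:, 0] += 2.0 * y                      # only feature 0 carries class signal

# Add white noise, then reselect the best feature (the "recalculated" mode).
X_noisy = X + rng.normal(scale=1.0, size=X.shape)
best_clean = int(np.argmax(fisher_scores(X, y)))
best_noisy = int(np.argmax(fisher_scores(X_noisy, y)))
```

With a strong synthetic signal the same feature wins before and after noise; with weaker signals the ranking can change, which is exactly why the paper reselects features per noise level.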

  10. Comprehensive cardiac assessment with multislice computed tomography: evaluation of left ventricular function and perfusion in addition to coronary anatomy in patients with previous myocardial infarction

    PubMed Central

    Henneman, M M; Schuijf, J D; Jukema, J W; Lamb, H J; de Roos, A; Dibbets, P; Stokkel, M P; van der Wall, E E; Bax, J J

    2006-01-01

    Objective To evaluate a comprehensive multislice computed tomography (MSCT) protocol in patients with previous infarction, including assessment of coronary artery stenoses, left ventricular (LV) function and perfusion. Patients and methods 16-slice MSCT was performed in 21 patients with previous infarction; from the MSCT data, coronary artery stenoses, (regional and global) LV function and perfusion were assessed. Invasive coronary angiography and gated single-photon emission computed tomography (SPECT) served as the reference standards for coronary artery stenoses and LV function/perfusion, respectively. Results 236 of 241 (98%) coronary artery segments were interpretable on MSCT. The sensitivity and specificity for detection of stenoses were 91% and 97%. Pearson's correlation showed excellent agreement for assessment of LV ejection fraction between MSCT and SPECT (49 (13)% vs 53 (12)%, respectively, r = 0.85). Agreement for assessment of regional wall motion was excellent (92%, κ = 0.77). In 68 of 73 (93%) segments, MSCT correctly identified a perfusion defect as compared with SPECT, whereas the absence of perfusion defects was correctly detected in 277 of 284 (98%) segments. Conclusions MSCT permits accurate, non-invasive assessment of coronary artery stenoses, LV function and perfusion in patients with previous infarction. All parameters can be assessed from a single dataset. PMID:16740917
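The agreement statistics quoted above (Pearson's r for ejection fraction, Cohen's kappa for wall motion) can be computed from paired readings as follows. This is a generic stdlib sketch, not the authors' analysis code:

```python
def pearson_r(xs, ys):
    """Sample Pearson correlation coefficient between two paired series."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

def cohens_kappa(a, b):
    """Cohen's kappa for two raters scoring the same items."""
    n = len(a)
    labels = set(a) | set(b)
    p_obs = sum(x == y for x, y in zip(a, b)) / n          # observed agreement
    p_exp = sum((a.count(l) / n) * (b.count(l) / n) for l in labels)
    return (p_obs - p_exp) / (1 - p_exp)
```

For example, per-segment wall-motion grades from the two modalities would be passed as the two lists to `cohens_kappa`, and per-patient ejection fractions to `pearson_r`.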

  11. Food additives

    MedlinePlus

    ... or natural. Natural food additives include: Herbs or spices to add flavor to foods Vinegar for pickling ... Certain colors improve the appearance of foods. Many spices, as well as natural and man-made flavors, ...

  12. Comparison of x ray computed tomography number to proton relative linear stopping power conversion functions using a standard phantom

    SciTech Connect

    Moyers, M. F.

    2014-06-15

    Purpose: Adequate evaluation of the results from multi-institutional trials involving light ion beam treatments requires consideration of the planning margins applied to both targets and organs at risk. A major uncertainty that affects the size of these margins is the conversion of x ray computed tomography numbers (XCTNs) to relative linear stopping powers (RLSPs). Various facilities engaged in multi-institutional clinical trials involving proton beams have been applying significantly different margins in their patient planning. This study was performed to determine the variance in the conversion functions used at proton facilities in the U.S.A. wishing to participate in National Cancer Institute sponsored clinical trials. Methods: A simplified method of determining the conversion function was developed using a standard phantom containing only water and aluminum. The new method was based on the premise that all scanners have their XCTNs for air and water calibrated daily to constant values but that the XCTNs for high density/high atomic number materials are variable with different scanning conditions. The standard phantom was taken to 10 different proton facilities and scanned with the local protocols resulting in 14 derived conversion functions which were compared to the conversion functions used at the local facilities. Results: For tissues within ±300 XCTN of water, all facility functions produced converted RLSP values within ±6% of the values produced by the standard function and within 8% of the values from any other facility's function. For XCTNs corresponding to lung tissue, converted RLSP values differed by as great as ±8% from the standard and up to 16% from the values of other facilities. For XCTNs corresponding to low-density immobilization foam, the maximum to minimum values differed by as much as 40%. Conclusions: The new method greatly simplifies determination of the conversion function, reduces ambiguity, and in the future could promote
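A conversion function of the kind compared in this study is typically a piecewise-linear map from XCTN to RLSP anchored at known materials. A minimal sketch with hypothetical calibration points (air near -1000, water at 0), not any facility's actual curve:

```python
def xctn_to_rlsp(xctn, calibration):
    """Piecewise-linear conversion from x ray CT number (XCTN) to
    relative linear stopping power (RLSP); clamps outside the range."""
    pts = sorted(calibration)
    if xctn <= pts[0][0]:
        return pts[0][1]
    for (x0, r0), (x1, r1) in zip(pts, pts[1:]):
        if xctn <= x1:
            # linear interpolation on the segment containing xctn
            return r0 + (r1 - r0) * (xctn - x0) / (x1 - x0)
    return pts[-1][1]

# Hypothetical calibration points: air near -1000, water at 0 (RLSP = 1.0),
# plus one invented high-density point. Not a real facility curve.
cal = [(-1000, 0.001), (0, 1.0), (1000, 1.5)]
rlsp_soft_tissue = xctn_to_rlsp(40, cal)
```

Comparing such curves from different facilities over the same XCTN range is essentially what the standard-phantom study does, with the inter-facility spread in interpolated RLSP giving the margin-relevant uncertainty.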

  13. Final report for "High performance computing for advanced national electric power grid modeling and integration of solar generation resources", LDRD Project No. 149016.

    SciTech Connect

    Reno, Matthew J.; Riehm, Andrew Charles; Hoekstra, Robert John; Munoz-Ramirez, Karina; Stamp, Jason Edwin; Phillips, Laurence R.; Adams, Brian M.; Russo, Thomas V.; Oldfield, Ron A.; McLendon, William Clarence, III; Nelson, Jeffrey Scott; Hansen, Clifford W.; Richardson, Bryan T.; Stein, Joshua S.; Schoenwald, David Alan; Wolfenbarger, Paul R.

    2011-02-01

    Design and operation of the electric power grid (EPG) relies heavily on computational models. High-fidelity, full-order models are used to study transient phenomena on only a small part of the network. Reduced-order dynamic and power flow models are used when analyses involving thousands of nodes are required, due to the computational demands of simulating large numbers of nodes. The level of complexity of the future EPG will dramatically increase due to large-scale deployment of variable renewable generation, active load and distributed generation resources, adaptive protection and control systems, and price-responsive demand. High-fidelity modeling of this future grid will require significant advances in coupled, multi-scale tools and their use on high performance computing (HPC) platforms. This LDRD report demonstrates SNL's capability to apply HPC resources to these 3 tasks: (1) High-fidelity, large-scale modeling of power system dynamics; (2) Statistical assessment of grid security via Monte-Carlo simulations of cyber attacks; and (3) Development of models to predict variability of solar resources at locations where little or no ground-based measurements are available.

  14. Computer modelling integrated with micro-CT and material testing provides additional insight to evaluate bone treatments: Application to a beta-glycan derived whey protein mice model.

    PubMed

    Sreenivasan, D; Tu, P T; Dickinson, M; Watson, M; Blais, A; Das, R; Cornish, J; Fernandez, J

    2016-01-01

    The primary aim of this study was to evaluate the influence of a whey protein diet on computationally predicted mechanical strength of murine bones in both trabecular and cortical regions of the femur. There was no significant influence on mechanical strength in cortical bone observed with increasing whey protein treatment, consistent with the cortical tissue mineral density (TMD) and bone volume changes observed. Trabecular bone showed a significant decline in strength with increasing whey protein treatment when nanoindentation-derived Young's moduli were used in the model. When microindentation, micro-CT phantom density or normalised Young's moduli were included in the model, a non-significant decline in strength was exhibited. These results for trabecular bone were consistent with both trabecular bone mineral density (BMD) and micro-CT indices obtained independently. The secondary aim of this study was to characterise the influence of different sources of Young's moduli on computational prediction. This study aimed to quantify the predicted mechanical strength in 3D from these sources and evaluate whether trends and conclusions remained consistent. For cortical bone, predicted mechanical strength behaviour was consistent across all sources of Young's moduli. There was no difference in treatment trend observed when Young's moduli were normalised. In contrast, trabecular strength due to whey protein treatment significantly reduced when material properties from nanoindentation were introduced. Other material property sources were not significant but emphasised the strength trend over normalised material properties. This shows that strength at the trabecular level was attributed to both changes in bone architecture and material properties.

  15. User's manual: Computer-aided design programs for inductor-energy-storage dc-to-dc electronic power converters

    NASA Technical Reports Server (NTRS)

    Huffman, S.

    1977-01-01

    Detailed instructions on the use of two computer-aided-design programs for designing the energy storage inductor for single winding and two winding dc to dc converters are provided. Step by step procedures are given to illustrate the formatting of user input data. The procedures are illustrated by eight sample design problems which include the user input and the computer program output.

  16. An Improved Computational Technique for Calculating Electromagnetic Forces and Power Absorptions Generated in Spherical and Deformed Body in Levitation Melting Devices

    NASA Technical Reports Server (NTRS)

    Zong, Jin-Ho; Szekely, Julian; Schwartz, Elliot

    1992-01-01

    An improved computational technique for calculating the electromagnetic force field, the power absorption and the deformation of an electromagnetically levitated metal sample is described. The technique is based on the volume integral method, but represents a substantial refinement; the coordinate transformation employed allows the efficient treatment of a broad class of rotationally symmetrical bodies. Computed results are presented to represent the behavior of levitation melted metal samples in a multi-coil, multi-frequency levitation unit to be used in microgravity experiments. The theoretical predictions are compared with both analytical solutions and with the results or previous computational efforts for the spherical samples and the agreement has been very good. The treatment of problems involving deformed surfaces and actually predicting the deformed shape of the specimens breaks new ground and should be the major usefulness of the proposed method.

  17. Do We Really Need Additional Contrast-Enhanced Abdominal Computed Tomography for Differential Diagnosis in Triage of Middle-Aged Subjects With Suspected Biliary Pain?

    PubMed Central

    Hwang, In Kyeom; Lee, Yoon Suk; Kim, Jaihwan; Lee, Yoon Jin; Park, Ji Hoon; Hwang, Jin-Hyeok

    2015-01-01

    Abstract Enhanced computed tomography (CT) is widely used for evaluating acute biliary pain in the emergency department (ED). However, concern about radiation exposure from CT has also increased. We investigated the usefulness of pre-contrast CT for differential diagnosis in middle-aged subjects with suspected biliary pain. A total of 183 subjects, who visited the ED for suspected biliary pain from January 2011 to December 2012, were included. Retrospectively, pre-contrast phase and multiphase CT findings were reviewed and the detection rate of findings suggesting disease requiring significant treatment by noncontrast CT (NCCT) was compared with cases detected by multiphase CT. Approximately 70% of total subjects had a significant condition, including 1 case of gallbladder cancer and 126 (68.8%) cases requiring intervention (122 biliary stone-related diseases, 3 liver abscesses, and 1 liver hemangioma). The rate of overlooking malignancy without contrast enhancement was calculated to be 0% to 1.5%. Biliary stones and liver space-occupying lesions were found equally on NCCT and multiphase CT. Calculated probable rates of overlooking acute cholecystitis and biliary obstruction were maximally 6.8% and 4.2% respectively. Incidental significant finding unrelated with pain consisted of 1 case of adrenal incidentaloma, which was also observed in NCCT. NCCT might be sufficient to detect life-threatening or significant disease requiring early treatment in young adults with biliary pain. PMID:25700321

  18. Decreased length of stay after addition of healthcare provider in emergency department triage: a comparison between computer-simulated and real-world interventions

    PubMed Central

    Al-Roubaie, Abdul Rahim; Goldlust, Eric Jonathan

    2013-01-01

    Objective (1) To determine the effects of adding a provider in triage on average length of stay (LOS) and proportion of patients with >6 h LOS. (2) To assess the accuracy of computer simulation in predicting the magnitude of such effects on these metrics. Methods A group-level quasi-experimental trial comparing the St. Louis Veterans Affairs Medical Center emergency department (1) before intervention, (2) after institution of provider in triage, and discrete event simulation (DES) models of similar (3) ‘before’ and (4) ‘after’ conditions. The outcome measures were daily mean LOS and percentage of patients with LOS >6 h. Results The DES-modelled intervention predicted a decrease in the %6-hour LOS from 19.0% to 13.1%, and a drop in the daily mean LOS from 249 to 200 min (p<0.0001). Following (actual) intervention, the number of patients with LOS >6 h decreased from 19.9% to 14.3% (p<0.0001), with the daily mean LOS decreasing from 247 to 210 min (p<0.0001). Conclusion Physician and mid-level provider coverage at triage significantly reduced emergency department LOS in this setting. DES accurately predicted the magnitude of this effect. These results suggest further work in the generalisability of triage providers and in the utility of DES for predicting quantitative effects of process changes. PMID:22398851
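A discrete event simulation of the kind used in this study can be sketched as a simple multi-server FCFS queue; the arrival and service rates below are invented, and the model is far simpler than the authors' DES of the emergency department:

```python
import heapq
import random

def simulate_ed(n_patients, n_providers, seed=1):
    """Toy FCFS multi-server queue: Poisson arrivals, exponential service.
    Returns the mean length of stay (LOS) in minutes."""
    rng = random.Random(seed)
    arrivals, t = [], 0.0
    for _ in range(n_patients):
        t += rng.expovariate(1.0 / 10.0)       # mean interarrival: 10 min
        arrivals.append(t)
    free_at = [0.0] * n_providers              # min-heap of provider free times
    heapq.heapify(free_at)
    total_los = 0.0
    for arr in arrivals:
        start = max(arr, heapq.heappop(free_at))
        service = rng.expovariate(1.0 / 25.0)  # mean service: 25 min
        heapq.heappush(free_at, start + service)
        total_los += (start + service) - arr
    return total_los / n_patients

before = simulate_ed(2000, 3)   # baseline staffing
after = simulate_ed(2000, 4)    # one added provider at triage
```

With identical arrival and service draws, adding a provider can only reduce waiting, mirroring the direction (if not the magnitude) of the LOS effect the DES model predicted.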

  19. The Next Step in Deployment of Computer Based Procedures For Field Workers: Insights And Results From Field Evaluations at Nuclear Power Plants

    SciTech Connect

    Oxstrand, Johanna; Le Blanc, Katya L.; Bly, Aaron

    2015-02-01

    The paper-based procedures currently used for nearly all activities in the commercial nuclear power industry have a long history of ensuring safe operation of the plants. However, there is potential to greatly increase efficiency and safety by improving how the human operator interacts with the procedures. One way to achieve these improvements is through the use of computer-based procedures (CBPs). A CBP system offers a wide variety of improvements, such as context-driven job aids, integrated human performance tools (e.g., placekeeping, correct component verification, etc.), and dynamic step presentation. The latter means that the CBP system displays only the steps relevant to the operating mode, plant status, and the task at hand. A dynamic presentation of the procedure (also known as context-sensitive procedures) will guide the operator down the path of relevant steps based on the current conditions. This feature will reduce the operator's workload and inherently reduce the risk of incorrectly marking a step as not applicable and the risk of incorrectly performing a step that should be marked as not applicable. The research team at the Idaho National Laboratory has developed a prototype CBP system for field workers, which has been evaluated from a human factors and usability perspective in four laboratory studies. Based on the results from each study, revisions were made to the CBP system. However, a crucial step to gain the end users' (e.g., auxiliary operators, maintenance technicians, etc.) acceptance is to put the system in their hands and let them use it as a part of their everyday work activities. In the spring of 2014 the first field evaluation of the INL CBP system was conducted at a nuclear power plant. Auxiliary operators conduct a functional test of one of three backup air compressors each week. During the field evaluation activity, one auxiliary operator conducted the test with the paper-based procedure while a second auxiliary operator followed

  20. Archival Data and Computational Power in Planetary Astronomy: Lessons Learned 1979–2016 and a Vision for 2020–2050

    NASA Astrophysics Data System (ADS)

    Showalter, M. R.; Tiscareno, M. S.; French, R. S.

    2017-02-01

    Computing technology has advanced tremendously over recent decades. Projecting those trends forward, we explore ways that new technologies will change our approaches to planetary data analysis, using both archival data and that from future missions.

  1. Potlining Additives

    SciTech Connect

    Rudolf Keller

    2004-08-10

    In this project, a concept to improve the performance of aluminum production cells by introducing potlining additives was examined and tested. Boron oxide was added to cathode blocks, and titanium was dissolved in the metal pool; this resulted in the formation of titanium diboride and caused the molten aluminum to wet the carbonaceous cathode surface. Such wetting reportedly leads to operational improvements and extended cell life. In addition, boron oxide suppresses cyanide formation. This final report presents and discusses the results of this project. Substantial economic benefits for the practical implementation of the technology are projected, especially for modern cells with graphitized blocks. For example, with an energy savings of about 5% and an increase in pot life from 1500 to 2500 days, a cost savings of $0.023 per pound of aluminum produced is projected for a 200 kA pot.

  2. Phosphazene additives

    DOEpatents

    Harrup, Mason K; Rollins, Harry W

    2013-11-26

    An additive comprising a phosphazene compound that has at least two reactive functional groups and at least one capping functional group bonded to phosphorus atoms of the phosphazene compound. One of the at least two reactive functional groups is configured to react with cellulose and the other of the at least two reactive functional groups is configured to react with a resin, such as an amine resin or a polycarboxylic acid resin. The at least one capping functional group is selected from the group consisting of a short chain ether group, an alkoxy group, or an aryloxy group. Also disclosed are an additive-resin admixture, a method of treating a wood product, and a wood product.

  3. Results from expert tests of the TP-100A boiler at the Lugansk thermal power station during the combustion of lean coal and anthracite culm with addition of RA-GEN-F anaklarid

    NASA Astrophysics Data System (ADS)

    Mikhailov, V. E.; Tupitsyn, S. P.; Sokolov, V. V.; Chebakova, G. F.; Malygin, V. I.; Yazykov, Yu. V.; Kharchenko, A. V.; Chetverikov, A. N.

    2012-08-01

    Results from expert tests of the separate combustion of Grade T lean coal and Grade ASh anthracite culm in the TP-100A boiler No. 15 at the Lugansk thermal power station, carried out with and without addition of RA-GEN-F anaklarid, are presented. The possibility of extending the boiler load adjustment range and excluding the use of natural gas for supporting the flame at minimal loads is considered.

  4. Making classical and quantum canonical general relativity computable through a power series expansion in the inverse cosmological constant.

    PubMed

    Gambini, R; Pullin, J

    2000-12-18

    We consider general relativity with a cosmological constant as a perturbative expansion around a completely solvable diffeomorphism-invariant field theory. This theory is the λ → ∞ limit of general relativity. This allows an explicit perturbative computational setup in which the quantum states of the theory and the classical observables can be explicitly computed. An unexpected relationship arises at the quantum level between the discrete spectrum of the volume operator and the allowed values of the cosmological constant.

  5. Computer-aided design of the RF-cavity for a high-power S-band klystron

    NASA Astrophysics Data System (ADS)

    Kant, D.; Bandyopadhyay, A. K.; Pal, D.; Meena, R.; Nangru, S. C.; Joshi, L. M.

    2012-08-01

    This article describes the computer-aided design of the RF cavity for an S-band klystron operating at 2856 MHz. State-of-the-art electromagnetic simulation tools SUPERFISH, CST Microwave Studio, HFSS and MAGIC have been used for the cavity design. After finalising the geometrical details of the cavity through simulation, it was fabricated and characterised through cold testing. Detailed results of the computer-aided simulation and the cold measurements are presented in this article.

  6. Regulatory use of computational toxicology tools and databases at the United States Food and Drug Administration's Office of Food Additive Safety.

    PubMed

    Arvidson, Kirk B; Chanderbhan, Ronald; Muldoon-Jacobs, Kristi; Mayer, Julie; Ogungbesan, Adejoke

    2010-07-01

    Over 10 years ago, the Office of Food Additive Safety (OFAS) in the FDA's Center for Food Safety and Applied Nutrition implemented the formal use of structure-activity relationship analysis and quantitative structure-activity relationship (QSAR) analysis in the premarket review of food-contact substances. More recently, OFAS has implemented the use of multiple QSAR software packages and has begun investigating the use of metabolism data and metabolism predictive models in our QSAR evaluations of food-contact substances. In this article, we provide an overview of the programs used in OFAS as well as a perspective on how to apply multiple QSAR tools in the review process of a new food-contact substance.

  7. Computer simulation for the growing probability of additional offspring with an advantageous reversal allele in the decoupled continuous-time mutation-selection model

    NASA Astrophysics Data System (ADS)

    Gill, Wonpyong

    2016-01-01

    This study calculated the growing probability of additional offspring with the advantageous reversal allele in an asymmetric sharply-peaked landscape using the decoupled continuous-time mutation-selection model. The growing probability was calculated for various population sizes, N, sequence lengths, L, selective advantages, s, fitness parameters, k, and measuring parameters, C. The saturated growing probability in the stochastic region was approximately the effective selective advantage, s*, when C≫1/Ns* and s*≪1. The present study suggests that the growing probability in the stochastic region in the decoupled continuous-time mutation-selection model can be described using the theoretical formula for the growing probability in the Moran two-allele model. The selective advantage ratio, which represents the ratio of the effective selective advantage to the selective advantage, does not depend on the population size, selective advantage, measuring parameter or fitness parameter; instead, the selective advantage ratio decreases with increasing sequence length.
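The Moran two-allele fixation probability referred to above has a standard closed form; the sketch below is the textbook formula, not necessarily the authors' exact parameterization:

```python
def moran_fixation_probability(n_pop, s, i=1):
    """Fixation probability of an allele with relative fitness r = 1 + s,
    starting from i copies in a Moran population of size n_pop."""
    if s == 0:
        return i / n_pop          # neutral case: probability is i/N
    r = 1.0 + s
    return (1.0 - r ** -i) / (1.0 - r ** -n_pop)

# For N*s >> 1 the probability from a single copy approaches 1 - 1/(1 + s).
p = moran_fixation_probability(1000, 0.02)
```

Comparing a simulated growing probability against this curve (with s replaced by an effective s*) is the kind of check the abstract describes.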

  8. Computation of full energy peak efficiency for nuclear power plant radioactive plume using remote scintillation gamma-ray spectrometry.

    PubMed

    Grozdov, D S; Kolotov, V P; Lavrukhin, Yu E

    2016-04-01

    A method for estimating the full energy peak efficiency in the space around a scintillation detector, including in the presence of a collimator, has been developed. It is based on a mathematical convolution of the experimental results with subsequent data extrapolation. The efficiency data showed an average uncertainty of less than 10%. Software to calculate the integral efficiency for a nuclear power plant plume was developed. The paper also provides results of nuclear power plant plume height estimation by analysis of the spectral data.

  9. Computational Design and Prototype Evaluation of Aluminide-Strengthened Ferritic Superalloys for Power-Generating Turbine Applications up to 1,033 K

    SciTech Connect

    Peter Liaw; Gautam Ghosh; Mark Asta; Morris Fine; Chain Liu

    2010-04-30

    prototype Fe-Ni-Cr-Al-Mo alloys. Three-point-bending experiments show that alloys containing more than 5 wt.% Al exhibit poor ductility (< 2%) at room temperature, and their fracture mode is predominantly of a cleavage type. Two major factors governing the poor ductility are (1) the volume fraction of NiAl-type precipitates, and (2) the Al content in the {alpha}-Fe matrix. A bend ductility of more than 5% can be achieved by lowering the Al concentration to 3 wt.% in the alloy. The alloy containing about 6.5 wt.% Al is found to have an optimal combination of hardness, ductility, and minimal creep rate at 973 K. A high volume fraction of precipitates is responsible for the good creep resistance by effectively resisting the dislocation motion through Orowan-bowing and dislocation-climb mechanisms. The effects of stress on the creep rate have been studied. With the threshold-stress compensation, the stress exponent is determined to be 4, indicating power-law dislocation creep. The threshold stress is in the range of 40-53 MPa. The addition of W can significantly reduce the secondary creep rates. Compared to other candidates for steam-turbine applications, FBB-8 does not show superior creep resistance at high stresses (> 100 MPa), but exhibits superior creep resistance at low stresses (< 60 MPa).
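The threshold-stress-compensated power law (creep rate proportional to (sigma - sigma_th)^n) described above can be recovered from data by a log-log fit; a sketch on synthetic data with an invented prefactor and the abstract's n = 4 and a 50 MPa threshold:

```python
import math

def stress_exponent(stresses_mpa, rates, threshold_mpa):
    """Least-squares slope of log(rate) vs log(stress - threshold),
    i.e. the threshold-compensated power-law stress exponent n."""
    xs = [math.log(s - threshold_mpa) for s in stresses_mpa]
    ys = [math.log(r) for r in rates]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = sum((x - mx) ** 2 for x in xs)
    return num / den

# Synthetic creep data obeying rate = A * (sigma - 50)^4 (invented values).
sigma_mpa = [80.0, 100.0, 120.0, 150.0]
rate_per_s = [1e-9 * (s - 50.0) ** 4 for s in sigma_mpa]
n_exp = stress_exponent(sigma_mpa, rate_per_s, 50.0)
```

On real data the fitted slope would approach 4 only with the correct threshold subtracted, which is why the threshold stress and exponent are determined together.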

  10. Computational electronics and electromagnetics

    SciTech Connect

    Shang, C. C.

    1997-02-01

    The Computational Electronics and Electromagnetics thrust area at Lawrence Livermore National Laboratory serves as the focal point for engineering R&D activities in developing computer-based design, analysis, and theory-support tools. Key representative applications include design of particle accelerator cells and beamline components; engineering analysis and design of high-power components; photonics and optoelectronics circuit design; EMI susceptibility analysis; and antenna synthesis. The FY-96 technology-base effort focused code development on (1) accelerator design codes; (2) 3-D massively parallel, object-oriented time-domain EM codes; (3) material models; (4) coupling and application of engineering tools for analysis and design of high-power components; (5) 3-D spectral-domain CEM tools; and (6) enhancement of laser drilling codes. Joint efforts with the Power Conversion Technologies thrust area include development of antenna systems for compact, high-performance radar, in addition to novel, compact Marx generators. 18 refs., 25 figs., 1 tab.

  11. Assessment of solar options for small power systems applications. Volume V. SOLSTEP: a computer model for solar plant system simulations

    SciTech Connect

    Bird, S.P.

    1980-09-01

    The simulation code, SOLSTEP, was developed at the Pacific Northwest Laboratory to facilitate the evaluation of proposed designs for solar thermal power plants. It allows the user to analyze the thermodynamic and economic performance of a conceptual design for several field size-storage capacity configurations. This feature makes it possible to study the levelized energy cost of a proposed concept over a range of plant capacity factors. The thermodynamic performance is analyzed on a time step basis using actual recorded meteorological and insolation data for specific geographic locations. The flexibility of the model enables the user to analyze both central and distributed generation concepts using either thermal or electric storage systems. The thermodynamic and economic analyses view the plant in a macroscopic manner as a combination of component subsystems. In the thermodynamic simulation, concentrator optical performance is modeled as a function of solar position; other aspects of collector performance can optionally be treated as functions of ambient air temperature, wind speed, and component power level. The power conversion model accounts for the effects of ambient air temperature, partial load operation, auxiliary power demands, and plant standby and startup energy requirements. The code was designed in a modular fashion to provide efficient evaluations of the collector system, total plant, and system economics. SOLSTEP has been used to analyze a variety of solar thermal generic concepts involving several collector types and energy conversion and storage subsystems. The code's straightforward models and modular nature facilitated simple and inexpensive parametric studies of solar thermal power plant performance.
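
    The levelized-energy-cost study over plant capacity factors that SOLSTEP supports reduces, in caricature, to a single ratio of annualized cost to annual energy; the sketch below uses an illustrative fixed-charge-rate economic model, not SOLSTEP's actual formulation:

```python
def levelized_energy_cost(capital_cost, fixed_charge_rate, annual_om,
                          rated_power_kw, capacity_factor):
    """Levelized energy cost ($/kWh): annualized costs over annual energy.
    All parameter names and values are illustrative placeholders; SOLSTEP's
    model is a time-step simulation driven by recorded insolation data."""
    annual_cost = capital_cost * fixed_charge_rate + annual_om
    annual_energy_kwh = rated_power_kw * capacity_factor * 8760.0
    return annual_cost / annual_energy_kwh
```

    Sweeping capacity_factor for one fixed design traces the cost-versus-capacity-factor curve the abstract refers to.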

  12. Solitonic Gateless Computing

    DTIC Science & Technology

    2006-01-29

    logic functions and mathematical operations were implemented in the laboratory based on soliton collisions in photorefractive media. In addition to ... the usual NAND and AND logic gates, soliton collisions do transfer information, and two successive collisions can be made to mimic a unitary matrix or ... clear proof-of-principle for soliton-based optical computing functions, electronics has advanced in speed and power requirements in the last six years

  13. Computer-aided modeling and prediction of performance of the modified Lundell class of alternators in space station solar dynamic power systems

    NASA Technical Reports Server (NTRS)

    Demerdash, Nabeel A. O.; Wang, Ren-Hong

    1988-01-01

    The main purpose of this project is the development of computer-aided models for purposes of studying the effects of various design changes on the parameters and performance characteristics of the modified Lundell class of alternators (MLA) as components of a solar dynamic power system supplying electric energy needs in the forthcoming space station. Key to this modeling effort is the computation of magnetic field distribution in MLAs. Since the nature of the magnetic field is three-dimensional, the first step in the investigation was to apply the finite element method to discretize the volume, using the tetrahedron as the basic 3-D element. Details of the stator 3-D finite element grid are given. A preliminary look at the early stage of a 3-D rotor grid is presented.

  14. PLANETSYS, a Computer Program for the Steady State and Transient Thermal Analysis of a Planetary Power Transmission System: User's Manual

    NASA Technical Reports Server (NTRS)

    Hadden, G. B.; Kleckner, R. J.; Ragen, M. A.; Dyba, G. J.; Sheynin, L.

    1981-01-01

    The material presented is structured to guide the user in the practical and correct implementation of PLANETSYS which is capable of simulating the thermomechanical performance of a multistage planetary power transmission. In this version of PLANETSYS, the user can select either SKF or NASA models in calculating lubricant film thickness and traction forces.

  15. Computer Simulations of Contributions of Néel and Brown Relaxation to Specific Loss Power of Magnetic Fluids in Hyperthermia

    NASA Astrophysics Data System (ADS)

    Phong, Pham Thanh; Nguyen, Luu Huu; Manh, Do Hung; Lee, In-Ja; Phuc, Nguyen Xuan

    2017-04-01

    In this study, the contribution of particular relaxation losses to the specific loss power is calculated for a number of magnetic fluids, including Fe3O4, CoFe2O4, MnFe2O4, FeCo, FePt and La0.7Sr0.3MnO3 nanoparticles, in carrier fluids of various viscosities. We found that the specific loss power of every fluid studied increases linearly with particle saturation magnetization. The competition between Néel and Brownian relaxation contributions gives rise to a peak at a critical diameter in the plot of specific loss power versus diameter. The critical diameter does not change with saturation magnetization but monotonically decreases with increasing magnetic anisotropy. If the particle diameter is smaller than 6-11 nm, the maximum loss power tends to diminish and the heating effect effectively switches off. According to how the materials respond to viscosity changes, the hyperthermia materials can be classified into two groups. One is hard nanoparticles with high anisotropy, for which the critical diameter decreases with viscosity and the slope of specific loss power versus saturation magnetization decreases strongly. The other is soft nanoparticles with low anisotropy, whose properties are insensitive to the viscosity of the fluid. We discuss our simulated results in relation to recent experimental findings.
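
    The Néel-Brown competition described here follows from the two relaxation times acting in parallel, using the standard expressions tau_N = tau0·exp(K·V/kB·T) and tau_B = 3·eta·V_H/(kB·T); the material constants below are illustrative, not values from the paper:

```python
import math

KB = 1.380649e-23  # Boltzmann constant, J/K

def neel_time(K, d, T=300.0, tau0=1e-9):
    """Neel relaxation time: tau0 * exp(K*V / (kB*T)); K in J/m^3, d in m."""
    V = math.pi * d ** 3 / 6.0  # magnetic core volume of a sphere
    return tau0 * math.exp(K * V / (KB * T))

def brown_time(eta, d, T=300.0):
    """Brownian relaxation time: 3*eta*V_H / (kB*T); eta in Pa*s.
    The coating contribution to the hydrodynamic volume is neglected here."""
    VH = math.pi * d ** 3 / 6.0
    return 3.0 * eta * VH / (KB * T)

def effective_time(K, eta, d, T=300.0):
    """Both mechanisms act in parallel: 1/tau = 1/tau_N + 1/tau_B."""
    tn, tb = neel_time(K, d, T), brown_time(eta, d, T)
    return tn * tb / (tn + tb)
```

    Because the Néel time grows exponentially with particle volume while the Brownian time grows only linearly, the faster (smaller) of the two dominates the effective time, which is what produces the critical-diameter peak in the loss power.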

  16. Comparison of Computational and Experimental Results for a Transonic Variable-Speed Power-Turbine Blade Operating with Low Inlet Turbulence Levels

    NASA Technical Reports Server (NTRS)

    Booth, David; Flegel, Ashlie

    2015-01-01

    A computational assessment of the aerodynamic performance of the midspan section of a variable-speed power-turbine blade is described. The computation comprises a periodic single blade that represents the 2-D midspan section of the VSPT blade that was tested in the NASA Glenn Research Center Transonic Turbine Blade Cascade Facility. Commercial, off-the-shelf (COTS) software packages, Pointwise and CFD++, were used for the grid generation and for the RANS and URANS computations. The CFD code, which offers flexibility in terms of turbulence and transition modeling options, was assessed in terms of blade loading, loss, and turning against test data from the transonic tunnel. Simulations were assessed at positive and negative incidence angles that represent the turbine cruise and take-off design conditions. The results indicate that the secondary flow induced at the positive incidence cruise condition results in a highly loaded case, and transitional flow on the blade is observed. The negative incidence take-off condition is unloaded and the flow is very two-dimensional. The computational results demonstrate the predictive capability of the gridding technique and COTS software for a linear transonic turbine blade cascade with large incidence angle variation.

  17. Comparison of Computational and Experimental Results for a Transonic Variable-speed Power-Turbine Blade Operating with Low Inlet Turbulence Levels

    NASA Technical Reports Server (NTRS)

    Booth, David T.; Flegel, Ashlie B.

    2015-01-01

    A computational assessment of the aerodynamic performance of the midspan section of a variable-speed power-turbine blade is described. The computation comprises a periodic single blade that represents the 2-D midspan section of the VSPT blade that was tested in the NASA Glenn Research Center Transonic Turbine Blade Cascade Facility. Commercial, off-the-shelf (COTS) software packages, Pointwise and CFD++, were used for the grid generation and for the RANS and URANS computations. The CFD code, which offers flexibility in terms of turbulence and transition modeling options, was assessed in terms of blade loading, loss, and turning against test data from the transonic tunnel. Simulations were assessed at positive and negative incidence angles that represent the turbine cruise and take-off design conditions. The results indicate that the secondary flow induced at the positive incidence cruise condition results in a highly loaded case, and transitional flow on the blade is observed. The negative incidence take-off condition is unloaded and the flow is very two-dimensional. The computational results demonstrate the predictive capability of the gridding technique and COTS software for a linear transonic turbine blade cascade with large incidence angle variation.

  18. Comparison of Analytical Predictions and Experimental Results for a Dual Brayton Power System (Discussion on Test Hardware and Computer Model for a Dual Brayton System)

    NASA Technical Reports Server (NTRS)

    Johnson, Paul K.

    2007-01-01

    NASA Glenn Research Center (GRC) contracted Barber-Nichols, Arvada, CO, to construct a dual Brayton power conversion system for use as a hardware proof of concept and to validate results from a computational code known as the Closed Cycle System Simulation (CCSS). Initial checkout tests were performed at Barber-Nichols to ready the system for delivery to GRC. This presentation describes the system hardware components and lists the types of checkout tests performed, along with a couple of issues encountered while conducting the tests. A description of the CCSS model is also presented. The checkout tests did not focus on generating data; therefore, no test data or model analyses are presented.

  19. A computational modeling approach of the jet-like acoustic streaming and heat generation induced by low frequency high power ultrasonic horn reactors.

    PubMed

    Trujillo, Francisco Javier; Knoerzer, Kai

    2011-11-01

    High power ultrasound reactors have gained a lot of interest in the food industry, given the effects that can arise from ultrasonically induced cavitation in liquid foods. However, most of the new food processing developments have been based on empirical approaches. Thus, there is a need for mathematical models that help to understand, optimize, and scale up ultrasonic reactors. In this work, a computational fluid dynamics (CFD) model was developed to predict the acoustic streaming and induced heat generated by an ultrasonic horn reactor. In the model it is assumed that the horn tip is a fluid inlet, where a turbulent jet flow is injected into the vessel. The hydrodynamic momentum rate of the incoming jet is assumed to be equal to the total acoustic momentum rate emitted by the acoustic power source. CFD velocity predictions show excellent agreement with the experimental data for power densities W₀/V ≥ 25 kW m⁻³. This model successfully describes the hydrodynamic (streaming) fields generated by low-frequency, high-power ultrasound.
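
    The jet-inlet boundary condition described above reduces to a closed-form inlet velocity once the acoustic momentum rate is taken as P/c; the sketch below illustrates this with water properties, and all numbers are illustrative assumptions, not values from the paper:

```python
import math

def jet_inlet_velocity(P_ac, tip_diameter, rho=1000.0, c=1482.0):
    """Inlet jet velocity from equating the jet's hydrodynamic momentum rate
    (rho * A * v^2) with the acoustic momentum rate emitted by the source
    (P_ac / c). rho and c are for water near 20 degrees C; illustrative only."""
    A = math.pi * (tip_diameter / 2.0) ** 2  # horn-tip cross-section, m^2
    return math.sqrt(P_ac / (rho * c * A))
```

    One consequence of this balance is that the inlet velocity scales with the square root of the acoustic power, so quadrupling the power only doubles the jet speed.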

  20. The Computer Fraud and Abuse Act of 1986. Hearing before the Committee on the Judiciary, United States Senate, Ninety-Ninth Congress, Second Session on S.2281, a Bill To Amend Title 18, United States Code, To Provide Additional Penalties for Fraud and Related Activities in Connection with Access Devices and Computers, and for Other Purposes.

    ERIC Educational Resources Information Center

    Congress of the U.S., Washington, DC. Senate Committee on the Judiciary.

    The proposed legislation--S. 2281--would amend federal laws to provide additional penalties for fraud and related activities in connection with access devices and computers. The complete text of the bill and proceedings of the hearing are included in this report. Statements and materials submitted by the following committee members and witnesses…

  1. A Summary Description of a Computer Program Concept for the Design and Simulation of Solar Pond Electric Power Generation Systems

    NASA Technical Reports Server (NTRS)

    1984-01-01

    The plant comprises a solar pond electric power generation subsystem, an electric power transformer and switch yard, a large solar pond, a water treatment plant, and numerous storage and evaporation ponds. Because a solar pond stores thermal energy over a long period of time, plant operation at any point in time is dependent upon past operation and future perceived generation plans. This time, or past-history, factor introduces a new dimension into the design process. The design optimization of a plant must go beyond examination of operational state points and consider the seasonal variations in insolation, solar pond energy storage, and the desired plant annual duty-cycle profile. Models or design tools will be required to optimize a plant design. These models should be developed to include a proper but not excessive level of detail. The model should be targeted to a specific objective and not conceived as a do-everything analysis tool, i.e., system design and not gradient-zone stability.
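
    The past-history factor noted above is what distinguishes a storage-coupled plant model from a state-point calculation: stored energy carries over between time steps. A minimal time-step sketch, with every efficiency and capacity an invented placeholder rather than a value from the concept study:

```python
def simulate_pond(solar_input, demand, capacity, eta=0.15, storage0=0.5):
    """Time-step energy balance for a solar pond plant. The pond's stored
    thermal energy carries over between steps, so output at any hour depends
    on past operation. All parameters are illustrative placeholders."""
    storage = storage0 * capacity  # start half full (assumption)
    delivered = []
    for q_in, q_out in zip(solar_input, demand):
        storage = min(capacity, storage + q_in)  # collect, clip at capacity
        use = min(q_out / eta, storage)          # thermal energy drawn for demand
        storage -= use
        delivered.append(use * eta)              # electric output this step
    return delivered, storage
```

    Even this toy model shows why design optimization must consider duty-cycle profiles: the same demand schedule can be met or missed depending on how earlier steps depleted the store.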

  2. Computational Study of the Structure, the Flexibility, and the Electronic Circular Dichroism of Staurosporine - a Powerful Protein Kinase Inhibitor

    NASA Astrophysics Data System (ADS)

    Karabencheva-Christova, Tatyana G.; Singh, Warispreet; Christov, Christo Z.

    2014-07-01

    Staurosporine (STU) is a microbial alkaloid which is a universal kinase inhibitor. In order to understand its mechanism of action, it is important to explore its structure-property relationships. In this paper we provide the results of a computational study of the structure, the chiroptical properties, and the conformational flexibility of STU, as well as the correlation between the electronic circular dichroism (ECD) spectra and the structure of its complex with anaplastic lymphoma kinase.

  3. The PVM (Parallel Virtual Machine) system: Supercomputer level concurrent computation on a network of IBM RS/6000 power stations

    SciTech Connect

    Sunderam, V.S. . Dept. of Mathematics and Computer Science); Geist, G.A. )

    1991-01-01

    The PVM (Parallel Virtual Machine) system enables supercomputer-level concurrent computations to be performed on interconnected networks of heterogeneous computer systems. Specifically, a network of 13 IBM RS/6000 powerstations has been successfully used to execute production-quality runs of superconductor modeling codes at more than 250 Mflops. This work demonstrates the effectiveness of cooperative concurrent processing for high-performance applications, and shows that supercomputer-level computations may be attained at a fraction of the cost on distributed computing platforms. This paper describes the PVM programming environment and user facilities as they apply to hardware platforms comprising a network of IBM RS/6000 powerstations. The salient design features of PVM will be discussed, including heterogeneity, scalability, multilanguage support, provisions for fault tolerance, the use of multiprocessors and scalar machines, an interactive graphical front end, and support for profiling, tracing, and visual analysis. The PVM system has been used extensively, and a range of production-quality concurrent applications has been successfully executed using PVM on a variety of networked platforms. The paper mentions representative examples and discusses two in detail. The first is a materials science application that was originally developed on a Cray 2. This application code calculates the electronic structure of metallic alloys from first principles and is based on the KKR-CPA algorithm. The second is a molecular dynamics simulation for calculating materials properties. Performance results for both applications on networks of RS/6000 powerstations will be presented, accompanied by discussion of the other advantages of PVM and its potential as a complement or alternative to conventional supercomputers.
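
    PVM itself is driven through a C message-passing API (pvm_spawn, pvm_send, and so on). As a runnable stand-in for the same master-worker decomposition the abstract describes, here is a minimal sketch using Python's multiprocessing; the partial-sum task is invented purely for illustration:

```python
from multiprocessing import Pool

def worker(block):
    """Each task computes a partial result independently, much as a PVM
    slave process would after receiving its block of work from the master."""
    return sum(x * x for x in block)

def master(data, nworkers=4):
    """Scatter interleaved blocks to workers, then gather and reduce
    the partial results, mirroring a PVM master's scatter/gather loop."""
    blocks = [data[i::nworkers] for i in range(nworkers)]
    with Pool(nworkers) as pool:
        partials = pool.map(worker, blocks)
    return sum(partials)
```

    The appeal PVM demonstrated is exactly this pattern at network scale: independent workers on commodity machines, with communication confined to scatter and gather phases.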

  4. Computer Simulation of Interactions between High-Power Electromagnetic Fields and Electronic Systems in a Complex Environment.

    DTIC Science & Technology

    1997-05-01

    and immune to interior resonance corruption. This work lays a foundation for the development of a very useful and powerful technique, which ... show that the resulting solution has good efficiency and accuracy and is completely immune to the problem of interior resonance. The technical ... "electromagnetic modeling for high-frequency MRI applications," International Society for Magnetic Resonance in Medicine Fifth Scientific Meeting, Vancouver, Canada

  5. COMMIX-PPC: A three-dimensional transient multicomponent computer program for analyzing performance of power plant condensers. Volume 2, User's guide and manual

    SciTech Connect

    Chien, T.H.; Domanus, H.M.; Sha, W.T.

    1993-02-01

    The COMMIX-PPC computer program is an extended and improved version of earlier COMMIX codes and is specifically designed for evaluating the thermal performance of power plant condensers. The COMMIX codes are general-purpose computer programs for the analysis of fluid flow and heat transfer in complex industrial systems. In COMMIX-PPC, two major features have been added to previously published COMMIX codes. One feature is the incorporation of one-dimensional conservation of mass, momentum, and energy equations on the tube side, and the proper accounting for the thermal interaction between shell and tube side through the porous-medium approach. The other added feature is the extension of the three-dimensional conservation equations for shell-side flow to treat the flow of a multicomponent medium. COMMIX-PPC is designed to perform steady-state and transient three-dimensional analysis of fluid flow with heat transfer in a power plant condenser. However, the code is designed in a generalized fashion so that, with some modification, it can be used to analyze processes in any heat exchanger or other single-phase engineering applications.

  6. COMMIX-PPC: A three-dimensional transient multicomponent computer program for analyzing performance of power plant condensers. Volume 1, Equations and numerics

    SciTech Connect

    Chien, T.H.; Domanus, H.M.; Sha, W.T.

    1993-02-01

    The COMMIX-PPC computer program is an extended and improved version of earlier COMMIX codes and is specifically designed for evaluating the thermal performance of power plant condensers. The COMMIX codes are general-purpose computer programs for the analysis of fluid flow and heat transfer in complex industrial systems. In COMMIX-PPC, two major features have been added to previously published COMMIX codes. One feature is the incorporation of one-dimensional equations of conservation of mass, momentum, and energy on the tube side, and the proper accounting for the thermal interaction between shell and tube side through the porous-medium approach. The other added feature is the extension of the three-dimensional conservation equations for shell-side flow to treat the flow of a multicomponent medium. COMMIX-PPC is designed to perform steady-state and transient three-dimensional analysis of fluid flow with heat transfer in a power plant condenser. However, the code is designed in a generalized fashion so that, with some modification, it can be used to analyze processes in any heat exchanger or other single-phase engineering applications. Volume I (Equations and Numerics) of this report describes in detail the basic equations, formulation, solution procedures, and models for the phenomena. Volume II (User's Guide and Manual) contains the input instructions, flow charts, sample problems, and descriptions of available options and boundary conditions.

  7. ADVANCED COMPUTATIONAL METHODS IN DOSE MODELING: APPLICATION OF COMPUTATIONAL BIOPHYSICAL TRANSPORT, COMPUTATIONAL CHEMISTRY, AND COMPUTATIONAL BIOLOGY

    EPA Science Inventory

    Computational toxicology (CompTox) leverages the significant gains in computing power and computational techniques (e.g., numerical approaches, structure-activity relationships, bioinformatics) realized over the last few years, thereby reducing costs and increasing efficiency i...

  8. High Power, Computer-Controlled, LED-Based Light Sources for Fluorescence Imaging and Image-Guided Surgery

    PubMed Central

    Gioux, Sylvain; Kianzad, Vida; Ciocan, Razvan; Gupta, Sunil; Oketokoun, Rafiou; Frangioni, John V.

    2009-01-01

    Optical imaging requires appropriate light sources. For image-guided surgery, and in particular fluorescence-guided surgery, high fluence rate, long working distance, computer control, and precise control of wavelength are required. In this study, we describe the development of light emitting diode (LED)-based light sources that meet these criteria. These light sources are enabled by a compact LED module that includes an integrated linear driver, heat-dissipation technology, and real-time temperature monitoring. Measuring only 27 mm W by 29 mm H, and weighing only 14.7 g, each module provides up to 6500 lx of white (400-650 nm) light and up to 157 mW of filtered fluorescence excitation light, while maintaining an operating temperature ≤ 50°C. We also describe software that can be used to design multi-module light housings, and an embedded processor that permits computer control and temperature monitoring. With these tools, we constructed a 76-module, sterilizable, 3-wavelength surgical light source capable of providing up to 40,000 lx of white light, 4.0 mW/cm2 of 670 nm near-infrared (NIR) fluorescence excitation light, and 14.0 mW/cm2 of 760 nm NIR fluorescence excitation light over a 15-cm diameter field-of-view. Using this light source, we demonstrate NIR fluorescence-guided surgery in a large animal model. PMID:19723473
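
    The reported irradiances translate into total delivered optical power by simple area arithmetic, assuming uniform illumination over the circular field of view; a sketch:

```python
import math

def excitation_power_mw(irradiance_mw_cm2, fov_diameter_cm):
    """Total optical power (mW) delivered over a circular field of view:
    irradiance times area. Assumes uniform illumination across the field."""
    area_cm2 = math.pi * (fov_diameter_cm / 2.0) ** 2
    return irradiance_mw_cm2 * area_cm2
```

    For the reported 15-cm field and 4.0 mW/cm² of 670-nm excitation, this works out to roughly 0.7 W of delivered NIR power, which gives a sense of the thermal load the modules' heat-dissipation design must handle.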

  9. Computer power fathoms the depths: billion-bit data processors illuminate the subsurface. [3-D Seismic techniques

    SciTech Connect

    Ross, J.J.

    1985-01-01

    Some of the same space-age signal technology being used to track events 200 miles above the earth is helping petroleum explorationists track down oil and natural gas two miles and more down into the earth. The breakthroughs, which have come in a technique called three-dimensional seismic work, could change the complexion of exploration for oil and natural gas. Thanks to this 3-D seismic approach, explorationists can make dynamic maps of sites miles beneath the surface. Then explorationists can throw these maps onto space-age computer systems and manipulate them every which way, homing in sharply on salt domes, faults, sands, and traps associated with oil and natural gas. "The 3-D seismic scene has exploded within the last two years," says Peiter Tackenberg, a Marathon technical consultant who deals with both domestic and international exploration. The 3-D technique has been around for more than a decade, he notes, but recent achievements in space-age computer hardware and software have unlocked its full potential.

  10. An integrated experimental and computational approach to material selection for a soundproof, thermally insulated enclosure of a power generation system

    NASA Astrophysics Data System (ADS)

    Waheed, R.; Tarar, W.; Saeed, H. A.

    2016-08-01

    Soundproof canopies for diesel power generators are fabricated with a layer of sound-absorbing material applied to all the inner walls. The physical properties of the majority of commercially available soundproofing materials reveal that a material with a high sound absorption coefficient has very low thermal conductivity. Consequently, a good sound-absorbing material is also a good heat insulator. In this research it has been found through various experiments that ordinary soundproofing materials tend to raise the inside temperature of the soundproof enclosure in certain turbo engines by capturing the heat produced by the engine and not allowing it to be transferred to the atmosphere. The same phenomenon is studied by creating a finite element model of the soundproof enclosure and performing steady-state and transient thermal analyses. The prospects of using aluminium foam as a soundproofing material have been studied, and it is found that the inside temperature of the soundproof enclosure can be cut down to the safe working temperature of the power generator engine without compromising the soundproofing.

  11. GETRAN: A generic, modularly structured computer code for simulation of dynamic behavior of aero- and power generation gas turbine engines

    NASA Astrophysics Data System (ADS)

    Schobeiri, M. T.; Attia, M.; Lippke, C.

    1994-07-01

    The design concept, the theoretical background essential for the development of the modularly structured simulation code GETRAN, and several critical simulation cases are presented in this paper. The code, being developed under contract with NASA Lewis Research Center, is capable of simulating the nonlinear dynamic behavior of single- and multispool core engines, turbofan engines, and power generation gas turbine engines under adverse dynamic operating conditions. The modules implemented into GETRAN correspond to components of existing and new-generation aero- and stationary gas turbine engines with arbitrary configuration and arrangement. For precise simulation of turbine and compressor components, row-by-row diabatic and adiabatic calculation procedures are implemented that account for the specific turbine and compressor cascade, blade geometry, and characteristics. The nonlinear, dynamic behavior of the subject engine is calculated by solving a number of systems of partial differential equations, which describe the unsteady behavior of each component individually. To identify each differential equation system unambiguously, special attention is paid to the addressing of each component. The code is capable of executing the simulation procedure at four levels, which increase with the degree of complexity of the system and dynamic event. As representative simulations, four different transient cases with single- and multispool thrust and power generation engines were simulated. These transient cases range from throttling the exit nozzle area, operation with a fuel schedule, and rotor-speed control to rotating stall and surge.
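
    Component modules in a code of this kind each contribute governing equations to the engine system. As a minimal single-spool example of the rotor dynamics such a simulator integrates, I·dω/dt = τ_turbine − τ_compressor, stepped explicitly; all names and values are illustrative, not GETRAN's formulation:

```python
def spool_speed(omega0, torque_turbine, torque_compressor, inertia, dt, steps):
    """Explicit (Euler) integration of a single spool's rotor dynamics:
    inertia * d(omega)/dt = tau_turbine(omega) - tau_compressor(omega).
    The torque inputs are callables of shaft speed; values illustrative."""
    omega = omega0
    history = [omega]
    for _ in range(steps):
        omega += dt * (torque_turbine(omega) - torque_compressor(omega)) / inertia
        history.append(omega)
    return history
```

    A transient such as throttling the exit nozzle area enters a model like this through the torque terms, which in a full modular code come from the row-by-row turbine and compressor component calculations.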

  12. Polylactides in additive biomanufacturing.

    PubMed

    Poh, Patrina S P; Chhaya, Mohit P; Wunner, Felix M; De-Juan-Pardo, Elena M; Schilling, Arndt F; Schantz, Jan-Thorsten; van Griensven, Martijn; Hutmacher, Dietmar W

    2016-12-15

    New advanced manufacturing technologies under the alias of additive biomanufacturing allow the design and fabrication of a range of products, from pre-operative models, cutting guides and medical devices to scaffolds. The process of printing in 3 dimensions of cells, extracellular matrix (ECM) and biomaterials (bioinks, powders, etc.) to generate in vitro and/or in vivo tissue analogue structures has been termed bioprinting. To further advance additive biomanufacturing, there are many aspects that we can learn from the wider additive manufacturing (AM) industry, which has progressed tremendously since its introduction into the manufacturing sector. First, this review gives an overview of additive manufacturing and of both industry and academia efforts in addressing specific challenges in the AM technologies to drive toward an AM-enabled industrial revolution. Thereafter, considerations of poly(lactides) as a biomaterial in additive biomanufacturing are discussed. Challenges in the wider additive biomanufacturing field are discussed in terms of (a) biomaterials; (b) computer-aided design, engineering and manufacturing; (c) AM and additive biomanufacturing printer hardware; and (d) system integration. Finally, the outlook for additive biomanufacturing is discussed.

  13. Health effects models for nuclear power plant accident consequence analysis. Modification of models resulting from addition of effects of exposure to alpha-emitting radionuclides: Revision 1, Part 2, Scientific bases for health effects models, Addendum 2

    SciTech Connect

    Abrahamson, S.; Bender, M.A.; Boecker, B.B.; Scott, B.R.; Gilbert, E.S.

    1993-05-01

    The Nuclear Regulatory Commission (NRC) has sponsored several studies to identify and quantify, through the use of models, the potential health effects of accidental releases of radionuclides from nuclear power plants. The Reactor Safety Study provided the basis for most of the earlier estimates related to these health effects. Subsequent efforts by NRC-supported groups resulted in improved health effects models that were published in the report entitled "Health Effects Models for Nuclear Power Plant Consequence Analysis", NUREG/CR-4214, 1985, and revised further in the 1989 report NUREG/CR-4214, Rev. 1, Part 2. The health effects models presented in the 1989 NUREG/CR-4214 report were developed for exposure to low-linear energy transfer (LET) (beta and gamma) radiation based on the best scientific information available at that time. Since the 1989 report was published, two addenda to that report have been prepared to (1) incorporate other scientific information related to low-LET health effects models and (2) extend the models to consider the possible health consequences of the addition of alpha-emitting radionuclides to the exposure source term. The first addendum report, entitled "Health Effects Models for Nuclear Power Plant Accident Consequence Analysis, Modifications of Models Resulting from Recent Reports on Health Effects of Ionizing Radiation, Low LET Radiation, Part 2: Scientific Bases for Health Effects Models," was published in 1991 as NUREG/CR-4214, Rev. 1, Part 2, Addendum 1. This second addendum addresses the possibility that some fraction of the accident source term from an operating nuclear power plant comprises alpha-emitting radionuclides. Consideration of chronic high-LET exposure from alpha radiation, as well as acute and chronic exposure to low-LET beta and gamma radiations, is a reasonable extension of the health effects model.

  14. Application of computational neural networks in predicting atmospheric pollutant concentrations due to fossil-fired electric power generation

    SciTech Connect

    El-Hawary, F.

    1995-12-31

    The ability to accurately predict the behavior of a dynamic system is of essential importance in monitoring and control of complex processes. In this regard recent advances in neural-net based system identification represent a significant step toward development and design of a new generation of control tools for increased system performance and reliability. The enabling functionality is the one of accurate representation of a model of a nonlinear and nonstationary dynamic system. This functionality provides valuable new opportunities including: (1) The ability to predict future system behavior on the basis of actual system observations, (2) On-line evaluation and display of system performance and design of early warning systems, and (3) Controller optimization for improved system performance. In this presentation, we discuss the issues involved in definition and design of learning control systems and their impact on power system control. Several numerical examples are provided for illustrative purpose.

  15. Evaluating the Discriminatory Power of a Computer-based System for Assessing Penetrating Trauma on Retrospective Multi-Center Data

    PubMed Central

    Matheny, Michael E.; Ogunyemi, Omolola I.; Rice, Phillip L.; Clarke, John R.

    2005-01-01

    Objective To evaluate the discriminatory power of TraumaSCAN-Web, a system for assessing penetrating trauma, using retrospective multi-center case data for gunshot and stab wounds to the thorax and abdomen. Methods 80 gunshot and 114 stab cases were evaluated using TraumaSCAN-Web. Areas under the Receiver Operating Characteristic curves (AUC) were calculated for each condition modeled in TraumaSCAN-Web. Results Of the 23 conditions modeled by TraumaSCAN-Web, 19 were present in either the gunshot or stab case data. The gunshot AUCs ranged from 0.519 (pericardial tamponade) to 0.975 (right renal injury). The stab AUCs ranged from 0.701 (intestinal injury) to 1.000 (tracheal injury). PMID:16779090
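
    For context, the AUC values reported above have a direct probabilistic reading: the area under the receiver operating characteristic curve equals the probability that a randomly chosen positive case receives a higher score than a randomly chosen negative one (ties counted half). A minimal sketch of that computation, on invented toy data rather than the study's:

```python
def auc(scores, labels):
    """Area under the ROC curve via its Mann-Whitney interpretation:
    the fraction of (positive, negative) pairs where the positive case
    outscores the negative one, counting ties as one half."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    if not pos or not neg:
        raise ValueError("need at least one positive and one negative case")
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Illustrative toy data: scores from a hypothetical classifier.
scores = [0.9, 0.8, 0.7, 0.4, 0.3, 0.1]
labels = [1,   1,   0,   1,   0,   0]
print(auc(scores, labels))  # 0.888... (8 of 9 pairs correctly ordered)
```

    A rank-based implementation avoids the quadratic pairwise loop for large case sets, but gives the same value.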

  16. Power management system

    DOEpatents

    Algrain, Marcelo C.; Johnson, Kris W.; Akasam, Sivaprasad; Hoff, Brian D.

    2007-10-02

    A method of managing power resources for an electrical system of a vehicle may include identifying enabled power sources from among a plurality of power sources in electrical communication with the electrical system and calculating a threshold power value for the enabled power sources. A total power load placed on the electrical system by one or more power consumers may be measured. If the total power load exceeds the threshold power value, then a determination may be made as to whether one or more additional power sources is available from among the plurality of power sources. At least one of the one or more additional power sources may be enabled, if available.
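
    The claimed method is essentially threshold logic over a pool of power sources. A minimal sketch, with hypothetical source names and ratings (nothing below is taken from the patent itself):

```python
def manage_power(sources, total_load):
    """Sketch of the claimed method: compute the threshold power value
    (combined rating of the enabled sources) and, if the measured load
    exceeds it, enable additional sources until the threshold covers the
    load or no sources remain. Returns the resulting threshold."""
    threshold = sum(s["rating"] for s in sources if s["enabled"])
    for s in sources:
        if total_load <= threshold:
            break
        if not s["enabled"]:
            s["enabled"] = True
            threshold += s["rating"]
    return threshold

# Hypothetical vehicle sources; names and ratings are invented.
sources = [
    {"name": "alternator", "rating": 150.0, "enabled": True},
    {"name": "battery",    "rating": 100.0, "enabled": False},
    {"name": "aux-gen",    "rating": 200.0, "enabled": False},
]
print(manage_power(sources, total_load=220.0))  # 250.0 (battery enabled, aux-gen left off)
```

    Enabling stops as soon as the combined rating covers the measured load, mirroring the "enable at least one additional source, if available" step in the abstract.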

  17. Computer sciences

    NASA Technical Reports Server (NTRS)

    Smith, Paul H.

    1988-01-01

    The Computer Science Program provides advanced concepts, techniques, system architectures, algorithms, and software for both space and aeronautics information sciences and computer systems. The overall goal is to provide the technical foundation within NASA for the advancement of computing technology in aerospace applications. The research program is improving the state of knowledge of fundamental aerospace computing principles and advancing computing technology in space applications such as software engineering and information extraction from data collected by scientific instruments in space. The program includes the development of special algorithms and techniques to exploit the computing power provided by high performance parallel processors and special purpose architectures. Research is being conducted in the fundamentals of database logic and improvement techniques for producing reliable computing systems.

  18. An investigation on the effect of second-order additional thickness distributions to the upper surface of an NACA 64 sub 1-212 airfoil. [using flow equations and a CDC 7600 digital computer]

    NASA Technical Reports Server (NTRS)

    Hague, D. S.; Merz, A. W.

    1975-01-01

    An investigation was conducted on a CDC 7600 digital computer to determine the effects of additional thickness distributions to the upper surface of an NACA 64 sub 1 - 212 airfoil. Additional thickness distributions employed were in the form of two second-order polynomial arcs which have a specified thickness at a given chordwise location. The forward arc disappears at the airfoil leading edge, the aft arc disappears at the airfoil trailing edge. At the juncture of the two arcs, x = x, continuity of slope is maintained. The effect of varying the maximum additional thickness and its chordwise location on airfoil lift coefficient, pitching moment, and pressure distribution was investigated. Results were obtained at a Mach number of 0.2 with an angle-of-attack of 6 degrees on the basic NACA 64 sub 1 - 212 airfoil, and all calculations employ the full potential flow equations for two dimensional flow. The relaxation method of Jameson was employed for solution of the potential flow equations.

  19. Theoretical effect of modifications to the upper surface of two NACA airfoils using smooth polynomial additional thickness distributions which emphasize leading edge profile and which vary quadratically at the trailing edge. [using flow equations and a CDC 7600 computer]

    NASA Technical Reports Server (NTRS)

    Merz, A. W.; Hague, D. S.

    1975-01-01

    An investigation was conducted on a CDC 7600 digital computer to determine the effects of additional thickness distributions to the upper surface of the NACA 64-206 and 64 sub 1 - 212 airfoils. The additional thickness distribution had the form of a continuous mathematical function which disappears at both the leading edge and the trailing edge. The function behaves as a polynomial of order epsilon sub 1 at the leading edge, and a polynomial of order epsilon sub 2 at the trailing edge. Epsilon sub 2 is a constant and epsilon sub 1 is varied over a range of practical interest. The magnitude of the additional thickness, y, is a second input parameter, and the effect of varying epsilon sub 1 and y on the aerodynamic performance of the airfoil was investigated. Results were obtained at a Mach number of 0.2 with an angle-of-attack of 6 degrees on the basic airfoils, and all calculations employ the full potential flow equations for two dimensional flow. The relaxation method of Jameson was employed for solution of the potential flow equations.

  20. An investigation on the effect of second-order additional thickness distributions to the upper surface of an NACA 64-206 airfoil. [using flow equations and a CDC 7600 digital computer]

    NASA Technical Reports Server (NTRS)

    Merz, A. W.; Hague, D. S.

    1975-01-01

    An investigation was conducted on a CDC 7600 digital computer to determine the effects of additional thickness distributions to the upper surface of an NACA 64-206 airfoil. Additional thickness distributions employed were in the form of two second-order polynomial arcs which have a specified thickness at a given chordwise location. The forward arc disappears at the airfoil leading edge, the aft arc disappears at the airfoil trailing edge. At the juncture of the two arcs, x = x, continuity of slope is maintained. The effect of varying the maximum additional thickness and its chordwise location on airfoil lift coefficient, pitching moment, and pressure distribution was investigated. Results were obtained at a Mach number of 0.2 with an angle-of-attack of 6 degrees on the basic NACA 64-206 airfoil, and all calculations employ the full potential flow equations for two dimensional flow. The relaxation method of Jameson was employed for solution of the potential flow equations.

  1. Computational and experimental progress on laser-activated gas avalanche switches for broadband, high-power electromagnetic pulse generation

    SciTech Connect

    Mayhall, D.J.; Yee, J.H. ); Villa, F. )

    1990-09-01

    The gas avalanche switch, a high-voltage, picosecond-speed switch, has been proposed. The basic switch consists of pulse-charged electrodes, immersed in a high-pressure (7-800 atm) gas. An avalanche discharge is induced in the gas between the electrodes by ionization from a picosecond-scale laser pulse. The avalanching electrons move toward the anode, causing the applied voltage to collapse in picoseconds. This voltage collapse, if rapid enough, generates electromagnetic waves. A two-dimensional (2D), finite difference computer code solves Maxwell's equations for transverse magnetic modes for rectilinear electrodes between parallel plate conductors, along with electron conservation equations for continuity, momentum, and energy. Collision frequencies for ionization and momentum and energy transfer to neutral molecules are assumed to scale linearly with neutral pressure. Electrode charging and laser-driven electron deposition are assumed to be instantaneous. Code calculations are done for a pulse generator geometry, consisting of a 0.7 mm wide by 0.8 mm high, beveled, rectangular center electrode between grounded parallel plates at 2 mm spacing in air. 17 refs., 12 figs., 2 tabs.

  2. Requirements for Computer-Based Procedures for Nuclear Power Plant Field Operators: Results from a Qualitative Study

    SciTech Connect

    Katya Le Blanc; Johanna Oxstrand

    2012-05-01

    Although computer-based procedures (CBPs) have been investigated as a way to enhance operator performance on procedural tasks in the nuclear industry for almost thirty years, they are not currently widely deployed at United States utilities. One of the barriers to the wide-scale deployment of CBPs is the lack of operational experience with CBPs that could serve as a sound basis for justifying their use for nuclear utilities. Utilities are hesitant to adopt CBPs because of concern over potential costs of implementation, and concern over regulatory approval. Regulators require a sound technical basis for the use of any procedure at the utilities; without operating experience to support the use of CBPs, it is difficult to establish such a technical basis. In an effort to begin the process of developing a technical basis for CBPs, researchers at Idaho National Laboratory are partnering with industry to explore CBPs with the objective of defining requirements for CBPs and developing an industry-wide vision and path forward for the use of CBPs. This paper describes the results from a qualitative study aimed at defining requirements for CBPs to be used by field operators and maintenance technicians.

  3. Computational Study of the Impact of Unsteadiness on the Aerodynamic Performance of a Variable-Speed Power Turbine

    NASA Technical Reports Server (NTRS)

    Welch, Gerard E.

    2012-01-01

    The design-point and off-design performance of an embedded 1.5-stage portion of a variable-speed power turbine (VSPT) was assessed using Reynolds-Averaged Navier-Stokes (RANS) analyses with mixing-planes and sector-periodic, unsteady RANS analyses. The VSPT provides one means by which to effect the nearly 50 percent main-rotor speed change required for the NASA Large Civil Tilt-Rotor (LCTR) application. The change in VSPT shaft-speed during the LCTR mission results in blade-row incidence angle changes of as high as 55°. Negative incidence levels of this magnitude at takeoff operation give rise to a vortical flow structure in the pressure-side cove of a high-turn rotor that transports low-momentum flow toward the casing endwall. The intent of the effort was to assess the impact of unsteadiness of blade-row interaction on the time-mean flow and, specifically, to identify potential departure from the predicted trend of efficiency with shaft-speed change of meanline and 3-D RANS/mixing-plane analyses used for design.

  4. Mapping hidden potential identity elements by computing the average discriminating power of individual tRNA positions.

    PubMed

    Szenes, Aron; Pál, Gábor

    2012-06-01

    The recently published discrete mathematical method, extended consensus partition (ECP), identifies nucleotide types at each position that are strictly absent from a given sequence set, while occurring in other sets. These are defined as discriminating elements (DEs). In this study using the ECP approach, we mapped potential hidden identity elements that discriminate the 20 different tRNA identities. We filtered the tDNA data set for the obligatory presence of well-established tRNA features, and then separately for each identity set, the presence of already experimentally identified strictly present identity elements. The analysis was performed on the three kingdoms of life. We determined the number of DE, i.e., the number of sets discriminated by the given position, for each tRNA position of each tRNA identity set. Then, from the positional DE numbers obtained from the 380 pairwise comparisons of the 20 identity sets, we calculated the average excluding value (AEV) for each tRNA position. The AEV provides a measure of the overall discriminating power of each position. Using a statistical analysis, we show that positional AEVs correlate with the number of already identified identity elements. Positions having high AEV but lacking published identity elements predict hitherto undiscovered tRNA identity elements.
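
    The positional DE and AEV computation described above reduces to set arithmetic over aligned sequences. A toy sketch, under the simplifying assumption that a discriminating element for an ordered pair of sets (A, B) is a nucleotide type occurring in B's set but strictly absent from A's set at that position (the three "identity sets" below are invented, not real tRNA data):

```python
from itertools import permutations

def positional_aev(sets):
    """Average excluding value per alignment position: for each ordered
    pair of identity sets (A, B), count nucleotide types present in B but
    strictly absent from A at that position, then average over all pairs.
    `sets` maps identity name -> list of equal-length aligned sequences."""
    length = len(next(iter(sets.values()))[0])
    # Nucleotide types observed per set, per position.
    observed = {name: [{seq[p] for seq in seqs} for p in range(length)]
                for name, seqs in sets.items()}
    pairs = list(permutations(sets, 2))
    aev = []
    for p in range(length):
        total = sum(len(observed[b][p] - observed[a][p]) for a, b in pairs)
        aev.append(total / len(pairs))
    return aev

# Toy example with three hypothetical 4-nt "identity sets":
toy = {
    "Ala": ["GGCA", "GGCU"],
    "Gly": ["GCCA", "GCCU"],
    "His": ["AGCA", "AGCU"],
}
print(positional_aev(toy))  # [0.666..., 0.666..., 0.0, 0.0]
```

    Positions 1 and 2 carry discriminating information in this toy alignment while positions 3 and 4 carry none; with 20 identity sets the same loop runs over 20 × 19 = 380 ordered comparisons, matching the count in the abstract.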

  5. Mapping Hidden Potential Identity Elements by Computing the Average Discriminating Power of Individual tRNA Positions

    PubMed Central

    Szenes, Áron; Pál, Gábor

    2012-01-01

    The recently published discrete mathematical method, extended consensus partition (ECP), identifies nucleotide types at each position that are strictly absent from a given sequence set, while occurring in other sets. These are defined as discriminating elements (DEs). In this study using the ECP approach, we mapped potential hidden identity elements that discriminate the 20 different tRNA identities. We filtered the tDNA data set for the obligatory presence of well-established tRNA features, and then separately for each identity set, the presence of already experimentally identified strictly present identity elements. The analysis was performed on the three kingdoms of life. We determined the number of DE, i.e., the number of sets discriminated by the given position, for each tRNA position of each tRNA identity set. Then, from the positional DE numbers obtained from the 380 pairwise comparisons of the 20 identity sets, we calculated the average excluding value (AEV) for each tRNA position. The AEV provides a measure of the overall discriminating power of each position. Using a statistical analysis, we show that positional AEVs correlate with the number of already identified identity elements. Positions having high AEV but lacking published identity elements predict hitherto undiscovered tRNA identity elements. PMID:22378766

  6. Transition Metal Diborides as Electrode Material for MHD Direct Power Extraction: High-temperature Oxidation of ZrB2-HfB2 Solid Solution with LaB6 Addition

    NASA Astrophysics Data System (ADS)

    Sitler, Steven; Hill, Cody; Raja, Krishnan S.; Charit, Indrajit

    2016-06-01

    Transition metal borides are being considered for use as potential electrode coating materials in magnetohydrodynamic direct power extraction plants from coal-fired plasma. These electrode materials will be exposed to aggressive service conditions at high temperatures. Therefore, high-temperature oxidation resistance is an important property. Consolidated samples containing an equimolar solid solution of ZrB2-HfB2 with and without the addition of 1.8 mol pct LaB6 were prepared by ball milling of commercial boride material followed by spark plasma sintering. These samples were oxidized at 1773 K (1500 °C) in two different conditions: (1) as-sintered and (2) anodized (10 V in 0.1 M KOH electrolyte). Oxidation studies were carried out in 0.3 × 10⁵ and 0.1 Pa oxygen partial pressures. The anodic oxide layers showed hafnium enrichment on the surface of the samples, whereas the high-temperature oxides showed zirconium enrichment. The anodized samples without LaB6 addition showed about 2.5 times higher oxidation resistance in high-oxygen partial pressures than the as-sintered samples. Addition of LaB6 improved the oxidation resistance in the as-sintered condition by about 30 pct in the high-oxygen partial pressure tests.

  7. Coping with distributed computing

    SciTech Connect

    Cormell, L.

    1992-09-01

    The rapid increase in the availability of high performance, cost-effective RISC/UNIX workstations has been both a blessing and a curse. The blessing of having extremely powerful computing engines available on the desk top is well-known to many users. The user has tremendous freedom, flexibility, and control of his environment. That freedom can, however, become the curse of distributed computing. The user must become a system manager to some extent: he must worry about backups, maintenance, upgrades, etc. Traditionally these activities have been the responsibility of a central computing group. The central computing group, however, may find that it can no longer provide all of the traditional services. With the plethora of workstations now found on so many desktops throughout the entire campus or lab, the central computing group may be swamped by support requests. This talk will address several of these computer support and management issues by providing some examples of the approaches taken at various HEP institutions. In addition, a brief review of commercial directions or products for distributed computing and management will be given.

  8. Fermi Observations of GRB 090510: A Short-Hard Gamma-ray Burst with an Additional, Hard Power-law Component from 10 keV to GeV Energies

    NASA Astrophysics Data System (ADS)

    Ackermann, M.; Asano, K.; Atwood, W. B.; Axelsson, M.; Baldini, L.; Ballet, J.; Barbiellini, G.; Baring, M. G.; Bastieri, D.; Bechtol, K.; Bellazzini, R.; Berenji, B.; Bhat, P. N.; Bissaldi, E.; Blandford, R. D.; Bloom, E. D.; Bonamente, E.; Borgland, A. W.; Bouvier, A.; Bregeon, J.; Brez, A.; Briggs, M. S.; Brigida, M.; Bruel, P.; Buson, S.; Caliandro, G. A.; Cameron, R. A.; Caraveo, P. A.; Carrigan, S.; Casandjian, J. M.; Cecchi, C.; Çelik, Ö.; Charles, E.; Chiang, J.; Ciprini, S.; Claus, R.; Cohen-Tanugi, J.; Connaughton, V.; Conrad, J.; Dermer, C. D.; de Palma, F.; Dingus, B. L.; Silva, E. do Couto e.; Drell, P. S.; Dubois, R.; Dumora, D.; Farnier, C.; Favuzzi, C.; Fegan, S. J.; Finke, J.; Focke, W. B.; Frailis, M.; Fukazawa, Y.; Fusco, P.; Gargano, F.; Gasparrini, D.; Gehrels, N.; Germani, S.; Giglietto, N.; Giordano, F.; Glanzman, T.; Godfrey, G.; Granot, J.; Grenier, I. A.; Grondin, M.-H.; Grove, J. E.; Guiriec, S.; Hadasch, D.; Harding, A. K.; Hays, E.; Horan, D.; Hughes, R. E.; Jóhannesson, G.; Johnson, W. N.; Kamae, T.; Katagiri, H.; Kataoka, J.; Kawai, N.; Kippen, R. M.; Knödlseder, J.; Kocevski, D.; Kouveliotou, C.; Kuss, M.; Lande, J.; Latronico, L.; Lemoine-Goumard, M.; Llena Garde, M.; Longo, F.; Loparco, F.; Lott, B.; Lovellette, M. N.; Lubrano, P.; Makeev, A.; Mazziotta, M. N.; McEnery, J. E.; McGlynn, S.; Meegan, C.; Mészáros, P.; Michelson, P. F.; Mitthumsiri, W.; Mizuno, T.; Moiseev, A. A.; Monte, C.; Monzani, M. E.; Moretti, E.; Morselli, A.; Moskalenko, I. V.; Murgia, S.; Nakajima, H.; Nakamori, T.; Nolan, P. L.; Norris, J. P.; Nuss, E.; Ohno, M.; Ohsugi, T.; Omodei, N.; Orlando, E.; Ormes, J. F.; Ozaki, M.; Paciesas, W. S.; Paneque, D.; Panetta, J. H.; Parent, D.; Pelassa, V.; Pepe, M.; Pesce-Rollins, M.; Piron, F.; Preece, R.; Rainò, S.; Rando, R.; Razzano, M.; Razzaque, S.; Reimer, A.; Ritz, S.; Rodriguez, A. Y.; Roth, M.; Ryde, F.; Sadrozinski, H. F.-W.; Sander, A.; Scargle, J. D.; Schalk, T. L.; Sgrò, C.; Siskind, E. J.; Smith, P. 
D.; Spandre, G.; Spinelli, P.; Stamatikos, M.; Stecker, F. W.; Strickman, M. S.; Suson, D. J.; Tajima, H.; Takahashi, H.; Takahashi, T.; Tanaka, T.; Thayer, J. B.; Thayer, J. G.; Thompson, D. J.; Tibaldo, L.; Toma, K.; Torres, D. F.; Tosti, G.; Tramacere, A.; Uchiyama, Y.; Uehara, T.; Usher, T. L.; van der Horst, A. J.; Vasileiou, V.; Vilchez, N.; Vitale, V.; von Kienlin, A.; Waite, A. P.; Wang, P.; Wilson-Hodge, C.; Winer, B. L.; Wu, X. F.; Yamazaki, R.; Yang, Z.; Ylinen, T.; Ziegler, M.

    2010-06-01

    We present detailed observations of the bright short-hard gamma-ray burst GRB 090510 made with the Gamma-ray Burst Monitor (GBM) and Large Area Telescope (LAT) on board the Fermi observatory. GRB 090510 is the first burst detected by the LAT that shows strong evidence for a deviation from a Band spectral fitting function during the prompt emission phase. The time-integrated spectrum is fit by the sum of a Band function with E_peak = 3.9 ± 0.3 MeV, which is the highest yet measured, and a hard power-law component with photon index -1.62 ± 0.03 that dominates the emission below ≈20 keV and above ≈100 MeV. The onset of the high-energy spectral component appears to be delayed by ~0.1 s with respect to the onset of a component well fit with a single Band function. A faint GBM pulse and a LAT photon are detected 0.5 s before the main pulse. During the prompt phase, the LAT detected a photon with energy 30.5 (+5.8/−2.6) GeV, the highest ever measured from a short GRB. Observation of this photon sets a minimum bulk outflow Lorentz factor, Γ ≳ 1200, using simple γγ opacity arguments for this GRB at redshift z = 0.903 and a variability timescale on the order of tens of ms for the ≈100 keV to few-MeV flux. Stricter high-confidence estimates imply Γ ≳ 1000 and still require that the outflows powering short GRBs are at least as highly relativistic as those of long-duration GRBs. Implications of the temporal behavior and power-law shape of the additional component on synchrotron/synchrotron self-Compton, external-shock synchrotron, and hadronic models are considered.

  9. CROSS-DISCIPLINARY PHYSICS AND RELATED AREAS OF SCIENCE AND TECHNOLOGY: Simulation of SET Operation in Phase-Change Random Access Memories with Heater Addition and Ring-Type Contactor for Low-Power Consumption by Finite Element Modeling

    NASA Astrophysics Data System (ADS)

    Gong, Yue-Feng; Song, Zhi-Tang; Ling, Yun; Liu, Yan; Feng, Song-Lin

    2009-11-01

    A three-dimensional finite element model for phase change random access memory (PCRAM) is established for comprehensive electrical and thermal analysis during SET operation. The SET behaviours of the heater addition structure (HS) and the ring-type contact in bottom electrode (RIB) structure are compared with each other. There are two ways to reduce the RESET current: applying a high-resistivity interfacial layer, or building a new device structure. The simulation results indicate that the SET current varies little between these power-reduction approaches. This study takes both the RESET and SET operation currents into consideration, showing that the RIB-structure PCRAM cell is suitable for future high-density devices, due to its high heat efficiency in RESET operation.

  10. The Ames Power Monitoring System

    NASA Technical Reports Server (NTRS)

    Osetinsky, Leonid; Wang, David

    2003-01-01

    The Ames Power Monitoring System (APMS) is a centralized system of power meters, computer hardware, and special-purpose software that collects and stores electrical power data from various facilities at Ames Research Center (ARC). This system is needed because of the large and varying nature of the overall ARC power demand, which has been observed to range from 20 to 200 MW. Large portions of peak demand can be attributed to only three wind tunnels (60, 180, and 100 MW). The APMS helps ARC avoid or minimize costly demand charges by enabling wind-tunnel operators, test engineers, and the power manager to monitor total demand for the center in real time. These persons receive the information they need to manage and schedule energy-intensive research in advance and to adjust loads in real time to ensure that the overall maximum allowable demand is not exceeded. The APMS (see figure) includes a server computer running the Windows NT operating system and can, in principle, include an unlimited number of power meters and client computers. As configured at the time of reporting the information for this article, the APMS includes more than 40 power meters monitoring all the major research facilities, plus 15 Windows-based client personal computers that display real-time and historical data to users via graphical user interfaces (GUIs). The power meters and client computers communicate with the server using Transmission Control Protocol/Internet Protocol (TCP/IP) on Ethernet networks, variously, through dedicated fiber-optic cables or through the pre-existing ARC local-area network (ARCLAN). The APMS has enabled ARC to achieve significant savings ($1.2 million in 2001) in the cost of power and electric energy by helping personnel to maintain total demand below monthly allowable levels, to manage the overall power factor to avoid low power factor penalties, and to use historical system data to identify opportunities for additional energy savings. The APMS also

  11. A fast technique for computing syndromes of BCH and RS codes. [deep space network

    NASA Technical Reports Server (NTRS)

    Reed, I. S.; Truong, T. K.; Miller, R. L.

    1979-01-01

    A combination of the Chinese Remainder Theorem and Winograd's algorithm is used to compute transforms of odd length over GF(2^m). Such transforms are used to compute the syndromes needed for decoding BCH and RS codes. The present scheme requires substantially fewer multiplications and additions than the conventional method of computing the syndromes directly.
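
    For context, the "conventional method of computing the syndromes directly" evaluates the received polynomial at consecutive powers of a primitive element α, i.e. S_j = r(α^j); the transform-based scheme in this record produces the same values with fewer multiplications. A direct-evaluation sketch over GF(2^4) with primitive polynomial x^4 + x + 1 (the code parameters are illustrative, not from the paper):

```python
# Direct syndrome evaluation for a Reed-Solomon code over GF(2^4).
PRIM = 0b10011  # primitive polynomial x^4 + x + 1

def gf_mul(a, b):
    """Carry-less multiply in GF(2^4), reducing modulo PRIM."""
    r = 0
    while b:
        if b & 1:
            r ^= a
        a <<= 1
        if a & 0b10000:
            a ^= PRIM
        b >>= 1
    return r

def gf_pow(a, n):
    r = 1
    for _ in range(n):
        r = gf_mul(r, a)
    return r

def syndromes(received, num_syndromes, alpha=2):
    """S_j = r(alpha^j) for j = 1..num_syndromes, where received[i] is the
    coefficient of x^i; all syndromes are zero for a valid codeword."""
    out = []
    for j in range(1, num_syndromes + 1):
        s, x = 0, gf_pow(alpha, j)
        for coef in reversed(received):  # Horner evaluation at alpha^j
            s = gf_mul(s, x) ^ coef
        out.append(s)
    return out

clean = [0] * 15                     # the all-zero codeword is valid
print(syndromes(clean, 4))           # [0, 0, 0, 0]
corrupted = clean.copy()
corrupted[3] = 7                     # inject a single symbol error
print(any(syndromes(corrupted, 4)))  # True: the error is detected
```

    All syndromes vanish on a valid codeword and any error makes at least one syndrome nonzero, which is the property decoders build on; the paper's contribution is computing these values faster, not changing what they are.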

  12. 18 CFR 33.10 - Additional information.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... 18 Conservation of Power and Water Resources 1 2010-04-01 2010-04-01 false Additional information. 33.10 Section 33.10 Conservation of Power and Water Resources FEDERAL ENERGY REGULATORY COMMISSION, DEPARTMENT OF ENERGY REGULATIONS UNDER THE FEDERAL POWER ACT APPLICATIONS UNDER FEDERAL POWER ACT SECTION...

  13. Demographic inferences using short-read genomic data in an approximate Bayesian computation framework: in silico evaluation of power, biases and proof of concept in Atlantic walrus.

    PubMed

    Shafer, Aaron B A; Gattepaille, Lucie M; Stewart, Robert E A; Wolf, Jochen B W

    2015-01-01

    Approximate Bayesian computation (ABC) is a powerful tool for model-based inference of demographic histories from large genetic data sets. For most organisms, its implementation has been hampered by the lack of sufficient genetic data. Genotyping-by-sequencing (GBS) provides cheap genome-scale data to fill this gap, but its potential has not fully been exploited. Here, we explored power, precision and biases of a coalescent-based ABC approach where GBS data were modelled with either a population mutation parameter (θ) or a fixed site (FS) approach, allowing single or several segregating sites per locus. With simulated data ranging from 500 to 50 000 loci, a variety of demographic models could be reliably inferred across a range of timescales and migration scenarios. Posterior estimates were informative with 1000 loci for migration and split time in simple population divergence models. In more complex models, posterior distributions were wide and almost reverted to the uninformative prior even with 50 000 loci. ABC parameter estimates, however, were generally more accurate than an alternative composite-likelihood method. Bottleneck scenarios proved particularly difficult, and only recent bottlenecks without recovery could be reliably detected and dated. Notably, minor-allele-frequency filters - usual practice for GBS data - negatively affected nearly all estimates. With this in mind, we used a combination of FS and θ approaches on empirical GBS data generated from the Atlantic walrus (Odobenus rosmarus rosmarus), collectively providing support for a population split before the last glacial maximum followed by asymmetrical migration and a high Arctic bottleneck. Overall, this study evaluates the potential and limitations of GBS data in an ABC-coalescence framework and proposes a best-practice approach.
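
    The rejection flavor of ABC that underlies such studies is simple to state: draw parameters from the prior, simulate data, and keep the draws whose summary statistics land close to the observed ones. The sketch below substitutes a toy exponential model for the coalescent/GBS simulations used in the paper; the seed, prior bounds, and tolerance are all invented for illustration.

```python
import random
import statistics

def abc_rejection(observed_stat, simulate, prior, n_draws=20000, tol=0.1):
    """Generic rejection ABC: draw a parameter from the prior, simulate a
    data set, and keep the draw when the simulated summary statistic lands
    within `tol` of the observed one."""
    accepted = []
    for _ in range(n_draws):
        theta = prior()
        if abs(simulate(theta) - observed_stat) < tol:
            accepted.append(theta)
    return accepted

rng = random.Random(42)  # fixed seed so the sketch is repeatable

def simulate(theta):
    # Toy model: the summary statistic is the mean of 50 exponential draws
    # with mean theta (a stand-in for a coalescent simulator's output).
    return statistics.fmean(rng.expovariate(1.0 / theta) for _ in range(50))

observed = simulate(1.5)  # pseudo-observed data with true parameter 1.5
posterior = abc_rejection(observed, simulate, prior=lambda: rng.uniform(0.1, 5.0))
print(len(posterior), round(statistics.fmean(posterior), 2))
```

    The accepted draws approximate the posterior, and their mean should land near the true value of 1.5; tightening `tol` or choosing more informative summary statistics trades acceptance rate against posterior precision, the same trade-off the paper evaluates with 500 to 50 000 loci.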

  14. Optical computing.

    NASA Technical Reports Server (NTRS)

    Stroke, G. W.

    1972-01-01

    Applications of the optical computer include an approach for increasing the sharpness of images obtained from the most powerful electron microscopes and fingerprint/credit card identification. The information-handling capability of the various optical computing processes is very great. Modern synthetic-aperture radars scan upward of 100,000 resolvable elements per second. Fields which have assumed major importance on the basis of optical computing principles are optical image deblurring, coherent side-looking synthetic-aperture radar, and correlative pattern recognition. Some examples of the most dramatic image deblurring results are shown.

  15. Synchrotron-Based X-ray Microtomography Characterization of the Effect of Processing Variables on Porosity Formation in Laser Power-Bed Additive Manufacturing of Ti-6Al-4V

    NASA Astrophysics Data System (ADS)

    Cunningham, Ross; Narra, Sneha P.; Montgomery, Colt; Beuth, Jack; Rollett, A. D.

    2017-01-01

    The porosity observed in additively manufactured (AM) parts is a potential concern for components intended to undergo high-cycle fatigue without post-processing to remove such defects. The morphology of pores can help identify their cause: irregularly shaped lack of fusion or key-holing pores can usually be linked to incorrect processing parameters, while spherical pores suggest trapped gas. Synchrotron-based x-ray microtomography was performed on laser powder-bed AM Ti-6Al-4V samples over a range of processing conditions to investigate the effects of processing parameters on porosity. The process mapping technique was used to control melt pool size. Tomography was also performed on the powder to measure porosity within the powder that may transfer to the parts. As observed previously in experiments with electron beam powder-bed fabrication, significant variations in porosity were found as a function of the processing parameters. A clear connection between processing parameters and resulting porosity formation mechanism was observed in that inadequate melt pool overlap resulted in lack-of-fusion pores whereas excess power density produced keyhole pores.

  16. Synchrotron-Based X-ray Microtomography Characterization of the Effect of Processing Variables on Porosity Formation in Laser Power-Bed Additive Manufacturing of Ti-6Al-4V

    NASA Astrophysics Data System (ADS)

    Cunningham, Ross; Narra, Sneha P.; Montgomery, Colt; Beuth, Jack; Rollett, A. D.

    2017-03-01

    The porosity observed in additively manufactured (AM) parts is a potential concern for components intended to undergo high-cycle fatigue without post-processing to remove such defects. The morphology of pores can help identify their cause: irregularly shaped lack of fusion or key-holing pores can usually be linked to incorrect processing parameters, while spherical pores suggest trapped gas. Synchrotron-based x-ray microtomography was performed on laser powder-bed AM Ti-6Al-4V samples over a range of processing conditions to investigate the effects of processing parameters on porosity. The process mapping technique was used to control melt pool size. Tomography was also performed on the powder to measure porosity within the powder that may transfer to the parts. As observed previously in experiments with electron beam powder-bed fabrication, significant variations in porosity were found as a function of the processing parameters. A clear connection between processing parameters and resulting porosity formation mechanism was observed in that inadequate melt pool overlap resulted in lack-of-fusion pores whereas excess power density produced keyhole pores.

  17. The Glass Computer

    ERIC Educational Resources Information Center

    Paesler, M. A.

    2009-01-01

    Digital computers use different kinds of memory, each of which is either volatile or nonvolatile. On most computers only the hard drive memory is nonvolatile, i.e., it retains all information stored on it when the power is off. When a computer is turned on, an operating system stored on the hard drive is loaded into the computer's memory cache and…

  18. Power system

    DOEpatents

    Hickam, Christopher Dale

    2008-03-18

    A power system includes a prime mover, a transmission, and a fluid coupler having a selectively engageable lockup clutch. The fluid coupler may be drivingly connected between the prime mover and the transmission. Additionally, the power system may include a motor/generator drivingly connected to at least one of the prime mover and the transmission. The power system may also include power-system controls configured to execute a control method. The control method may include selecting one of a plurality of modes of operation of the power system. Additionally, the control method may include controlling the operating state of the lockup clutch dependent upon the mode of operation selected. The control method may also include controlling the operating state of the motor/generator dependent upon the mode of operation selected.
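
    The mode-dependent control described in this patent abstract can be sketched as a lookup from operating mode to actuator states. The mode names and state assignments below are hypothetical, not taken from the patent:

```python
# Hypothetical mode table: each mode fixes the lockup-clutch state and the
# motor/generator role, as in "controlling the operating state ... dependent
# upon the mode of operation selected".
MODES = {
    "launch":        {"lockup_clutch": False, "motor_generator": "motor"},
    "cruise":        {"lockup_clutch": True,  "motor_generator": "off"},
    "regen_braking": {"lockup_clutch": True,  "motor_generator": "generator"},
}

def control(mode):
    """Return the actuator states for the selected mode of operation."""
    if mode not in MODES:
        raise ValueError(f"unknown mode: {mode}")
    return MODES[mode]

print(control("cruise"))  # {'lockup_clutch': True, 'motor_generator': 'off'}
```

    A table keeps the mode-to-state mapping declarative; the patent's method additionally selects the mode itself, which would sit upstream of this lookup.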

  19. Pulsar discovery by global volunteer computing.

    PubMed

    Knispel, B; Allen, B; Cordes, J M; Deneva, J S; Anderson, D; Aulbert, C; Bhat, N D R; Bock, O; Bogdanov, S; Brazier, A; Camilo, F; Champion, D J; Chatterjee, S; Crawford, F; Demorest, P B; Fehrmann, H; Freire, P C C; Gonzalez, M E; Hammer, D; Hessels, J W T; Jenet, F A; Kasian, L; Kaspi, V M; Kramer, M; Lazarus, P; van Leeuwen, J; Lorimer, D R; Lyne, A G; Machenschalk, B; McLaughlin, M A; Messenger, C; Nice, D J; Papa, M A; Pletsch, H J; Prix, R; Ransom, S M; Siemens, X; Stairs, I H; Stappers, B W; Stovall, K; Venkataraman, A

    2010-09-10

    Einstein@Home aggregates the computer power of hundreds of thousands of volunteers from 192 countries to mine large data sets. It has now found a 40.8-hertz isolated pulsar in radio survey data from the Arecibo Observatory taken in February 2007. Additional timing observations indicate that this pulsar is likely a disrupted recycled pulsar. PSR J2007+2722's pulse profile is remarkably wide with emission over almost the entire spin period; the pulsar likely has closely aligned magnetic and spin axes. The massive computing power provided by volunteers should enable many more such discoveries.

  20. Autoantibody Signature Enhances the Positive Predictive Power of Computed Tomography and Nodule-Based Risk Models for Detection of Lung Cancer

    PubMed Central

    Massion, Pierre P.; Healey, Graham F.; Peek, Laura J.; Fredericks, Lynn; Sewell, Herb F.; Murray, Andrea; Robertson, John F. R.

    2017-01-01

    Introduction: The incidence of pulmonary nodules is increasing with the movement toward screening for lung cancer by low-dose computed tomography. Given the large number of benign nodules detected by computed tomography, an adjunctive test capable of distinguishing malignant from benign nodules would benefit practitioners. The ability of the EarlyCDT-Lung blood test (Oncimmune Ltd., Nottingham, United Kingdom) to make this distinction by measuring autoantibodies to seven tumor-associated antigens was evaluated in a prospective registry. Methods: Of the members of a cohort of 1987 individuals with Health Insurance Portability and Accountability Act authorization, those with pulmonary nodules detected, imaging, and pathology reports were reviewed. All patients for whom a nodule was identified within 6 months of testing by EarlyCDT-Lung were included. The additivity of the test to nodule size and nodule-based risk models was explored. Results: A total of 451 patients (32%) had at least one nodule, leading to 296 eligible patients after exclusions, with a lung cancer prevalence of 25%. In 4- to 20-mm nodules, a positive test result represented a greater than twofold increased relative risk for development of lung cancer as compared with a negative test result. Also, when the “both-positive rule” for combining binary tests was used, adding EarlyCDT-Lung to risk models improved diagnostic performance with high specificity (>92%) and positive predictive value (>70%). Conclusions: A positive autoantibody test result reflects a significant increased risk for malignancy in lung nodules 4 to 20 mm in largest diameter. These data confirm that EarlyCDT-Lung may add value to the armamentarium of the practitioner in assessing the risk for malignancy in indeterminate pulmonary nodules. PMID:27615397

  1. Teardrop bladder: additional considerations

    SciTech Connect

    Wechsler, R.J.; Brennan, R.E.

    1982-07-01

    Nine cases of teardrop bladder (TDB) seen at excretory urography are presented. In some of these patients, the iliopsoas muscles were at the upper limit of normal in size, and additional evaluation of the perivesical structures with computed tomography (CT) was necessary. CT demonstrated only hypertrophied muscles with or without perivesical fat. The psoas muscles and pelvic width were measured in 8 patients and compared with the measurements of a control group of males without TDB. Patients with TDB had large iliopsoas muscles and narrow pelves compared with the control group. The psoas muscle width/pelvic width ratio was significantly greater (p < 0.0005) in patients with TDB than in the control group, with values of 1.04 ± 0.05 and 0.82 ± 0.09, respectively. It is concluded that TDB is not an uncommon normal variant in black males. Both iliopsoas muscle hypertrophy and a narrow pelvis are factors that predispose a patient to TDB.

  2. Teaching Physics with Computers

    NASA Astrophysics Data System (ADS)

    Botet, R.; Trizac, E.

    2005-09-01

    Computers are now so common in our everyday life that it is difficult to imagine the computer-free scientific life of the years before the 1980s. And yet, in spite of an unquestionable rise, the use of computers in the realm of education is still in its infancy. This is not a problem with students: for the new generation, the pre-computer age seems as far in the past as the age of the dinosaurs. It may instead be more a question of teacher attitude. Traditional education is based on centuries of polished concepts and equations, while computers require us to think differently about our method of teaching, and to revise the content accordingly. Our brains do not work in terms of numbers, but use abstract and visual concepts; hence, communication between computer and man boomed when computers escaped the world of numbers to reach a visual interface. From this time on, computers have generated new knowledge and, more importantly for teaching, new ways to grasp concepts. Therefore, just as real experiments were the starting point for theory, virtual experiments can be used to understand theoretical concepts. But there are important differences. Some of them are fundamental: a virtual experiment may allow for the exploration of length and time scales together with a level of microscopic complexity not directly accessible to conventional experiments. Others are practical: numerical experiments are completely safe, unlike some dangerous but essential laboratory experiments, and are often less expensive. Finally, some numerical approaches are suited only to teaching, as the concept necessary for the physical problem, or its solution, lies beyond the scope of traditional methods. For all these reasons, computers open physics courses to novel concepts, bringing education and research closer. In addition, and this is not a minor point, they respond naturally to the basic pedagogical needs of interactivity, feedback, and individualization of instruction. 
This is why one can

  3. Computation Directorate 2008 Annual Report

    SciTech Connect

    Crawford, D L

    2009-03-25

    Whether a computer is simulating the aging and performance of a nuclear weapon, the folding of a protein, or the probability of rainfall over a particular mountain range, the necessary calculations can be enormous. Our computers help researchers answer these and other complex problems, and each new generation of system hardware and software widens the realm of possibilities. Building on Livermore's historical excellence and leadership in high-performance computing, Computation added more than 331 trillion floating-point operations per second (teraFLOPS) of power to LLNL's computer room floors in 2008. In addition, Livermore's next big supercomputer, Sequoia, advanced ever closer to its 2011-2012 delivery date, as architecture plans and the procurement contract were finalized. Hyperion, an advanced technology cluster test bed that teams Livermore with 10 industry leaders, made a big splash when it was announced during Michael Dell's keynote speech at the 2008 Supercomputing Conference. The Wall Street Journal touted Hyperion as a 'bright spot amid turmoil' in the computer industry. Computation continues to measure and improve the costs of operating LLNL's high-performance computing systems by moving hardware support in-house, by measuring causes of outages to apply resources asymmetrically, and by automating most of the account and access authorization and management processes. These improvements enable more dollars to go toward fielding the best supercomputers for science, while operating them at less cost and greater responsiveness to the customers.

  4. Is it useful to combine sputum cytology and low-dose spiral computed tomography for early detection of lung cancer in formerly asbestos-exposed power industry workers?

    PubMed Central

    2014-01-01

    Background: Low-dose spiral computed tomography (LDSCT), in comparison to conventional chest X-ray, has proved to be a highly sensitive method of diagnosing early stage lung cancer. However, centrally located early stage lung tumours remain a diagnostic challenge. We determined the practicability and efficacy of early detection of lung cancer when combining LDSCT and sputum cytology. Methods: Of a cohort of 4446 formerly asbestos-exposed power industry workers, we examined a subgroup of 187 (4.2%) high-risk participants for lung cancer at least once with both LDSCT and sputum cytology. After the examination period the participants were followed up for more than three years. Results: The examinations resulted in the diagnosis of lung cancer in 12 participants (6.4%). Six were in clinical stage I. We found 10 non-small cell lung carcinomas and one small cell lung carcinoma. Sputum specimens showed suspicious pathological findings in seven cases, and in 11 cases the results of LDSCT indicated malignancies. The overall sensitivity and specificity of sputum cytology were 58% and 98%, with positive (PPV) and negative (NPV) predictive values of 70% and 97%. For LDSCT we calculated a sensitivity and specificity of 92% and 97%. The PPV and NPV were 65% and 99%, respectively. Conclusions: Our results confirmed that in surveillance programmes a combination of sputum cytology and LDSCT is feasible and accepted by the participants. Sputum examination alone is not effective enough for the detection of lung cancer, especially at an early stage. Even in well-defined risk groups highly exposed to asbestos, we cannot recommend the use of combined LDSCT and sputum cytology examinations as long as no survival benefit has been proved for the combination of both methods. For ensuring low rates of false-positive and false-negative results, programme planners must closely cooperate with experienced medical practitioners and pathologists in a well-functioning interdisciplinary network. PMID
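The relationship between the reported sensitivity/specificity and the predictive values depends on disease prevalence, which Bayes' rule makes explicit. The sketch below applies the standard formulas to the abstract's LDSCT figures at an illustrative prevalence; the study's own PPV/NPV were computed from its actual 2×2 counts, so these numbers are not expected to reproduce them exactly.

```python
def predictive_values(sens, spec, prev):
    """Positive and negative predictive value from sensitivity,
    specificity, and prevalence, via Bayes' rule."""
    tp = sens * prev              # true-positive fraction of population
    fp = (1 - spec) * (1 - prev)  # false positives
    tn = spec * (1 - prev)        # true negatives
    fn = (1 - sens) * prev        # false negatives
    return tp / (tp + fp), tn / (tn + fn)

# Illustrative only: LDSCT sensitivity 92% and specificity 97% from the
# abstract, combined with an assumed 25% prevalence.
ppv, npv = predictive_values(0.92, 0.97, 0.25)
```

The same helper shows why a test with fixed sensitivity and specificity yields a much lower PPV when screening a low-prevalence population.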

  5. Electric power exchanges with sensitivity matrices: an experimental analysis

    SciTech Connect

    Drozdal, Martin

    2001-01-01

    We describe a fast, incremental method for power flow computation: fast in the sense that it can be used for real-time power flow computation, and incremental in the sense that it computes the additional increase or decrease in line congestion caused by a particular contract. This is, to the best of our knowledge, the only method suitable for real-time power flow computation that at the same time offers a powerful way of dealing with congestion contingency. Many methods for this purpose have been designed, or thought of, but they either lack speed or incrementality, or have never been coded and tested. The author is in the process of obtaining a patent on the methods, algorithms, and procedures described in this paper.
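The abstract does not spell out the sensitivity matrices used. A common choice for this kind of incremental congestion analysis is the DC power-flow PTDF (power transfer distribution factor) matrix, under which the flow change caused by a bilateral contract is a single matrix-vector product. The sketch below illustrates that idea on a three-bus network; it is an assumption about the method, not a reproduction of it.

```python
import numpy as np

# Three-bus DC power-flow example, unit line susceptances.
# Lines: 0->1, 1->2, 0->2; bus 0 is the slack bus.
A = np.array([[1, -1, 0],    # line 0: bus0 -> bus1
              [0, 1, -1],    # line 1: bus1 -> bus2
              [1, 0, -1]])   # line 2: bus0 -> bus2
Bd = np.eye(3)               # diagonal matrix of line susceptances
B = A.T @ Bd @ A             # bus susceptance (Laplacian) matrix
keep = [1, 2]                # remove the slack row/column
PTDF = Bd @ A[:, keep] @ np.linalg.inv(B[np.ix_(keep, keep)])

# Incremental line flows caused by a 1 per-unit contract from bus 1 to bus 2:
injections = np.array([1.0, -1.0])   # +1 pu at bus 1, -1 pu at bus 2
dflow = PTDF @ injections
```

With equal impedances the direct line 1-2 picks up 2/3 of the contract and the parallel path 1-0-2 the remaining 1/3; evaluating a new contract's effect on congestion is just another matrix-vector product, which is what makes the approach fast and incremental.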

  6. Dynamic power flow controllers

    DOEpatents

    Divan, Deepakraj M.; Prasai, Anish

    2017-03-07

    Dynamic power flow controllers are provided. A dynamic power flow controller may comprise a transformer and a power converter. The power converter is subject to low voltage stresses and is not floated at line voltage. In addition, the power converter is rated at a fraction of the total power controlled. A dynamic power flow controller controls both the real and the reactive power flow between two AC sources having the same frequency. It inserts a voltage with controllable magnitude and phase between the two AC sources, thereby effecting control of the active and reactive power flows between them.
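The series-injection principle can be illustrated with a textbook two-source phasor model: the power exchanged across a tie reactance depends on the voltage difference, so an inserted voltage of controllable magnitude and phase steers both active and reactive flow. All per-unit values below are illustrative assumptions, not figures from the patent.

```python
import cmath

def tie_flow(V1, V2, Vinj, X):
    """Complex power S = P + jQ delivered by source 1 into a tie
    reactance X when a series voltage Vinj is inserted between the
    two AC sources (all quantities are phasors in per-unit)."""
    I = (V1 + Vinj - V2) / complex(0, X)   # current through the tie
    return V1 * I.conjugate()

V1 = cmath.rect(1.0, 0.0)   # both sources at 1.0 pu, same angle:
V2 = cmath.rect(1.0, 0.0)   # no natural power exchange
X = 0.1                     # tie reactance, pu (illustrative)

S0 = tie_flow(V1, V2, 0, X)                              # no injection
S1 = tie_flow(V1, V2, cmath.rect(0.05, cmath.pi / 2), X)  # quadrature injection
```

With no injection no power flows; a small quadrature (90-degree) injected voltage of 0.05 pu drives 0.5 pu of real power across the tie, showing how a converter rated at a fraction of the controlled power can steer the full flow.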

  7. Comparison of Matching Pursuit Algorithm with Other Signal Processing Techniques for Computation of the Time-Frequency Power Spectrum of Brain Signals

    PubMed Central

    Chandran KS, Subhash; Mishra, Ashutosh; Shirhatti, Vinay

    2016-01-01

    Signals recorded from the brain often show rhythmic patterns at different frequencies, which are tightly coupled to the external stimuli as well as the internal state of the subject. In addition, these signals have very transient structures related to spiking or sudden onset of a stimulus, which have durations not exceeding tens of milliseconds. Further, brain signals are highly nonstationary because both behavioral state and external stimuli can change on a short time scale. It is therefore essential to study brain signals using techniques that can represent both rhythmic and transient components of the signal, something not always possible using standard signal processing techniques such as the short-time Fourier transform, multitaper method, wavelet transform, or Hilbert transform. In this review, we describe a multiscale decomposition technique based on an over-complete dictionary called matching pursuit (MP), and show that it is able to capture both a sharp stimulus-onset transient and a sustained gamma rhythm in local field potential recorded from the primary visual cortex. We compare the performance of MP with other techniques and discuss its advantages and limitations. Data and codes for generating all time-frequency power spectra are provided. PMID:27013668
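The core of matching pursuit is a greedy iteration: project the residual onto every unit-norm atom in the dictionary, subtract the best-matching atom's contribution, and repeat. The sketch below uses a toy orthonormal dictionary for clarity rather than the over-complete Gabor dictionary typically used for time-frequency analysis of brain signals.

```python
import numpy as np

def matching_pursuit(signal, dictionary, n_iter):
    """Greedy MP: at each step pick the unit-norm atom (a column of
    `dictionary`) most correlated with the residual, record its
    coefficient, and peel its contribution off the residual."""
    residual = signal.astype(float).copy()
    coeffs = np.zeros(dictionary.shape[1])
    for _ in range(n_iter):
        corr = dictionary.T @ residual           # inner products with atoms
        k = int(np.argmax(np.abs(corr)))         # best-matching atom
        coeffs[k] += corr[k]
        residual -= corr[k] * dictionary[:, k]   # subtract its contribution
    return coeffs, residual

# Toy dictionary: 4-sample identity atoms (orthonormal, so MP is exact).
D = np.eye(4)
sig = np.array([0.0, 3.0, 0.0, -1.0])
coeffs, residual = matching_pursuit(sig, D, 2)
```

With an over-complete dictionary the same loop yields a sparse, adaptive decomposition; the residual norm is non-increasing at every step, which is what lets MP capture both brief transients and sustained rhythms.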

  8. Computer Language For Optimization Of Design

    NASA Technical Reports Server (NTRS)

    Scotti, Stephen J.; Lucas, Stephen H.

    1991-01-01

    SOL is a computer language geared to the solution of design problems. It includes the mathematical modeling and logical capabilities of a language like FORTRAN, and adds the power of nonlinear mathematical programming methods at the language level. The SOL compiler takes SOL-language statements and generates equivalent FORTRAN code and system calls. It provides syntactic and semantic checking for recovery from errors and produces detailed reports containing cross-references showing where each variable is used. SOL is implemented on VAX/VMS computer systems and requires the VAX FORTRAN compiler to produce an executable program.

  9. Design of microstrip components by computer

    NASA Technical Reports Server (NTRS)

    Cisco, T. C.

    1972-01-01

    A number of computer programs are presented for use in the synthesis of microwave components in microstrip geometries. The programs compute the electrical and dimensional parameters required to synthesize couplers, filters, circulators, transformers, power splitters, diode switches, multipliers, diode attenuators and phase shifters. Additional programs are included to analyze and optimize cascaded transmission lines and lumped element networks, to analyze and synthesize Chebyshev and Butterworth filter prototypes, and to compute mixer intermodulation products. The programs are written in FORTRAN and the emphasis of the study is placed on the use of these programs and not on the theoretical aspects of the structures.

  10. Add 16-bit processing to any computer

    SciTech Connect

    Fry, W.

    1983-01-01

    A zoom computer is a simple, fast, and friendly computer in a very small package. Zoom architecture provides an easy migration path from existing 8-bit computers to today's 16-bit and tomorrow's 32-bit designs. With zoom, the benefits of the VLSI technological explosion can be attained with your present peripherals: there is no need to purchase new peripherals, because all your old applications run unhindered on zoom. And in addition to all your old applications, zoom offers a whole new world of processing power at your fingertips.

  11. Low-Power Public Key Cryptography

    SciTech Connect

    BEAVER,CHERYL L.; DRAELOS,TIMOTHY J.; HAMILTON,VICTORIA A.; SCHROEPPEL,RICHARD C.; GONZALES,RITA A.; MILLER,RUSSELL D.; THOMAS,EDWARD V.

    2000-11-01

    This report presents research on public key, digital signature algorithms for cryptographic authentication in low-powered, low-computation environments. We assessed algorithms for suitability based on their signature size, and computation and storage requirements. We evaluated a variety of general purpose and special purpose computing platforms to address issues such as memory, voltage requirements, and special functionality for low-powered applications. In addition, we examined custom design platforms. We found that a custom design offers the most flexibility and can be optimized for specific algorithms. Furthermore, the entire platform can exist on a single Application Specific Integrated Circuit (ASIC) or can be integrated with commercially available components to produce the desired computing platform.

  12. Multichannel Phase and Power Detector

    NASA Technical Reports Server (NTRS)

    Li, Samuel; Lux, James; McMaster, Robert; Boas, Amy

    2006-01-01

    An electronic signal-processing system determines the phases of input signals arriving in multiple channels, relative to the phase of a reference signal with which the input signals are known to be coherent in both phase and frequency. The system also gives an estimate of the power levels of the input signals. A prototype of the system has four input channels that handle signals at a frequency of 9.5 MHz, but the basic principles of design and operation are extensible to other signal frequencies and greater numbers of channels. The prototype system consists mostly of three parts: An analog-to-digital-converter (ADC) board, which coherently digitizes the input signals in synchronism with the reference signal and performs some simple processing; A digital signal processor (DSP) in the form of a field-programmable gate array (FPGA) board, which performs most of the phase- and power-measurement computations on the digital samples generated by the ADC board; and A carrier board, which allows a personal computer to retrieve the phase and power data. The DSP contains four independent phase-only tracking loops, each of which tracks the phase of one of the preprocessed input signals relative to that of the reference signal (see figure). The phase values computed by these loops are averaged over intervals, the length of which is chosen to obtain output from the DSP at a desired rate. In addition, a simple sum of squares is computed for each channel as an estimate of the power of the signal in that channel. The relative phases and the power level estimates computed by the DSP could be used for diverse purposes in different settings. For example, if the input signals come from different elements of a phased-array antenna, the phases could be used as indications of the direction of arrival of a received signal and/or as feedback for electronic or mechanical beam steering. 
The power levels could be used as feedback for automatic gain control in preprocessing of incoming signals
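The per-channel computation described above can be sketched in batch form: correlate the channel with in-phase and quadrature copies of the coherent reference to recover relative phase, and take a sum of squares as the power estimate. This is a simplified stand-in for the FPGA's running phase-tracking loops, with illustrative sample rates rather than the prototype's 9.5 MHz signals.

```python
import numpy as np

def phase_and_power(signal, t, f_ref):
    """Estimate a channel's phase relative to a coherent reference of
    frequency f_ref, plus a mean-square power estimate. The I/Q
    correlations assume the window spans an integer number of cycles."""
    i_corr = np.mean(signal * np.cos(2 * np.pi * f_ref * t))
    q_corr = np.mean(signal * -np.sin(2 * np.pi * f_ref * t))
    phase = np.arctan2(q_corr, i_corr)     # relative phase, radians
    power = np.mean(signal ** 2)           # sum-of-squares power estimate
    return phase, power

fs, f_ref, n = 1000.0, 10.0, 1000          # 10 full reference cycles
t = np.arange(n) / fs
sig = 2.0 * np.cos(2 * np.pi * f_ref * t + 0.7)   # amplitude 2, phase 0.7 rad
phase, power = phase_and_power(sig, t, f_ref)
```

For an amplitude-A sinusoid the power estimate converges to A²/2, and the recovered phase is the offset relative to the reference, exactly the two quantities the DSP reports per channel.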

  13. 18 CFR 5.21 - Additional information.

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ... 18 Conservation of Power and Water Resources 1 2012-04-01 2012-04-01 false Additional information. 5.21 Section 5.21 Conservation of Power and Water Resources FEDERAL ENERGY REGULATORY COMMISSION, DEPARTMENT OF ENERGY REGULATIONS UNDER THE FEDERAL POWER ACT INTEGRATED LICENSE APPLICATION PROCESS §...

  14. 18 CFR 5.21 - Additional information.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... 18 Conservation of Power and Water Resources 1 2010-04-01 2010-04-01 false Additional information. 5.21 Section 5.21 Conservation of Power and Water Resources FEDERAL ENERGY REGULATORY COMMISSION, DEPARTMENT OF ENERGY REGULATIONS UNDER THE FEDERAL POWER ACT INTEGRATED LICENSE APPLICATION PROCESS §...

  15. 18 CFR 5.21 - Additional information.

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    ... 18 Conservation of Power and Water Resources 1 2013-04-01 2013-04-01 false Additional information. 5.21 Section 5.21 Conservation of Power and Water Resources FEDERAL ENERGY REGULATORY COMMISSION, DEPARTMENT OF ENERGY REGULATIONS UNDER THE FEDERAL POWER ACT INTEGRATED LICENSE APPLICATION PROCESS §...

  16. 18 CFR 5.21 - Additional information.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... 18 Conservation of Power and Water Resources 1 2011-04-01 2011-04-01 false Additional information. 5.21 Section 5.21 Conservation of Power and Water Resources FEDERAL ENERGY REGULATORY COMMISSION, DEPARTMENT OF ENERGY REGULATIONS UNDER THE FEDERAL POWER ACT INTEGRATED LICENSE APPLICATION PROCESS §...

  17. 18 CFR 5.21 - Additional information.

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    ... 18 Conservation of Power and Water Resources 1 2014-04-01 2014-04-01 false Additional information. 5.21 Section 5.21 Conservation of Power and Water Resources FEDERAL ENERGY REGULATORY COMMISSION, DEPARTMENT OF ENERGY REGULATIONS UNDER THE FEDERAL POWER ACT INTEGRATED LICENSE APPLICATION PROCESS §...

  18. Computer Vision Systems

    NASA Astrophysics Data System (ADS)

    Gunasekaran, Sundaram

    Food quality is of paramount consideration for all consumers, and its importance is perhaps second only to food safety. By some definitions, food safety is also incorporated into the broad categorization of food quality. Hence, the need for careful and accurate evaluation of food quality is at the forefront of research and development in both academia and industry. Among the many available methods for food quality evaluation, computer vision has proven to be the most powerful, especially for nondestructively extracting and quantifying many features that have direct relevance to food quality assessment and control. Furthermore, computer vision systems serve to rapidly evaluate the most readily observable food quality attributes: external characteristics such as color, shape, size, and surface texture. In addition, it is now possible, using advanced computer vision technologies, to “see” inside a food product and/or package to examine important quality attributes ordinarily unavailable to human evaluators. With rapid advances in electronic hardware and other associated imaging technologies, the cost-effectiveness and speed of computer vision systems have greatly improved, and many practical systems are already in place in the food industry.

  19. Fast algorithm for computing a primitive (2^(p+1))p-th root of unity in GF(q^2)

    NASA Technical Reports Server (NTRS)

    Reed, I. S.; Truong, T. K.; Miller, R. L.

    1978-01-01

    A quick method is described for finding the primitive (2^(p+1))p-th root of unity in the Galois field GF(q^2), where q = 2^p − 1 is a Mersenne prime. Determination of this root is necessary to implement complex integer transforms of length 2^k · p over the Galois field, with k varying between 3 and p + 1.
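Since q = 2^p − 1 ≡ 3 (mod 4), GF(q^2) can be represented as "complex integers" a + bi with arithmetic mod q, which is what makes these transforms complex integer transforms. The sketch below finds an element of order n = 2^(p+1)·p by cofactor exponentiation (raise a random element to (q^2 − 1)/n and verify the order against the prime divisors 2 and p of n); it illustrates the setting for the small Mersenne prime p = 5, not the paper's specific fast algorithm.

```python
import random

p = 5
q = 2**p - 1            # Mersenne prime 31; q = 3 (mod 4), so GF(q^2)
n = 2**(p + 1) * p      # can be built from complex integers a + bi mod q.
                        # n divides q^2 - 1 because p divides 2^(p-1) - 1.

def cmul(x, y):
    """(a + bi)(c + di) mod q, with i^2 = -1."""
    a, b = x
    c, d = y
    return ((a * c - b * d) % q, (a * d + b * c) % q)

def cpow(x, e):
    """Square-and-multiply exponentiation in GF(q^2)."""
    r, base = (1, 0), x
    while e:
        if e & 1:
            r = cmul(r, base)
        base = cmul(base, base)
        e >>= 1
    return r

def primitive_nth_root():
    """Raise random nonzero elements to the cofactor (q^2 - 1)/n until
    the result has order exactly n (no proper divisor n/2 or n/p works)."""
    cofactor = (q * q - 1) // n
    while True:
        x = (random.randrange(q), random.randrange(1, q))  # nonzero
        y = cpow(x, cofactor)
        if cpow(y, n // 2) != (1, 0) and cpow(y, n // p) != (1, 0):
            return y

root = primitive_nth_root()
```

Because the multiplicative group of GF(q^2) is cyclic of order q^2 − 1 and n divides that order, a constant fraction of random candidates succeeds, so the loop terminates quickly.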

  20. Distributed computing at the SSCL

    SciTech Connect

    Cormell, L.; White, R.

    1993-05-01

    The rapid increase in the availability of high-performance, cost-effective RISC/UNIX workstations has been both a blessing and a curse. The blessing of having extremely powerful computing engines available on the desktop is well-known to many users. The user has tremendous freedom, flexibility, and control of his environment. That freedom can, however, become the curse of distributed computing. The user must become a system manager to some extent; he must worry about backups, maintenance, upgrades, etc. Traditionally these activities have been the responsibility of a central computing group. The central computing group, however, may find that it can no longer provide all of the traditional services. With the plethora of workstations now found on so many desktops throughout the entire campus or lab, the central computing group may be swamped by support requests. This talk will address several of these computer support and management issues by discussing the approach taken at the Superconducting Super Collider Laboratory. In addition, a brief review of the future directions of commercial products for distributed computing and management will be given.

  1. Computer Programs for Producing Single-Event Aircraft Noise Data for Specific Engine Power and Meteorological Conditions for Use with USAF (United States Air Force) Community Noise Model (NOISEMAP).

    DTIC Science & Technology

    1983-04-01

    AFAMRL-TR-83-020: Computer Programs for Producing Single-Event Aircraft Noise Data for Specific Engine Power and Meteorological Conditions for Use with USAF Community Noise Model (NOISEMAP).

  2. Writing, Computers, and Gender.

    ERIC Educational Resources Information Center

    Beer, Ann

    1994-01-01

    Uses brief accounts by undergraduate students of their experiences with computers and word processing to investigate gender-related differences in attitudes toward computers and to explore why computers seem to reflect back to many women a learned sense of technical incompetence. Focuses on themes of power, caring, and self-esteem. (SR)

  3. Heterotic computing: exploiting hybrid computational devices.

    PubMed

    Kendon, Viv; Sebald, Angelika; Stepney, Susan

    2015-07-28

    Current computational theory deals almost exclusively with single models: classical, neural, analogue, quantum, etc. In practice, researchers use ad hoc combinations, realizing only recently that they can be fundamentally more powerful than the individual parts. A Theo Murphy meeting brought together theorists and practitioners of various types of computing, to engage in combining the individual strengths to produce powerful new heterotic devices. 'Heterotic computing' is defined as a combination of two or more computational systems such that they provide an advantage over either substrate used separately. This post-meeting collection of articles provides a wide-ranging survey of the state of the art in diverse computational paradigms, together with reflections on their future combination into powerful and practical applications.

  4. Quantum computers.

    PubMed

    Ladd, T D; Jelezko, F; Laflamme, R; Nakamura, Y; Monroe, C; O'Brien, J L

    2010-03-04

    Over the past several decades, quantum information science has emerged to seek answers to the question: can we gain some advantage by storing, transmitting and processing information encoded in systems that exhibit unique quantum properties? Today it is understood that the answer is yes, and many research groups around the world are working towards the highly ambitious technological goal of building a quantum computer, which would dramatically improve computational power for particular tasks. A number of physical systems, spanning much of modern physics, are being developed for quantum computation. However, it remains unclear which technology, if any, will ultimately prove successful. Here we describe the latest developments for each of the leading approaches and explain the major challenges for the future.

  5. Computational Psychiatry

    PubMed Central

    Wang, Xiao-Jing; Krystal, John H.

    2014-01-01

    Psychiatric disorders such as autism and schizophrenia arise from abnormalities in brain systems that underlie cognitive, emotional and social functions. The brain is enormously complex and its abundant feedback loops on multiple scales preclude intuitive explication of circuit functions. In close interplay with experiments, theory and computational modeling are essential for understanding how, precisely, neural circuits generate flexible behaviors and their impairments give rise to psychiatric symptoms. This Perspective highlights recent progress in applying computational neuroscience to the study of mental disorders. We outline basic approaches, including identification of core deficits that cut across disease categories, biologically-realistic modeling bridging cellular and synaptic mechanisms with behavior, and model-aided diagnosis. The need for new research strategies in psychiatry is urgent. Computational psychiatry potentially provides powerful tools for elucidating pathophysiology that may inform both diagnosis and treatment. To achieve this promise will require investment in cross-disciplinary training and research in this nascent field. PMID:25442941

  6. An iron–oxygen intermediate formed during the catalytic cycle of cysteine dioxygenase† †Electronic supplementary information (ESI) available: Experimental and computational details. See DOI: 10.1039/c6cc03904a Click here for additional data file.

    PubMed Central

    Tchesnokov, E. P.; Faponle, A. S.; Davies, C. G.; Quesne, M. G.; Turner, R.; Fellner, M.; Souness, R. J.; Wilbanks, S. M.

    2016-01-01

    Cysteine dioxygenase is a key enzyme in the breakdown of cysteine, but its mechanism remains controversial. A combination of spectroscopic and computational studies provides the first evidence of a short-lived intermediate in the catalytic cycle. The intermediate decays within 20 ms and has absorption maxima at 500 and 640 nm. PMID:27297454

  7. Argonne's Laboratory computing center - 2007 annual report.

    SciTech Connect

    Bair, R.; Pieper, G. W.

    2008-05-28

    Argonne National Laboratory founded the Laboratory Computing Resource Center (LCRC) in the spring of 2002 to help meet pressing program needs for computational modeling, simulation, and analysis. The guiding mission is to provide critical computing resources that accelerate the development of high-performance computing expertise, applications, and computations to meet the Laboratory's challenging science and engineering missions. In September 2002 the LCRC deployed a 350-node computing cluster from Linux NetworX to address Laboratory needs for mid-range supercomputing. This cluster, named 'Jazz', achieved over a teraflop of computing power (10^12 floating-point calculations per second) on standard tests, making it the Laboratory's first terascale computing system and one of the 50 fastest computers in the world at the time. Jazz was made available to early users in November 2002 while the system was undergoing development and configuration. In April 2003, Jazz was officially made available for production operation. Since then, the Jazz user community has grown steadily. By the end of fiscal year 2007, there were over 60 active projects representing a wide cross-section of Laboratory expertise, including work in biosciences, chemistry, climate, computer science, engineering applications, environmental science, geoscience, information science, materials science, mathematics, nanoscience, nuclear engineering, and physics. Most important, many projects have achieved results that would have been unobtainable without such a computing resource. The LCRC continues to foster growth in the computational science and engineering capability and quality at the Laboratory. Specific goals include expansion of the use of Jazz to new disciplines and Laboratory initiatives, teaming with Laboratory infrastructure providers to offer more scientific data management capabilities, expanding Argonne staff use of national computing facilities, and improving the scientific reach and

  8. Computational capabilities of physical systems.

    PubMed

    Wolpert, David H

    2002-01-01

    In this paper strong limits on the accuracy of real-world physical computation are established. To derive these results a non-Turing machine formulation of physical computation is used. First it is proven that there cannot be a physical computer C to which one can pose any and all computational tasks concerning the physical universe. Next it is proven that no physical computer C can correctly carry out every computational task in the subset of such tasks that could potentially be posed to C. This means in particular that there cannot be a physical computer that can be assured of correctly "processing information faster than the universe does." Because this result holds independent of how or if the computer is physically coupled to the rest of the universe, it also means that there cannot exist an infallible, general-purpose observation apparatus, nor an infallible, general-purpose control apparatus. These results do not rely on systems that are infinite, and/or nonclassical, and/or obey chaotic dynamics. They also hold even if one could use an infinitely fast, infinitely dense computer, with computational powers greater than that of a Turing machine (TM). After deriving these results analogs of the TM Halting theorem are derived for the novel kind of computer considered in this paper, as are results concerning the (im)possibility of certain kinds of error-correcting codes. In addition, an analog of algorithmic information complexity, "prediction complexity," is elaborated. A task-independent bound is derived on how much the prediction complexity of a computational task can differ for two different reference universal physical computers used to solve that task. This is analogous to the "encoding" bound governing how much the algorithmic information complexity of a TM calculation can differ for two reference universal TMs. It is proven that either the Hamiltonian of our universe proscribes a certain type of computation, or prediction complexity is unique (unlike

  9. The Computing World

    DTIC Science & Technology

    1992-04-01

    to modern computers. In the 1840s Augusta Ada, the Countess of Lovelace, translated and wrote several scientific papers regarding Charles P...Babbage’s ideas on an analytical engine problem solving machine. 3 Babbage , "the father of computers," developed ways to store results via memory devices. Ada...simple arithmetic calculating machines to small complex and powerful integrated circuit computers. Jacquard, Babbage and Lovelace set the computer

  10. Circuit for Communication Over Power Lines

    NASA Technical Reports Server (NTRS)

    Krasowski, Michael J.; Prokop, Norman F.; Greer, Lawrence C., III; Nappier, Jennifer

    2011-01-01

    Many distributed systems share common sensors and instruments along with a common power line supplying current to the system. A communication technique and circuit have been developed that allow for the simple inclusion of an instrument, sensor, or actuator node within any system containing a common power bus. Wherever power is available, a node can be added, which can then draw power for itself, its associated sensors, and actuators from the power bus, all while communicating with other nodes on the power bus. The technique modulates a DC power bus through capacitive coupling using on-off keying (OOK), and receives and demodulates the signal from the DC power bus through the same capacitive coupling. The circuit acts as a serial modem for the physical power-line communication. The circuit and technique can be built from commercially available components or included in an application-specific integrated circuit (ASIC) design, which allows the circuit to be added to current designs with additional circuitry or embedded into new designs. This device and technique move computational, sensing, and actuation abilities closer to the source, and allow for the networking of multiple similar nodes to each other and to a central processor. This technique also allows for reconfigurable systems by adding or removing nodes at any time. It can do so using nothing more than the in situ power wiring of the system.
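The on-off keying scheme described above can be sketched in a few lines. The sketch below is illustrative, not the flight circuit: the carrier period, samples per bit, and energy threshold are assumed values, chosen so that a 1-bit carries a mean carrier power of 0.5 and a 0-bit carries none.

```python
import math

SAMPLES_PER_BIT = 64  # samples per bit period (assumed value)
CARRIER_PERIOD = 8    # samples per carrier cycle (assumed value)

def ook_modulate(bits):
    """On-off keying: emit the carrier sinusoid for 1-bits, silence for 0-bits.
    This models the AC signal that is capacitively coupled onto the DC bus."""
    signal = []
    for bit in bits:
        for n in range(SAMPLES_PER_BIT):
            signal.append(math.sin(2 * math.pi * n / CARRIER_PERIOD) if bit else 0.0)
    return signal

def ook_demodulate(signal):
    """Non-coherent detection: average the signal energy over each bit period
    and compare it against a threshold halfway to the carrier's mean power (0.5)."""
    bits = []
    for i in range(0, len(signal), SAMPLES_PER_BIT):
        chunk = signal[i:i + SAMPLES_PER_BIT]
        energy = sum(s * s for s in chunk) / len(chunk)
        bits.append(1 if energy > 0.25 else 0)
    return bits

message = [1, 0, 1, 1, 0, 0, 1, 0]
recovered = ook_demodulate(ook_modulate(message))
```

Energy detection keeps the receiver simple (no carrier phase recovery), which matches the goal of a low-cost node that can be dropped onto any powered point of the bus.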

  11. A Generally Applicable Computer Algorithm Based on the Group Additivity Method for the Calculation of Seven Molecular Descriptors: Heat of Combustion, LogPO/W, LogS, Refractivity, Polarizability, Toxicity and LogBB of Organic Compounds; Scope and Limits of Applicability.

    PubMed

    Naef, Rudolf

    2015-10-07

    A generally applicable computer algorithm for the calculation of the seven molecular descriptors heat of combustion, logPoctanol/water, logS (water solubility), molar refractivity, molecular polarizability, aqueous toxicity (protozoan growth inhibition) and logBB (log (cblood/cbrain)) is presented. The method, an extendable form of the group-additivity method, is based on the complete breakdown of the molecules into their constituent atoms and their immediate neighbourhood. The contribution of the resulting atom groups to the descriptor values is calculated using the Gauss-Seidel fitting method, based on experimental data gathered from the literature. The plausibility of the method was tested for each descriptor by means of a k-fold cross-validation procedure, demonstrating good to excellent predictive power for the first six descriptors and low reliability of logBB predictions. The goodness of fit (Q²) and the standard deviation of the 10-fold cross-validation calculation were >0.9999 and 25.2 kJ/mol, respectively, (based on N = 1965 test compounds) for the heat of combustion, 0.9451 and 0.51 (N = 2640) for logP, 0.8838 and 0.74 (N = 1419) for logS, 0.9987 and 0.74 (N = 4045) for the molar refractivity, 0.9897 and 0.77 (N = 308) for the molecular polarizability, 0.8404 and 0.42 (N = 810) for the toxicity and 0.4709 and 0.53 (N = 383) for logBB. The latter descriptor, which revealed a very low Q² for the test molecules (R² was 0.7068 and the standard deviation 0.38 for N = 413 training molecules), is included as an example to show the limits of the group-additivity method. An eighth molecular descriptor, the heat of formation, was indirectly calculated from the heat of combustion data and correlated with published experimental heat of formation data with a correlation coefficient R² of 0.9974 (N = 2031).
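The fitting step described above can be illustrated with a toy example. The occurrence counts and descriptor values below are hypothetical (the real algorithm fits thousands of experimental data points), but the core idea is the same: solve the least-squares normal equations for the group contributions by Gauss-Seidel iteration, then predict a descriptor as a weighted sum of those contributions.

```python
def gauss_seidel_fit(A, b, iterations=200):
    """Fit group contributions x by solving the normal equations (A^T A)x = A^T b
    with Gauss-Seidel iteration. A[k][i] counts atom group i in molecule k and
    b[k] is the measured descriptor value of molecule k."""
    m, n = len(A), len(A[0])
    AtA = [[sum(A[k][i] * A[k][j] for k in range(m)) for j in range(n)]
           for i in range(n)]
    Atb = [sum(A[k][i] * b[k] for k in range(m)) for i in range(n)]
    x = [0.0] * n
    for _ in range(iterations):
        for i in range(n):
            s = sum(AtA[i][j] * x[j] for j in range(n) if j != i)
            x[i] = (Atb[i] - s) / AtA[i][i]
    return x

def predict(counts, x):
    """Group-additivity prediction: a weighted sum of the fitted contributions."""
    return sum(c * xi for c, xi in zip(counts, x))

# Toy data: four molecules built from two hypothetical atom groups whose
# true contributions are 10.0 and 5.0 (in the units of the descriptor).
A = [[2, 0], [1, 1], [0, 2], [2, 1]]
b = [20.0, 15.0, 10.0, 25.0]
contributions = gauss_seidel_fit(A, b)
```

Because A^T A is symmetric positive definite whenever the group columns are linearly independent, the Gauss-Seidel sweep is guaranteed to converge here; the extendability of the method comes from the fact that adding a new atom group only adds a column to A.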

  12. Additive Manufacturing Integrated Energy Demonstration

    SciTech Connect

    Jackson, Roderick; Lee, Brian; Love, Lonnie; Mabe, Gavin; Keller, Martin; Curran, Scott; Chinthavali, Madhu; Green, Johney; Sawyer, Karma; Enquist, Phil

    2016-02-05

    Meet AMIE - the Additive Manufacturing Integrated Energy demonstration project. Led by Oak Ridge National Laboratory and many industry partners, the AMIE project changes the way we think about generating, storing, and using electrical power. AMIE uses an integrated energy system that shares energy between a building and a vehicle. And, utilizing advanced manufacturing and rapid innovation, it only took one year from concept to launch.

  13. Additive Manufacturing Integrated Energy Demonstration

    ScienceCinema

    Jackson, Roderick; Lee, Brian; Love, Lonnie; Mabe, Gavin; Keller, Martin; Curran, Scott; Chinthavali, Madhu; Green, Johney; Sawyer, Karma; Enquist, Phil

    2016-07-12

    Meet AMIE - the Additive Manufacturing Integrated Energy demonstration project. Led by Oak Ridge National Laboratory and many industry partners, the AMIE project changes the way we think about generating, storing, and using electrical power. AMIE uses an integrated energy system that shares energy between a building and a vehicle. And, utilizing advanced manufacturing and rapid innovation, it only took one year from concept to launch.

  14. Computer viruses

    NASA Technical Reports Server (NTRS)

    Denning, Peter J.

    1988-01-01

    The worm, Trojan horse, bacterium, and virus are destructive programs that attack information stored in a computer's memory. Virus programs, which propagate by incorporating copies of themselves into other programs, are a growing menace in the late-1980s world of unprotected, networked workstations and personal computers. Limited immunity is offered by memory protection hardware, digitally authenticated object programs, and antibody programs that kill specific viruses. Additional immunity can be gained from the practice of digital hygiene, primarily the refusal to use software from untrusted sources. Full immunity requires attention in a social dimension, the accountability of programmers.

  15. Computer systems

    NASA Technical Reports Server (NTRS)

    Olsen, Lola

    1992-01-01

    In addition to the discussions, Ocean Climate Data Workshop hosts gave participants an opportunity to hear about, see, and test for themselves some of the latest computer tools now available for those studying climate change and the oceans. Six speakers described computer systems and their functions. The introductory talks were followed by demonstrations to small groups of participants and some opportunities for participants to get hands-on experience. After this familiarization period, attendees were invited to return during the course of the Workshop and have one-on-one discussions and further hands-on experience with these systems. Brief summaries or abstracts of introductory presentations are addressed.

  16. Computer-Based Procedures for Field Workers in Nuclear Power Plants: Development of a Model of Procedure Usage and Identification of Requirements

    SciTech Connect

    Katya Le Blanc; Johanna Oxstrand

    2012-04-01

    The nuclear industry is constantly trying to find ways to decrease the human error rate, especially the human errors associated with procedure use. As a step toward the goal of improving procedure use performance, researchers, together with the nuclear industry, have been looking at replacing the current paper-based procedures with computer-based procedure systems. The concept of computer-based procedures is not new by any means; however, most research has focused on procedures used in the main control room. Procedures reviewed in these efforts are mainly emergency operating procedures and normal operating procedures. Based on lessons learned from these previous efforts, we are now exploring a less familiar application for computer-based procedures - field procedures, i.e., procedures used by nuclear equipment operators and maintenance technicians. The Idaho National Laboratory and participants from the U.S. commercial nuclear industry are collaborating in an applied research effort with the objective of developing requirements and specifications for a computer-based procedure system to be used by field workers. The goal is to identify the types of human errors that can be mitigated by using computer-based procedures and how to best design the computer-based procedures to do so. This paper describes the development of a Model of Procedure Use and the qualitative study on which the model is based. The study was conducted in collaboration with four nuclear utilities and five research institutes. During the qualitative study and the model development, requirements for computer-based procedures were identified.

  17. Additive synthesis with DIASS-M4C on Argonne National Laboratory's IBM POWERparallel System (SP)

    SciTech Connect

    Kaper, H.; Ralley, D.; Restrepo, J.; Tipei, S.

    1995-12-31

    DIASS-M4C, a digital additive-synthesis instrument, was implemented on Argonne National Laboratory's IBM POWERparallel System (SP). This paper discusses the need for a massively parallel supercomputer and shows how the code was parallelized. The resulting sounds and the degree of control the user can have justify the effort and the use of such a large computer.
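The core of any additive instrument is a sum of sinusoidal partials. The following is a minimal sketch, not the DIASS-M4C code itself (whose partials carry many more control parameters, such as envelopes and vibrato); the sample rate and partial list are illustrative.

```python
import math

SAMPLE_RATE = 8_000  # Hz; illustrative value

def additive_synth(partials, duration):
    """Additive synthesis: each output sample is the sum of sinusoidal
    partials, each partial given as a (frequency_hz, amplitude) pair."""
    n_samples = int(SAMPLE_RATE * duration)
    two_pi = 2 * math.pi
    return [sum(a * math.sin(two_pi * f * n / SAMPLE_RATE) for f, a in partials)
            for n in range(n_samples)]

# A 220 Hz tone with three harmonics of decreasing amplitude:
tone = additive_synth([(220, 1.0), (440, 0.5), (660, 0.25)], duration=0.01)
```

Each partial, and each block of output samples, is computed independently of the others, which is why the synthesis maps naturally onto a machine like the SP: with hundreds or thousands of partials per sound, the work can be distributed across processors with little communication.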

  18. Fundamentals of the Control of Gas-Turbine Power Plants for Aircraft. Part 1; Standardization of the Computations Relating to the Control of Gas-Turbine Power Plants for Aircraft by the Employment of the Laws of Similarity

    NASA Technical Reports Server (NTRS)

    Luehl, H.

    1947-01-01

    It will be shown that by the use of the concept of similarity a simple representation of the characteristic curves of a compressor operating in combination with a turbine may be obtained with correct allowance for the effect of temperature. Furthermore, it becomes possible to simplify considerably the rather tedious investigations of the behavior of gas-turbine power plants under different operating conditions. Characteristic values will be derived for the most important elements of operating behavior of the power plant, which will be independent of the absolute values of pressure and temperature. At the same time, the investigations provide the basis for scale-model tests on compressors and turbines.
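Similarity-based characteristic values of this kind survive today as "corrected" (referred) parameters. The sketch below assumes the standard modern forms, corrected speed N/√θ and corrected mass flow ṁ√θ/δ with θ = T/T_ref and δ = p/p_ref; the paper's own characteristic values may differ in detail.

```python
import math

T_REF = 288.15     # K,  standard-day reference temperature (assumption)
P_REF = 101_325.0  # Pa, standard-day reference pressure (assumption)

def corrected_parameters(mass_flow, speed, T, p):
    """Reduce measured operating data to reference conditions using the
    similarity parameters m*sqrt(theta)/delta and N/sqrt(theta), so that
    operating points taken at different ambient states become comparable."""
    theta = T / T_REF  # temperature ratio
    delta = p / P_REF  # pressure ratio
    return {
        "corrected_mass_flow": mass_flow * math.sqrt(theta) / delta,
        "corrected_speed": speed / math.sqrt(theta),
    }

# At reference conditions the corrected values equal the measured ones:
ref = corrected_parameters(20.0, 10_000.0, T_REF, P_REF)
```

Two operating points with equal corrected mass flow and corrected speed lie at the same point on the compressor map, which is exactly what makes a single set of characteristic curves, and scale-model tests, representative of all ambient conditions.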

  19. Influence of ultrasound power on acoustic streaming and micro-bubbles formations in a low frequency sono-reactor: mathematical and 3D computational simulation.

    PubMed

    Sajjadi, Baharak; Raman, Abdul Aziz Abdul; Ibrahim, Shaliza

    2015-05-01

    This paper aims to investigate the influence of ultrasound power amplitude on liquid behaviour in a low-frequency (24 kHz) sono-reactor. Three types of analysis were employed: (i) mechanical analysis of micro-bubble formation and their activities/characteristics using mathematical modelling; (ii) numerical analysis of acoustic streaming, fluid flow pattern, volume fraction of micro-bubbles and turbulence using 3D CFD simulation; (iii) practical analysis of fluid flow pattern and acoustic streaming under ultrasound irradiation using Particle Image Velocimetry (PIV). In the mathematical modelling, a single micro-bubble generated under power ultrasound irradiation was mechanistically analysed. Its characteristics were illustrated as a function of bubble radius, internal temperature and pressure (hot-spot conditions) and oscillation (pulsation) velocity. The results showed that ultrasound power significantly affected the conditions of the hotspots and the bubbles' oscillation velocity. From the CFD results, it was observed that the total volume of the micro-bubbles increased by about 4.95% with each 100 W increase in power amplitude. Furthermore, the velocity of acoustic streaming increased from 29 to 119 cm/s as power increased, which was in good agreement with the PIV analysis.
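Single-bubble analyses of this kind are commonly built on the Rayleigh-Plesset equation. The abstract does not state the exact model used, so the sketch below is a generic version under stated assumptions: an air micro-bubble in water driven at 24 kHz, polytropic gas behaviour, and illustrative property values, integrated with a semi-implicit Euler step.

```python
import math

# Illustrative properties: water with an air micro-bubble (assumptions).
RHO = 998.0        # liquid density, kg/m^3
SIGMA = 0.072      # surface tension, N/m
MU = 1.0e-3        # dynamic viscosity, Pa*s
P0 = 101_325.0     # ambient pressure, Pa
KAPPA = 1.4        # polytropic exponent of the gas

def rayleigh_plesset(r0, p_acoustic, freq, cycles=2, dt=1e-9):
    """Integrate the Rayleigh-Plesset equation for a single driven bubble:
    R*R'' + 1.5*R'^2 = (p_gas - p_drive - 2*sigma/R - 4*mu*R'/R) / rho.
    Returns the radius history over the requested number of drive cycles."""
    r, v = r0, 0.0
    radii = []
    n_steps = int(cycles / (freq * dt))
    for n in range(n_steps):
        t = n * dt
        # Polytropic gas pressure, referenced to the equilibrium radius r0:
        p_gas = (P0 + 2 * SIGMA / r0) * (r0 / r) ** (3 * KAPPA)
        p_drive = P0 + p_acoustic * math.sin(2 * math.pi * freq * t)
        accel = ((p_gas - p_drive - 2 * SIGMA / r - 4 * MU * v / r) / RHO
                 - 1.5 * v * v) / r
        v += accel * dt  # semi-implicit Euler: velocity first, then position
        r += v * dt
        radii.append(r)
    return radii

# A 5 micron bubble driven at 24 kHz with a 50 kPa acoustic amplitude:
history = rayleigh_plesset(r0=5e-6, p_acoustic=5e4, freq=24_000)
```

Raising the acoustic amplitude widens the radial excursion, which is the mechanistic link the abstract draws between power amplitude, hot-spot conditions, and oscillation velocity.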

  20. Cleavage of ether, ester, and tosylate C(sp3)-O bonds by an iridium complex, initiated by oxidative addition of C-H bonds. Experimental and computational studies.

    PubMed

    Kundu, Sabuj; Choi, Jongwook; Wang, David Y; Choliy, Yuriy; Emge, Thomas J; Krogh-Jespersen, Karsten; Goldman, Alan S

    2013-04-03

    A pincer-ligated iridium complex, (PCP)Ir (PCP = κ(3)-C6H3-2,6-[CH2P(t-Bu)2]2), is found to undergo oxidative addition of C(sp(3))-O bonds of methyl esters (CH3-O2CR'), methyl tosylate (CH3-OTs), and certain electron-poor methyl aryl ethers (CH3-OAr). DFT calculations and mechanistic studies indicate that the reactions proceed via oxidative addition of C-H bonds followed by oxygenate migration, rather than by direct C-O addition. Thus, methyl aryl ethers react via addition of the methoxy C-H bond, followed by α-aryloxide migration to give cis-(PCP)Ir(H)(CH2)(OAr), followed by iridium-to-methylidene hydride migration to give (PCP)Ir(CH3)(OAr). Methyl acetate undergoes C-H bond addition at the carbomethoxy group to give (PCP)Ir(H)[κ(2)-CH2OC(O)Me] which then affords (PCP-CH2)Ir(H)(κ(2)-O2CMe) (6-Me) in which the methoxy C-O bond has been cleaved, and the methylene derived from the methoxy group has migrated into the PCP Cipso-Ir bond. Thermolysis of 6-Me ultimately gives (PCP)Ir(CH3)(κ(2)-O2CR), the net product of methoxy group C-O oxidative addition. Reaction of (PCP)Ir with species of the type ROAr, RO2CMe or ROTs, where R possesses β-C-H bonds (e.g., R = ethyl or isopropyl), results in formation of (PCP)Ir(H)(OAr), (PCP)Ir(H)(O2CMe), or (PCP)Ir(H)(OTs), respectively, along with the corresponding olefin or (PCP)Ir(olefin) complex. Like the C-O bond oxidative additions, these reactions also proceed via initial activation of a C-H bond; in this case, C-H addition at the β-position is followed by β-migration of the aryloxide, carboxylate, or tosylate group. Calculations indicate that the β-migration of the carboxylate group proceeds via an unusual six-membered cyclic transition state in which the alkoxy C-O bond is cleaved with no direct participation by the iridium center.